Version 3.1.6   13 Feb 2016

   * Fix bug in the file encoding checks for the amazons3sync tool.

Version 3.1.5   02 Jan 2016

   * Fix or disable a variety of new warnings and suggestions from pylint.

Version 3.1.4   11 Aug 2015

   * Improvements based on testing in the Debian continuous integration environment.
     - Make the logging setup process obey the --stack command-line option
     - Fix logging setup to always create the log file with the proper specified mode
     - Fix PurgeItemList.removeYoungFiles() so ageInWholeDays can never be negative
     - Make filesystemtests more portable, with maximum file path always <= 255 bytes

Version 3.1.1   04 Aug 2015

   * Fix incorrect exception raise without % in util.py, found by accident.
   * Fix cut-and-paste typo 'iplemented' in cli.py, config.py, and util.py.
   * Fix bugs in the ByteQuantity changes from v3.1.0, so comparisons work properly.
   * Adjust amazons3, capacity and split to use ByteQuantity directly, not bytes field.

Version 3.1.0   03 Aug 2015

   * Enhance ByteQuantity so it can be built from and compared to simple numeric values.
   * Improve the way the amazons3 extension deals with byte quantities.
     - Fix configuration to support quantities like "2.5 GB", as in other extensions
     - Improve logging using displayBytes(), so displayed quantities are more legible

Version 3.0.2   30 Jul 2015

   * Improvements based on integration testing with my own backup configuration.
     - Fix problems with pickle by using binary open, protocol=0, fix_imports=True
     - Be stricter with the way files are opened/closed, relying on the new 'with' idiom
     - Make sure every file opened for use with executeCommand() uses mode "wb"

Version 3.0.1   29 Jul 2015

   * Create project in Mercurial at BitBucket, alongside Cedar Backup 2.
   * Convert to Python 3, using v2.24.2 as the basis for conversion.
   * Rename files (logs, config, executables) to use cback3 prefix rather than cback.
   * Remove support for Windows and Cygwin, which was never advertised and rarely tested.
   * Fix a variety of minor warnings from pylint 1.4.4, most of which also appeared in 2.24.2.
   * Clean up manpages and add notes about migrating to version 3.
   * Review user guide, fix broken links, make minor tweaks to wording, etc.
   * Fix long-standing bugs with pre- and post-action hooks, ported from v2.24.4.

Version 2.24.2   05 Jan 2015

   * Add optional size-limit configuration for amazons3 extension.

Version 2.24.1   07 Oct 2014

   * Implement a new tool called cback-amazons3-sync.
   * Add support for missing --diagnostics flag in cback-span script.

Version 2.23.3   03 Oct 2014

   * Add new extension amazons3 as an optional replacement for the store action.
   * Update user manual and INSTALL to clarify a few of the dependencies.
   * Fix encryption unit test that started failing due to my new GPG key.

Version 2.22.0   09 May 2013

   * Add eject-related kludges to work around observed behavior.
     - New config option eject_delay, to slow down open/close
     - Unlock tray with 'eject -i off' to handle potential problems

Version 2.21.1   21 Mar 2013

   * Apply patches provided by Jan Medlock as Debian bugs.
     - Fix typo in manpage (showed -s instead of -D)
     - Support output from latest /usr/bin/split (' vs. `)

Version 2.21.0   12 Oct 2011

   * Update CREDITS file to consistently credit all contributors.
   * Minor tweaks based on PyLint analysis (mostly config changes).
   * Make ISO image unit tests more robust in writersutiltests.py.
     - Handle failures with unmount (wait 1 second and try again)
     - Programmatically disable (and re-enable) the GNOME auto-mounter
   * Implement configurable recursion for collect action.
     - Update collect.py to handle recursion (patch by Zoran Bosnjak)
     - Add new configuration item CollectDir.recursionLevel
     - Update user manual to discuss new functionality

Version 2.20.1   19 Oct 2010

   * Fix minor formatting issues in manpages, pointed out by Debian lintian.
   * Changes required to make code compatible with Python 2.7.
     - StreamHandler no longer accepts strm= argument (closes: #3079930)
     - Modify logfile os.fdopen() to be explicit about read/write mode
     - Fix tests that extract a tarfile twice (exposed by new error behavior)

Version 2.20.0   07 Jul 2010

   * This is a cleanup release with no functional changes.
   * Switch to minimum Python version of 2.5 (everyone should have it now).
     - Make cback script more robust in the case of a bad interpreter version
     - Change file headers, comments, manual, etc. to reference Python 2.5
     - Convert to use @staticmethod rather than x = staticmethod(x)
     - Change interpreter checks in test.py, cli.py and span.py
     - Remove Python 2.3-compatible versions of util.nullDevice() and util.Pipe
   * Configure pylint and execute it against the entire codebase.
     - Fix a variety of minor warnings and suggestions from pylint
     - Move unit tests into testcase folder to avoid test.py naming conflict
   * Remove "Translate [x:y] into [a:b]" debug message for uid/gid translation.
   * Refactor out util.isRunningAsRoot() to replace scattered os.getuid() calls.
   * Remove boilerplate comments "As with all of the ... " in config code.
   * Refactor checkUnique() and parseCommaSeparatedString() from config to util.
   * Add note in manual about intermittent problems with DVD writer soft links.

Version 2.19.6   22 May 2010

   * Work around strange stderr file descriptor bugs discovered on Cygwin.
   * Tweak expected results for tests that fail on Cygwin with Python 2.5.x.
   * Set up command overrides properly so full test suite works on Debian.
   * Add refresh_media_delay configuration option and related functionality.
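The @staticmethod conversion noted above for Version 2.20.0 can be sketched as follows. This is an illustrative example, not code from Cedar Backup itself; only the util.nullDevice() name is taken from the changelog, and both class names here are hypothetical.

```python
import os

# Pre-Python-2.4 idiom: wrap the function manually after defining it.
class OldStyleUtil:
    def nullDevice():
        # Return the platform's null device path (e.g. "/dev/null" on POSIX)
        return os.devnull
    nullDevice = staticmethod(nullDevice)  # x = staticmethod(x)

# Python 2.4+ idiom: the decorator form is equivalent but clearer.
class NewStyleUtil:
    @staticmethod
    def nullDevice():
        return os.devnull

# Both are callable without an instance, and behave identically.
print(OldStyleUtil.nullDevice() == NewStyleUtil.nullDevice())
```

The decorator form also keeps the "this is a static method" declaration next to the function signature, which is why the cleanup was worth doing across the codebase.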
Version 2.19.5   10 Jan 2010

   * Add customization support, so Debian can use wodim and genisoimage.
   * SF bug #2929447 - fix cback-span to only ask for media when needed
   * SF bug #2929446 - add retry logic for writes in cback-span

Version 2.19.4   16 Aug 2009

   * Add support for the Python 2.6 interpreter.
     - Use hashlib instead of deprecated sha module when available
     - Use set type rather than deprecated sets.Set when available
     - Use tarfile.format rather than deprecated tarfile.posix when available
     - Fix testGenerateTarfile_002() so expectations match Python 2.6 results

Version 2.19.3   29 Mar 2009

   * Fix minor epydoc typos, mostly in @sort directives.
   * Removed support for user manual PDF format (see doc/pdf).

Version 2.19.2   08 Dec 2008

   * Fix cback-span problem when writing store indicators.

Version 2.19.1   15 Nov 2008

   * Fix bug when logging strange filenames.

Version 2.19.0   05 Oct 2008

   * Fix a few typos in the CREDITS file.
   * Update README to properly reference SourceForge site.
   * Add option to peer configuration.

Version 2.18.0   05 May 2008

   * Add the ability to dereference links when following them.
     - Add util.dereferenceLink() function
     - Add dereference flag to FilesystemList.addDirContents()
     - Add CollectDir.dereference attribute
     - Modify collect action to obey CollectDir.dereference
     - Update user manual to discuss new attribute

Version 2.17.1   26 Apr 2008

   * Updated copyright statement slightly.
   * Updated user manual.
     - Brought copyright notices up-to-date
     - Fixed various URLs that didn't reference SourceForge
   * Fixed problem with link_depth (closes: #1930729).
     - Can't add links directly, they're implicitly added later by tar
     - Changed FilesystemList to use includePath=false for recursive links

Version 2.17.0   20 Mar 2008

   * Change suggested execution index for Capacity extension in manual.
   * Provide support for application-wide diagnostic reporting.
     - Add util.Diagnostics class to encapsulate information
     - Log diagnostics when Cedar Backup first starts
     - Print diagnostics when running unit tests
     - Add a new --diagnostics command-line option
   * Clean up filesystem code that deals with file age, and improve unit tests.
     - Some platforms apparently cannot set file ages precisely
     - Change calculateFileAge() to use floats throughout, which is safer
     - Change removeYoungFiles() to explicitly check on whole days
     - Put a 1-second fudge factor into unit tests when setting file ages
   * Fix some unit test failures discovered on Windows XP.
     - Fix utiltests.TestFunctions.testNullDevice_001()
     - Fix filesystemtests.TestBackupFileList.testGenerateFitted_004()
     - Fix typo in filesystemtests.TestFilesystemList.testRemoveLinks_002()

Version 2.16.0   18 Mar 2008

   * Make name attribute optional in RemotePeer constructor.
   * Add support for collecting soft links (closes: #1854631).
     - Add linkDepth parameter to FilesystemList.addDirContents()
     - Add CollectDir.linkDepth attribute
     - Modify collect action to obey CollectDir.linkDepth
     - Update user manual to discuss new attribute
     - Document "link farm" option for collect configuration
   * Implement a capacity-checking extension (closes: #1915496).
     - Add new extension in CedarBackup2/extend/capacity.py
     - Refactor ByteQuantity out of split.py and into config.py
     - Add total capacity and utilization to MediaCapacity classes
     - Update user manual to discuss new extension

Version 2.15.3   16 Mar 2008

   * Fix testEncodePath_009() to be aware of "UTF-8" encoding.
   * Fix typos in the PostgreSQL extension section of the manual.
   * Improve logging when stage action fails (closes: #1854635).
   * Fix stage action so it works for local users (closes: #1854634).

Version 2.15.2   07 Feb 2008

   * Updated copyright statements now that code changed in year 2008.
   * Fix two unit test failures when using Python 2.5 (SF #1861878).
     - Add new function testutil.hexFloatLiteralAllowed()
     - Fix splittests.TestByteQuantity.testConstructor_004() for 0xAC
     - Fix configtests.TestBlankBehavior.testConstructor_006() for 0xAC

Version 2.15.1   19 Dec 2007

   * Improve error reporting for managed client action failures.
   * Make sure that managed client failure does not kill entire backup.
   * Add appendix "Securing Password-less SSH Connection" to user manual.

Version 2.15.0   18 Dec 2007

   * Minor documentation tweaks discovered during 3.0 development.
   * Add support for a new managed backup feature.
     - Add a new configuration section (PeersConfig)
     - Change peers configuration in to just override
     - Modify stage process to take peers list from peers section (if available)
     - Add new configuration in options and remote peers to support remote shells
     - Update user manual to discuss managed backup concept and configuration
     - Add executeRemoteCommand() and executeManagedAction() on peer.RemotePeer

Version 2.14.0   19 Sep 2007

   * Deal properly with programs that localize their output.
     - Create new util.sanitizeEnvironment() function to set $LANG=C
     - Call new sanitizeEnvironment() function inside util.executeCommand()
     - Change extend/split._splitFile() to be more verbose about problems
     - Update Extension Architecture Interface to mandate $LANG=C
     - Add split unit tests to catch any locale-related regressions
     - Thanks to Lukasz Nowak for initial debugging in split extension

Version 2.13.2   10 Jul 2007

   * Tweak some docstring markup to work with Epydoc beta 1.
   * Apply documentation patch from Lukasz K. Nowak.
     - Document that mysql extension can back up remote databases
     - Fix typos in extend/sysinfo.py
   * Clean up some configuration error messages to be clearer.
     - Make sure that reported errors always include enough information
     - Add a prefix argument to some of the specialized lists in util.py
   * Catch invalid regular expressions in config and filesystem code.
     - Add new util.RegexList list to contain only valid regexes
     - Use RegexList in config.ConfigDir and config.CollectConfig
     - Use RegexList in subversion.RepositoryDir and mbox.MboxDir
     - Throw ValueError on bad regex in FilesystemList remove() methods
     - Use RegexList in FilesystemList for all lists of patterns

Version 2.13.1   29 Mar 2007

   * Fix ongoing problems re-initializing previously-written DVDs.
     - Even with -Z, growisofs sometimes wouldn't overwrite DVDs
     - It turns out that this ONLY happens from cron, not from a terminal
     - The solution is to use the undocumented option -use-the-force-luke=tty
     - Also corrected dvdwriter to use option "-dry-run" not "--dry-run"

Version 2.13.0   25 Mar 2007

   * Change writeIndicator() to raise exception on failure (closes: #53).
   * Change buildNormalizedPath() for leading "." so files won't be hidden.
   * Remove bogus usage of tempfile.NamedTemporaryFile in remote peer.
   * Refactored some common action code into CedarBackup2.actions.util.
   * Add unit tests for a variety of basic utility functions (closes: #45).
     - Error-handling was improved in some utility methods
     - Fundamentally, behavior should be unchanged
   * Reimplement DVD capacity calculation (initial code from Dmitry Rutsky).
     - This is now done using a growisofs dry run, without -Z
     - The old dvd+rw-mediainfo method was unreliable on some systems
     - Error-handling behavior on CdWriter was also tweaked for consistency
   * Add code to check media before writing to it (closes: #5).
     - Create new check_media store configuration option
     - Implement new initialize action to initialize rewritable media
     - Media is initialized by writing an initial session with media label
     - The store action now always writes a media label as well
     - Update user manual to discuss the new behavior
     - Add unit tests for new configuration
   * Implement an optimized media blanking strategy (closes: #48).
     - When used, Cedar Backup will only blank media when it runs out of space
     - Initial implementation and manual text provided by Dmitry Rutsky
     - Add new blanking_behavior store configuration options
     - Update user manual to document options and discuss usage
     - Add unit tests for new configuration

Version 2.12.1   26 Feb 2007

   * Fix typo in new split section in the user manual.
   * Fix incorrect call to new writeIndicatorFile() function in stage action.
   * Add notes in manual on how to find gpg and split commands.

Version 2.12.0   23 Feb 2007

   * Fix some encrypt unit tests related to config validation.
   * Make util.PathResolverSingleton a new-style class (i.e. inherit from object).
   * Modify util.changeOwnership() to be a no-op for None user or group.
   * Created new split extension to split large staged files.
     - Refactored common action utility code into actions/util.py
     - Update standard actions, cback-span, and encrypt to use refactored code
     - Updated user manual to document the new extension and restore process

Version 2.11.0   21 Feb 2007

   * Fix log message about SCSI id in writers/dvdwriter.py.
   * Remove TODO from public distribution (use Bugzilla instead).
   * Minor changes to mbox functionality (refactoring, test cleanup).
   * Fix bug in knapsack implementation, masked by poor test suite.
   * Fix filesystem unit tests that had typos in them and wouldn't work.
   * Reorg user manual to move command-line tools to own chapter (closes: #33).
   * Add validation for duplicate peer and extension names (closes: #37, #38).
   * Implement new cback-span command-line tool (closes: #51).
     - Create new util/cback-span script and CedarBackup2.tools package
     - Implement guts of script in CedarBackup2/tools/span.py
     - Add new BackupFileList.generateSpan() method and tests
     - Refactor other util and filesystem code to make things work
     - Add new section in user manual to discuss new command
   * Rework validation requiring at least one item to collect (closes: #34).
     - This is no longer a validation error at the configuration level
     - Instead, the collect action itself will enforce the rule when it is run
   * Support a flag in store configuration (closes: #39).
     - Change StoreConfig, CdWriter and DvdWriter to accept new flag
     - Update user manual to document new flag, along with warnings about it
   * Support repository directories in Subversion extension (closes: #46).
     - Add configuration modeled after
     - Make configuration value optional and for reference only
     - Refactor code and deprecate BDBRepository and FSFSRepository
     - Update user manual to reflect new functionality

Version 2.10.1   30 Jan 2007

   * Fix a few places that still referred only to CD/CD-RW.
   * Fix typo in definition of actions.constants.DIGEST_EXTENSION.

Version 2.10.0   30 Jan 2007

   * Add support for DVD writers and DVD+R/DVD+RW media.
     - Create new writers.dvdwriter module and DvdWriter class
     - Support 'dvdwriter' device type, and 'dvd+r' and 'dvd+rw' media types
     - Rework user manual to properly discuss both CDs and DVDs
   * Support encrypted staging directories (closes: #33).
     - Create new 'encrypt' extension and associated unit tests
     - Document new extension in user manual
   * Support new action ordering mechanism for extensions.
     - Extensions can now specify dependencies rather than indexes
     - Rewrote cli._ActionSet class to use DirectedGraph for dependencies
     - This functionality is not yet "official"; that will happen later
   * Refactor and clean up code that implements standard actions.
     - Split action.py into various other files in the actions package
     - Move a few of the more generic utility functions into util.py
     - Preserve public interface via imports in otherwise empty action.py
     - Change various files to import from the new module locations
   * Revise and simplify the implied "image writer" interface in CdWriter.
     - Add the new initializeImage() and addImageEntry() methods
     - Interface is now initializeImage(), addImageEntry() and writeImage()
     - Rework actions.store.writeImage() to use new writer interface
   * Refactor CD writer functionality and clean up code.
     - Create new writers package to hold all image writers
     - Move image.py into writers/util.py package
     - Move most of writer.py into writers/cdwriter.py
     - Move writer.py validate functions into writers/util.py
     - Move writertests.py into cdwritertests.py
     - Move imagetests.py into writersutiltests.py
     - Preserve public interface via imports in otherwise empty files
     - Change various files to import from the new module locations
   * More general code cleanup and minor enhancements.
     - Modify util/test.py to accept named tests on command line
     - Fix rebuild action to look at store config instead of stage
     - Clean up xmlutil imports in mbox and subversion extensions
     - Copy Mac OS X (darwin) errors from store action into rebuild action
     - Check arguments to validateScsiId better (no None path allowed now)
     - Rename variables in config.py to be more consistent with each other
     - Add new excludeBasenamePatterns flag to FilesystemList
     - Add new addSelf flag to FilesystemList.addDirContents()
     - Create new RegexMatchList class in util.py, and add tests
     - Create new DirectedGraph class in util.py, and add tests
     - Create new sortDict() function in util.py, and add tests
   * Create unit tests for functionality that was not explicitly tested before.
     - ActionHook, PreActionHook, PostActionHook, CommandOverride (config.py)
     - AbsolutePathList, ObjectTypeList, RestrictedContentList (util.py)

Version 2.9.0   18 Dec 2006

   * Change mbox extension to use ISO-8601 date format when calling grepmail.
   * Fix error-handling in generateTarfile() when target dir is missing.
   * Tweak pycheckrc to find fewer expected errors (from standard library).
   * Fix Debian bug #403546 by supporting more CD writer configurations.
     - Be looser with SCSI "methods" allowed in valid SCSI id (update regex)
     - Make config section's parameter optional
     - Change CdWriter to support "hardware id" as either SCSI id or device
     - Implement cdrecord commands in terms of hardware id instead of SCSI id
     - Add documentation in writer.py to discuss how we talk to hardware
     - Rework user manual's discussion of how to configure SCSI devices
   * Update Cedar Backup user manual.
     - Re-order setup procedures to modify cron at end (Debian #403662)
     - Fix minor typos and misspellings (Debian #403448 among others)
     - Add discussion about proper ordering of extension actions

Version 2.8.1   04 Sep 2006

   * Changes to fix, update and properly build Cedar Backup manual.
     - Change DocBook XSL configuration to use "current" stylesheet
     - Tweak manual-generation rules to work around XSL toolchain issues
     - Document where to find grepmail utility in Appendix B
     - Create missing documentation for mbox exclusions configuration
     - Bumped copyright dates to show "(c) 2005-2006" where needed
     - Made minor changes to some sections based on proofreading

Version 2.8.0   24 Jun 2006

   * Remove outdated comment in xmlutil.py about dependency on PyXML.
   * Tweak wording in doc/docbook.txt to make it clearer.
   * Consistently rework "project description" everywhere.
   * Fix some simple typos in various comments and documentation.
   * Added recursive flag (default True) to FilesystemList.addDirContents().
   * Added flat flag (default False) to BackupFileList.generateTarfile().
   * Created mbox extension in CedarBackup2.extend.mbox (closes: #31).
     - Updated user manual to document the new extension and restore process.
   * Added PostgreSQL extension in CedarBackup2.extend.postgresql (closes: #32).
     - This code was contributed by user Antoine Beaupre ("The Anarcat").
     - I tweaked it slightly, added configuration tests, and updated the manual.
     - I have no PostgreSQL databases on which to test the functionality.
   * Made most unit tests run properly on Windows platform, just for fun.
   * Re-implement Pipe class (under executeCommand) for Python 2.4+
     - After Python 2.4, cross-platform subprocess.Popen class is available
     - Added some new regression tests for executeCommand to stress new Pipe
   * Switch to newer version of DocBook XSL stylesheet (1.68.1)
     - The old stylesheet isn't easily available any more (gone from sf.net)
     - Unfortunately, the PDF output changed somewhat with the new version
   * Add support for collecting individual files (closes: #30).
     - Create new config.CollectFile class for use by other classes
     - Update config.CollectConfig class to contain a list of collect files
     - Update config.Config class to parse and emit collect file data
     - Modified collect process in action.py to handle collect files
     - Updated user manual to discuss new configuration

Version 2.7.2   22 Dec 2005

   * Remove some bogus writer tests that depended on an arbitrary SCSI device.

Version 2.7.1   13 Dec 2005

   * Tweak the CREDITS file to fix a few typos.
   * Remove completed tasks in TODO file and reorganize it slightly.
   * Get rid of sys.exit() calls in util/test.py in favor of simple returns.
   * Fix implementation of BackupFileList.removeUnchanged(captureDigest=True).
     - Since version 2.7.0, digest only included backed-up (unchanged) files
     - This release fixes code so digest is captured for all files in the list
     - Fixed captureDigest test cases, which were testing for wrong results
   * Make some more updates to the user manual based on further proof-reading.
     - Rework description of "midnight boundary" warning slightly in basic.xml
     - Change "Which Linux Distribution?" to "Which Platform?" in config.xml
     - Fix a few typos and misspellings in basic.xml

Version 2.7.0   30 Oct 2005

   * Cleanup some maintainer-only (non-distributed) Makefile rules.
   * Make changes to standardize file headers with other Cedar Solutions code.
   * Add debug statements to filesystem code (huge increase in debug log size).
   * Standardize some config variable names ("parentNode" instead of "parent").
   * Fix util/test.py to return proper (non-zero) return status upon failure.
   * No longer attempt to change ownership of files when not running as root.
   * Remove regression test for bug #25 (testAddFile_036) 'cause it's not portable.
   * Modify use of user/password in MySQL extension (suggested by Matthias Urlichs).
     - Make user and password values optional in Cedar Backup configuration
     - Add a few regression tests to make sure configuration changes work
     - Add warning when user or password value(s) are visible in process listing
     - Document use of /root/.my.cnf or ~/.my.cnf in source code and user manual
     - Rework discussion of command line, file permissions, etc. in user manual
   * Optimize incremental backup, and hopefully speed it up a bit (closes: #29).
     - Change BackupFileList.removeUnchanged() to accept a captureDigest flag
     - This avoids need to call both generateDigestMap() and removeUnchanged()
     - Note that interface to removeUnchanged was modified, but not broken
   * Add support for pre- and post-action command hooks (closes: #27).
     - Added and sections within
     - Updated user manual documentation for options configuration section
     - Create new config.PreActionHook and PostActionHook classes to hold hooks
     - Added new hooks list field on config.OptionsConfig class
     - Update ActionSet and ActionItem in cli to handle and execute hooks
   * Rework and abstract XML functionality, plus remove dependency on PyXML.
     - Refactor general XML utility code out of config.py into xmlutil.py
     - Create new isElement() function to eliminate need for Node references
     - Create new createInputDom(), createOutputDom() and serializeDom() functions
     - Use minidom XML parser rather than PyExpat.reader (much faster)
     - Hack together xmlutil.Serializer based on xml.dom.ext.PrettyPrint
     - Remove references to PyXML in manual's depends.xml and install.xml files
     - Add notes about PyXML code sourced from Fourthought, Inc. in CREDITS
     - Rework mysql and subversion unit tests in terms of new functions

Version 2.6.1   27 Sep 2005

   * Fix broken call to node.hasChildNodes (no parens) in config.py.
   * Make "pre-existing collect indicator" error more obvious (closes: #26).
   * Avoid failures for UTF-8 filenames on certain filesystems (closes: #25).
   * Fix FilesystemList to encode excludeList items, preventing UTF-8 failures.

Version 2.6.0   12 Sep 2005

   * Remove bogus check for remote collect directory on master (closes: #18).
   * Fix testEncodePath_009 test failure on UTF-8 filesystems (closes: #19).
   * Fixed several unit tests related to the CollectConfig class (all typos).
   * Fix filesystem and action code to properly handle path "/" (closes: #24).
   * Add extension configuration to cback.conf.sample, to clarify things.
   * Place starting and ending revision numbers into Subversion dump filenames.
   * Implement resolver mechanism to support paths to commands (closes: #22).
     - Added section within configuration
     - Create new config.CommandOverride class to hold overrides
     - Added new overrides field on config.OptionsConfig class
     - Create util.PathResolverSingleton class to encapsulate mappings
     - Create util.resolveCommand convenience function for code to call
     - Create and call new _setupPathResolver() function in cli code
     - Change all _CMD constants to _COMMAND, for consistency
   * Change Subversion extension to support "fsfs" repositories (closes: #20).
     - Accept "FSFS" repository in configuration section
     - Create new FSFSRepository class to represent an FSFS repository
     - Refactor internal code common to both BDB and FSFS repositories
     - Add and rework test cases to provide coverage of FSFSRepository
   * Port to Darwin (Mac OS X) and ensure that all regression tests pass.
     - Don't run testAddDirContents_072() for Darwin (tarball's invalid there)
     - Write new ISO mount testing methods in terms of Apple's "hdiutil" utility
     - Accept Darwin-style SCSI writer devices, i.e. "IOCompactDiscServices"
     - Tweak existing SCSI id pattern to allow spaces in a few other places
     - Add new regression tests for validateScsiId() utility function
     - Add code warnings and documentation in manual and in doc/osx
   * Update, clean up and extend Cedar Backup User Manual (closes: #21).
     - Work through document and copy-edit it now that it's matured
     - Add documentation for new options and subversion config items
     - Exorcise references to Linux which assumed it was "the" platform
     - Add platform-specific notes for non-Linux platforms (darwin, BSDs)
     - Clarify purpose of the 'collect' action on the master
     - Clarify how actions (i.e. 'store') are optional
     - Clarify that 'all' does not execute extensions
     - Add an appendix on restoring backups

Version 2.5.0   12 Jul 2005

   * Update docs to modify use of "secure" (suggested by Lars Wirzenius).
   * Removed "Not an Official Debian Package" section in software manual.
   * Reworked Debian install procedure in manual to reference official packages.
   * Fix manual's build process to create files with mode 664 rather than 755.
   * Deal better with date boundaries on the store operation (closes: #17).
     - Add value in configuration
     - Add warnMidnite field to the StoreConfig object
     - Add warning in store process for crossing midnite boundary
     - Change store --full to have more consistent behavior
     - Update manual to document changes related to this bug

Version 2.4.2   23 Apr 2005

   * Fix boundaries log message again, properly this time.
   * Fix a few other log messages that used "," rather than "%".

Version 2.4.1   22 Apr 2005

   * Fix minor typos in user manual and source code documentation.
   * Properly annotate code implemented based on Python 2.3 source.
   * Add info within CREDITS about Python 2.3 and Docbook XSL licenses.
   * Fix logging for boundaries values (can't print None[0], duh).

Version 2.4.0   02 Apr 2005

   * Re-license manual under "GPL with clarifications" to satisfy DFSG.
   * Rework our unmount solution again to try and fix observed problems.
     - Sometimes, unmount seems to "work" but leaves things mounted.
     - This might be because some file is not yet completely closed.
     - We try to work around this by making repeated unmount attempts.
     - This logic is now encapsulated in util.mount() and util.unmount().
     - This solution should also be more portable to non-Linux systems.

Version 2.3.1   23 Mar 2005

   * Attempt to deal more gracefully with corrupted media.
   * Unmount media using -l ("lazy unmount") in consistency check.
   * Be more verbose about media errors during consistency check.

Version 2.3.0   10 Mar 2005

   * Make 'extend' package public by listing it in CedarBackup2/__init__.py.
   * Reimplement digest generation to use incremental method (now ~3x faster).
   * Tweak manifest to be a little more selective about what's distributed.

Version 2.2.0   09 Mar 2005

   * Fix bug related to execution of commands with huge output.
   * Create custom class util.Pipe, inheriting from popen2.Popen4.
   * Re-implement util.executeCommand() in terms of util.Pipe.
   * Change ownership of sysinfo files to backup user/group after write.

Version 2.1.3   08 Mar 2005

   * In sysinfo extension, use explicit path to /sbin/fdisk command.
   * Modify behavior and logging when optional sysinfo commands are not found.
   * Add extra logging around boundaries and capacity calculations in writer.py.
   * In executeCommand, log command using output logger as well as debug level.
   * Docs now suggest --output in cron command line to aid problem diagnosis.
   * Fix bug in capacity calculation, this time for media with a single session.
   * Validate all capacity code against v1.0 code, making changes as needed.
   * Re-evaluate all capacity-related regression tests against v1.0 code.
   * Add new regression tests for capacity bugs which weren't already detected.

Version 2.1.2   07 Mar 2005

   * Fix a few extension error messages with incorrect (missing) arguments.
   * In sysinfo extension, do not log ls and dpkg output to the debug log.
   * Fix CdWriter, which reported negative capacity when disc was almost full.
   * Make displayBytes deal properly with negative values via math.fabs().
   * Change displayBytes to default to 2 digits after the decimal point.

Version 2.1.1   06 Mar 2005

   * Fix bug in setup.py (need to install extensions properly).

Version 2.1.0   06 Mar 2005

   * Fixed doc/cback.1 .TH line to give proper manpage section.
   * Updated README to more completely describe what Cedar Backup is.
   * Fix a few logging statements for the collect action, to be clearer.
   * Fix regression tests that failed in a Debian pbuilder environment.
   * Add simple main routine to cli.py, so executing it is the same as cback.
   * Added optional outputFile and doNotLog parameters to util.executeCommand().
   * Display byte quantities in sensible units (i.e. bytes, kB, MB) when logged.
   * Refactored private code into public in action.py and config.py.
   * Created MySQL extension in CedarBackup2.extend.mysql.
   * Created sysinfo extension in CedarBackup2.extend.sysinfo.
   * Created Subversion extension in CedarBackup2.extend.subversion.
   * Added regression tests as needed for new extension functionality.
   * Added Chapter 5, Official Extensions in the user manual.

Version 2.0.0   26 Feb 2005

   * Complete ground-up rewrite for 2.0.0 release.
   * See doc/release.txt for more details about changes.

Version 1.13   25 Jan 2005

   * Fix boundaries calculation when using kernel >= 2.6.8 (closes: #16).
   * Look for a matching boundaries pattern among all lines, not just the first.

Version 1.12   16 Jan 2005

   * Add support for ATAPI devices, just like ATA (closes: #15).
   * SCSI id can now be in the form '[ATA:|ATAPI:]scsibus,target,lun'.

Version 1.11   17 Oct 2004

   * Add experimental support for new Linux 2.6 ATA CD devices.
   * SCSI id can now be in the form '[ATA:]scsibus,target,lun'.
   * Internally, the SCSI id is now stored as a string, not a list.
   * Cleaned up 'cdrecord' calls in cdr.py to make them consistent.
   * Fixed a pile of warnings noticed by the latest pychecker.
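The displayBytes() behavior described above for v2.1.0 and v2.1.2 (sensible units, 2 digits after the decimal point by default, math.fabs() for negative values) can be sketched roughly as follows. This is an illustrative re-implementation under those assumptions, not Cedar Backup's actual code; the unit list and scaling cutoff here are guesses.

```python
import math

def display_bytes(quantity, digits=2):
    """Render a byte quantity in sensible units (bytes, kB, MB, GB).

    math.fabs() keeps the unit-scaling comparisons sane for negative
    quantities (e.g. the near-full-disc capacity bug mentioned above),
    and the sign is re-applied to the formatted result.
    """
    sign = "-" if quantity < 0 else ""
    value = math.fabs(quantity)
    for unit in ("bytes", "kB", "MB", "GB"):
        if value < 1024.0 or unit == "GB":
            if unit == "bytes":
                return "%s%d %s" % (sign, value, unit)   # whole bytes, no decimals
            return "%s%.*f %s" % (sign, digits, value, unit)
        value /= 1024.0

print(display_bytes(1536))    # 1.50 kB
print(display_bytes(-2048))   # -2.00 kB
```

Scaling by 1024 rather than 1000 matches common backup-tool convention, though the changelog itself doesn't say which base the real implementation used.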
Version 1.10    01 Dec 2003

    * Removed extraneous error parameter from cback's version() function.
    * Changed copyright statement and year; added COPYRIGHT in release.py.
    * Reworked all file headers to match new Cedar Solutions standard.
    * Removed __version__ and __date__ values with switch to Subversion.
    * Convert to tabs in Changelog to make the Vim syntax file happy.
    * Be more stringent in validating contents of SCSI triplet values.
    * Fixed bug when using modulo 1 (% 1) in a few places.
    * Fixed shell-interpolation bug discovered by Rick Low (security hole).
    * Replace all os.popen() calls with new execute_command() call for safety.

Version 1.9     09 Nov 2002

    * Packaging changes to allow Debian version to be "normal", not Debian-native.
    * Added CedarBackup/release.py to contain "upstream" release number.
    * Added -V,--version option to cback script.
    * Rewrote parts of Makefile to remove most Debian-specific rules.
    * Changed Makefile and setup.py to get version info from release.py.
    * The setup.py script now references /usr/bin/env python, not python2.2.
    * Debian-related changes will now reside exclusively in debian/changelog.

Version 1.8     14 Oct 2002

    * Fix bug with the way the default mode is displayed in the help screen.

Version 1.7     14 Oct 2002

    * Bug fix.  Upgrade to Python 2.2.2b1 exposed a flaw in my version-check code.

Version 1.6     06 Oct 2002

    * Debian packaging cleanup (should have been a Debian-only release 1.5-2).

Version 1.5     19 Sep 2002

    * Changed cback script to more closely control ownership of logfile.

Version 1.4     10 Sep 2002

    * Various packaging cleanups.
    * Fixed code that reported negative capacity on a full disc.
    * Now blank disc ahead of time if it needs to be blanked.
    * Moved to Python2.2 for cleaner packaging (True, False, etc.)

Version 1.3     20 Aug 2002

    * Initial "public" release.
-----------------------------------------------------------------------------
vim: set ft=changelog noexpandtab:
CedarBackup3-3.1.6/README0000664000175000017500000000253712555004756016372 0ustar pronovicpronovic00000000000000
Cedar Backup is a software package designed to manage system backups for a
pool of local and remote machines.  Cedar Backup understands how to back up
filesystem data as well as MySQL and PostgreSQL databases and Subversion
repositories.  It can also be easily extended to support other kinds of
data sources.

Cedar Backup is focused around weekly backups to a single CD or DVD disc,
with the expectation that the disc will be changed or overwritten at the
beginning of each week.  If your hardware is new enough, Cedar Backup can
write multisession discs, allowing you to add incremental data to a disc on
a daily basis.  Alternately, Cedar Backup can write your backups to the
Amazon S3 cloud rather than relying on physical media.

Besides offering command-line utilities to manage the backup process, Cedar
Backup provides a well-organized library of backup-related functionality,
written in the Python 3 programming language.

For more information, see the Cedar Backup web site:

   https://bitbucket.org/cedarsolutions/cedar-backup3

This is release 3 of the Cedar Backup package.  Release 3 is a Python 3
conversion of release 2, with minimal additional functionality.
CedarBackup2 and CedarBackup3 are functionally equivalent and are
compatible with one another.  It is safe to mix-and-match clients running
both versions within the same backup configuration.
CedarBackup3-3.1.6/INSTALL0000664000175000017500000000274712555004756016546 0ustar pronovicpronovic00000000000000
# vim: set ft=text tw=80:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Project  : Cedar Backup, release 3
# Purpose  : INSTALL instructions for package
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

This module is distributed in standard Python distutils form.  Use:

   python setup.py --help

for more information on how to install it.

You must have a Python interpreter version 3.4 or better to use these
modules.  Some external tools are also required for certain features to
work.  See the user manual for more details.

In the simplest case, you will probably just use:

   python setup.py install

to install to your standard Python site-packages directory.  Note that on
UNIX systems, you will probably need to do this as root.

The documentation and unit tests provided with this distribution are not
installed by setup.py.  You may put them wherever you would like.

You may wish to run the unit tests before actually installing anything.
Run them like so:

   python util/test.py

If any unit test reports a failure on your system, please email me the
output from the unit test, so I can fix the problem.  Please make sure to
include the diagnostic information printed out at the beginning of the
test run.
CedarBackup3-3.1.6/CedarBackup3/0002775000175000017500000000000012657665551017745 5ustar pronovicpronovic00000000000000
CedarBackup3-3.1.6/CedarBackup3/cli.py0000664000175000017500000023161412562435353021061 0ustar pronovicpronovic00000000000000
# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Copyright (c) 2004-2007,2010,2015 Kenneth J. Pronovici.
# All rights reserved.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Language : Python 3 (>= 3.4)
# Project  : Cedar Backup, release 3
# Purpose  : Provides command-line interface implementation.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Provides command-line interface implementation for the cback3 script.

Summary
=======

   The functionality in this module encapsulates the command-line interface
   for the cback3 script.  The cback3 script itself is very short, basically
   just an invocation of one function implemented here.  That, in turn, makes
   it simpler to validate the command line interface (for instance, it's
   easier to run pychecker against a module, and unit tests are easier, too).

   The objects and functions implemented in this module are probably not
   useful to any code external to Cedar Backup.  Anyone else implementing
   their own command-line interface would have to reimplement (or at least
   enhance) all of this anyway.

Backwards Compatibility
=======================

   The command line interface has changed between Cedar Backup 1.x and Cedar
   Backup 2.x.  Some new switches have been added, and the actions have
   become simple arguments rather than switches (which is a much more
   standard command line format).  Old 1.x command lines are generally no
   longer valid.

@var DEFAULT_CONFIG: The default configuration file.
@var DEFAULT_LOGFILE: The default log file path.
@var DEFAULT_OWNERSHIP: Default ownership for the logfile.
@var DEFAULT_MODE: Default file permissions mode on the logfile.
@var VALID_ACTIONS: List of valid actions.
@var COMBINE_ACTIONS: List of actions which can be combined with other actions.
@var NONCOMBINE_ACTIONS: List of actions which cannot be combined with other actions.

@sort: cli, Options, DEFAULT_CONFIG, DEFAULT_LOGFILE, DEFAULT_OWNERSHIP,
       DEFAULT_MODE, VALID_ACTIONS, COMBINE_ACTIONS, NONCOMBINE_ACTIONS

@author: Kenneth J. Pronovici
"""

########################################################################
# Imported modules
########################################################################

# System modules
import sys
import os
import logging
import getopt
from functools import total_ordering

# Cedar Backup modules
from CedarBackup3.release import AUTHOR, EMAIL, VERSION, DATE, COPYRIGHT
from CedarBackup3.customize import customizeOverrides
from CedarBackup3.util import DirectedGraph, PathResolverSingleton
from CedarBackup3.util import sortDict, splitCommandLine, executeCommand, getFunctionReference
from CedarBackup3.util import getUidGid, encodePath, Diagnostics
from CedarBackup3.config import Config
from CedarBackup3.peer import RemotePeer
from CedarBackup3.actions.collect import executeCollect
from CedarBackup3.actions.stage import executeStage
from CedarBackup3.actions.store import executeStore
from CedarBackup3.actions.purge import executePurge
from CedarBackup3.actions.rebuild import executeRebuild
from CedarBackup3.actions.validate import executeValidate
from CedarBackup3.actions.initialize import executeInitialize

########################################################################
# Module-wide constants and variables
########################################################################

logger = logging.getLogger("CedarBackup3.log.cli")

DISK_LOG_FORMAT = "%(asctime)s --> [%(levelname)-7s] %(message)s"
DISK_OUTPUT_FORMAT = "%(message)s"
SCREEN_LOG_FORMAT = "%(message)s"
SCREEN_LOG_STREAM = sys.stdout
DATE_FORMAT = "%Y-%m-%dT%H:%M:%S %Z"
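# As a standalone illustration (not part of cli.py itself), the format
# strings above plug directly into the standard logging machinery.  The
# sketch below shows what a disk-log line looks like when DISK_LOG_FORMAT
# and DATE_FORMAT are wired into a logging.Formatter; the "demo" logger
# name and the handler setup here are illustrative only.

```python
import logging
import sys

DISK_LOG_FORMAT = "%(asctime)s --> [%(levelname)-7s] %(message)s"
DATE_FORMAT = "%Y-%m-%dT%H:%M:%S %Z"

# Attach a formatter using the same fmt/datefmt pair as the disk log.
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter(DISK_LOG_FORMAT, DATE_FORMAT))

demo = logging.getLogger("demo")
demo.addHandler(handler)
demo.setLevel(logging.INFO)

# Emits a line shaped like: <ISO timestamp> --> [INFO   ] Cedar Backup run started.
demo.info("Cedar Backup run started.")
```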
DEFAULT_CONFIG = "/etc/cback3.conf"
DEFAULT_LOGFILE = "/var/log/cback3.log"
DEFAULT_OWNERSHIP = [ "root", "adm", ]
DEFAULT_MODE = 0o640

REBUILD_INDEX = 0      # can't run with anything else, anyway
VALIDATE_INDEX = 0     # can't run with anything else, anyway
INITIALIZE_INDEX = 0   # can't run with anything else, anyway
COLLECT_INDEX = 100
STAGE_INDEX = 200
STORE_INDEX = 300
PURGE_INDEX = 400

VALID_ACTIONS = [ "collect", "stage", "store", "purge", "rebuild", "validate", "initialize", "all", ]
COMBINE_ACTIONS = [ "collect", "stage", "store", "purge", ]
NONCOMBINE_ACTIONS = [ "rebuild", "validate", "initialize", "all", ]

SHORT_SWITCHES = "hVbqc:fMNl:o:m:OdsD"
LONG_SWITCHES = [ 'help', 'version', 'verbose', 'quiet', 'config=', 'full',
                  'managed', 'managed-only', 'logfile=', 'owner=', 'mode=',
                  'output', 'debug', 'stack', 'diagnostics', ]

#######################################################################
# Public functions
#######################################################################

#################
# cli() function
#################

def cli():
   """
   Implements the command-line interface for the C{cback3} script.

   Essentially, this is the "main routine" for the cback3 script.  It does
   all of the argument processing for the script, and then sets about
   executing the indicated actions.

   As a general rule, only the actions indicated on the command line will be
   executed.  We will accept any of the built-in actions and any of the
   configured extended actions (which makes action list verification a two-
   step process).

   The C{'all'} action has a special meaning: it means that the built-in set
   of actions (collect, stage, store, purge) will all be executed, in that
   order.  Extended actions will be ignored as part of the C{'all'} action.

   Raised exceptions always result in an immediate return.  Otherwise, we
   generally return when all specified actions have been completed.  Actions
   are ignored if the help, version or validate flags are set.
A different error code is returned for each type of failure: - C{1}: The Python interpreter version is < 3.4 - C{2}: Error processing command-line arguments - C{3}: Error configuring logging - C{4}: Error parsing indicated configuration file - C{5}: Backup was interrupted with a CTRL-C or similar - C{6}: Error executing specified backup actions @note: This function contains a good amount of logging at the INFO level, because this is the right place to document high-level flow of control (i.e. what the command-line options were, what config file was being used, etc.) @note: We assume that anything that I{must} be seen on the screen is logged at the ERROR level. Errors that occur before logging can be configured are written to C{sys.stderr}. @return: Error code as described above. """ try: if list(map(int, [sys.version_info[0], sys.version_info[1]])) < [3, 4]: sys.stderr.write("Python 3 version 3.4 or greater required.\n") return 1 except: # sys.version_info isn't available before 2.0 sys.stderr.write("Python 3 version 3.4 or greater required.\n") return 1 try: options = Options(argumentList=sys.argv[1:]) logger.info("Specified command-line actions: %s", options.actions) except Exception as e: _usage() sys.stderr.write(" *** Error: %s\n" % e) return 2 if options.help: _usage() return 0 if options.version: _version() return 0 if options.diagnostics: _diagnostics() return 0 if options.stacktrace: logfile = setupLogging(options) else: try: logfile = setupLogging(options) except Exception as e: sys.stderr.write("Error setting up logging: %s\n" % e) return 3 logger.info("Cedar Backup run started.") logger.info("Options were [%s]", options) logger.info("Logfile is [%s]", logfile) Diagnostics().logDiagnostics(method=logger.info) if options.config is None: logger.debug("Using default configuration file.") configPath = DEFAULT_CONFIG else: logger.debug("Using user-supplied configuration file.") configPath = options.config executeLocal = True executeManaged = False if 
options.managedOnly: executeLocal = False executeManaged = True if options.managed: executeManaged = True logger.debug("Execute local actions: %s", executeLocal) logger.debug("Execute managed actions: %s", executeManaged) try: logger.info("Configuration path is [%s]", configPath) config = Config(xmlPath=configPath) customizeOverrides(config) setupPathResolver(config) actionSet = _ActionSet(options.actions, config.extensions, config.options, config.peers, executeManaged, executeLocal) except Exception as e: logger.error("Error reading or handling configuration: %s", e) logger.info("Cedar Backup run completed with status 4.") return 4 if options.stacktrace: actionSet.executeActions(configPath, options, config) else: try: actionSet.executeActions(configPath, options, config) except KeyboardInterrupt: logger.error("Backup interrupted.") logger.info("Cedar Backup run completed with status 5.") return 5 except Exception as e: logger.error("Error executing backup: %s", e) logger.info("Cedar Backup run completed with status 6.") return 6 logger.info("Cedar Backup run completed with status 0.") return 0 ######################################################################## # Action-related class definition ######################################################################## #################### # _ActionItem class #################### @total_ordering class _ActionItem(object): """ Class representing a single action to be executed. This class represents a single named action to be executed, and understands how to execute that action. The built-in actions will use only the options and config values. We also pass in the config path so that extension modules can re-parse configuration if they want to, to add in extra information. This class is also where pre-action and post-action hooks are executed. An action item is instantiated in terms of optional pre- and post-action hook objects (config.ActionHook), which are then executed at the appropriate time (if set). 
@note: The comparison operators for this class have been implemented to only compare based on the index and SORT_ORDER value, and ignore all other values. This is so that the action set list can be easily sorted first by type (_ActionItem before _ManagedActionItem) and then by index within type. @cvar SORT_ORDER: Defines a sort order to order properly between types. """ SORT_ORDER = 0 def __init__(self, index, name, preHooks, postHooks, function): """ Default constructor. It's OK to pass C{None} for C{index}, C{preHooks} or C{postHooks}, but not for C{name}. @param index: Index of the item (or C{None}). @param name: Name of the action that is being executed. @param preHooks: List of pre-action hooks in terms of an C{ActionHook} object, or C{None}. @param postHooks: List of post-action hooks in terms of an C{ActionHook} object, or C{None}. @param function: Reference to function associated with item. """ self.index = index self.name = name self.preHooks = preHooks self.postHooks = postHooks self.function = function def __eq__(self, other): """Equals operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. The only thing we compare is the item's index. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
""" if other is None: return 1 if self.index != other.index: if int(self.index or 0) < int(other.index or 0): return -1 else: return 1 else: if self.SORT_ORDER != other.SORT_ORDER: if int(self.SORT_ORDER or 0) < int(other.SORT_ORDER or 0): return -1 else: return 1 return 0 def executeAction(self, configPath, options, config): """ Executes the action associated with an item, including hooks. See class notes for more details on how the action is executed. @param configPath: Path to configuration file on disk. @param options: Command-line options to be passed to action. @param config: Parsed configuration to be passed to action. @raise Exception: If there is a problem executing the action. """ logger.debug("Executing [%s] action.", self.name) if self.preHooks is not None: for hook in self.preHooks: self._executeHook("pre-action", hook) self._executeAction(configPath, options, config) if self.postHooks is not None: for hook in self.postHooks: self._executeHook("post-action", hook) def _executeAction(self, configPath, options, config): """ Executes the action, specifically the function associated with the action. @param configPath: Path to configuration file on disk. @param options: Command-line options to be passed to action. @param config: Parsed configuration to be passed to action. """ name = "%s.%s" % (self.function.__module__, self.function.__name__) logger.debug("Calling action function [%s], execution index [%d]", name, self.index) self.function(configPath, options, config) def _executeHook(self, type, hook): # pylint: disable=W0622,R0201 """ Executes a hook command via L{util.executeCommand()}. @param type: String describing the type of hook, for logging. @param hook: Hook, in terms of a C{ActionHook} object. 
""" fields = splitCommandLine(hook.command) logger.debug("Executing %s hook for action [%s]: %s", type, hook.action, fields[0:1]) result = executeCommand(command=fields[0:1], args=fields[1:])[0] if result != 0: raise IOError("Error (%d) executing %s hook for action [%s]: %s" % (result, type, hook.action, fields[0:1])) ########################### # _ManagedActionItem class ########################### @total_ordering class _ManagedActionItem(object): """ Class representing a single action to be executed on a managed peer. This class represents a single named action to be executed, and understands how to execute that action. Actions to be executed on a managed peer rely on peer configuration and on the full-backup flag. All other configuration takes place on the remote peer itself. @note: The comparison operators for this class have been implemented to only compare based on the index and SORT_ORDER value, and ignore all other values. This is so that the action set list can be easily sorted first by type (_ActionItem before _ManagedActionItem) and then by index within type. @cvar SORT_ORDER: Defines a sort order to order properly between types. """ SORT_ORDER = 1 def __init__(self, index, name, remotePeers): """ Default constructor. @param index: Index of the item (or C{None}). @param name: Name of the action that is being executed. @param remotePeers: List of remote peers on which to execute the action. """ self.index = index self.name = name self.remotePeers = remotePeers def __eq__(self, other): """Equals operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. 
      The only thing we compare is the item's index.

      @param other: Other object to compare to.

      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.index != other.index:
         if int(self.index or 0) < int(other.index or 0):
            return -1
         else:
            return 1
      else:
         if self.SORT_ORDER != other.SORT_ORDER:
            if int(self.SORT_ORDER or 0) < int(other.SORT_ORDER or 0):
               return -1
            else:
               return 1
      return 0

   def executeAction(self, configPath, options, config):
      """
      Executes the managed action associated with an item.

      @note: Only options.full is actually used.  The rest of the arguments
             exist to satisfy the ActionItem interface.

      @note: Errors here result in a message logged to ERROR, but no thrown
             exception.  The analogy is the stage action, where a problem
             with one host should not kill the entire backup.  Since we're
             logging an error, the administrator will get an email.

      @param configPath: Path to configuration file on disk.
      @param options: Command-line options to be passed to action.
      @param config: Parsed configuration to be passed to action.

      @raise Exception: If there is a problem executing the action.
      """
      for peer in self.remotePeers:
         logger.debug("Executing managed action [%s] on peer [%s].", self.name, peer.name)
         try:
            peer.executeManagedAction(self.name, options.full)
         except IOError as e:
            logger.error(e)  # log the message and go on, so we don't kill the backup


###################
# _ActionSet class
###################

class _ActionSet(object):
   """
   Class representing a set of local actions to be executed.

   This class does four different things.

   First, it ensures that the actions specified on the command-line are
   sensible.  The command-line can only list either built-in actions or
   extended actions specified in configuration.  Also, certain actions (in
   L{NONCOMBINE_ACTIONS}) cannot be combined with other actions.

   Second, the class enforces an execution order on the specified actions.
Any time actions are combined on the command line (either built-in actions or extended actions), we must make sure they get executed in a sensible order. Third, the class ensures that any pre-action or post-action hooks are scheduled and executed appropriately. Hooks are configured by building a dictionary mapping between hook action name and command. Pre-action hooks are executed immediately before their associated action, and post-action hooks are executed immediately after their associated action. Finally, the class properly interleaves local and managed actions so that the same action gets executed first locally and then on managed peers. @sort: __init__, executeActions """ def __init__(self, actions, extensions, options, peers, managed, local): """ Constructor for the C{_ActionSet} class. This is kind of ugly, because the constructor has to set up a lot of data before being able to do anything useful. The following data structures are initialized based on the input: - C{extensionNames}: List of extensions available in configuration - C{preHookMap}: Mapping from action name to list of C{PreActionHook} - C{postHookMap}: Mapping from action name to list of C{PostActionHook} - C{functionMap}: Mapping from action name to Python function - C{indexMap}: Mapping from action name to execution index - C{peerMap}: Mapping from action name to set of C{RemotePeer} - C{actionMap}: Mapping from action name to C{_ActionItem} Once these data structures are set up, the command line is validated to make sure only valid actions have been requested, and in a sensible combination. Then, all of the data is used to build C{self.actionSet}, the set action items to be executed by C{executeActions()}. This list might contain either C{_ActionItem} or C{_ManagedActionItem}. @param actions: Names of actions specified on the command-line. @param extensions: Extended action configuration (i.e. config.extensions) @param options: Options configuration (i.e. 
config.options) @param peers: Peers configuration (i.e. config.peers) @param managed: Whether to include managed actions in the set @param local: Whether to include local actions in the set @raise ValueError: If one of the specified actions is invalid. """ extensionNames = _ActionSet._deriveExtensionNames(extensions) (preHookMap, postHookMap) = _ActionSet._buildHookMaps(options.hooks) functionMap = _ActionSet._buildFunctionMap(extensions) indexMap = _ActionSet._buildIndexMap(extensions) peerMap = _ActionSet._buildPeerMap(options, peers) actionMap = _ActionSet._buildActionMap(managed, local, extensionNames, functionMap, indexMap, preHookMap, postHookMap, peerMap) _ActionSet._validateActions(actions, extensionNames) self.actionSet = _ActionSet._buildActionSet(actions, actionMap) @staticmethod def _deriveExtensionNames(extensions): """ Builds a list of extended actions that are available in configuration. @param extensions: Extended action configuration (i.e. config.extensions) @return: List of extended action names. """ extensionNames = [] if extensions is not None and extensions.actions is not None: for action in extensions.actions: extensionNames.append(action.name) return extensionNames @staticmethod def _buildHookMaps(hooks): """ Build two mappings from action name to configured C{ActionHook}. @param hooks: List of pre- and post-action hooks (i.e. config.options.hooks) @return: Tuple of (pre hook dictionary, post hook dictionary). """ preHookMap = {} postHookMap = {} if hooks is not None: for hook in hooks: if hook.before: if not hook.action in preHookMap: preHookMap[hook.action] = [] preHookMap[hook.action].append(hook) elif hook.after: if not hook.action in postHookMap: postHookMap[hook.action] = [] postHookMap[hook.action].append(hook) return (preHookMap, postHookMap) @staticmethod def _buildFunctionMap(extensions): """ Builds a mapping from named action to action function. @param extensions: Extended action configuration (i.e. 
config.extensions)

      @return: Dictionary mapping action to function.
      """
      functionMap = {}
      functionMap['rebuild'] = executeRebuild
      functionMap['validate'] = executeValidate
      functionMap['initialize'] = executeInitialize
      functionMap['collect'] = executeCollect
      functionMap['stage'] = executeStage
      functionMap['store'] = executeStore
      functionMap['purge'] = executePurge
      if extensions is not None and extensions.actions is not None:
         for action in extensions.actions:
            functionMap[action.name] = getFunctionReference(action.module, action.function)
      return functionMap

   @staticmethod
   def _buildIndexMap(extensions):
      """
      Builds a mapping from action name to proper execution index.

      If extensions configuration is C{None}, or there are no configured
      extended actions, the ordering dictionary will only include the
      built-in actions and their standard indices.

      Otherwise, if the extensions order mode is C{None} or C{"index"},
      actions will be scheduled by explicit index; and if the extensions
      order mode is C{"dependency"}, actions will be scheduled using a
      dependency graph.

      @param extensions: Extended action configuration (i.e. config.extensions)

      @return: Dictionary mapping action name to integer execution index.
""" indexMap = {} if extensions is None or extensions.actions is None or extensions.actions == []: logger.info("Action ordering will use 'index' order mode.") indexMap['rebuild'] = REBUILD_INDEX indexMap['validate'] = VALIDATE_INDEX indexMap['initialize'] = INITIALIZE_INDEX indexMap['collect'] = COLLECT_INDEX indexMap['stage'] = STAGE_INDEX indexMap['store'] = STORE_INDEX indexMap['purge'] = PURGE_INDEX logger.debug("Completed filling in action indices for built-in actions.") logger.info("Action order will be: %s", sortDict(indexMap)) else: if extensions.orderMode is None or extensions.orderMode == "index": logger.info("Action ordering will use 'index' order mode.") indexMap['rebuild'] = REBUILD_INDEX indexMap['validate'] = VALIDATE_INDEX indexMap['initialize'] = INITIALIZE_INDEX indexMap['collect'] = COLLECT_INDEX indexMap['stage'] = STAGE_INDEX indexMap['store'] = STORE_INDEX indexMap['purge'] = PURGE_INDEX logger.debug("Completed filling in action indices for built-in actions.") for action in extensions.actions: indexMap[action.name] = action.index logger.debug("Completed filling in action indices for extended actions.") logger.info("Action order will be: %s", sortDict(indexMap)) else: logger.info("Action ordering will use 'dependency' order mode.") graph = DirectedGraph("dependencies") graph.createVertex("rebuild") graph.createVertex("validate") graph.createVertex("initialize") graph.createVertex("collect") graph.createVertex("stage") graph.createVertex("store") graph.createVertex("purge") for action in extensions.actions: graph.createVertex(action.name) graph.createEdge("collect", "stage") # Collect must run before stage, store or purge graph.createEdge("collect", "store") graph.createEdge("collect", "purge") graph.createEdge("stage", "store") # Stage must run before store or purge graph.createEdge("stage", "purge") graph.createEdge("store", "purge") # Store must run before purge for action in extensions.actions: if action.dependencies.beforeList is not None: 
               for vertex in action.dependencies.beforeList:
                  try:
                     graph.createEdge(action.name, vertex)  # actions that this action must be run before
                  except ValueError:
                     logger.error("Dependency [%s] on extension [%s] is unknown.", vertex, action.name)
                     raise ValueError("Unable to determine proper action order due to invalid dependency.")
            if action.dependencies.afterList is not None:
               for vertex in action.dependencies.afterList:
                  try:
                     graph.createEdge(vertex, action.name)  # actions that this action must be run after
                  except ValueError:
                     logger.error("Dependency [%s] on extension [%s] is unknown.", vertex, action.name)
                     raise ValueError("Unable to determine proper action order due to invalid dependency.")
         try:
            ordering = graph.topologicalSort()
            indexMap = dict([(ordering[i], i+1) for i in range(0, len(ordering))])
            logger.info("Action order will be: %s", ordering)
         except ValueError:
            logger.error("Unable to determine proper action order due to dependency recursion.")
            logger.error("Extensions configuration is invalid (check for loops).")
            raise ValueError("Unable to determine proper action order due to dependency recursion.")
      return indexMap

   @staticmethod
   def _buildActionMap(managed, local, extensionNames, functionMap, indexMap, preHookMap, postHookMap, peerMap):
      """
      Builds a mapping from action name to list of action items.

      We build either C{_ActionItem} or C{_ManagedActionItem} objects here.

      In most cases, the mapping from action name to C{_ActionItem} is 1:1.
      The exception is the "all" action, which is a special case.  However, a
      list is returned in all cases, just for consistency later.  Each
      C{_ActionItem} will be created with a proper function reference and
      index value for execution ordering.

      The mapping from action name to C{_ManagedActionItem} is always 1:1.
      Each managed action item contains a list of peers on which the action
      should be executed.
      @param managed: Whether to include managed actions in the set
      @param local: Whether to include local actions in the set
      @param extensionNames: List of valid extended action names
      @param functionMap: Dictionary mapping action name to Python function
      @param indexMap: Dictionary mapping action name to integer execution index
      @param preHookMap: Dictionary mapping action name to pre hooks (if any) for the action
      @param postHookMap: Dictionary mapping action name to post hooks (if any) for the action
      @param peerMap: Dictionary mapping action name to list of remote peers on which to execute the action

      @return: Dictionary mapping action name to list of C{_ActionItem} objects.
      """
      actionMap = {}
      for name in extensionNames + VALID_ACTIONS:
         if name != 'all':  # do this one later
            function = functionMap[name]
            index = indexMap[name]
            actionMap[name] = []
            if local:
               (preHooks, postHooks) = _ActionSet._deriveHooks(name, preHookMap, postHookMap)
               actionMap[name].append(_ActionItem(index, name, preHooks, postHooks, function))
            if managed:
               if name in peerMap:
                  actionMap[name].append(_ManagedActionItem(index, name, peerMap[name]))
      actionMap['all'] = actionMap['collect'] + actionMap['stage'] + actionMap['store'] + actionMap['purge']
      return actionMap

   @staticmethod
   def _buildPeerMap(options, peers):
      """
      Build a mapping from action name to list of remote peers.

      There will be one entry in the mapping for each managed action.  If
      there are no managed peers, the mapping will be empty.  Only managed
      actions will be listed in the mapping.

      @param options: Option configuration (i.e. config.options)
      @param peers: Peers configuration (i.e. config.peers)
      """
      peerMap = {}
      if peers is not None:
         if peers.remotePeers is not None:
            for peer in peers.remotePeers:
               if peer.managed:
                  remoteUser = _ActionSet._getRemoteUser(options, peer)
                  rshCommand = _ActionSet._getRshCommand(options, peer)
                  cbackCommand = _ActionSet._getCbackCommand(options, peer)
                  managedActions = _ActionSet._getManagedActions(options, peer)
                  remotePeer = RemotePeer(peer.name, None, options.workingDir, remoteUser, None, options.backupUser,
                                          rshCommand, cbackCommand)
                  if managedActions is not None:
                     for managedAction in managedActions:
                        if managedAction in peerMap:
                           if remotePeer not in peerMap[managedAction]:
                              peerMap[managedAction].append(remotePeer)
                        else:
                           peerMap[managedAction] = [ remotePeer, ]
      return peerMap

   @staticmethod
   def _deriveHooks(action, preHookDict, postHookDict):
      """
      Derive pre- and post-action hooks, if any, associated with named action.

      @param action: Name of action to look up
      @param preHookDict: Dictionary mapping action name to pre-action hooks
      @param postHookDict: Dictionary mapping action name to post-action hooks

      @return: Tuple (preHooks, postHooks) per mapping, with None values if there is no hook.
      """
      preHooks = None
      postHooks = None
      if action in preHookDict:
         preHooks = preHookDict[action]
      if action in postHookDict:
         postHooks = postHookDict[action]
      return (preHooks, postHooks)

   @staticmethod
   def _validateActions(actions, extensionNames):
      """
      Validate that the set of specified actions is sensible.

      Any specified action must either be a built-in action or must be among
      the extended actions defined in configuration.  The actions from
      within L{NONCOMBINE_ACTIONS} may not be combined with other actions.

      @param actions: Names of actions specified on the command-line.
      @param extensionNames: Names of extensions specified in configuration.

      @raise ValueError: If one or more configured actions are not valid.
""" if actions is None or actions == []: raise ValueError("No actions specified.") for action in actions: if action not in VALID_ACTIONS and action not in extensionNames: raise ValueError("Action [%s] is not a valid action or extended action." % action) for action in NONCOMBINE_ACTIONS: if action in actions and actions != [ action, ]: raise ValueError("Action [%s] may not be combined with other actions." % action) @staticmethod def _buildActionSet(actions, actionMap): """ Build set of actions to be executed. The set of actions is built in the proper order, so C{executeActions} can spin through the set without thinking about it. Since we've already validated that the set of actions is sensible, we don't take any precautions here to make sure things are combined properly. If the action is listed, it will be "scheduled" for execution. @param actions: Names of actions specified on the command-line. @param actionMap: Dictionary mapping action name to C{_ActionItem} object. @return: Set of action items in proper order. """ actionSet = [] for action in actions: actionSet.extend(actionMap[action]) actionSet.sort() # sort the actions in order by index return actionSet def executeActions(self, configPath, options, config): """ Executes all actions and extended actions, in the proper order. Each action (whether built-in or extension) is executed in an identical manner. The built-in actions will use only the options and config values. We also pass in the config path so that extension modules can re-parse configuration if they want to, to add in extra information. @param configPath: Path to configuration file on disk. @param options: Command-line options to be passed to action functions. @param config: Parsed configuration to be passed to action functions. @raise Exception: If there is a problem executing the actions. 
""" logger.debug("Executing local actions.") for actionItem in self.actionSet: actionItem.executeAction(configPath, options, config) @staticmethod def _getRemoteUser(options, remotePeer): """ Gets the remote user associated with a remote peer. Use peer's if possible, otherwise take from options section. @param options: OptionsConfig object, as from config.options @param remotePeer: Configuration-style remote peer object. @return: Name of remote user associated with remote peer. """ if remotePeer.remoteUser is None: return options.backupUser return remotePeer.remoteUser @staticmethod def _getRshCommand(options, remotePeer): """ Gets the RSH command associated with a remote peer. Use peer's if possible, otherwise take from options section. @param options: OptionsConfig object, as from config.options @param remotePeer: Configuration-style remote peer object. @return: RSH command associated with remote peer. """ if remotePeer.rshCommand is None: return options.rshCommand return remotePeer.rshCommand @staticmethod def _getCbackCommand(options, remotePeer): """ Gets the cback command associated with a remote peer. Use peer's if possible, otherwise take from options section. @param options: OptionsConfig object, as from config.options @param remotePeer: Configuration-style remote peer object. @return: cback command associated with remote peer. """ if remotePeer.cbackCommand is None: return options.cbackCommand return remotePeer.cbackCommand @staticmethod def _getManagedActions(options, remotePeer): """ Gets the managed actions list associated with a remote peer. Use peer's if possible, otherwise take from options section. @param options: OptionsConfig object, as from config.options @param remotePeer: Configuration-style remote peer object. @return: Set of managed actions associated with remote peer. 
""" if remotePeer.managedActions is None: return options.managedActions return remotePeer.managedActions ####################################################################### # Utility functions ####################################################################### #################### # _usage() function #################### def _usage(fd=sys.stderr): """ Prints usage information for the cback3 script. @param fd: File descriptor used to print information. @note: The C{fd} is used rather than C{print} to facilitate unit testing. """ fd.write("\n") fd.write(" Usage: cback3 [switches] action(s)\n") fd.write("\n") fd.write(" The following switches are accepted:\n") fd.write("\n") fd.write(" -h, --help Display this usage/help listing\n") fd.write(" -V, --version Display version information\n") fd.write(" -b, --verbose Print verbose output as well as logging to disk\n") fd.write(" -q, --quiet Run quietly (display no output to the screen)\n") fd.write(" -c, --config Path to config file (default: %s)\n" % DEFAULT_CONFIG) fd.write(" -f, --full Perform a full backup, regardless of configuration\n") fd.write(" -M, --managed Include managed clients when executing actions\n") fd.write(" -N, --managed-only Include ONLY managed clients when executing actions\n") fd.write(" -l, --logfile Path to logfile (default: %s)\n" % DEFAULT_LOGFILE) fd.write(" -o, --owner Logfile ownership, user:group (default: %s:%s)\n" % (DEFAULT_OWNERSHIP[0], DEFAULT_OWNERSHIP[1])) fd.write(" -m, --mode Octal logfile permissions mode (default: %o)\n" % DEFAULT_MODE) fd.write(" -O, --output Record some sub-command (i.e. cdrecord) output to the log\n") fd.write(" -d, --debug Write debugging information to the log (implies --output)\n") fd.write(" -s, --stack Dump a Python stack trace instead of swallowing exceptions\n") # exactly 80 characters in width! 
fd.write(" -D, --diagnostics Print runtime diagnostics to the screen and exit\n") fd.write("\n") fd.write(" The following actions may be specified:\n") fd.write("\n") fd.write(" all Take all normal actions (collect, stage, store, purge)\n") fd.write(" collect Take the collect action\n") fd.write(" stage Take the stage action\n") fd.write(" store Take the store action\n") fd.write(" purge Take the purge action\n") fd.write(" rebuild Rebuild \"this week's\" disc if possible\n") fd.write(" validate Validate configuration only\n") fd.write(" initialize Initialize media for use with Cedar Backup\n") fd.write("\n") fd.write(" You may also specify extended actions that have been defined in\n") fd.write(" configuration.\n") fd.write("\n") fd.write(" You must specify at least one action to take. More than one of\n") fd.write(" the \"collect\", \"stage\", \"store\" or \"purge\" actions and/or\n") fd.write(" extended actions may be specified in any arbitrary order; they\n") fd.write(" will be executed in a sensible order. The \"all\", \"rebuild\",\n") fd.write(" \"validate\", and \"initialize\" actions may not be combined with\n") fd.write(" other actions.\n") fd.write("\n") ###################### # _version() function ###################### def _version(fd=sys.stdout): """ Prints version information for the cback3 script. @param fd: File descriptor used to print information. @note: The C{fd} is used rather than C{print} to facilitate unit testing. """ fd.write("\n") fd.write(" Cedar Backup version %s, released %s.\n" % (VERSION, DATE)) fd.write("\n") fd.write(" Copyright (c) %s %s <%s>.\n" % (COPYRIGHT, AUTHOR, EMAIL)) fd.write(" See CREDITS for a list of included code and other contributors.\n") fd.write(" This is free software; there is NO warranty. 
See the\n")
   fd.write(" GNU General Public License version 2 for copying conditions.\n")
   fd.write("\n")
   fd.write(" Use the --help option for usage information.\n")
   fd.write("\n")


##########################
# _diagnostics() function
##########################

def _diagnostics(fd=sys.stdout):
   """
   Prints runtime diagnostics information.
   @param fd: File descriptor used to print information.
   @note: The C{fd} is used rather than C{print} to facilitate unit testing.
   """
   fd.write("\n")
   fd.write("Diagnostics:\n")
   fd.write("\n")
   Diagnostics().printDiagnostics(fd=fd, prefix=" ")
   fd.write("\n")


##########################
# setupLogging() function
##########################

def setupLogging(options):
   """
   Set up logging based on command-line options.

   There are two kinds of logging: flow logging and output logging.  Output
   logging contains information about system commands executed by Cedar
   Backup, for instance the calls to C{mkisofs} or C{mount}, etc.  Flow
   logging contains error and informational messages used to understand
   program flow.  Flow log messages and output log messages are written to
   two different loggers (C{CedarBackup3.log} and C{CedarBackup3.output}).
   Flow log messages are written at the ERROR, INFO and DEBUG log levels,
   while output log messages are generally only written at the INFO log
   level.

   By default, output logging is disabled.  When the C{options.output} or
   C{options.debug} flags are set, output logging will be written to the
   configured logfile.  Output logging is never written to the screen.

   By default, flow logging is enabled at the ERROR level to the screen and
   at the INFO level to the configured logfile.  If the C{options.quiet}
   flag is set, flow logging is enabled at the INFO level to the configured
   logfile only (i.e. no output will be sent to the screen).  If the
   C{options.verbose} flag is set, flow logging is enabled at the INFO level
   to both the screen and the configured logfile.  If the C{options.debug}
   flag is set, flow logging is enabled at the DEBUG level to both the
   screen and the configured logfile.

   @param options: Command-line options.
   @type options: L{Options} object

   @return: Path to logfile on disk.
   """
   logfile = _setupLogfile(options)
   _setupFlowLogging(logfile, options)
   _setupOutputLogging(logfile, options)
   return logfile

def _setupLogfile(options):
   """
   Sets up and creates logfile as needed.

   If the logfile already exists on disk, it will be left as-is, under the
   assumption that it was created with appropriate ownership and
   permissions.  If the logfile does not exist on disk, it will be created
   as an empty file.  Ownership and permissions will remain at their
   defaults unless user/group and/or mode are set in the options.  We
   ignore errors setting the indicated user and group.

   @note: This function is vulnerable to a race condition.  If the log file
   does not exist when the function is run, it will attempt to create the
   file as safely as possible (using C{O_CREAT}).  If two processes attempt
   to create the file at the same time, then one of them will fail.  In
   practice, this shouldn't really be a problem, but it might happen
   occasionally if two instances of cback3 run concurrently or if cback3
   collides with logrotate or something.

   @param options: Command-line options.

   @return: Path to logfile on disk.
""" if options.logfile is None: logfile = DEFAULT_LOGFILE else: logfile = options.logfile if not os.path.exists(logfile): mode = DEFAULT_MODE if options.mode is None else options.mode orig = os.umask(0) # Per os.open(), "When computing mode, the current umask value is first masked out" try: fd = os.open(logfile, os.O_RDWR|os.O_CREAT|os.O_APPEND, mode) with os.fdopen(fd, "a+") as f: f.write("") finally: os.umask(orig) try: if options.owner is None or len(options.owner) < 2: (uid, gid) = getUidGid(DEFAULT_OWNERSHIP[0], DEFAULT_OWNERSHIP[1]) else: (uid, gid) = getUidGid(options.owner[0], options.owner[1]) os.chown(logfile, uid, gid) except: pass return logfile def _setupFlowLogging(logfile, options): """ Sets up flow logging. @param logfile: Path to logfile on disk. @param options: Command-line options. """ flowLogger = logging.getLogger("CedarBackup3.log") flowLogger.setLevel(logging.DEBUG) # let the logger see all messages _setupDiskFlowLogging(flowLogger, logfile, options) _setupScreenFlowLogging(flowLogger, options) def _setupOutputLogging(logfile, options): """ Sets up command output logging. @param logfile: Path to logfile on disk. @param options: Command-line options. """ outputLogger = logging.getLogger("CedarBackup3.output") outputLogger.setLevel(logging.DEBUG) # let the logger see all messages _setupDiskOutputLogging(outputLogger, logfile, options) def _setupDiskFlowLogging(flowLogger, logfile, options): """ Sets up on-disk flow logging. @param flowLogger: Python flow logger object. @param logfile: Path to logfile on disk. @param options: Command-line options. """ formatter = logging.Formatter(fmt=DISK_LOG_FORMAT, datefmt=DATE_FORMAT) handler = logging.FileHandler(logfile, mode="a") handler.setFormatter(formatter) if options.debug: handler.setLevel(logging.DEBUG) else: handler.setLevel(logging.INFO) flowLogger.addHandler(handler) def _setupScreenFlowLogging(flowLogger, options): """ Sets up on-screen flow logging. 
@param flowLogger: Python flow logger object. @param options: Command-line options. """ formatter = logging.Formatter(fmt=SCREEN_LOG_FORMAT) handler = logging.StreamHandler(SCREEN_LOG_STREAM) handler.setFormatter(formatter) if options.quiet: handler.setLevel(logging.CRITICAL) # effectively turn it off elif options.verbose: if options.debug: handler.setLevel(logging.DEBUG) else: handler.setLevel(logging.INFO) else: handler.setLevel(logging.ERROR) flowLogger.addHandler(handler) def _setupDiskOutputLogging(outputLogger, logfile, options): """ Sets up on-disk command output logging. @param outputLogger: Python command output logger object. @param logfile: Path to logfile on disk. @param options: Command-line options. """ formatter = logging.Formatter(fmt=DISK_OUTPUT_FORMAT, datefmt=DATE_FORMAT) handler = logging.FileHandler(logfile, mode="a") handler.setFormatter(formatter) if options.debug or options.output: handler.setLevel(logging.DEBUG) else: handler.setLevel(logging.CRITICAL) # effectively turn it off outputLogger.addHandler(handler) ############################### # setupPathResolver() function ############################### def setupPathResolver(config): """ Set up the path resolver singleton based on configuration. Cedar Backup's path resolver is implemented in terms of a singleton, the L{PathResolverSingleton} class. This function takes options configuration, converts it into the dictionary form needed by the singleton, and then initializes the singleton. After that, any function that needs to resolve the path of a command can use the singleton. 
   @param config: Configuration
   @type config: L{Config} object
   """
   mapping = {}
   if config.options.overrides is not None:
      for override in config.options.overrides:
         mapping[override.command] = override.absolutePath
   singleton = PathResolverSingleton()
   singleton.fill(mapping)


#########################################################################
# Options class definition
#########################################################################

@total_ordering
class Options(object):

   ######################
   # Class documentation
   ######################

   """
   Class representing command-line options for the cback3 script.

   The C{Options} class is a Python object representation of the
   command-line options of the cback3 script.

   The object representation is two-way: a command line string or a list of
   command line arguments can be used to create an C{Options} object, and
   then changes to the object can be propagated back to a list of
   command-line arguments or to a command-line string.  An C{Options}
   object can even be created from scratch programmatically (if you have a
   need for that).

   There are two main levels of validation in the C{Options} class.  The
   first is field-level validation.  Field-level validation comes into play
   when a given field in an object is assigned to or updated.  We use
   Python's C{property} functionality to enforce specific validations on
   field values, and in some places we even use customized list classes to
   enforce validations on list members.  You should expect to catch a
   C{ValueError} exception when making assignments to fields if you are
   programmatically filling an object.

   The second level of validation is post-completion validation.  Certain
   validations don't make sense until an object representation of options
   is fully "complete".  We don't want these validations to apply all of
   the time, because it would make building up a valid object from scratch
   a real pain.  For instance, we might have to do things in the right
   order to keep from throwing exceptions, etc.
All of these post-completion validations are encapsulated in the L{Options.validate} method. This method can be called at any time by a client, and will always be called immediately after creating a C{Options} object from a command line and before exporting a C{Options} object back to a command line. This way, we get acceptable ease-of-use but we also don't accept or emit invalid command lines. @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__ """ ############## # Constructor ############## def __init__(self, argumentList=None, argumentString=None, validate=True): """ Initializes an options object. If you initialize the object without passing either C{argumentList} or C{argumentString}, the object will be empty and will be invalid until it is filled in properly. No reference to the original arguments is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded. The argument list is assumed to be a list of arguments, not including the name of the command, something like C{sys.argv[1:]}. If you pass C{sys.argv} instead, things are not going to work. The argument string will be parsed into an argument list by the L{util.splitCommandLine} function (see the documentation for that function for some important notes about its limitations). There is an assumption that the resulting list will be equivalent to C{sys.argv[1:]}, just like C{argumentList}. Unless the C{validate} argument is C{False}, the L{Options.validate} method will be called (with its default arguments) after successfully parsing any passed-in command line. This validation ensures that appropriate actions, etc. have been specified. Keep in mind that even if C{validate} is C{False}, it might not be possible to parse the passed-in command line, so an exception might still be raised. @note: The command line format is specified by the L{_usage} function. 
      Call L{_usage} to see a usage statement for the cback3 script.

      @note: It is strongly suggested that the C{validate} option always be
      set to C{True} (the default) unless there is a specific need to read
      in invalid command line arguments.

      @param argumentList: Command line for a program.
      @type argumentList: List of arguments, i.e. C{sys.argv[1:]}

      @param argumentString: Command line for a program.
      @type argumentString: String, i.e. "cback3 --verbose stage store"

      @param validate: Validate the command line after parsing it.
      @type validate: Boolean true/false.

      @raise getopt.GetoptError: If the command-line arguments could not be parsed.
      @raise ValueError: If the command-line arguments are invalid.
      """
      self._help = False
      self._version = False
      self._verbose = False
      self._quiet = False
      self._config = None
      self._full = False
      self._managed = False
      self._managedOnly = False
      self._logfile = None
      self._owner = None
      self._mode = None
      self._output = False
      self._debug = False
      self._stacktrace = False
      self._diagnostics = False
      self._actions = None
      self.actions = []  # initialize to an empty list; remainder are OK
      if argumentList is not None and argumentString is not None:
         raise ValueError("Use either argumentList or argumentString, but not both.")
      if argumentString is not None:
         argumentList = splitCommandLine(argumentString)
      if argumentList is not None:
         self._parseArgumentList(argumentList)
         if validate:
            self.validate()


   #########################
   # String representations
   #########################

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return self.buildArgumentString(validate=False)

   def __str__(self):
      """
      Informal string representation for class instance.
""" return self.__repr__() ############################# # Standard comparison method ############################# def __eq__(self, other): """Equals operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.help != other.help: if self.help < other.help: return -1 else: return 1 if self.version != other.version: if self.version < other.version: return -1 else: return 1 if self.verbose != other.verbose: if self.verbose < other.verbose: return -1 else: return 1 if self.quiet != other.quiet: if self.quiet < other.quiet: return -1 else: return 1 if self.config != other.config: if self.config < other.config: return -1 else: return 1 if self.full != other.full: if self.full < other.full: return -1 else: return 1 if self.managed != other.managed: if self.managed < other.managed: return -1 else: return 1 if self.managedOnly != other.managedOnly: if self.managedOnly < other.managedOnly: return -1 else: return 1 if self.logfile != other.logfile: if str(self.logfile or "") < str(other.logfile or ""): return -1 else: return 1 if self.owner != other.owner: if str(self.owner or "") < str(other.owner or ""): return -1 else: return 1 if self.mode != other.mode: if int(self.mode or 0) < int(other.mode or 0): return -1 else: return 1 if self.output != other.output: if self.output < other.output: return -1 else: return 1 if self.debug != other.debug: if self.debug < 
other.debug: return -1 else: return 1 if self.stacktrace != other.stacktrace: if self.stacktrace < other.stacktrace: return -1 else: return 1 if self.diagnostics != other.diagnostics: if self.diagnostics < other.diagnostics: return -1 else: return 1 if self.actions != other.actions: if self.actions < other.actions: return -1 else: return 1 return 0 ############# # Properties ############# def _setHelp(self, value): """ Property target used to set the help flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._help = True else: self._help = False def _getHelp(self): """ Property target used to get the help flag. """ return self._help def _setVersion(self, value): """ Property target used to set the version flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._version = True else: self._version = False def _getVersion(self): """ Property target used to get the version flag. """ return self._version def _setVerbose(self, value): """ Property target used to set the verbose flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._verbose = True else: self._verbose = False def _getVerbose(self): """ Property target used to get the verbose flag. """ return self._verbose def _setQuiet(self, value): """ Property target used to set the quiet flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._quiet = True else: self._quiet = False def _getQuiet(self): """ Property target used to get the quiet flag. """ return self._quiet def _setConfig(self, value): """ Property target used to set the config parameter. """ if value is not None: if len(value) < 1: raise ValueError("The config parameter must be a non-empty string.") self._config = value def _getConfig(self): """ Property target used to get the config parameter. """ return self._config def _setFull(self, value): """ Property target used to set the full flag. 
No validations, but we normalize the value to C{True} or C{False}. """ if value: self._full = True else: self._full = False def _getFull(self): """ Property target used to get the full flag. """ return self._full def _setManaged(self, value): """ Property target used to set the managed flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._managed = True else: self._managed = False def _getManaged(self): """ Property target used to get the managed flag. """ return self._managed def _setManagedOnly(self, value): """ Property target used to set the managedOnly flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._managedOnly = True else: self._managedOnly = False def _getManagedOnly(self): """ Property target used to get the managedOnly flag. """ return self._managedOnly def _setLogfile(self, value): """ Property target used to set the logfile parameter. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if len(value) < 1: raise ValueError("The logfile parameter must be a non-empty string.") self._logfile = encodePath(value) def _getLogfile(self): """ Property target used to get the logfile parameter. """ return self._logfile def _setOwner(self, value): """ Property target used to set the owner parameter. If not C{None}, the owner must be a C{(user,group)} tuple or list. Strings (and inherited children of strings) are explicitly disallowed. The value will be normalized to a tuple. @raise ValueError: If the value is not valid. 
""" if value is None: self._owner = None else: if isinstance(value, str): raise ValueError("Must specify user and group tuple for owner parameter.") if len(value) != 2: raise ValueError("Must specify user and group tuple for owner parameter.") if len(value[0]) < 1 or len(value[1]) < 1: raise ValueError("User and group tuple values must be non-empty strings.") self._owner = (value[0], value[1]) def _getOwner(self): """ Property target used to get the owner parameter. The parameter is a tuple of C{(user, group)}. """ return self._owner def _setMode(self, value): """ Property target used to set the mode parameter. """ if value is None: self._mode = None else: try: if isinstance(value, str): value = int(value, 8) else: value = int(value) except TypeError: raise ValueError("Mode must be an octal integer >= 0, i.e. 644.") if value < 0: raise ValueError("Mode must be an octal integer >= 0. i.e. 644.") self._mode = value def _getMode(self): """ Property target used to get the mode parameter. """ return self._mode def _setOutput(self, value): """ Property target used to set the output flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._output = True else: self._output = False def _getOutput(self): """ Property target used to get the output flag. """ return self._output def _setDebug(self, value): """ Property target used to set the debug flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._debug = True else: self._debug = False def _getDebug(self): """ Property target used to get the debug flag. """ return self._debug def _setStacktrace(self, value): """ Property target used to set the stacktrace flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._stacktrace = True else: self._stacktrace = False def _getStacktrace(self): """ Property target used to get the stacktrace flag. 
""" return self._stacktrace def _setDiagnostics(self, value): """ Property target used to set the diagnostics flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._diagnostics = True else: self._diagnostics = False def _getDiagnostics(self): """ Property target used to get the diagnostics flag. """ return self._diagnostics def _setActions(self, value): """ Property target used to set the actions list. We don't restrict the contents of actions. They're validated somewhere else. @raise ValueError: If the value is not valid. """ if value is None: self._actions = None else: try: saved = self._actions self._actions = [] self._actions.extend(value) except Exception as e: self._actions = saved raise e def _getActions(self): """ Property target used to get the actions list. """ return self._actions help = property(_getHelp, _setHelp, None, "Command-line help (C{-h,--help}) flag.") version = property(_getVersion, _setVersion, None, "Command-line version (C{-V,--version}) flag.") verbose = property(_getVerbose, _setVerbose, None, "Command-line verbose (C{-b,--verbose}) flag.") quiet = property(_getQuiet, _setQuiet, None, "Command-line quiet (C{-q,--quiet}) flag.") config = property(_getConfig, _setConfig, None, "Command-line configuration file (C{-c,--config}) parameter.") full = property(_getFull, _setFull, None, "Command-line full-backup (C{-f,--full}) flag.") managed = property(_getManaged, _setManaged, None, "Command-line managed (C{-M,--managed}) flag.") managedOnly = property(_getManagedOnly, _setManagedOnly, None, "Command-line managed-only (C{-N,--managed-only}) flag.") logfile = property(_getLogfile, _setLogfile, None, "Command-line logfile (C{-l,--logfile}) parameter.") owner = property(_getOwner, _setOwner, None, "Command-line owner (C{-o,--owner}) parameter, as tuple C{(user,group)}.") mode = property(_getMode, _setMode, None, "Command-line mode (C{-m,--mode}) parameter.") output = property(_getOutput, _setOutput, None, 
"Command-line output (C{-O,--output}) flag.") debug = property(_getDebug, _setDebug, None, "Command-line debug (C{-d,--debug}) flag.") stacktrace = property(_getStacktrace, _setStacktrace, None, "Command-line stacktrace (C{-s,--stack}) flag.") diagnostics = property(_getDiagnostics, _setDiagnostics, None, "Command-line diagnostics (C{-D,--diagnostics}) flag.") actions = property(_getActions, _setActions, None, "Command-line actions list.") ################## # Utility methods ################## def validate(self): """ Validates command-line options represented by the object. Unless C{--help} or C{--version} are supplied, at least one action must be specified. Other validations (as for allowed values for particular options) will be taken care of at assignment time by the properties functionality. @note: The command line format is specified by the L{_usage} function. Call L{_usage} to see a usage statement for the cback3 script. @raise ValueError: If one of the validations fails. """ if not self.help and not self.version and not self.diagnostics: if self.actions is None or len(self.actions) == 0: raise ValueError("At least one action must be specified.") if self.managed and self.managedOnly: raise ValueError("The --managed and --managed-only options may not be combined.") def buildArgumentList(self, validate=True): """ Extracts options into a list of command line arguments. The original order of the various arguments (if, indeed, the object was initialized with a command-line) is not preserved in this generated argument list. Besides that, the argument list is normalized to use the long option names (i.e. --version rather than -V). The resulting list will be suitable for passing back to the constructor in the C{argumentList} parameter. Unlike L{buildArgumentString}, string arguments are not quoted here, because there is no need for it. 
Unless the C{validate} parameter is C{False}, the L{Options.validate} method will be called (with its default arguments) against the options before extracting the command line. If the options are not valid, then an argument list will not be extracted. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to extract an invalid command line. @param validate: Validate the options before extracting the command line. @type validate: Boolean true/false. @return: List representation of command-line arguments. @raise ValueError: If options within the object are invalid. """ if validate: self.validate() argumentList = [] if self._help: argumentList.append("--help") if self.version: argumentList.append("--version") if self.verbose: argumentList.append("--verbose") if self.quiet: argumentList.append("--quiet") if self.config is not None: argumentList.append("--config") argumentList.append(self.config) if self.full: argumentList.append("--full") if self.managed: argumentList.append("--managed") if self.managedOnly: argumentList.append("--managed-only") if self.logfile is not None: argumentList.append("--logfile") argumentList.append(self.logfile) if self.owner is not None: argumentList.append("--owner") argumentList.append("%s:%s" % (self.owner[0], self.owner[1])) if self.mode is not None: argumentList.append("--mode") argumentList.append("%o" % self.mode) if self.output: argumentList.append("--output") if self.debug: argumentList.append("--debug") if self.stacktrace: argumentList.append("--stack") if self.diagnostics: argumentList.append("--diagnostics") if self.actions is not None: for action in self.actions: argumentList.append(action) return argumentList def buildArgumentString(self, validate=True): """ Extracts options into a string of command-line arguments. 
The original order of the various arguments (if, indeed, the object was initialized with a command-line) is not preserved in this generated argument string. Besides that, the argument string is normalized to use the long option names (i.e. --version rather than -V) and to quote all string arguments with double quotes (C{"}). The resulting string will be suitable for passing back to the constructor in the C{argumentString} parameter. Unless the C{validate} parameter is C{False}, the L{Options.validate} method will be called (with its default arguments) against the options before extracting the command line. If the options are not valid, then an argument string will not be extracted. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to extract an invalid command line. @param validate: Validate the options before extracting the command line. @type validate: Boolean true/false. @return: String representation of command-line arguments. @raise ValueError: If options within the object are invalid. 
""" if validate: self.validate() argumentString = "" if self._help: argumentString += "--help " if self.version: argumentString += "--version " if self.verbose: argumentString += "--verbose " if self.quiet: argumentString += "--quiet " if self.config is not None: argumentString += "--config \"%s\" " % self.config if self.full: argumentString += "--full " if self.managed: argumentString += "--managed " if self.managedOnly: argumentString += "--managed-only " if self.logfile is not None: argumentString += "--logfile \"%s\" " % self.logfile if self.owner is not None: argumentString += "--owner \"%s:%s\" " % (self.owner[0], self.owner[1]) if self.mode is not None: argumentString += "--mode %o " % self.mode if self.output: argumentString += "--output " if self.debug: argumentString += "--debug " if self.stacktrace: argumentString += "--stack " if self.diagnostics: argumentString += "--diagnostics " if self.actions is not None: for action in self.actions: argumentString += "\"%s\" " % action return argumentString def _parseArgumentList(self, argumentList): """ Internal method to parse a list of command-line arguments. Most of the validation we do here has to do with whether the arguments can be parsed and whether any values which exist are valid. We don't do any validation as to whether required elements exist or whether elements exist in the proper combination (instead, that's the job of the L{validate} method). For any of the options which supply parameters, if the option is duplicated with long and short switches (i.e. C{-l} and a C{--logfile}) then the long switch is used. If the same option is duplicated with the same switch (long or short), then the last entry on the command line is used. @param argumentList: List of arguments to a command. @type argumentList: List of arguments to a command, i.e. C{sys.argv[1:]} @raise ValueError: If the argument list cannot be successfully parsed. 
""" switches = { } opts, self.actions = getopt.getopt(argumentList, SHORT_SWITCHES, LONG_SWITCHES) for o, a in opts: # push the switches into a hash switches[o] = a if "-h" in switches or "--help" in switches: self.help = True if "-V" in switches or "--version" in switches: self.version = True if "-b" in switches or "--verbose" in switches: self.verbose = True if "-q" in switches or "--quiet" in switches: self.quiet = True if "-c" in switches: self.config = switches["-c"] if "--config" in switches: self.config = switches["--config"] if "-f" in switches or "--full" in switches: self.full = True if "-M" in switches or "--managed" in switches: self.managed = True if "-N" in switches or "--managed-only" in switches: self.managedOnly = True if "-l" in switches: self.logfile = switches["-l"] if "--logfile" in switches: self.logfile = switches["--logfile"] if "-o" in switches: self.owner = switches["-o"].split(":", 1) if "--owner" in switches: self.owner = switches["--owner"].split(":", 1) if "-m" in switches: self.mode = switches["-m"] if "--mode" in switches: self.mode = switches["--mode"] if "-O" in switches or "--output" in switches: self.output = True if "-d" in switches or "--debug" in switches: self.debug = True if "-s" in switches or "--stack" in switches: self.stacktrace = True if "-D" in switches or "--diagnostics" in switches: self.diagnostics = True ######################################################################### # Main routine ######################################################################## if __name__ == "__main__": result = cli() sys.exit(result) CedarBackup3-3.1.6/CedarBackup3/actions/0002775000175000017500000000000012657665551021405 5ustar pronovicpronovic00000000000000CedarBackup3-3.1.6/CedarBackup3/actions/store.py0000664000175000017500000004243312642030750023074 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
# # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2007,2010,2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Implements the standard 'store' action. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Implements the standard 'store' action. @sort: executeStore, writeImage, writeStoreIndicator, consistencyCheck @author: Kenneth J. 
Pronovici @author: Dmitry Rutsky """ ######################################################################## # Imported modules ######################################################################## # System modules import sys import os import logging import datetime import tempfile # Cedar Backup modules from CedarBackup3.filesystem import compareContents from CedarBackup3.util import isStartOfWeek from CedarBackup3.util import mount, unmount, displayBytes from CedarBackup3.actions.util import createWriter, checkMediaState, buildMediaLabel, writeIndicatorFile from CedarBackup3.actions.constants import DIR_TIME_FORMAT, STAGE_INDICATOR, STORE_INDICATOR ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup3.log.actions.store") ######################################################################## # Public functions ######################################################################## ########################## # executeStore() function ########################## def executeStore(configPath, options, config): """ Executes the store backup action. @note: The rebuild action and the store action are very similar. The main difference is that while store only stores a single day's staging directory, the rebuild action operates on multiple staging directories. @note: When the store action is complete, we will write a store indicator to the daily staging directory we used, so it's obvious that the store action has completed. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. @raise ValueError: Under many generic error conditions @raise IOError: If there are problems reading or writing files. 
""" logger.debug("Executing the 'store' action.") if sys.platform == "darwin": logger.warning("Warning: the store action is not fully supported on Mac OS X.") logger.warning("See the Cedar Backup software manual for further information.") if config.options is None or config.store is None: raise ValueError("Store configuration is not properly filled in.") if config.store.checkMedia: checkMediaState(config.store) # raises exception if media is not initialized rebuildMedia = options.full logger.debug("Rebuild media flag [%s]", rebuildMedia) todayIsStart = isStartOfWeek(config.options.startingDay) stagingDirs = _findCorrectDailyDir(options, config) writeImageBlankSafe(config, rebuildMedia, todayIsStart, config.store.blankBehavior, stagingDirs) if config.store.checkData: if sys.platform == "darwin": logger.warning("Warning: consistency check cannot be run successfully on Mac OS X.") logger.warning("See the Cedar Backup software manual for further information.") else: logger.debug("Running consistency check of media.") consistencyCheck(config, stagingDirs) writeStoreIndicator(config, stagingDirs) logger.info("Executed the 'store' action successfully.") ######################## # writeImage() function ######################## def writeImage(config, newDisc, stagingDirs): """ Builds and writes an ISO image containing the indicated stage directories. The generated image will contain each of the staging directories listed in C{stagingDirs}. The directories will be placed into the image at the root by date, so staging directory C{/opt/stage/2005/02/10} will be placed into the disc at C{/2005/02/10}. @note: This function is implemented in terms of L{writeImageBlankSafe}. The C{newDisc} flag is passed in for both C{rebuildMedia} and C{todayIsStart}. @param config: Config object. @param newDisc: Indicates whether the disc should be re-initialized @param stagingDirs: Dictionary mapping directory path to date suffix. 
@raise ValueError: Under many generic error conditions @raise IOError: If there is a problem writing the image to disc. """ writeImageBlankSafe(config, newDisc, newDisc, None, stagingDirs) ################################# # writeImageBlankSafe() function ################################# def writeImageBlankSafe(config, rebuildMedia, todayIsStart, blankBehavior, stagingDirs): """ Builds and writes an ISO image containing the indicated stage directories. The generated image will contain each of the staging directories listed in C{stagingDirs}. The directories will be placed into the image at the root by date, so staging directory C{/opt/stage/2005/02/10} will be placed into the disc at C{/2005/02/10}. The media will always be written with a media label specific to Cedar Backup. This function is similar to L{writeImage}, but tries to implement a smarter blanking strategy. First, the media is always blanked if the C{rebuildMedia} flag is true. Then, if C{rebuildMedia} is false, blanking behavior and C{todayIsStart} come into effect:: If no blanking behavior is specified, and it is the start of the week, the disc will be blanked If blanking behavior is specified, and either the blank mode is "daily" or the blank mode is "weekly" and it is the start of the week, then the disc will be blanked if it looks like the weekly backup will not fit onto the media. Otherwise, the disc will not be blanked How do we decide whether the weekly backup will fit onto the media? That is what the blanking factor is used for. The following formula is used:: will backup fit? = (bytes available / (1 + bytes required)) <= blankFactor The blanking factor will vary from setup to setup, and will probably require some experimentation to get it right. @param config: Config object.
@param rebuildMedia: Indicates whether media should be rebuilt @param todayIsStart: Indicates whether today is the starting day of the week @param blankBehavior: Blank behavior from configuration, or C{None} to use default behavior @param stagingDirs: Dictionary mapping directory path to date suffix. @raise ValueError: Under many generic error conditions @raise IOError: If there is a problem writing the image to disc. """ mediaLabel = buildMediaLabel() writer = createWriter(config) writer.initializeImage(True, config.options.workingDir, mediaLabel) # default value for newDisc for stageDir in list(stagingDirs.keys()): logger.debug("Adding stage directory [%s].", stageDir) dateSuffix = stagingDirs[stageDir] writer.addImageEntry(stageDir, dateSuffix) newDisc = _getNewDisc(writer, rebuildMedia, todayIsStart, blankBehavior) writer.setImageNewDisc(newDisc) writer.writeImage() def _getNewDisc(writer, rebuildMedia, todayIsStart, blankBehavior): """ Gets a value for the newDisc flag based on blanking factor rules. The blanking factor rules are described above by L{writeImageBlankSafe}. @param writer: Previously configured image writer containing image entries @param rebuildMedia: Indicates whether media should be rebuilt @param todayIsStart: Indicates whether today is the starting day of the week @param blankBehavior: Blank behavior from configuration, or C{None} to use default behavior @return: newDisc flag to be set on writer. 
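As a worked example of the ratio test described above, with made-up capacity numbers (this standalone helper is a sketch, not the real _getNewDisc() implementation):

```python
# The 1.0 in the denominator guards against dividing by zero when the
# estimated image size is zero bytes.
def needs_blank(bytesAvailable, bytesRequired, blankFactor):
    ratio = bytesAvailable / (1.0 + bytesRequired)
    return ratio <= blankFactor
```

With 100 MB available and a 60 MB image the ratio is roughly 1.67, so a configured blank factor of 2.0 would blank the disc while a factor of 1.5 would not.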
""" newDisc = False if rebuildMedia: newDisc = True logger.debug("Setting new disc flag based on rebuildMedia flag.") else: if blankBehavior is None: logger.debug("Default media blanking behavior is in effect.") if todayIsStart: newDisc = True logger.debug("Setting new disc flag based on todayIsStart.") else: # note: validation says we can assume that behavior is fully filled in if it exists at all logger.debug("Optimized media blanking behavior is in effect based on configuration.") if blankBehavior.blankMode == "daily" or (blankBehavior.blankMode == "weekly" and todayIsStart): logger.debug("New disc flag will be set based on blank factor calculation.") blankFactor = float(blankBehavior.blankFactor) logger.debug("Configured blanking factor: %.2f", blankFactor) available = writer.retrieveCapacity().bytesAvailable logger.debug("Bytes available: %s", displayBytes(available)) required = writer.getEstimatedImageSize() logger.debug("Bytes required: %s", displayBytes(required)) ratio = available / (1.0 + required) logger.debug("Calculated ratio: %.2f", ratio) newDisc = (ratio <= blankFactor) logger.debug("%.2f <= %.2f ? %s", ratio, blankFactor, newDisc) else: logger.debug("No blank factor calculation is required based on configuration.") logger.debug("New disc flag [%s].", newDisc) return newDisc ################################# # writeStoreIndicator() function ################################# def writeStoreIndicator(config, stagingDirs): """ Writes a store indicator file into staging directories. The store indicator is written into each of the staging directories when either a store or rebuild action has written the staging directory to disc. @param config: Config object. @param stagingDirs: Dictionary mapping directory path to date suffix. 
""" for stagingDir in list(stagingDirs.keys()): writeIndicatorFile(stagingDir, STORE_INDICATOR, config.options.backupUser, config.options.backupGroup) ############################## # consistencyCheck() function ############################## def consistencyCheck(config, stagingDirs): """ Runs a consistency check against media in the backup device. It seems that sometimes, it's possible to create a corrupted multisession disc (i.e. one that cannot be read) although no errors were encountered while writing the disc. This consistency check makes sure that the data read from disc matches the data that was used to create the disc. The function mounts the device at a temporary mount point in the working directory, and then compares the indicated staging directories in the staging directory and on the media. The comparison is done via functionality in C{filesystem.py}. If no exceptions are thrown, there were no problems with the consistency check. A positive confirmation of "no problems" is also written to the log with C{info} priority. @warning: The implementation of this function is very UNIX-specific. @param config: Config object. @param stagingDirs: Dictionary mapping directory path to date suffix. @raise ValueError: If the two directories are not equivalent. @raise IOError: If there is a problem working with the media. """ logger.debug("Running consistency check.") mountPoint = tempfile.mkdtemp(dir=config.options.workingDir) try: mount(config.store.devicePath, mountPoint, "iso9660") for stagingDir in list(stagingDirs.keys()): discDir = os.path.join(mountPoint, stagingDirs[stagingDir]) logger.debug("Checking [%s] vs. [%s].", stagingDir, discDir) compareContents(stagingDir, discDir, verbose=True) logger.info("Consistency check completed for [%s]. 
No problems found.", stagingDir) finally: unmount(mountPoint, True, 5, 1) # try 5 times, and remove mount point when done ######################################################################## # Private utility functions ######################################################################## ######################### # _findCorrectDailyDir() ######################### def _findCorrectDailyDir(options, config): """ Finds the correct daily staging directory to be written to disk. In Cedar Backup v1.0, we assumed that the correct staging directory matched the current date. However, that has problems. In particular, it breaks down if collect is on one side of midnite and stage is on the other, or if certain processes span midnite. For v2.0, I'm trying to be smarter. I'll first check the current day. If that directory is found, it's good enough. If it's not found, I'll look for a valid directory from the day before or day after I{which has not yet been staged, according to the stage indicator file}. The first one I find, I'll use. If I use a directory other than for the current day I{and} C{config.store.warnMidnite} is set, a warning will be put in the log. There is one exception to this rule. If the C{options.full} flag is set, then the special "span midnite" logic will be disabled and any existing store indicator will be ignored. I did this because I think that most users who run C{cback3 --full store} twice in a row expect the command to generate two identical discs. With the other rule in place, running that command twice in a row could result in an error ("no unstored directory exists") or could even cause a completely unexpected directory to be written to disc (if some previous day's contents had not yet been written). @note: This code is probably longer and more verbose than it needs to be, but at least it's straightforward. @param options: Options object. @param config: Config object. @return: Correct staging dir, as a dict mapping directory to date suffix. 
@raise IOError: If the staging directory cannot be found. """ oneDay = datetime.timedelta(days=1) today = datetime.date.today() yesterday = today - oneDay tomorrow = today + oneDay todayDate = today.strftime(DIR_TIME_FORMAT) yesterdayDate = yesterday.strftime(DIR_TIME_FORMAT) tomorrowDate = tomorrow.strftime(DIR_TIME_FORMAT) todayPath = os.path.join(config.stage.targetDir, todayDate) yesterdayPath = os.path.join(config.stage.targetDir, yesterdayDate) tomorrowPath = os.path.join(config.stage.targetDir, tomorrowDate) todayStageInd = os.path.join(todayPath, STAGE_INDICATOR) yesterdayStageInd = os.path.join(yesterdayPath, STAGE_INDICATOR) tomorrowStageInd = os.path.join(tomorrowPath, STAGE_INDICATOR) todayStoreInd = os.path.join(todayPath, STORE_INDICATOR) yesterdayStoreInd = os.path.join(yesterdayPath, STORE_INDICATOR) tomorrowStoreInd = os.path.join(tomorrowPath, STORE_INDICATOR) if options.full: if os.path.isdir(todayPath) and os.path.exists(todayStageInd): logger.info("Store process will use current day's stage directory [%s]", todayPath) return { todayPath:todayDate } raise IOError("Unable to find staging directory to store (only tried today due to full option).") else: if os.path.isdir(todayPath) and os.path.exists(todayStageInd) and not os.path.exists(todayStoreInd): logger.info("Store process will use current day's stage directory [%s]", todayPath) return { todayPath:todayDate } elif os.path.isdir(yesterdayPath) and os.path.exists(yesterdayStageInd) and not os.path.exists(yesterdayStoreInd): logger.info("Store process will use previous day's stage directory [%s]", yesterdayPath) if config.store.warnMidnite: logger.warning("Warning: store process crossed midnite boundary to find data.") return { yesterdayPath:yesterdayDate } elif os.path.isdir(tomorrowPath) and os.path.exists(tomorrowStageInd) and not os.path.exists(tomorrowStoreInd): logger.info("Store process will use next day's stage directory [%s]", tomorrowPath) if config.store.warnMidnite: 
logger.warning("Warning: store process crossed midnite boundary to find data.") return { tomorrowPath:tomorrowDate } raise IOError("Unable to find unused staging directory to store (tried today, yesterday, tomorrow).") CedarBackup3-3.1.6/CedarBackup3/actions/collect.py0000664000175000017500000005367112560007327023376 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2008,2011,2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Implements the standard 'collect' action. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Implements the standard 'collect' action. @sort: executeCollect @author: Kenneth J. 
Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import logging import pickle # Cedar Backup modules from CedarBackup3.filesystem import BackupFileList, FilesystemList from CedarBackup3.util import isStartOfWeek, changeOwnership, displayBytes, buildNormalizedPath from CedarBackup3.actions.constants import DIGEST_EXTENSION, COLLECT_INDICATOR from CedarBackup3.actions.util import writeIndicatorFile ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup3.log.actions.collect") ######################################################################## # Public functions ######################################################################## ############################ # executeCollect() function ############################ def executeCollect(configPath, options, config): """ Executes the collect backup action. @note: When the collect action is complete, we will write a collect indicator to the collect directory, so it's obvious that the collect action has completed. The stage process uses this indicator to decide whether a peer is ready to be staged. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. 
@raise ValueError: Under many generic error conditions @raise TarError: If there is a problem creating a tar file """ logger.debug("Executing the 'collect' action.") if config.options is None or config.collect is None: raise ValueError("Collect configuration is not properly filled in.") if ((config.collect.collectFiles is None or len(config.collect.collectFiles) < 1) and (config.collect.collectDirs is None or len(config.collect.collectDirs) < 1)): raise ValueError("There must be at least one collect file or collect directory.") fullBackup = options.full logger.debug("Full backup flag is [%s]", fullBackup) todayIsStart = isStartOfWeek(config.options.startingDay) resetDigest = fullBackup or todayIsStart logger.debug("Reset digest flag is [%s]", resetDigest) if config.collect.collectFiles is not None: for collectFile in config.collect.collectFiles: logger.debug("Working with collect file [%s]", collectFile.absolutePath) collectMode = _getCollectMode(config, collectFile) archiveMode = _getArchiveMode(config, collectFile) digestPath = _getDigestPath(config, collectFile.absolutePath) tarfilePath = _getTarfilePath(config, collectFile.absolutePath, archiveMode) if fullBackup or (collectMode in ['daily', 'incr', ]) or (collectMode == 'weekly' and todayIsStart): logger.debug("File meets criteria to be backed up today.") _collectFile(config, collectFile.absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath) else: logger.debug("File will not be backed up, per collect mode.") logger.info("Completed collecting file [%s]", collectFile.absolutePath) if config.collect.collectDirs is not None: for collectDir in config.collect.collectDirs: logger.debug("Working with collect directory [%s]", collectDir.absolutePath) collectMode = _getCollectMode(config, collectDir) archiveMode = _getArchiveMode(config, collectDir) ignoreFile = _getIgnoreFile(config, collectDir) linkDepth = _getLinkDepth(collectDir) dereference = _getDereference(collectDir) recursionLevel = 
_getRecursionLevel(collectDir) (excludePaths, excludePatterns) = _getExclusions(config, collectDir) if fullBackup or (collectMode in ['daily', 'incr', ]) or (collectMode == 'weekly' and todayIsStart): logger.debug("Directory meets criteria to be backed up today.") _collectDirectory(config, collectDir.absolutePath, collectMode, archiveMode, ignoreFile, linkDepth, dereference, resetDigest, excludePaths, excludePatterns, recursionLevel) else: logger.debug("Directory will not be backed up, per collect mode.") logger.info("Completed collecting directory [%s]", collectDir.absolutePath) writeIndicatorFile(config.collect.targetDir, COLLECT_INDICATOR, config.options.backupUser, config.options.backupGroup) logger.info("Executed the 'collect' action successfully.") ######################################################################## # Private utility functions ######################################################################## ########################## # _collectFile() function ########################## def _collectFile(config, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath): """ Collects a configured collect file. The indicated collect file is collected into the indicated tarfile. For files that are collected incrementally, we'll use the indicated digest path and pay attention to the reset digest flag (basically, the reset digest flag ignores any existing digest, but a new digest is always rewritten). The caller must decide what the collect and archive modes are, since they can be on both the collect configuration and the collect file itself. @param config: Config object. @param absolutePath: Absolute path of file to collect. @param tarfilePath: Path to tarfile that should be created. @param collectMode: Collect mode to use. @param archiveMode: Archive mode to use. @param resetDigest: Reset digest flag. @param digestPath: Path to digest file on disk, if needed. 
""" backupList = BackupFileList() backupList.addFile(absolutePath) _executeBackup(config, backupList, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath) ############################### # _collectDirectory() function ############################### def _collectDirectory(config, absolutePath, collectMode, archiveMode, ignoreFile, linkDepth, dereference, resetDigest, excludePaths, excludePatterns, recursionLevel): """ Collects a configured collect directory. The indicated collect directory is collected into the indicated tarfile. For directories that are collected incrementally, we'll use the indicated digest path and pay attention to the reset digest flag (basically, the reset digest flag ignores any existing digest, but a new digest is always rewritten). The caller must decide what the collect and archive modes are, since they can be on both the collect configuration and the collect directory itself. @param config: Config object. @param absolutePath: Absolute path of directory to collect. @param collectMode: Collect mode to use. @param archiveMode: Archive mode to use. @param ignoreFile: Ignore file to use. @param linkDepth: Link depth value to use. @param dereference: Dereference flag to use. @param resetDigest: Reset digest flag. @param excludePaths: List of absolute paths to exclude. @param excludePatterns: List of patterns to exclude. 
@param recursionLevel: Recursion level (zero for no recursion) """ if recursionLevel == 0: # Collect the actual directory because we're at recursion level 0 logger.info("Collecting directory [%s]", absolutePath) tarfilePath = _getTarfilePath(config, absolutePath, archiveMode) digestPath = _getDigestPath(config, absolutePath) backupList = BackupFileList() backupList.ignoreFile = ignoreFile backupList.excludePaths = excludePaths backupList.excludePatterns = excludePatterns backupList.addDirContents(absolutePath, linkDepth=linkDepth, dereference=dereference) _executeBackup(config, backupList, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath) else: # Find all of the immediate subdirectories subdirs = FilesystemList() subdirs.excludeFiles = True subdirs.excludeLinks = True subdirs.excludePaths = excludePaths subdirs.excludePatterns = excludePatterns subdirs.addDirContents(path=absolutePath, recursive=False, addSelf=False) # Back up the subdirectories separately for subdir in subdirs: _collectDirectory(config, subdir, collectMode, archiveMode, ignoreFile, linkDepth, dereference, resetDigest, excludePaths, excludePatterns, recursionLevel-1) excludePaths.append(subdir) # this directory is already backed up, so exclude it # Back up everything that hasn't previously been backed up _collectDirectory(config, absolutePath, collectMode, archiveMode, ignoreFile, linkDepth, dereference, resetDigest, excludePaths, excludePatterns, 0) ############################ # _executeBackup() function ############################ def _executeBackup(config, backupList, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath): """ Execute the backup process for the indicated backup list. This function exists mainly to consolidate functionality between the L{_collectFile} and L{_collectDirectory} functions. 
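The recursion handling in _collectDirectory() splits one configured directory into several backup units: at recursionLevel > 0, each immediate subdirectory becomes its own unit, and the parent is backed up last with those subdirectories excluded. A simplified sketch of that partitioning, using plain dictionaries instead of real filesystem objects (all names here are hypothetical):

```python
def partitionForRecursion(root, entries, recursionLevel):
    """Split a directory into backup units the way _collectDirectory() recursion does:
    at level > 0, each immediate subdirectory becomes its own unit, and the root
    is emitted last to pick up whatever was not covered by those units."""
    if recursionLevel == 0:
        return [root]
    units = []
    for entry in entries:
        if entry['isdir']:
            units.extend(partitionForRecursion(root + '/' + entry['name'],
                                               entry.get('children', []),
                                               recursionLevel - 1))
    units.append(root)  # leftover contents; subdirectory units above are excluded
    return units

entries = [{'name': 'a', 'isdir': True},
           {'name': 'b', 'isdir': True},
           {'name': 'notes.txt', 'isdir': False}]
assert partitionForRecursion('/data', entries, 1) == ['/data/a', '/data/b', '/data']
```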
Those functions build the backup list; this function causes the backup to execute properly and also manages usage of the digest file on disk as explained in their comments. For collect files, the digest file will always just contain the single file that is being backed up. This might be a little wasteful in terms of the number of files that we keep around, but it's consistent and easy to understand. @param config: Config object. @param backupList: List to execute backup for. @param absolutePath: Absolute path of directory or file to collect. @param tarfilePath: Path to tarfile that should be created. @param collectMode: Collect mode to use. @param archiveMode: Archive mode to use. @param resetDigest: Reset digest flag. @param digestPath: Path to digest file on disk, if needed. """ if collectMode != 'incr': logger.debug("Collect mode is [%s]; no digest will be used.", collectMode) if len(backupList) == 1 and backupList[0] == absolutePath: # special case for individual file logger.info("Backing up file [%s] (%s).", absolutePath, displayBytes(backupList.totalSize())) else: logger.info("Backing up %d files in [%s] (%s).", len(backupList), absolutePath, displayBytes(backupList.totalSize())) if len(backupList) > 0: backupList.generateTarfile(tarfilePath, archiveMode, True) changeOwnership(tarfilePath, config.options.backupUser, config.options.backupGroup) else: if resetDigest: logger.debug("Based on resetDigest flag, digest will be cleared.") oldDigest = {} else: logger.debug("Based on resetDigest flag, digest will be loaded from disk.") oldDigest = _loadDigest(digestPath) (removed, newDigest) = backupList.removeUnchanged(oldDigest, captureDigest=True) logger.debug("Removed %d unchanged files based on digest values.", removed) if len(backupList) == 1 and backupList[0] == absolutePath: # special case for individual file logger.info("Backing up file [%s] (%s).", absolutePath, displayBytes(backupList.totalSize())) else: logger.info("Backing up %d files in [%s] (%s).",
len(backupList), absolutePath, displayBytes(backupList.totalSize())) if len(backupList) > 0: backupList.generateTarfile(tarfilePath, archiveMode, True) changeOwnership(tarfilePath, config.options.backupUser, config.options.backupGroup) _writeDigest(config, newDigest, digestPath) ######################### # _loadDigest() function ######################### def _loadDigest(digestPath): """ Loads the indicated digest path from disk into a dictionary. If we can't load the digest successfully (either because it doesn't exist or for some other reason), then an empty dictionary will be returned - but the condition will be logged. @param digestPath: Path to the digest file on disk. @return: Dictionary representing contents of digest path. """ if not os.path.isfile(digestPath): digest = {} logger.debug("Digest [%s] does not exist on disk.", digestPath) else: try: with open(digestPath, "rb") as f: digest = pickle.load(f, fix_imports=True) # be compatible with Python 2 logger.debug("Loaded digest [%s] from disk: %d entries.", digestPath, len(digest)) except Exception as e: digest = {} logger.error("Failed loading digest [%s] from disk: %s", digestPath, e) return digest ########################## # _writeDigest() function ########################## def _writeDigest(config, digest, digestPath): """ Writes the digest dictionary to the indicated digest path on disk. If we can't write the digest successfully for any reason, we'll log the condition but won't throw an exception. @param config: Config object. @param digest: Digest dictionary to write to disk. @param digestPath: Path to the digest file on disk. 
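Incremental collection hinges on the digest dictionary: removeUnchanged() drops files whose digest matches the previous run, and the new digest is pickled back to disk with protocol 0 and fix_imports=True. A minimal sketch of that cycle, hashing in-memory content rather than real files (the helper name is illustrative, not Cedar Backup's API):

```python
import hashlib
import io
import pickle

def removeUnchanged(files, oldDigest):
    """Drop entries whose digest matches the previous run; return the survivors
    plus the new digest map, roughly as BackupFileList.removeUnchanged() does."""
    newDigest = {path: hashlib.sha256(data).hexdigest() for path, data in files.items()}
    changed = {p: d for p, d in files.items() if oldDigest.get(p) != newDigest[p]}
    return changed, newDigest

run1 = {'/etc/a.conf': b'alpha', '/etc/b.conf': b'beta'}
_, digest = removeUnchanged(run1, {})           # first run: everything is "changed"
run2 = {'/etc/a.conf': b'alpha', '/etc/b.conf': b'beta2'}  # only b changed
changed, digest2 = removeUnchanged(run2, digest)
assert list(changed) == ['/etc/b.conf']

# Round-trip the digest with protocol 0 and fix_imports, as _writeDigest() does.
buf = io.BytesIO()
pickle.dump(digest2, buf, 0, fix_imports=True)
buf.seek(0)
assert pickle.load(buf, fix_imports=True) == digest2
```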
""" try: with open(digestPath, "wb") as f: pickle.dump(digest, f, 0, fix_imports=True) # be compatible with Python 2 changeOwnership(digestPath, config.options.backupUser, config.options.backupGroup) logger.debug("Wrote new digest [%s] to disk: %d entries.", digestPath, len(digest)) except Exception as e: logger.error("Failed to write digest [%s] to disk: %s", digestPath, e) ######################################################################## # Private attribute "getter" functions ######################################################################## ############################ # getCollectMode() function ############################ def _getCollectMode(config, item): """ Gets the collect mode that should be used for a collect directory or file. If possible, use the one on the file or directory, otherwise take from collect section. @param config: Config object. @param item: C{CollectFile} or C{CollectDir} object @return: Collect mode to use. """ if item.collectMode is None: collectMode = config.collect.collectMode else: collectMode = item.collectMode logger.debug("Collect mode is [%s]", collectMode) return collectMode ############################# # _getArchiveMode() function ############################# def _getArchiveMode(config, item): """ Gets the archive mode that should be used for a collect directory or file. If possible, use the one on the file or directory, otherwise take from collect section. @param config: Config object. @param item: C{CollectFile} or C{CollectDir} object @return: Archive mode to use. """ if item.archiveMode is None: archiveMode = config.collect.archiveMode else: archiveMode = item.archiveMode logger.debug("Archive mode is [%s]", archiveMode) return archiveMode ############################ # _getIgnoreFile() function ############################ def _getIgnoreFile(config, item): """ Gets the ignore file that should be used for a collect directory or file. 
If possible, use the one on the file or directory, otherwise take from collect section. @param config: Config object. @param item: C{CollectFile} or C{CollectDir} object @return: Ignore file to use. """ if item.ignoreFile is None: ignoreFile = config.collect.ignoreFile else: ignoreFile = item.ignoreFile logger.debug("Ignore file is [%s]", ignoreFile) return ignoreFile ############################ # _getLinkDepth() function ############################ def _getLinkDepth(item): """ Gets the link depth that should be used for a collect directory. If possible, use the one on the directory, otherwise set a value of 0 (zero). @param item: C{CollectDir} object @return: Link depth to use. """ if item.linkDepth is None: linkDepth = 0 else: linkDepth = item.linkDepth logger.debug("Link depth is [%d]", linkDepth) return linkDepth ############################ # _getDereference() function ############################ def _getDereference(item): """ Gets the dereference flag that should be used for a collect directory. If possible, use the one on the directory, otherwise set a value of False. @param item: C{CollectDir} object @return: Dereference flag to use. """ if item.dereference is None: dereference = False else: dereference = item.dereference logger.debug("Dereference flag is [%s]", dereference) return dereference ################################ # _getRecursionLevel() function ################################ def _getRecursionLevel(item): """ Gets the recursion level that should be used for a collect directory. If possible, use the one on the directory, otherwise set a value of 0 (zero). @param item: C{CollectDir} object @return: Recursion level to use. 
""" if item.recursionLevel is None: recursionLevel = 0 else: recursionLevel = item.recursionLevel logger.debug("Recursion level is [%d]", recursionLevel) return recursionLevel ############################ # _getDigestPath() function ############################ def _getDigestPath(config, absolutePath): """ Gets the digest path associated with a collect directory or file. @param config: Config object. @param absolutePath: Absolute path to generate digest for @return: Absolute path to the digest associated with the collect directory or file. """ normalized = buildNormalizedPath(absolutePath) filename = "%s.%s" % (normalized, DIGEST_EXTENSION) digestPath = os.path.join(config.options.workingDir, filename) logger.debug("Digest path is [%s]", digestPath) return digestPath ############################# # _getTarfilePath() function ############################# def _getTarfilePath(config, absolutePath, archiveMode): """ Gets the tarfile path (including correct extension) associated with a collect directory. @param config: Config object. @param absolutePath: Absolute path to generate tarfile for @param archiveMode: Archive mode to use for this tarfile. @return: Absolute path to the tarfile associated with the collect directory. """ if archiveMode == 'tar': extension = "tar" elif archiveMode == 'targz': extension = "tar.gz" elif archiveMode == 'tarbz2': extension = "tar.bz2" normalized = buildNormalizedPath(absolutePath) filename = "%s.%s" % (normalized, extension) tarfilePath = os.path.join(config.collect.targetDir, filename) logger.debug("Tarfile path is [%s]", tarfilePath) return tarfilePath ############################ # _getExclusions() function ############################ def _getExclusions(config, collectDir): """ Gets exclusions (file and patterns) associated with a collect directory. The returned files value is a list of absolute paths to be excluded from the backup for a given directory. 
It is derived from the collect configuration absolute exclude paths and the collect directory's absolute and relative exclude paths. The returned patterns value is a list of patterns to be excluded from the backup for a given directory. It is derived from the list of patterns from the collect configuration and from the collect directory itself. @param config: Config object. @param collectDir: Collect directory object. @return: Tuple (files, patterns) indicating what to exclude. """ paths = [] if config.collect.absoluteExcludePaths is not None: paths.extend(config.collect.absoluteExcludePaths) if collectDir.absoluteExcludePaths is not None: paths.extend(collectDir.absoluteExcludePaths) if collectDir.relativeExcludePaths is not None: for relativePath in collectDir.relativeExcludePaths: paths.append(os.path.join(collectDir.absolutePath, relativePath)) patterns = [] if config.collect.excludePatterns is not None: patterns.extend(config.collect.excludePatterns) if collectDir.excludePatterns is not None: patterns.extend(collectDir.excludePatterns) logger.debug("Exclude paths: %s", paths) logger.debug("Exclude patterns: %s", patterns) return(paths, patterns) CedarBackup3-3.1.6/CedarBackup3/actions/util.py0000664000175000017500000003174312560007327022722 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2007,2010,2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. 
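_getExclusions() merges three sources: absolute exclude paths from collect configuration, absolute exclude paths on the collect directory, and relative excludes joined onto the directory's own path. A sketch of that merge, with plain lists standing in for the Config and CollectDir objects (the function name is illustrative):

```python
import os

def mergeExclusions(configPaths, dirAbsolute, dirAbsPaths, dirRelPaths):
    """Combine exclusions the way _getExclusions() does: relative excludes
    are joined onto the collect directory's absolute path."""
    paths = list(configPaths or [])
    paths.extend(dirAbsPaths or [])
    for rel in (dirRelPaths or []):
        paths.append(os.path.join(dirAbsolute, rel))
    return paths

paths = mergeExclusions(['/proc'], '/home/user', ['/home/user/tmp'], ['cache'])
assert paths == ['/proc', '/home/user/tmp', '/home/user/cache']
```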
# # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Implements action-related utilities # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Implements action-related utilities @sort: findDailyDirs @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import time import tempfile import logging # Cedar Backup modules from CedarBackup3.filesystem import FilesystemList from CedarBackup3.util import changeOwnership from CedarBackup3.util import deviceMounted from CedarBackup3.writers.util import readMediaLabel from CedarBackup3.writers.cdwriter import CdWriter from CedarBackup3.writers.dvdwriter import DvdWriter from CedarBackup3.writers.cdwriter import MEDIA_CDR_74, MEDIA_CDR_80, MEDIA_CDRW_74, MEDIA_CDRW_80 from CedarBackup3.writers.dvdwriter import MEDIA_DVDPLUSR, MEDIA_DVDPLUSRW from CedarBackup3.config import DEFAULT_MEDIA_TYPE, DEFAULT_DEVICE_TYPE, REWRITABLE_MEDIA_TYPES from CedarBackup3.actions.constants import INDICATOR_PATTERN ######################################################################## # Module-wide constants and variables ######################################################################## logger = 
logging.getLogger("CedarBackup3.log.actions.util") MEDIA_LABEL_PREFIX = "CEDAR BACKUP" ######################################################################## # Public utility functions ######################################################################## ########################### # findDailyDirs() function ########################### def findDailyDirs(stagingDir, indicatorFile): """ Returns a list of all daily staging directories that do not contain the indicated indicator file. @param stagingDir: Configured staging directory (config.targetDir) @param indicatorFile: Name of the indicator file to check for @return: List of absolute paths to daily staging directories. """ results = FilesystemList() yearDirs = FilesystemList() yearDirs.excludeFiles = True yearDirs.excludeLinks = True yearDirs.addDirContents(path=stagingDir, recursive=False, addSelf=False) for yearDir in yearDirs: monthDirs = FilesystemList() monthDirs.excludeFiles = True monthDirs.excludeLinks = True monthDirs.addDirContents(path=yearDir, recursive=False, addSelf=False) for monthDir in monthDirs: dailyDirs = FilesystemList() dailyDirs.excludeFiles = True dailyDirs.excludeLinks = True dailyDirs.addDirContents(path=monthDir, recursive=False, addSelf=False) for dailyDir in dailyDirs: if os.path.exists(os.path.join(dailyDir, indicatorFile)): logger.debug("Skipping directory [%s]; contains %s.", dailyDir, indicatorFile) else: logger.debug("Adding [%s] to list of daily directories.", dailyDir) results.append(dailyDir) # just put it in the list, no fancy operations return results ########################### # createWriter() function ########################### def createWriter(config): """ Creates a writer object based on current configuration. This function creates and returns a writer based on configuration. This is done to abstract action functionality from knowing what kind of writer is in use. Since all writers implement the same interface, there's no need for actions to care which one they're working with.
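findDailyDirs() walks the year/month/day staging layout and skips any daily directory that already contains the indicator file. A simplified stdlib-only sketch of the same walk (the real code uses FilesystemList; this version assumes only directories at each level):

```python
import os
import tempfile

def findDailyDirs(stagingDir, indicatorFile):
    """Simplified walk of the year/month/day staging layout, skipping days
    that already contain the indicator file."""
    results = []
    for year in sorted(os.listdir(stagingDir)):
        for month in sorted(os.listdir(os.path.join(stagingDir, year))):
            monthPath = os.path.join(stagingDir, year, month)
            for day in sorted(os.listdir(monthPath)):
                dailyDir = os.path.join(monthPath, day)
                if not os.path.exists(os.path.join(dailyDir, indicatorFile)):
                    results.append(dailyDir)
    return results

staging = tempfile.mkdtemp()
os.makedirs(os.path.join(staging, '2016', '02', '12'))
os.makedirs(os.path.join(staging, '2016', '02', '13'))
# Mark the 12th as already stored; only the 13th should be returned.
open(os.path.join(staging, '2016', '02', '12', 'cback.store'), 'w').close()
assert findDailyDirs(staging, 'cback.store') == [os.path.join(staging, '2016', '02', '13')]
```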
Currently, the C{cdwriter} and C{dvdwriter} device types are allowed. An exception will be raised if any other device type is used. This function also checks to make sure that the device isn't mounted before creating a writer object for it. Experience shows that sometimes if the device is mounted, we have problems with the backup. We may as well do the check here first, before instantiating the writer. @param config: Config object. @return: Writer that can be used to write a directory to some media. @raise ValueError: If there is a problem getting the writer. @raise IOError: If there is a problem creating the writer object. """ devicePath = config.store.devicePath deviceScsiId = config.store.deviceScsiId driveSpeed = config.store.driveSpeed noEject = config.store.noEject refreshMediaDelay = config.store.refreshMediaDelay ejectDelay = config.store.ejectDelay deviceType = _getDeviceType(config) mediaType = _getMediaType(config) if deviceMounted(devicePath): raise IOError("Device [%s] is currently mounted." % (devicePath)) if deviceType == "cdwriter": return CdWriter(devicePath, deviceScsiId, driveSpeed, mediaType, noEject, refreshMediaDelay, ejectDelay) elif deviceType == "dvdwriter": return DvdWriter(devicePath, deviceScsiId, driveSpeed, mediaType, noEject, refreshMediaDelay, ejectDelay) else: raise ValueError("Device type [%s] is invalid." % deviceType) ################################ # writeIndicatorFile() function ################################ def writeIndicatorFile(targetDir, indicatorFile, backupUser, backupGroup): """ Writes an indicator file into a target directory. 
@param targetDir: Target directory in which to write indicator @param indicatorFile: Name of the indicator file @param backupUser: User that indicator file should be owned by @param backupGroup: Group that indicator file should be owned by @raise IOError: If there is a problem writing the indicator file """ filename = os.path.join(targetDir, indicatorFile) logger.debug("Writing indicator file [%s].", filename) try: with open(filename, "w") as f: f.write("") changeOwnership(filename, backupUser, backupGroup) except Exception as e: logger.error("Error writing [%s]: %s", filename, e) raise e ############################ # getBackupFiles() function ############################ def getBackupFiles(targetDir): """ Gets a list of backup files in a target directory. Files that match INDICATOR_PATTERN (i.e. C{"cback.store"}, C{"cback.stage"}, etc.) are assumed to be indicator files and are ignored. @param targetDir: Directory to look in @return: List of backup files in the directory @raise ValueError: If the target directory does not exist """ if not os.path.isdir(targetDir): raise ValueError("Target directory [%s] is not a directory or does not exist." % targetDir) fileList = FilesystemList() fileList.excludeDirs = True fileList.excludeLinks = True fileList.excludeBasenamePatterns = INDICATOR_PATTERN fileList.addDirContents(targetDir) return fileList #################### # checkMediaState() #################### def checkMediaState(storeConfig): """ Checks state of the media in the backup device to confirm whether it has been initialized for use with Cedar Backup. We can tell whether the media has been initialized by looking at its media label. If the media label starts with MEDIA_LABEL_PREFIX, then it has been initialized. The check varies depending on whether the media is rewritable or not. For non-rewritable media, we also accept a C{None} media label, since this kind of media cannot safely be initialized.
@param storeConfig: Store configuration @raise ValueError: If media is not initialized. """ mediaLabel = readMediaLabel(storeConfig.devicePath) if storeConfig.mediaType in REWRITABLE_MEDIA_TYPES: if mediaLabel is None: raise ValueError("Media has not been initialized: no media label available") elif not mediaLabel.startswith(MEDIA_LABEL_PREFIX): raise ValueError("Media has not been initialized: unrecognized media label [%s]" % mediaLabel) else: if mediaLabel is None: logger.info("Media has no media label; assuming OK since media is not rewritable.") elif not mediaLabel.startswith(MEDIA_LABEL_PREFIX): raise ValueError("Media has not been initialized: unrecognized media label [%s]" % mediaLabel) ######################### # initializeMediaState() ######################### def initializeMediaState(config): """ Initializes state of the media in the backup device so Cedar Backup can recognize it. This is done by writing a mostly-empty image (it contains a "Cedar Backup" directory) to the media with a known media label. @note: Only rewritable media (CD-RW, DVD+RW) can be initialized. It doesn't make any sense to initialize media that cannot be rewritten (CD-R, DVD+R), since Cedar Backup would then not be able to use that media for a backup. @param config: Cedar Backup configuration @raise ValueError: If media could not be initialized.
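The rules in checkMediaState() reduce to: a label starting with MEDIA_LABEL_PREFIX means the media is initialized, and a missing label is tolerated only for non-rewritable media. A boolean sketch of those rules (the function name is hypothetical; the real code raises ValueError instead of returning False):

```python
MEDIA_LABEL_PREFIX = "CEDAR BACKUP"

def mediaLooksInitialized(mediaLabel, rewritable):
    """Mirror the checkMediaState() rules: a None label is acceptable only
    for non-rewritable media; otherwise the label must carry the prefix."""
    if mediaLabel is None:
        return not rewritable
    return mediaLabel.startswith(MEDIA_LABEL_PREFIX)

assert mediaLooksInitialized("CEDAR BACKUP 13-FEB-2016", rewritable=True)
assert not mediaLooksInitialized(None, rewritable=True)
assert mediaLooksInitialized(None, rewritable=False)
assert not mediaLooksInitialized("MY_PHOTOS", rewritable=False)
```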
@raise ValueError: If the configured media type is not rewritable """ if not config.store.mediaType in REWRITABLE_MEDIA_TYPES: raise ValueError("Only rewritable media types can be initialized.") mediaLabel = buildMediaLabel() writer = createWriter(config) writer.refreshMedia() writer.initializeImage(True, config.options.workingDir, mediaLabel) # always create a new disc tempdir = tempfile.mkdtemp(dir=config.options.workingDir) try: writer.addImageEntry(tempdir, "CedarBackup") writer.writeImage() finally: if os.path.exists(tempdir): try: os.rmdir(tempdir) except: pass #################### # buildMediaLabel() #################### def buildMediaLabel(): """ Builds a media label to be used on Cedar Backup media. @return: Media label as a string. """ currentDate = time.strftime("%d-%b-%Y").upper() return "%s %s" % (MEDIA_LABEL_PREFIX, currentDate) ######################################################################## # Private attribute "getter" functions ######################################################################## ############################ # _getDeviceType() function ############################ def _getDeviceType(config): """ Gets the device type that should be used for storing. Use the configured device type if not C{None}, otherwise use L{config.DEFAULT_DEVICE_TYPE}. @param config: Config object. @return: Device type to be used. """ if config.store.deviceType is None: deviceType = DEFAULT_DEVICE_TYPE else: deviceType = config.store.deviceType logger.debug("Device type is [%s]", deviceType) return deviceType ########################### # _getMediaType() function ########################### def _getMediaType(config): """ Gets the media type that should be used for storing. Use the configured media type if not C{None}, otherwise use C{DEFAULT_MEDIA_TYPE}. 
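buildMediaLabel() is self-contained enough to reproduce directly: the "CEDAR BACKUP" prefix plus the uppercased current date. Note that the %b month abbreviation is locale-dependent; the pattern assertion below assumes an English/C locale:

```python
import re
import time

MEDIA_LABEL_PREFIX = "CEDAR BACKUP"

def buildMediaLabel():
    """Reproduction of buildMediaLabel(): prefix plus the uppercased date."""
    currentDate = time.strftime("%d-%b-%Y").upper()  # e.g. "13-FEB-2016" in a C locale
    return "%s %s" % (MEDIA_LABEL_PREFIX, currentDate)

label = buildMediaLabel()
assert label.startswith("CEDAR BACKUP ")
assert re.fullmatch(r"CEDAR BACKUP \d{2}-[A-Z]{3}-\d{4}", label)
```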
Once we figure out what configuration value to use, we return a media type value that is valid in one of the supported writers:: MEDIA_CDR_74 MEDIA_CDRW_74 MEDIA_CDR_80 MEDIA_CDRW_80 MEDIA_DVDPLUSR MEDIA_DVDPLUSRW @param config: Config object. @return: Media type to be used as a writer media type value. @raise ValueError: If the media type is not valid. """ if config.store.mediaType is None: mediaType = DEFAULT_MEDIA_TYPE else: mediaType = config.store.mediaType if mediaType == "cdr-74": logger.debug("Media type is MEDIA_CDR_74.") return MEDIA_CDR_74 elif mediaType == "cdrw-74": logger.debug("Media type is MEDIA_CDRW_74.") return MEDIA_CDRW_74 elif mediaType == "cdr-80": logger.debug("Media type is MEDIA_CDR_80.") return MEDIA_CDR_80 elif mediaType == "cdrw-80": logger.debug("Media type is MEDIA_CDRW_80.") return MEDIA_CDRW_80 elif mediaType == "dvd+r": logger.debug("Media type is MEDIA_DVDPLUSR.") return MEDIA_DVDPLUSR elif mediaType == "dvd+rw": logger.debug("Media type is MEDIA_DVDPLUSRW.") return MEDIA_DVDPLUSRW else: raise ValueError("Media type [%s] is not valid." % mediaType) CedarBackup3-3.1.6/CedarBackup3/actions/initialize.py0000664000175000017500000000621112560007327024076 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2007,2010,2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
# # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Implements the standard 'initialize' action. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Implements the standard 'initialize' action. @sort: executeInitialize @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import logging # Cedar Backup modules from CedarBackup3.actions.util import initializeMediaState ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup3.log.actions.initialize") ######################################################################## # Public functions ######################################################################## ############################### # executeInitialize() function ############################### def executeInitialize(configPath, options, config): """ Executes the initialize action. The initialize action initializes the media currently in the writer device so that Cedar Backup can recognize it later. This is an optional step; it's only required if checkMedia is set on the store configuration. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. 
@param config: Program configuration. @type config: Config object. """ logger.debug("Executing the 'initialize' action.") if config.options is None or config.store is None: raise ValueError("Store configuration is not properly filled in.") initializeMediaState(config) logger.info("Executed the 'initialize' action successfully.") CedarBackup3-3.1.6/CedarBackup3/actions/validate.py0000664000175000017500000002714312560007327023535 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2007,2010,2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Implements the standard 'validate' action. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Implements the standard 'validate' action. @sort: executeValidate @author: Kenneth J. 
Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import logging # Cedar Backup modules from CedarBackup3.util import getUidGid, getFunctionReference from CedarBackup3.actions.util import createWriter ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup3.log.actions.validate") ######################################################################## # Public functions ######################################################################## ############################# # executeValidate() function ############################# def executeValidate(configPath, options, config): """ Executes the validate action. This action validates each of the individual sections in the config file. This is a "runtime" validation. The config file itself is already valid in a structural sense, so what we check here that is that we can actually use the configuration without any problems. There's a separate validation function for each of the configuration sections. Each validation function returns a true/false indication for whether configuration was valid, and then logs any configuration problems it finds. This way, one pass over configuration indicates most or all of the obvious problems, rather than finding just one problem at a time. Any reported problems will be logged at the ERROR level normally, or at the INFO level if the quiet flag is enabled. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. @raise ValueError: If some configuration value is invalid. 
""" logger.debug("Executing the 'validate' action.") if options.quiet: logfunc = logger.info # info so it goes to the log else: logfunc = logger.error # error so it goes to the screen valid = True valid &= _validateReference(config, logfunc) valid &= _validateOptions(config, logfunc) valid &= _validateCollect(config, logfunc) valid &= _validateStage(config, logfunc) valid &= _validateStore(config, logfunc) valid &= _validatePurge(config, logfunc) valid &= _validateExtensions(config, logfunc) if valid: logfunc("Configuration is valid.") else: logfunc("Configuration is not valid.") ######################################################################## # Private utility functions ######################################################################## ####################### # _checkDir() function ####################### def _checkDir(path, writable, logfunc, prefix): """ Checks that the indicated directory is OK. The path must exist, must be a directory, must be readable and executable, and must optionally be writable. @param path: Path to check. @param writable: Check that path is writable. @param logfunc: Function to use for logging errors. @param prefix: Prefix to use on logged errors. @return: True if the directory is OK, False otherwise. """ if not os.path.exists(path): logfunc("%s [%s] does not exist." % (prefix, path)) return False if not os.path.isdir(path): logfunc("%s [%s] is not a directory." % (prefix, path)) return False if not os.access(path, os.R_OK): logfunc("%s [%s] is not readable." % (prefix, path)) return False if not os.access(path, os.X_OK): logfunc("%s [%s] is not executable." % (prefix, path)) return False if writable and not os.access(path, os.W_OK): logfunc("%s [%s] is not writable." % (prefix, path)) return False return True ################################ # _validateReference() function ################################ def _validateReference(config, logfunc): """ Execute runtime validations on reference configuration. 
We only validate that reference configuration exists at all. @param config: Program configuration. @param logfunc: Function to use for logging errors @return: True if configuration is valid, false otherwise. """ valid = True if config.reference is None: logfunc("Required reference configuration does not exist.") valid = False return valid ############################## # _validateOptions() function ############################## def _validateOptions(config, logfunc): """ Execute runtime validations on options configuration. The following validations are enforced: - The options section must exist - The working directory must exist and must be writable - The backup user and backup group must exist @param config: Program configuration. @param logfunc: Function to use for logging errors @return: True if configuration is valid, false otherwise. """ valid = True if config.options is None: logfunc("Required options configuration does not exist.") valid = False else: valid &= _checkDir(config.options.workingDir, True, logfunc, "Working directory") try: getUidGid(config.options.backupUser, config.options.backupGroup) except ValueError: logfunc("Backup user:group [%s:%s] invalid." % (config.options.backupUser, config.options.backupGroup)) valid = False return valid ############################## # _validateCollect() function ############################## def _validateCollect(config, logfunc): """ Execute runtime validations on collect configuration. The following validations are enforced: - The target directory must exist and must be writable - Each of the individual collect directories must exist and must be readable @param config: Program configuration. @param logfunc: Function to use for logging errors @return: True if configuration is valid, false otherwise. 
""" valid = True if config.collect is not None: valid &= _checkDir(config.collect.targetDir, True, logfunc, "Collect target directory") if config.collect.collectDirs is not None: for collectDir in config.collect.collectDirs: valid &= _checkDir(collectDir.absolutePath, False, logfunc, "Collect directory") return valid ############################ # _validateStage() function ############################ def _validateStage(config, logfunc): """ Execute runtime validations on stage configuration. The following validations are enforced: - The target directory must exist and must be writable - Each local peer's collect directory must exist and must be readable @note: We currently do not validate anything having to do with remote peers, since we don't have a straightforward way of doing it. It would require adding an rsh command rather than just an rcp command to configuration, and that just doesn't seem worth it right now. @param config: Program configuration. @param logfunc: Function to use for logging errors @return: True if configuration is valid, False otherwise. """ valid = True if config.stage is not None: valid &= _checkDir(config.stage.targetDir, True, logfunc, "Stage target dir ") if config.stage.localPeers is not None: for peer in config.stage.localPeers: valid &= _checkDir(peer.collectDir, False, logfunc, "Local peer collect dir ") return valid ############################ # _validateStore() function ############################ def _validateStore(config, logfunc): """ Execute runtime validations on store configuration. The following validations are enforced: - The source directory must exist and must be readable - The backup device (path and SCSI device) must be valid @param config: Program configuration. @param logfunc: Function to use for logging errors @return: True if configuration is valid, False otherwise. 
""" valid = True if config.store is not None: valid &= _checkDir(config.store.sourceDir, False, logfunc, "Store source directory") try: createWriter(config) except ValueError: logfunc("Backup device [%s] [%s] is not valid." % (config.store.devicePath, config.store.deviceScsiId)) valid = False return valid ############################ # _validatePurge() function ############################ def _validatePurge(config, logfunc): """ Execute runtime validations on purge configuration. The following validations are enforced: - Each purge directory must exist and must be writable @param config: Program configuration. @param logfunc: Function to use for logging errors @return: True if configuration is valid, False otherwise. """ valid = True if config.purge is not None: if config.purge.purgeDirs is not None: for purgeDir in config.purge.purgeDirs: valid &= _checkDir(purgeDir.absolutePath, True, logfunc, "Purge directory") return valid ################################# # _validateExtensions() function ################################# def _validateExtensions(config, logfunc): """ Execute runtime validations on extensions configuration. The following validations are enforced: - Each indicated extension function must exist. @param config: Program configuration. @param logfunc: Function to use for logging errors @return: True if configuration is valid, False otherwise. """ valid = True if config.extensions is not None: if config.extensions.actions is not None: for action in config.extensions.actions: try: getFunctionReference(action.module, action.function) except ImportError: logfunc("Unable to find function [%s.%s]." % (action.module, action.function)) valid = False except ValueError: logfunc("Function [%s.%s] is not callable." 
                       % (action.module, action.function))
               valid = False
   return valid

CedarBackup3-3.1.6/CedarBackup3/actions/__init__.py0000664000175000017500000000326112560007327023476 0ustar  pronovicpronovic00000000000000
# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Language : Python 3 (>= 3.4)
# Project  : Official Cedar Backup Extensions
# Purpose  : Provides package initialization
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Cedar Backup actions.

This package contains code related to the official Cedar Backup actions
(collect, stage, store, purge, rebuild, and validate).  The action modules
consist of mostly "glue" code that uses other lower-level functionality to
actually implement a backup.  There is one module for each high-level
backup action, plus a module that provides shared constants.

All of the public action functions implement the Cedar Backup Extension
Architecture Interface, i.e. the same interface that extensions implement.

@author: Kenneth J. Pronovici
"""

########################################################################
# Package initialization
########################################################################

# Using 'from CedarBackup3.actions import *' will just import the modules listed
# in the __all__ variable.
__all__ = [ 'constants', 'collect', 'initialize', 'stage', 'store', 'purge', 'util', 'rebuild', 'validate', ] CedarBackup3-3.1.6/CedarBackup3/actions/constants.py0000664000175000017500000000256112560007327023755 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Provides common constants used by standard actions. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides common constants used by standard actions. @sort: DIR_TIME_FORMAT, DIGEST_EXTENSION, INDICATOR_PATTERN, COLLECT_INDICATOR, STAGE_INDICATOR, STORE_INDICATOR @author: Kenneth J. Pronovici """ ######################################################################## # Module-wide constants and variables ######################################################################## DIR_TIME_FORMAT = "%Y/%m/%d" DIGEST_EXTENSION = "sha" INDICATOR_PATTERN = [ r"cback\..*", ] COLLECT_INDICATOR = "cback.collect" STAGE_INDICATOR = "cback.stage" STORE_INDICATOR = "cback.store" CedarBackup3-3.1.6/CedarBackup3/actions/stage.py0000664000175000017500000003033312642030756023045 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2008,2010,2015 Kenneth J. 
Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Implements the standard 'stage' action. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Implements the standard 'stage' action. @sort: executeStage @author: Kenneth J. 
Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import time import logging # Cedar Backup modules from CedarBackup3.peer import RemotePeer, LocalPeer from CedarBackup3.util import getUidGid, changeOwnership, isStartOfWeek, isRunningAsRoot from CedarBackup3.actions.constants import DIR_TIME_FORMAT, STAGE_INDICATOR from CedarBackup3.actions.util import writeIndicatorFile ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup3.log.actions.stage") ######################################################################## # Public functions ######################################################################## ########################## # executeStage() function ########################## def executeStage(configPath, options, config): """ Executes the stage backup action. @note: The daily directory is derived once and then we stick with it, just in case a backup happens to span midnite. @note: As portions of the stage action is complete, we will write various indicator files so that it's obvious what actions have been completed. Each peer gets a stage indicator in its collect directory, and then the master gets a stage indicator in its daily staging directory. The store process uses the master's stage indicator to decide whether a directory is ready to be stored. Currently, nothing uses the indicator at each peer, and it exists for reference only. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. 
@raise ValueError: Under many generic error conditions @raise IOError: If there are problems reading or writing files. """ logger.debug("Executing the 'stage' action.") if config.options is None or config.stage is None: raise ValueError("Stage configuration is not properly filled in.") dailyDir = _getDailyDir(config) localPeers = _getLocalPeers(config) remotePeers = _getRemotePeers(config) allPeers = localPeers + remotePeers stagingDirs = _createStagingDirs(config, dailyDir, allPeers) for peer in allPeers: logger.info("Staging peer [%s].", peer.name) ignoreFailures = _getIgnoreFailuresFlag(options, config, peer) if not peer.checkCollectIndicator(): if not ignoreFailures: logger.error("Peer [%s] was not ready to be staged.", peer.name) else: logger.info("Peer [%s] was not ready to be staged.", peer.name) continue logger.debug("Found collect indicator.") targetDir = stagingDirs[peer.name] if isRunningAsRoot(): # Since we're running as root, we can change ownership ownership = getUidGid(config.options.backupUser, config.options.backupGroup) logger.debug("Using target dir [%s], ownership [%d:%d].", targetDir, ownership[0], ownership[1]) else: # Non-root cannot change ownership, so don't set it ownership = None logger.debug("Using target dir [%s], ownership [None].", targetDir) try: count = peer.stagePeer(targetDir=targetDir, ownership=ownership) # note: utilize effective user's default umask logger.info("Staged %d files for peer [%s].", count, peer.name) peer.writeStageIndicator() except (ValueError, IOError, OSError) as e: logger.error("Error staging [%s]: %s", peer.name, e) writeIndicatorFile(dailyDir, STAGE_INDICATOR, config.options.backupUser, config.options.backupGroup) logger.info("Executed the 'stage' action successfully.") ######################################################################## # Private utility functions ######################################################################## ################################ # _createStagingDirs() function 
################################ def _createStagingDirs(config, dailyDir, peers): """ Creates staging directories as required. The main staging directory is the passed in daily directory, something like C{staging/2002/05/23}. Then, individual peers get their own directories, i.e. C{staging/2002/05/23/host}. @param config: Config object. @param dailyDir: Daily staging directory. @param peers: List of all configured peers. @return: Dictionary mapping peer name to staging directory. """ mapping = {} if os.path.isdir(dailyDir): logger.warning("Staging directory [%s] already existed.", dailyDir) else: try: logger.debug("Creating staging directory [%s].", dailyDir) os.makedirs(dailyDir) for path in [ dailyDir, os.path.join(dailyDir, ".."), os.path.join(dailyDir, "..", ".."), ]: changeOwnership(path, config.options.backupUser, config.options.backupGroup) except Exception as e: raise Exception("Unable to create staging directory: %s" % e) for peer in peers: peerDir = os.path.join(dailyDir, peer.name) mapping[peer.name] = peerDir if os.path.isdir(peerDir): logger.warning("Peer staging directory [%s] already existed.", peerDir) else: try: logger.debug("Creating peer staging directory [%s].", peerDir) os.makedirs(peerDir) changeOwnership(peerDir, config.options.backupUser, config.options.backupGroup) except Exception as e: raise Exception("Unable to create staging directory: %s" % e) return mapping ######################################################################## # Private attribute "getter" functions ######################################################################## #################################### # _getIgnoreFailuresFlag() function #################################### def _getIgnoreFailuresFlag(options, config, peer): """ Gets the ignore failures flag based on options, configuration, and peer. 
@param options: Options object @param config: Configuration object @param peer: Peer to check @return: Whether to ignore stage failures for this peer """ logger.debug("Ignore failure mode for this peer: %s", peer.ignoreFailureMode) if peer.ignoreFailureMode is None or peer.ignoreFailureMode == "none": return False elif peer.ignoreFailureMode == "all": return True else: if options.full or isStartOfWeek(config.options.startingDay): return peer.ignoreFailureMode == "weekly" else: return peer.ignoreFailureMode == "daily" ########################## # _getDailyDir() function ########################## def _getDailyDir(config): """ Gets the daily staging directory. This is just a directory in the form C{staging/YYYY/MM/DD}, i.e. C{staging/2000/10/07}, except it will be an absolute path based on C{config.stage.targetDir}. @param config: Config object @return: Path of daily staging directory. """ dailyDir = os.path.join(config.stage.targetDir, time.strftime(DIR_TIME_FORMAT)) logger.debug("Daily staging directory is [%s].", dailyDir) return dailyDir ############################ # _getLocalPeers() function ############################ def _getLocalPeers(config): """ Return a list of L{LocalPeer} objects based on configuration. @param config: Config object. @return: List of L{LocalPeer} objects. 
""" localPeers = [] configPeers = None if config.stage.hasPeers(): logger.debug("Using list of local peers from stage configuration.") configPeers = config.stage.localPeers elif config.peers is not None and config.peers.hasPeers(): logger.debug("Using list of local peers from peers configuration.") configPeers = config.peers.localPeers if configPeers is not None: for peer in configPeers: localPeer = LocalPeer(peer.name, peer.collectDir, peer.ignoreFailureMode) localPeers.append(localPeer) logger.debug("Found local peer: [%s]", localPeer.name) return localPeers ############################# # _getRemotePeers() function ############################# def _getRemotePeers(config): """ Return a list of L{RemotePeer} objects based on configuration. @param config: Config object. @return: List of L{RemotePeer} objects. """ remotePeers = [] configPeers = None if config.stage.hasPeers(): logger.debug("Using list of remote peers from stage configuration.") configPeers = config.stage.remotePeers elif config.peers is not None and config.peers.hasPeers(): logger.debug("Using list of remote peers from peers configuration.") configPeers = config.peers.remotePeers if configPeers is not None: for peer in configPeers: remoteUser = _getRemoteUser(config, peer) localUser = _getLocalUser(config) rcpCommand = _getRcpCommand(config, peer) remotePeer = RemotePeer(peer.name, peer.collectDir, config.options.workingDir, remoteUser, rcpCommand, localUser, ignoreFailureMode=peer.ignoreFailureMode) remotePeers.append(remotePeer) logger.debug("Found remote peer: [%s]", remotePeer.name) return remotePeers ############################ # _getRemoteUser() function ############################ def _getRemoteUser(config, remotePeer): """ Gets the remote user associated with a remote peer. Use peer's if possible, otherwise take from options section. @param config: Config object. @param remotePeer: Configuration-style remote peer object. @return: Name of remote user associated with remote peer. 
   """
   if remotePeer.remoteUser is None:
      return config.options.backupUser
   return remotePeer.remoteUser


###########################
# _getLocalUser() function
###########################

def _getLocalUser(config):
   """
   Gets the local user that should be used for connecting to remote peers.
   @param config: Config object.
   @return: Name of local user that should be used
   """
   if not isRunningAsRoot():
      return None
   return config.options.backupUser


############################
# _getRcpCommand() function
############################

def _getRcpCommand(config, remotePeer):
   """
   Gets the RCP command associated with a remote peer.
   Use peer's if possible, otherwise take from options section.
   @param config: Config object.
   @param remotePeer: Configuration-style remote peer object.
   @return: RCP command associated with remote peer.
   """
   if remotePeer.rcpCommand is None:
      return config.options.rcpCommand
   return remotePeer.rcpCommand

CedarBackup3-3.1.6/CedarBackup3/actions/purge.py0000664000175000017500000000702012560007327023056 0ustar  pronovicpronovic00000000000000
# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Copyright (c) 2004-2007,2010,2015 Kenneth J. Pronovici.
# All rights reserved.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
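The _getRemoteUser() and _getRcpCommand() helpers in stage.py above share one small pattern: a peer-level setting wins when present, and the options-level value is the fallback. A minimal standalone sketch of that fallback (the name resolve_setting and the sample command strings are illustrative, not part of Cedar Backup):

```python
def resolve_setting(peer_value, options_value):
    """Return the peer-specific value when configured, else the global default."""
    if peer_value is None:
        return options_value
    return peer_value

# A peer without its own rcp command falls back to the options-level command.
assert resolve_setting(None, "/usr/bin/scp -B") == "/usr/bin/scp -B"
# A peer-specific command overrides the default.
assert resolve_setting("/usr/bin/rsync", "/usr/bin/scp -B") == "/usr/bin/rsync"
```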
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Implements the standard 'purge' action. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Implements the standard 'purge' action. @sort: executePurge @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import logging # Cedar Backup modules from CedarBackup3.filesystem import PurgeItemList ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup3.log.actions.purge") ######################################################################## # Public functions ######################################################################## ########################## # executePurge() function ########################## def executePurge(configPath, options, config): """ Executes the purge backup action. For each configured directory, we create a purge item list, remove from the list anything that's younger than the configured retain days value, and then purge from the filesystem what's left. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. 
@raise ValueError: Under many generic error conditions """ logger.debug("Executing the 'purge' action.") if config.options is None or config.purge is None: raise ValueError("Purge configuration is not properly filled in.") if config.purge.purgeDirs is not None: for purgeDir in config.purge.purgeDirs: purgeList = PurgeItemList() purgeList.addDirContents(purgeDir.absolutePath) # add everything within directory purgeList.removeYoungFiles(purgeDir.retainDays) # remove young files *from the list* so they won't be purged purgeList.purgeItems() # remove remaining items from the filesystem logger.info("Executed the 'purge' action successfully.") CedarBackup3-3.1.6/CedarBackup3/actions/rebuild.py0000664000175000017500000001431212642030767023371 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2007,2010,2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Implements the standard 'rebuild' action. 
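The purge flow in executePurge() above has three steps: build a list of everything in a configured directory, drop anything younger than the retain-days cutoff from the list, and delete what remains. A sketch of the selection step without the real PurgeItemList class; find_purgeable and the mtime-based age test are illustrative assumptions, and unlike the real action this sketch only selects files rather than deleting them:

```python
import os
import time

def find_purgeable(root, retain_days):
    """Return file paths under root whose mtime is older than retain_days."""
    cutoff = time.time() - retain_days * 24 * 60 * 60
    purgeable = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:  # older than the cutoff: candidate for purge
                purgeable.append(path)
    return purgeable
```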
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Implements the standard 'rebuild' action. @sort: executeRebuild @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import sys import os import logging import datetime # Cedar Backup modules from CedarBackup3.util import deriveDayOfWeek from CedarBackup3.actions.util import checkMediaState from CedarBackup3.actions.constants import DIR_TIME_FORMAT, STAGE_INDICATOR from CedarBackup3.actions.store import writeImage, writeStoreIndicator, consistencyCheck ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup3.log.actions.rebuild") ######################################################################## # Public functions ######################################################################## ############################ # executeRebuild() function ############################ def executeRebuild(configPath, options, config): """ Executes the rebuild backup action. This function exists mainly to recreate a disc that has been "trashed" due to media or hardware problems. Note that the "stage complete" indicator isn't checked for this action. Note that the rebuild action and the store action are very similar. The main difference is that while store only stores a single day's staging directory, the rebuild action operates on multiple staging directories. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. 
   @type options: Options object.

   @param config: Program configuration.
   @type config: Config object.

   @raise ValueError: Under many generic error conditions
   @raise IOError: If there are problems reading or writing files.
   """
   logger.debug("Executing the 'rebuild' action.")
   if sys.platform == "darwin":
      logger.warning("Warning: the rebuild action is not fully supported on Mac OS X.")
      logger.warning("See the Cedar Backup software manual for further information.")
   if config.options is None or config.store is None:
      raise ValueError("Rebuild configuration is not properly filled in.")
   if config.store.checkMedia:
      checkMediaState(config.store)  # raises exception if media is not initialized
   stagingDirs = _findRebuildDirs(config)
   writeImage(config, True, stagingDirs)
   if config.store.checkData:
      if sys.platform == "darwin":
         logger.warning("Warning: consistency check cannot be run successfully on Mac OS X.")
         logger.warning("See the Cedar Backup software manual for further information.")
      else:
         logger.debug("Running consistency check of media.")
         consistencyCheck(config, stagingDirs)
   writeStoreIndicator(config, stagingDirs)
   logger.info("Executed the 'rebuild' action successfully.")


########################################################################
# Private utility functions
########################################################################

##############################
# _findRebuildDirs() function
##############################

def _findRebuildDirs(config):
   """
   Finds the set of directories to be included in a disc rebuild.

   The rebuild action is supposed to recreate the "last week's" disc.  This
   won't always be possible if some of the staging directories are missing.
   However, the general procedure is to look back into the past no further
   than the previous "starting day of week", and then work forward from there
   trying to find all of the staging directories between then and now that
   still exist and have a stage indicator.

   @param config: Config object.
@return: Correct staging dir, as a dict mapping directory to date suffix. @raise IOError: If we do not find at least one staging directory. """ stagingDirs = {} start = deriveDayOfWeek(config.options.startingDay) today = datetime.date.today() if today.weekday() >= start: days = today.weekday() - start + 1 else: days = 7 - (start - today.weekday()) + 1 for i in range (0, days): currentDay = today - datetime.timedelta(days=i) dateSuffix = currentDay.strftime(DIR_TIME_FORMAT) stageDir = os.path.join(config.store.sourceDir, dateSuffix) indicator = os.path.join(stageDir, STAGE_INDICATOR) if os.path.isdir(stageDir) and os.path.exists(indicator): logger.info("Rebuild process will include stage directory [%s]", stageDir) stagingDirs[stageDir] = dateSuffix if len(stagingDirs) == 0: raise IOError("Unable to find any staging directories for rebuild process.") return stagingDirs CedarBackup3-3.1.6/CedarBackup3/writers/0002775000175000017500000000000012657665551021444 5ustar pronovicpronovic00000000000000CedarBackup3-3.1.6/CedarBackup3/writers/util.py0000664000175000017500000006650312560007327022763 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2007,2010,2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
# # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Provides utilities related to image writers. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides utilities related to image writers. @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import re import logging # Cedar Backup modules from CedarBackup3.util import resolveCommand, executeCommand from CedarBackup3.util import convertSize, UNIT_BYTES, UNIT_SECTORS, encodePath ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup3.log.writers.util") MKISOFS_COMMAND = [ "mkisofs", ] VOLNAME_COMMAND = [ "volname", ] ######################################################################## # Functions used to portably validate certain kinds of values ######################################################################## ############################ # validateDevice() function ############################ def validateDevice(device, unittest=False): """ Validates a configured device. The device must be an absolute path, must exist, and must be writable. The unittest flag turns off validation of the device on disk. @param device: Filesystem device path. @param unittest: Indicates whether we're unit testing. 
@return: Device as a string, for instance C{"/dev/cdrw"} @raise ValueError: If the device value is invalid. @raise ValueError: If some path cannot be encoded properly. """ if device is None: raise ValueError("Device must be filled in.") device = encodePath(device) if not os.path.isabs(device): raise ValueError("Backup device must be an absolute path.") if not unittest and not os.path.exists(device): raise ValueError("Backup device must exist on disk.") if not unittest and not os.access(device, os.W_OK): raise ValueError("Backup device is not writable by the current user.") return device ############################ # validateScsiId() function ############################ def validateScsiId(scsiId): """ Validates a SCSI id string. SCSI id must be a string in the form C{[:]scsibus,target,lun}. For Mac OS X (Darwin), we also accept the form C{IO.*Services[/N]}. @note: For consistency, if C{None} is passed in, C{None} will be returned. @param scsiId: SCSI id for the device. @return: SCSI id as a string, for instance C{"ATA:1,0,0"} @raise ValueError: If the SCSI id string is invalid. """ if scsiId is not None: pattern = re.compile(r"^\s*(.*:)?\s*[0-9][0-9]*\s*,\s*[0-9][0-9]*\s*,\s*[0-9][0-9]*\s*$") if not pattern.search(scsiId): pattern = re.compile(r"^\s*IO.*Services(\/[0-9][0-9]*)?\s*$") if not pattern.search(scsiId): raise ValueError("SCSI id is not in a valid form.") return scsiId ################################ # validateDriveSpeed() function ################################ def validateDriveSpeed(driveSpeed): """ Validates a drive speed value. Drive speed must be an integer which is >= 1. @note: For consistency, if C{None} is passed in, C{None} will be returned. @param driveSpeed: Speed at which the drive writes. @return: Drive speed as an integer @raise ValueError: If the drive speed value is invalid. 
""" if driveSpeed is None: return None try: intSpeed = int(driveSpeed) except TypeError: raise ValueError("Drive speed must be an integer >= 1.") if intSpeed < 1: raise ValueError("Drive speed must an integer >= 1.") return intSpeed ######################################################################## # General writer-related utility functions ######################################################################## ############################ # readMediaLabel() function ############################ def readMediaLabel(devicePath): """ Reads the media label (volume name) from the indicated device. The volume name is read using the C{volname} command. @param devicePath: Device path to read from @return: Media label as a string, or None if there is no name or it could not be read. """ args = [ devicePath, ] command = resolveCommand(VOLNAME_COMMAND) (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True) if result != 0: return None if output is None or len(output) < 1: return None return output[0].rstrip() ######################################################################## # IsoImage class definition ######################################################################## class IsoImage(object): ###################### # Class documentation ###################### """ Represents an ISO filesystem image. Summary ======= This object represents an ISO 9660 filesystem image. It is implemented in terms of the C{mkisofs} program, which has been ported to many operating systems and platforms. A "sensible subset" of the C{mkisofs} functionality is made available through the public interface, allowing callers to set a variety of basic options such as publisher id, application id, etc. as well as specify exactly which files and directories they want included in their image. 
By default, the image is created using the Rock Ridge protocol (using the C{-r} option to C{mkisofs}) because Rock Ridge discs are generally more useful on UN*X filesystems than standard ISO 9660 images. However, callers can fall back to the default C{mkisofs} functionality by setting the C{useRockRidge} instance variable to C{False}. Note, however, that this option is not well-tested. Where Files and Directories are Placed in the Image =================================================== Although this class is implemented in terms of the C{mkisofs} program, its standard "image contents" semantics are slightly different than the original C{mkisofs} semantics. The difference is that files and directories are added to the image with some additional information about their source directory kept intact. As an example, suppose you add the file C{/etc/profile} to your image and you do not configure a graft point. The file C{/profile} will be created in the image. The behavior for directories is similar. For instance, suppose that you add C{/etc/X11} to the image and do not configure a graft point. In this case, the directory C{/X11} will be created in the image, even if the original C{/etc/X11} directory is empty. I{This behavior differs from the standard C{mkisofs} behavior!} If a graft point is configured, it will be used to modify the point at which a file or directory is added into an image. Using the examples from above, let's assume you set a graft point of C{base} when adding C{/etc/profile} and C{/etc/X11} to your image. In this case, the file C{/base/profile} and the directory C{/base/X11} would be added to the image. I feel that this behavior is more consistent than the original C{mkisofs} behavior. However, to be fair, it is not quite as flexible, and some users might not like it. For this reason, the C{contentsOnly} parameter to the L{addEntry} method can be used to revert to the original behavior if desired. 
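   Example Usage
   =============

   A minimal usage sketch (the file paths here are hypothetical, and
   C{mkisofs} must be installed for the size estimate and image write to
   succeed)::

      image = IsoImage()                    # no device/boundaries, so no multisession
      image.graftPoint = "base"             # image-wide default graft point
      image.addEntry("/etc/profile")        # placed at /base/profile in the image
      estimate = image.getEstimatedSize()   # bytes, via mkisofs -print-size
      image.writeImage("/tmp/backup.iso")   # builds the ISO image on disk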
   @sort: __init__, addEntry, getEstimatedSize, _getEstimatedSize, writeImage,
          _buildDirEntries, _buildGeneralArgs, _buildSizeArgs, _buildWriteArgs,
          device, boundaries, graftPoint, useRockRidge, applicationId,
          biblioFile, publisherId, preparerId, volumeId
   """

   ##############
   # Constructor
   ##############

   def __init__(self, device=None, boundaries=None, graftPoint=None):
      """
      Initializes an empty ISO image object.

      Only the most commonly-used configuration items can be set using this
      constructor.  If you have a need to change the others, do so immediately
      after creating your object.

      The device and boundaries values are both required in order to write
      multisession discs.  If either is missing or C{None}, a multisession
      disc will not be written.  The boundaries tuple is in terms of ISO
      sectors, as built by an image writer class and returned in a
      L{writer.MediaCapacity} object.

      @param device: Name of the device that the image will be written to
      @type device: Either a filesystem path or a SCSI address

      @param boundaries: Session boundaries as required by C{mkisofs}
      @type boundaries: Tuple C{(last_sess_start,next_sess_start)} as returned
                        from C{cdrecord -msinfo}, or C{None}

      @param graftPoint: Default graft point for this image.
      @type graftPoint: String representing a graft point path (see L{addEntry}).
      """
      self._device = None
      self._boundaries = None
      self._graftPoint = None
      self._useRockRidge = True
      self._applicationId = None
      self._biblioFile = None
      self._publisherId = None
      self._preparerId = None
      self._volumeId = None
      self.entries = { }
      self.device = device
      self.boundaries = boundaries
      self.graftPoint = graftPoint
      self.useRockRidge = True
      self.applicationId = None
      self.biblioFile = None
      self.publisherId = None
      self.preparerId = None
      self.volumeId = None
      logger.debug("Created new ISO image object.")


   #############
   # Properties
   #############

   def _setDevice(self, value):
      """
      Property target used to set the device value.
If not C{None}, the value can be either an absolute path or a SCSI id. @raise ValueError: If the value is not valid """ try: if value is None: self._device = None else: if os.path.isabs(value): self._device = value else: self._device = validateScsiId(value) except ValueError: raise ValueError("Device must either be an absolute path or a valid SCSI id.") def _getDevice(self): """ Property target used to get the device value. """ return self._device def _setBoundaries(self, value): """ Property target used to set the boundaries tuple. If not C{None}, the value must be a tuple of two integers. @raise ValueError: If the tuple values are not integers. @raise IndexError: If the tuple does not contain enough elements. """ if value is None: self._boundaries = None else: self._boundaries = (int(value[0]), int(value[1])) def _getBoundaries(self): """ Property target used to get the boundaries value. """ return self._boundaries def _setGraftPoint(self, value): """ Property target used to set the graft point. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The graft point must be a non-empty string.") self._graftPoint = value def _getGraftPoint(self): """ Property target used to get the graft point. """ return self._graftPoint def _setUseRockRidge(self, value): """ Property target used to set the use RockRidge flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._useRockRidge = True else: self._useRockRidge = False def _getUseRockRidge(self): """ Property target used to get the use RockRidge flag. """ return self._useRockRidge def _setApplicationId(self, value): """ Property target used to set the application id. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. 
""" if value is not None: if len(value) < 1: raise ValueError("The application id must be a non-empty string.") self._applicationId = value def _getApplicationId(self): """ Property target used to get the application id. """ return self._applicationId def _setBiblioFile(self, value): """ Property target used to set the biblio file. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The biblio file must be a non-empty string.") self._biblioFile = value def _getBiblioFile(self): """ Property target used to get the biblio file. """ return self._biblioFile def _setPublisherId(self, value): """ Property target used to set the publisher id. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The publisher id must be a non-empty string.") self._publisherId = value def _getPublisherId(self): """ Property target used to get the publisher id. """ return self._publisherId def _setPreparerId(self, value): """ Property target used to set the preparer id. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The preparer id must be a non-empty string.") self._preparerId = value def _getPreparerId(self): """ Property target used to get the preparer id. """ return self._preparerId def _setVolumeId(self, value): """ Property target used to set the volume id. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The volume id must be a non-empty string.") self._volumeId = value def _getVolumeId(self): """ Property target used to get the volume id. 
""" return self._volumeId device = property(_getDevice, _setDevice, None, "Device that image will be written to (device path or SCSI id).") boundaries = property(_getBoundaries, _setBoundaries, None, "Session boundaries as required by C{mkisofs}.") graftPoint = property(_getGraftPoint, _setGraftPoint, None, "Default image-wide graft point (see L{addEntry} for details).") useRockRidge = property(_getUseRockRidge, _setUseRockRidge, None, "Indicates whether to use RockRidge (default is C{True}).") applicationId = property(_getApplicationId, _setApplicationId, None, "Optionally specifies the ISO header application id value.") biblioFile = property(_getBiblioFile, _setBiblioFile, None, "Optionally specifies the ISO bibliographic file name.") publisherId = property(_getPublisherId, _setPublisherId, None, "Optionally specifies the ISO header publisher id value.") preparerId = property(_getPreparerId, _setPreparerId, None, "Optionally specifies the ISO header preparer id value.") volumeId = property(_getVolumeId, _setVolumeId, None, "Optionally specifies the ISO header volume id value.") ######################### # General public methods ######################### def addEntry(self, path, graftPoint=None, override=False, contentsOnly=False): """ Adds an individual file or directory into the ISO image. The path must exist and must be a file or a directory. By default, the entry will be placed into the image at the root directory, but this behavior can be overridden using the C{graftPoint} parameter or instance variable. You can use the C{contentsOnly} behavior to revert to the "original" C{mkisofs} behavior for adding directories, which is to add only the items within the directory, and not the directory itself. @note: Things get I{odd} if you try to add a directory to an image that will be written to a multisession disc, and the same directory already exists in an earlier session on that disc. Not all of the data gets written. 
      You really wouldn't want to do this anyway, I guess.

      @note: An exception will be thrown if the path has already been added to
      the image, unless the C{override} parameter is set to C{True}.

      @note: The method C{graftPoint} parameter overrides the object-wide
      instance variable.  If neither the method parameter nor the object-wide
      value is set, the path will be written at the image root.  The graft
      point behavior is determined by the value which is in effect I{at the
      time this method is called}, so you I{must} set the object-wide value
      before calling this method for the first time, or your image may not be
      consistent.

      @note: You I{cannot} use the local C{graftPoint} parameter to "turn off"
      an object-wide instance variable by setting it to C{None}.  Python's
      default argument functionality buys us a lot, but it can't make this
      method psychic. :)

      @param path: File or directory to be added to the image
      @type path: String representing a path on disk

      @param graftPoint: Graft point to be used when adding this entry
      @type graftPoint: String representing a graft point path, as described above

      @param override: Override an existing entry with the same path.
      @type override: Boolean true/false

      @param contentsOnly: Add directory contents only (standard C{mkisofs} behavior).
      @type contentsOnly: Boolean true/false

      @raise ValueError: If path is not a file or directory, or does not exist.
      @raise ValueError: If the path has already been added, and override is not set.
      @raise ValueError: If a path cannot be encoded properly.
""" path = encodePath(path) if not override: if path in list(self.entries.keys()): raise ValueError("Path has already been added to the image.") if os.path.islink(path): raise ValueError("Path must not be a link.") if os.path.isdir(path): if graftPoint is not None: if contentsOnly: self.entries[path] = graftPoint else: self.entries[path] = os.path.join(graftPoint, os.path.basename(path)) elif self.graftPoint is not None: if contentsOnly: self.entries[path] = self.graftPoint else: self.entries[path] = os.path.join(self.graftPoint, os.path.basename(path)) else: if contentsOnly: self.entries[path] = None else: self.entries[path] = os.path.basename(path) elif os.path.isfile(path): if graftPoint is not None: self.entries[path] = graftPoint elif self.graftPoint is not None: self.entries[path] = self.graftPoint else: self.entries[path] = None else: raise ValueError("Path must be a file or a directory.") def getEstimatedSize(self): """ Returns the estimated size (in bytes) of the ISO image. This is implemented via the C{-print-size} option to C{mkisofs}, so it might take a bit of time to execute. However, the result is as accurate as we can get, since it takes into account all of the ISO overhead, the true cost of directories in the structure, etc, etc. @return: Estimated size of the image, in bytes. @raise IOError: If there is a problem calling C{mkisofs}. @raise ValueError: If there are no filesystem entries in the image """ if len(list(self.entries.keys())) == 0: raise ValueError("Image does not contain any entries.") return self._getEstimatedSize(self.entries) def _getEstimatedSize(self, entries): """ Returns the estimated size (in bytes) for the passed-in entries dictionary. @return: Estimated size of the image, in bytes. @raise IOError: If there is a problem calling C{mkisofs}. 
""" args = self._buildSizeArgs(entries) command = resolveCommand(MKISOFS_COMMAND) (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True) if result != 0: raise IOError("Error (%d) executing mkisofs command to estimate size." % result) if len(output) != 1: raise IOError("Unable to parse mkisofs output.") try: sectors = float(output[0]) size = convertSize(sectors, UNIT_SECTORS, UNIT_BYTES) return size except: raise IOError("Unable to parse mkisofs output.") def writeImage(self, imagePath): """ Writes this image to disk using the image path. @param imagePath: Path to write image out as @type imagePath: String representing a path on disk @raise IOError: If there is an error writing the image to disk. @raise ValueError: If there are no filesystem entries in the image @raise ValueError: If a path cannot be encoded properly. """ imagePath = encodePath(imagePath) if len(list(self.entries.keys())) == 0: raise ValueError("Image does not contain any entries.") args = self._buildWriteArgs(self.entries, imagePath) command = resolveCommand(MKISOFS_COMMAND) (result, output) = executeCommand(command, args, returnOutput=False) if result != 0: raise IOError("Error (%d) executing mkisofs command to build image." % result) ######################################### # Methods used to build mkisofs commands ######################################### @staticmethod def _buildDirEntries(entries): """ Uses an entries dictionary to build a list of directory locations for use by C{mkisofs}. We build a list of entries that can be passed to C{mkisofs}. Each entry is either raw (if no graft point was configured) or in graft-point form as described above (if a graft point was configured). The dictionary keys are the path names, and the values are the graft points, if any. @param entries: Dictionary of image entries (i.e. 
self.entries) @return: List of directory locations for use by C{mkisofs} """ dirEntries = [] for key in list(entries.keys()): if entries[key] is None: dirEntries.append(key) else: dirEntries.append("%s/=%s" % (entries[key].strip("/"), key)) return dirEntries def _buildGeneralArgs(self): """ Builds a list of general arguments to be passed to a C{mkisofs} command. The various instance variables (C{applicationId}, etc.) are filled into the list of arguments if they are set. By default, we will build a RockRidge disc. If you decide to change this, think hard about whether you know what you're doing. This option is not well-tested. @return: List suitable for passing to L{util.executeCommand} as C{args}. """ args = [] if self.applicationId is not None: args.append("-A") args.append(self.applicationId) if self.biblioFile is not None: args.append("-biblio") args.append(self.biblioFile) if self.publisherId is not None: args.append("-publisher") args.append(self.publisherId) if self.preparerId is not None: args.append("-p") args.append(self.preparerId) if self.volumeId is not None: args.append("-V") args.append(self.volumeId) return args def _buildSizeArgs(self, entries): """ Builds a list of arguments to be passed to a C{mkisofs} command. The various instance variables (C{applicationId}, etc.) are filled into the list of arguments if they are set. The command will be built to just return size output (a simple count of sectors via the C{-print-size} option), rather than an image file on disk. By default, we will build a RockRidge disc. If you decide to change this, think hard about whether you know what you're doing. This option is not well-tested. @param entries: Dictionary of image entries (i.e. self.entries) @return: List suitable for passing to L{util.executeCommand} as C{args}. 
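      For example, for a single file entry C{/etc/profile} added with graft
      point C{base} (and no optional ISO header values set), the resulting
      argument list would be::

         [ '-print-size', '-graft-points', '-r', 'base/=/etc/profile' ]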
""" args = self._buildGeneralArgs() args.append("-print-size") args.append("-graft-points") if self.useRockRidge: args.append("-r") if self.device is not None and self.boundaries is not None: args.append("-C") args.append("%d,%d" % (self.boundaries[0], self.boundaries[1])) args.append("-M") args.append(self.device) args.extend(self._buildDirEntries(entries)) return args def _buildWriteArgs(self, entries, imagePath): """ Builds a list of arguments to be passed to a C{mkisofs} command. The various instance variables (C{applicationId}, etc.) are filled into the list of arguments if they are set. The command will be built to write an image to disk. By default, we will build a RockRidge disc. If you decide to change this, think hard about whether you know what you're doing. This option is not well-tested. @param entries: Dictionary of image entries (i.e. self.entries) @param imagePath: Path to write image out as @type imagePath: String representing a path on disk @return: List suitable for passing to L{util.executeCommand} as C{args}. """ args = self._buildGeneralArgs() args.append("-graft-points") if self.useRockRidge: args.append("-r") args.append("-o") args.append(imagePath) if self.device is not None and self.boundaries is not None: args.append("-C") args.append("%d,%d" % (self.boundaries[0], self.boundaries[1])) args.append("-M") args.append(self.device) args.extend(self._buildDirEntries(entries)) return args CedarBackup3-3.1.6/CedarBackup3/writers/cdwriter.py0000664000175000017500000015163312642030773023632 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2008,2010,2015 Kenneth J. Pronovici. # All rights reserved. 
# # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Provides functionality related to CD writer devices. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides functionality related to CD writer devices. @sort: MediaDefinition, MediaCapacity, CdWriter, MEDIA_CDRW_74, MEDIA_CDR_74, MEDIA_CDRW_80, MEDIA_CDR_80 @var MEDIA_CDRW_74: Constant representing 74-minute CD-RW media. @var MEDIA_CDR_74: Constant representing 74-minute CD-R media. @var MEDIA_CDRW_80: Constant representing 80-minute CD-RW media. @var MEDIA_CDR_80: Constant representing 80-minute CD-R media. @author: Kenneth J. 
Pronovici
"""

########################################################################
# Imported modules
########################################################################

# System modules
import os
import re
import logging
import tempfile
import time

# Cedar Backup modules
from CedarBackup3.util import resolveCommand, executeCommand
from CedarBackup3.util import convertSize, displayBytes, encodePath
from CedarBackup3.util import UNIT_SECTORS, UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES
from CedarBackup3.writers.util import validateDevice, validateScsiId, validateDriveSpeed
from CedarBackup3.writers.util import IsoImage


########################################################################
# Module-wide constants and variables
########################################################################

logger = logging.getLogger("CedarBackup3.log.writers.cdwriter")

MEDIA_CDRW_74 = 1
MEDIA_CDR_74  = 2
MEDIA_CDRW_80 = 3
MEDIA_CDR_80  = 4

CDRECORD_COMMAND = [ "cdrecord", ]
EJECT_COMMAND    = [ "eject", ]
MKISOFS_COMMAND  = [ "mkisofs", ]


########################################################################
# MediaDefinition class definition
########################################################################

class MediaDefinition(object):

   """
   Class encapsulating information about CD media definitions.

   The following media types are accepted:

      - C{MEDIA_CDR_74}: 74-minute CD-R media (650 MB capacity)
      - C{MEDIA_CDRW_74}: 74-minute CD-RW media (650 MB capacity)
      - C{MEDIA_CDR_80}: 80-minute CD-R media (700 MB capacity)
      - C{MEDIA_CDRW_80}: 80-minute CD-RW media (700 MB capacity)

   Note that all of the capacities associated with a media definition are in
   terms of ISO sectors (C{util.ISO_SECTOR_SIZE}).

   @sort: __init__, mediaType, rewritable, initialLeadIn, leadIn, capacity
   """

   def __init__(self, mediaType):
      """
      Creates a media definition for the indicated media type.
      @param mediaType: Type of the media, as discussed above.
@raise ValueError: If the media type is unknown or unsupported. """ self._mediaType = None self._rewritable = False self._initialLeadIn = 0. self._leadIn = 0.0 self._capacity = 0.0 self._setValues(mediaType) def _setValues(self, mediaType): """ Sets values based on media type. @param mediaType: Type of the media, as discussed above. @raise ValueError: If the media type is unknown or unsupported. """ if mediaType not in [MEDIA_CDR_74, MEDIA_CDRW_74, MEDIA_CDR_80, MEDIA_CDRW_80]: raise ValueError("Invalid media type %d." % mediaType) self._mediaType = mediaType self._initialLeadIn = 11400.0 # per cdrecord's documentation self._leadIn = 6900.0 # per cdrecord's documentation if self._mediaType == MEDIA_CDR_74: self._rewritable = False self._capacity = convertSize(650.0, UNIT_MBYTES, UNIT_SECTORS) elif self._mediaType == MEDIA_CDRW_74: self._rewritable = True self._capacity = convertSize(650.0, UNIT_MBYTES, UNIT_SECTORS) elif self._mediaType == MEDIA_CDR_80: self._rewritable = False self._capacity = convertSize(700.0, UNIT_MBYTES, UNIT_SECTORS) elif self._mediaType == MEDIA_CDRW_80: self._rewritable = True self._capacity = convertSize(700.0, UNIT_MBYTES, UNIT_SECTORS) def _getMediaType(self): """ Property target used to get the media type value. """ return self._mediaType def _getRewritable(self): """ Property target used to get the rewritable flag value. """ return self._rewritable def _getInitialLeadIn(self): """ Property target used to get the initial lead-in value. """ return self._initialLeadIn def _getLeadIn(self): """ Property target used to get the lead-in value. """ return self._leadIn def _getCapacity(self): """ Property target used to get the capacity value. 
""" return self._capacity mediaType = property(_getMediaType, None, None, doc="Configured media type.") rewritable = property(_getRewritable, None, None, doc="Boolean indicating whether the media is rewritable.") initialLeadIn = property(_getInitialLeadIn, None, None, doc="Initial lead-in required for first image written to media.") leadIn = property(_getLeadIn, None, None, doc="Lead-in required on successive images written to media.") capacity = property(_getCapacity, None, None, doc="Total capacity of the media before any required lead-in.") ######################################################################## # MediaCapacity class definition ######################################################################## class MediaCapacity(object): """ Class encapsulating information about CD media capacity. Space used includes the required media lead-in (unless the disk is unused). Space available attempts to provide a picture of how many bytes are available for data storage, including any required lead-in. The boundaries value is either C{None} (if multisession discs are not supported or if the disc has no boundaries) or in exactly the form provided by C{cdrecord -msinfo}. It can be passed as-is to the C{IsoImage} class. @sort: __init__, bytesUsed, bytesAvailable, boundaries, totalCapacity, utilized """ def __init__(self, bytesUsed, bytesAvailable, boundaries): """ Initializes a capacity object. @raise IndexError: If the boundaries tuple does not have enough elements. @raise ValueError: If the boundaries values are not integers. @raise ValueError: If the bytes used and available values are not floats. """ self._bytesUsed = float(bytesUsed) self._bytesAvailable = float(bytesAvailable) if boundaries is None: self._boundaries = None else: self._boundaries = (int(boundaries[0]), int(boundaries[1])) def __str__(self): """ Informal string representation for class instance. 
""" return "utilized %s of %s (%.2f%%)" % (displayBytes(self.bytesUsed), displayBytes(self.totalCapacity), self.utilized) def _getBytesUsed(self): """ Property target to get the bytes-used value. """ return self._bytesUsed def _getBytesAvailable(self): """ Property target to get the bytes-available value. """ return self._bytesAvailable def _getBoundaries(self): """ Property target to get the boundaries tuple. """ return self._boundaries def _getTotalCapacity(self): """ Property target to get the total capacity (used + available). """ return self.bytesUsed + self.bytesAvailable def _getUtilized(self): """ Property target to get the percent of capacity which is utilized. """ if self.bytesAvailable <= 0.0: return 100.0 elif self.bytesUsed <= 0.0: return 0.0 return (self.bytesUsed / self.totalCapacity) * 100.0 bytesUsed = property(_getBytesUsed, None, None, doc="Space used on disc, in bytes.") bytesAvailable = property(_getBytesAvailable, None, None, doc="Space available on disc, in bytes.") boundaries = property(_getBoundaries, None, None, doc="Session disc boundaries, in terms of ISO sectors.") totalCapacity = property(_getTotalCapacity, None, None, doc="Total capacity of the disc, in bytes.") utilized = property(_getUtilized, None, None, "Percentage of the total capacity which is utilized.") ######################################################################## # _ImageProperties class definition ######################################################################## class _ImageProperties(object): """ Simple value object to hold image properties for C{DvdWriter}. 
""" def __init__(self): self.newDisc = False self.tmpdir = None self.mediaLabel = None self.entries = None # dict mapping path to graft point ######################################################################## # CdWriter class definition ######################################################################## class CdWriter(object): ###################### # Class documentation ###################### """ Class representing a device that knows how to write CD media. Summary ======= This is a class representing a device that knows how to write CD media. It provides common operations for the device, such as ejecting the media, writing an ISO image to the media, or checking for the current media capacity. It also provides a place to store device attributes, such as whether the device supports writing multisession discs, etc. This class is implemented in terms of the C{eject} and C{cdrecord} programs, both of which should be available on most UN*X platforms. Image Writer Interface ====================== The following methods make up the "image writer" interface shared with other kinds of writers (such as DVD writers):: __init__ initializeImage() addImageEntry() writeImage() setImageNewDisc() retrieveCapacity() getEstimatedImageSize() Only these methods will be used by other Cedar Backup functionality that expects a compatible image writer. The media attribute is also assumed to be available. Media Types =========== This class knows how to write to two different kinds of media, represented by the following constants: - C{MEDIA_CDR_74}: 74-minute CD-R media (650 MB capacity) - C{MEDIA_CDRW_74}: 74-minute CD-RW media (650 MB capacity) - C{MEDIA_CDR_80}: 80-minute CD-R media (700 MB capacity) - C{MEDIA_CDRW_80}: 80-minute CD-RW media (700 MB capacity) Most hardware can read and write both 74-minute and 80-minute CD-R and CD-RW media. Some older drives may only be able to write CD-R media. 
The difference between the two is that CD-RW media can be rewritten (erased), while CD-R media cannot be. I do not support any other configurations for a couple of reasons. The first is that I've never tested any other kind of media. The second is that anything other than 74 or 80 minute is apparently non-standard. Device Attributes vs. Media Attributes ====================================== A given writer instance has two different kinds of attributes associated with it, which I call device attributes and media attributes. Device attributes are things which can be determined without looking at the media, such as whether the drive supports writing multisession disks or has a tray. Media attributes are attributes which vary depending on the state of the media, such as the remaining capacity on a disc. In general, device attributes are available via instance variables and are constant over the life of an object, while media attributes can be retrieved through method calls. Talking to Hardware =================== This class needs to talk to CD writer hardware in two different ways: through cdrecord to actually write to the media, and through the filesystem to do things like open and close the tray. Historically, CdWriter has interacted with cdrecord using the scsiId attribute, and with most other utilities using the device attribute. This changed somewhat in Cedar Backup 2.9.0. When Cedar Backup was first written, the only way to interact with cdrecord was by using a SCSI device id. IDE devices were mapped to pseudo-SCSI devices through the kernel. Later, extended SCSI "methods" arrived, and it became common to see C{ATA:1,0,0} or C{ATAPI:0,0,0} as a way to address IDE hardware. By late 2006, C{ATA} and C{ATAPI} had apparently been deprecated in favor of just addressing the IDE device directly by name, i.e. C{/dev/cdrw}. Because of this latest development, it no longer makes sense to require a CdWriter to be created with a SCSI id -- there might not be one. 
So, the passed-in SCSI id is now optional. Also, there is now a hardwareId attribute. This attribute is filled in with either the SCSI id (if provided) or the device (otherwise). The hardware id is the value that will be passed to cdrecord in the C{dev=} argument. Testing ======= It's rather difficult to test this code in an automated fashion, even if you have access to a physical CD writer drive. It's even more difficult to test it if you are running on some build daemon (think of a Debian autobuilder) which can't be expected to have any hardware or any media that you could write to. Because of this, much of the implementation below is in terms of static methods that are supposed to take defined actions based on their arguments. Public methods are then implemented in terms of a series of calls to simplistic static methods. This way, we can test as much as possible of the functionality via testing the static methods, while hoping that if the static methods are called appropriately, things will work properly. It's not perfect, but it's much better than no testing at all. @sort: __init__, isRewritable, _retrieveProperties, retrieveCapacity, _getBoundaries, _calculateCapacity, openTray, closeTray, refreshMedia, writeImage, _blankMedia, _parsePropertiesOutput, _parseBoundariesOutput, _buildOpenTrayArgs, _buildCloseTrayArgs, _buildPropertiesArgs, _buildBoundariesArgs, _buildBlankArgs, _buildWriteArgs, device, scsiId, hardwareId, driveSpeed, media, deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject, initializeImage, addImageEntry, writeImage, setImageNewDisc, getEstimatedImageSize """ ############## # Constructor ############## def __init__(self, device, scsiId=None, driveSpeed=None, mediaType=MEDIA_CDRW_74, noEject=False, refreshMediaDelay=0, ejectDelay=0, unittest=False): """ Initializes a CD writer object. 
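The hardware-id fallback described above is simple enough to sketch in isolation. This is an illustrative stand-in (the name C{hardware_id} is not part of the module): the value passed to cdrecord's C{dev=} argument is the SCSI id when one was provided, and the plain device path otherwise.

```python
# Sketch of the hardwareId attribute's fallback behavior: prefer the
# SCSI id if given, otherwise fall back to the device path.
def hardware_id(device, scsi_id=None):
    return scsi_id if scsi_id is not None else device
```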
The current user must have write access to the device at the time the object is instantiated, or an exception will be thrown. However, no media-related validation is done, and in fact there is no need for any media to be in the drive until one of the other media attribute-related methods is called. The various instance variables such as C{deviceType}, C{deviceVendor}, etc. might be C{None}, if we're unable to parse this specific information from the C{cdrecord} output. This information is just for reference. The SCSI id is optional, but the device path is required. If the SCSI id is passed in, then the hardware id attribute will be taken from the SCSI id. Otherwise, the hardware id will be taken from the device. If cdrecord improperly detects whether your writer device has a tray and can be safely opened and closed, then pass in C{noEject=True}. This will override the properties and the device will never be ejected. @note: The C{unittest} parameter should never be set to C{True} outside of Cedar Backup code. It is intended for use in unit testing Cedar Backup internals and has no other sensible purpose. @param device: Filesystem device associated with this writer. @type device: Absolute path to a filesystem device, i.e. C{/dev/cdrw} @param scsiId: SCSI id for the device (optional). @type scsiId: If provided, SCSI id in the form C{[:]scsibus,target,lun} @param driveSpeed: Speed at which the drive writes. @type driveSpeed: Use C{2} for 2x device, etc. or C{None} to use device default. @param mediaType: Type of the media that is assumed to be in the drive. @type mediaType: One of the valid media types as discussed above. @param noEject: Overrides properties to indicate that the device does not support eject. 
@type noEject: Boolean true/false @param refreshMediaDelay: Refresh media delay to use, if any @type refreshMediaDelay: Number of seconds, an integer >= 0 @param ejectDelay: Eject delay to use, if any @type ejectDelay: Number of seconds, an integer >= 0 @param unittest: Turns off certain validations, for use in unit testing. @type unittest: Boolean true/false @raise ValueError: If the device is not valid for some reason. @raise ValueError: If the SCSI id is not in a valid form. @raise ValueError: If the drive speed is not an integer >= 1. @raise IOError: If device properties could not be read for some reason. """ self._image = None # optionally filled in by initializeImage() self._device = validateDevice(device, unittest) self._scsiId = validateScsiId(scsiId) self._driveSpeed = validateDriveSpeed(driveSpeed) self._media = MediaDefinition(mediaType) self._noEject = noEject self._refreshMediaDelay = refreshMediaDelay self._ejectDelay = ejectDelay if not unittest: (self._deviceType, self._deviceVendor, self._deviceId, self._deviceBufferSize, self._deviceSupportsMulti, self._deviceHasTray, self._deviceCanEject) = self._retrieveProperties() ############# # Properties ############# def _getDevice(self): """ Property target used to get the device value. """ return self._device def _getScsiId(self): """ Property target used to get the SCSI id value. """ return self._scsiId def _getHardwareId(self): """ Property target used to get the hardware id value. """ if self._scsiId is None: return self._device return self._scsiId def _getDriveSpeed(self): """ Property target used to get the drive speed. """ return self._driveSpeed def _getMedia(self): """ Property target used to get the media description. """ return self._media def _getDeviceType(self): """ Property target used to get the device type. """ return self._deviceType def _getDeviceVendor(self): """ Property target used to get the device vendor. 
""" return self._deviceVendor def _getDeviceId(self): """ Property target used to get the device id. """ return self._deviceId def _getDeviceBufferSize(self): """ Property target used to get the device buffer size. """ return self._deviceBufferSize def _getDeviceSupportsMulti(self): """ Property target used to get the device-support-multi flag. """ return self._deviceSupportsMulti def _getDeviceHasTray(self): """ Property target used to get the device-has-tray flag. """ return self._deviceHasTray def _getDeviceCanEject(self): """ Property target used to get the device-can-eject flag. """ return self._deviceCanEject def _getRefreshMediaDelay(self): """ Property target used to get the configured refresh media delay, in seconds. """ return self._refreshMediaDelay def _getEjectDelay(self): """ Property target used to get the configured eject delay, in seconds. """ return self._ejectDelay device = property(_getDevice, None, None, doc="Filesystem device name for this writer.") scsiId = property(_getScsiId, None, None, doc="SCSI id for the device, in the form C{[:]scsibus,target,lun}.") hardwareId = property(_getHardwareId, None, None, doc="Hardware id for this writer, either SCSI id or device path.") driveSpeed = property(_getDriveSpeed, None, None, doc="Speed at which the drive writes.") media = property(_getMedia, None, None, doc="Definition of media that is expected to be in the device.") deviceType = property(_getDeviceType, None, None, doc="Type of the device, as returned from C{cdrecord -prcap}.") deviceVendor = property(_getDeviceVendor, None, None, doc="Vendor of the device, as returned from C{cdrecord -prcap}.") deviceId = property(_getDeviceId, None, None, doc="Device identification, as returned from C{cdrecord -prcap}.") deviceBufferSize = property(_getDeviceBufferSize, None, None, doc="Size of the device's write buffer, in bytes.") deviceSupportsMulti = property(_getDeviceSupportsMulti, None, None, doc="Indicates whether device supports multisession discs.") 
deviceHasTray = property(_getDeviceHasTray, None, None, doc="Indicates whether the device has a media tray.") deviceCanEject = property(_getDeviceCanEject, None, None, doc="Indicates whether the device supports ejecting its media.") refreshMediaDelay = property(_getRefreshMediaDelay, None, None, doc="Refresh media delay, in seconds.") ejectDelay = property(_getEjectDelay, None, None, doc="Eject delay, in seconds.") ################################################# # Methods related to device and media attributes ################################################# def isRewritable(self): """Indicates whether the media is rewritable per configuration.""" return self._media.rewritable def _retrieveProperties(self): """ Retrieves properties for a device from C{cdrecord}. The results are returned as a tuple of the object device attributes as returned from L{_parsePropertiesOutput}: C{(deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject)}. @return: Results tuple as described above. @raise IOError: If there is a problem talking to the device. """ args = CdWriter._buildPropertiesArgs(self.hardwareId) command = resolveCommand(CDRECORD_COMMAND) (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True) if result != 0: raise IOError("Error (%d) executing cdrecord command to get properties." % result) return CdWriter._parsePropertiesOutput(output) def retrieveCapacity(self, entireDisc=False, useMulti=True): """ Retrieves capacity for the current media in terms of a C{MediaCapacity} object. If C{entireDisc} is passed in as C{True} the capacity will be for the entire disc, as if it were to be rewritten from scratch. If the drive does not support writing multisession discs or if C{useMulti} is passed in as C{False}, the capacity will also be as if the disc were to be rewritten from scratch, but the indicated boundaries value will be C{None}. 
The same will happen if the disc cannot be read for some reason. Otherwise, the capacity (including the boundaries) will represent whatever space remains on the disc to be filled by future sessions. @param entireDisc: Indicates whether to return capacity for entire disc. @type entireDisc: Boolean true/false @param useMulti: Indicates whether a multisession disc should be assumed, if possible. @type useMulti: Boolean true/false @return: C{MediaCapacity} object describing the capacity of the media. @raise IOError: If the media could not be read for some reason. """ boundaries = self._getBoundaries(entireDisc, useMulti) return CdWriter._calculateCapacity(self._media, boundaries) def _getBoundaries(self, entireDisc=False, useMulti=True): """ Gets the ISO boundaries for the media. If C{entireDisc} is passed in as C{True} the boundaries will be C{None}, as if the disc were to be rewritten from scratch. If the drive does not support writing multisession discs, the returned value will be C{None}. The same will happen if the disc can't be read for some reason. Otherwise, the returned value will represent the boundaries of the disc's current contents. The results are returned as a tuple of (lower, upper) as needed by the C{IsoImage} class. Note that these values are in terms of ISO sectors, not bytes. Clients should generally consider the boundaries value opaque, however. @param entireDisc: Indicates whether to return capacity for entire disc. @type entireDisc: Boolean true/false @param useMulti: Indicates whether a multisession disc should be assumed, if possible. @type useMulti: Boolean true/false @return: Boundaries tuple or C{None}, as described above. @raise IOError: If the media could not be read for some reason. 
""" if not self._deviceSupportsMulti: logger.debug("Device does not support multisession discs; returning boundaries None.") return None elif not useMulti: logger.debug("Use multisession flag is False; returning boundaries None.") return None elif entireDisc: logger.debug("Entire disc flag is True; returning boundaries None.") return None else: args = CdWriter._buildBoundariesArgs(self.hardwareId) command = resolveCommand(CDRECORD_COMMAND) (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True) if result != 0: logger.debug("Error (%d) executing cdrecord command to get capacity.", result) logger.warning("Unable to read disc (might not be initialized); returning boundaries of None.") return None boundaries = CdWriter._parseBoundariesOutput(output) if boundaries is None: logger.debug("Returning disc boundaries: None") else: logger.debug("Returning disc boundaries: (%d, %d)", boundaries[0], boundaries[1]) return boundaries @staticmethod def _calculateCapacity(media, boundaries): """ Calculates capacity for the media in terms of boundaries. If C{boundaries} is C{None} or the lower bound is 0 (zero), then the capacity will be for the entire disc minus the initial lead in. Otherwise, capacity will be as if the caller wanted to add an additional session to the end of the existing data on the disc. @param media: MediaDescription object describing the media capacity. @param boundaries: Session boundaries as returned from L{_getBoundaries}. @return: C{MediaCapacity} object describing the capacity of the media. 
""" if boundaries is None or boundaries[1] == 0: logger.debug("Capacity calculations are based on a complete disc rewrite.") sectorsAvailable = media.capacity - media.initialLeadIn if sectorsAvailable < 0: sectorsAvailable = 0.0 bytesUsed = 0.0 bytesAvailable = convertSize(sectorsAvailable, UNIT_SECTORS, UNIT_BYTES) else: logger.debug("Capacity calculations are based on a new ISO session.") sectorsAvailable = media.capacity - boundaries[1] - media.leadIn if sectorsAvailable < 0: sectorsAvailable = 0.0 bytesUsed = convertSize(boundaries[1], UNIT_SECTORS, UNIT_BYTES) bytesAvailable = convertSize(sectorsAvailable, UNIT_SECTORS, UNIT_BYTES) logger.debug("Used [%s], available [%s].", displayBytes(bytesUsed), displayBytes(bytesAvailable)) return MediaCapacity(bytesUsed, bytesAvailable, boundaries) ####################################################### # Methods used for working with the internal ISO image ####################################################### def initializeImage(self, newDisc, tmpdir, mediaLabel=None): """ Initializes the writer's associated ISO image. This method initializes the C{image} instance variable so that the caller can use the C{addImageEntry} method. Once entries have been added, the C{writeImage} method can be called with no arguments. @param newDisc: Indicates whether the disc should be re-initialized @type newDisc: Boolean true/false. @param tmpdir: Temporary directory to use if needed @type tmpdir: String representing a directory path on disk @param mediaLabel: Media label to be applied to the image, if any @type mediaLabel: String, no more than 25 characters long """ self._image = _ImageProperties() self._image.newDisc = newDisc self._image.tmpdir = encodePath(tmpdir) self._image.mediaLabel = mediaLabel self._image.entries = {} # mapping from path to graft point (if any) def addImageEntry(self, path, graftPoint): """ Adds a filepath entry to the writer's associated ISO image. 
The contents of the filepath -- but not the path itself -- will be added to the image at the indicated graft point. If you don't want to use a graft point, just pass C{None}. @note: Before calling this method, you must call L{initializeImage}. @param path: File or directory to be added to the image @type path: String representing a path on disk @param graftPoint: Graft point to be used when adding this entry @type graftPoint: String representing a graft point path, as described above @raise ValueError: If initializeImage() was not previously called """ if self._image is None: raise ValueError("Must call initializeImage() before using this method.") if not os.path.exists(path): raise ValueError("Path [%s] does not exist." % path) self._image.entries[path] = graftPoint def setImageNewDisc(self, newDisc): """ Resets (overrides) the newDisc flag on the internal image. @param newDisc: New disc flag to set @raise ValueError: If initializeImage() was not previously called """ if self._image is None: raise ValueError("Must call initializeImage() before using this method.") self._image.newDisc = newDisc def getEstimatedImageSize(self): """ Gets the estimated size of the image associated with the writer. @return: Estimated size of the image, in bytes. @raise IOError: If there is a problem calling C{mkisofs}. @raise ValueError: If initializeImage() was not previously called """ if self._image is None: raise ValueError("Must call initializeImage() before using this method.") image = IsoImage() for path in list(self._image.entries.keys()): image.addEntry(path, self._image.entries[path], override=False, contentsOnly=True) return image.getEstimatedSize() ###################################### # Methods which expose device actions ###################################### def openTray(self): """ Opens the device's tray and leaves it open. This only works if the device has a tray and supports ejecting its media. 
We have no way to know if the tray is currently open or closed, so we just send the appropriate command and hope for the best. If the device does not have a tray or does not support ejecting its media, then we do nothing. If the writer was constructed with C{noEject=True}, then this is a no-op. Starting with Debian wheezy on my backup hardware, I started seeing consistent problems with the eject command. I couldn't tell whether these problems were due to the device management system or to the new kernel (3.2.0). Initially, I saw simple eject failures, possibly because I was opening and closing the tray too quickly. I worked around that behavior with the new ejectDelay flag. Later, I sometimes ran into issues after writing an image to a disc: eject would give errors like "unable to eject, last error: Inappropriate ioctl for device". Various sources online (like Ubuntu bug #875543) suggested that the drive was being locked somehow, and that the workaround was to run 'eject -i off' to unlock it. Sure enough, that fixed the problem for me, so now it's a normal error-handling strategy. @raise IOError: If there is an error talking to the device. """ if not self._noEject: if self._deviceHasTray and self._deviceCanEject: args = CdWriter._buildOpenTrayArgs(self._device) result = executeCommand(EJECT_COMMAND, args)[0] if result != 0: logger.debug("Eject failed; attempting kludge of unlocking the tray before retrying.") self.unlockTray() result = executeCommand(EJECT_COMMAND, args)[0] if result != 0: raise IOError("Error (%d) executing eject command to open tray (failed even after unlocking tray)." % result) logger.debug("Kludge was apparently successful.") if self.ejectDelay is not None: logger.debug("Per configuration, sleeping %d seconds after opening tray.", self.ejectDelay) time.sleep(self.ejectDelay) def unlockTray(self): """ Unlocks the device's tray. @raise IOError: If there is an error talking to the device. 
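The unlock-and-retry error-handling strategy described above can be expressed generically. In this sketch, C{run} is a stand-in for the module's C{executeCommand} (it takes an argument list and returns an exit code); the function name is illustrative.

```python
# Generic sketch of the eject kludge: attempt the eject, and on failure
# unlock the tray ('eject -i off') and retry once before giving up.
def eject_with_unlock_retry(run, device):
    if run(["eject", device]) != 0:
        run(["eject", "-i", "off", device])   # unlock the tray
        if run(["eject", device]) != 0:
            raise IOError("eject failed even after unlocking tray")
```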
""" args = CdWriter._buildUnlockTrayArgs(self._device) command = resolveCommand(EJECT_COMMAND) result = executeCommand(command, args)[0] if result != 0: raise IOError("Error (%d) executing eject command to unlock tray." % result) def closeTray(self): """ Closes the device's tray. This only works if the device has a tray and supports ejecting its media. We have no way to know if the tray is currently open or closed, so we just send the appropriate command and hope for the best. If the device does not have a tray or does not support ejecting its media, then we do nothing. If the writer was constructed with C{noEject=True}, then this is a no-op. @raise IOError: If there is an error talking to the device. """ if not self._noEject: if self._deviceHasTray and self._deviceCanEject: args = CdWriter._buildCloseTrayArgs(self._device) command = resolveCommand(EJECT_COMMAND) result = executeCommand(command, args)[0] if result != 0: raise IOError("Error (%d) executing eject command to close tray." % result) def refreshMedia(self): """ Opens and then immediately closes the device's tray, to refresh the device's idea of the media. Sometimes, a device gets confused about the state of its media. Often, all it takes to solve the problem is to eject the media and then immediately reload it. (There are also configurable eject and refresh media delays which can be applied, for situations where this makes a difference.) This only works if the device has a tray and supports ejecting its media. We have no way to know if the tray is currently open or closed, so we just send the appropriate command and hope for the best. If the device does not have a tray or does not support ejecting its media, then we do nothing. The configured delays still apply, though. @raise IOError: If there is an error talking to the device. """ self.openTray() self.closeTray() self.unlockTray() # on some systems, writing a disc leaves the tray locked, yikes! 
if self.refreshMediaDelay is not None: logger.debug("Per configuration, sleeping %d seconds to stabilize media state.", self.refreshMediaDelay) time.sleep(self.refreshMediaDelay) logger.debug("Media refresh complete; hopefully media state is stable now.") def writeImage(self, imagePath=None, newDisc=False, writeMulti=True): """ Writes an ISO image to the media in the device. If C{newDisc} is passed in as C{True}, we assume that the entire disc will be overwritten, and the media will be blanked before writing it if possible (i.e. if the media is rewritable). If C{writeMulti} is passed in as C{True}, then a multisession disc will be written if possible (i.e. if the drive supports writing multisession discs). If C{imagePath} is passed in as C{None}, then the existing image configured with C{initializeImage} will be used. Under these circumstances, the passed-in C{newDisc} flag will be ignored. By default, we assume that the disc can be written multisession and that we should append to the current contents of the disc. In any case, the ISO image must be generated appropriately (i.e. must take into account any existing session boundaries, etc.) @param imagePath: Path to an ISO image on disk, or C{None} to use writer's image @type imagePath: String representing a path on disk @param newDisc: Indicates whether the entire disc will be overwritten. @type newDisc: Boolean true/false. @param writeMulti: Indicates whether a multisession disc should be written, if possible. @type writeMulti: Boolean true/false @raise ValueError: If the image path is not absolute. @raise ValueError: If some path cannot be encoded properly. @raise IOError: If the media could not be written to for some reason. 
@raise ValueError: If no image is passed in and initializeImage() was not previously called """ if imagePath is None: if self._image is None: raise ValueError("Must call initializeImage() before using this method with no image path.") try: imagePath = self._createImage() self._writeImage(imagePath, writeMulti, self._image.newDisc) finally: if imagePath is not None and os.path.exists(imagePath): try: os.unlink(imagePath) except: pass else: imagePath = encodePath(imagePath) if not os.path.isabs(imagePath): raise ValueError("Image path must be absolute.") self._writeImage(imagePath, writeMulti, newDisc) def _createImage(self): """ Creates an ISO image based on configuration in self._image. @return: Path to the newly-created ISO image on disk. @raise IOError: If there is an error writing the image to disk. @raise ValueError: If there are no filesystem entries in the image @raise ValueError: If a path cannot be encoded properly. """ path = None capacity = self.retrieveCapacity(entireDisc=self._image.newDisc) image = IsoImage(self.device, capacity.boundaries) image.volumeId = self._image.mediaLabel # may be None, which is also valid for key in list(self._image.entries.keys()): image.addEntry(key, self._image.entries[key], override=False, contentsOnly=True) size = image.getEstimatedSize() logger.info("Image size will be %s.", displayBytes(size)) available = capacity.bytesAvailable logger.debug("Media capacity: %s", displayBytes(available)) if size > available: logger.error("Image [%s] does not fit in available capacity [%s].", displayBytes(size), displayBytes(available)) raise IOError("Media does not contain enough capacity to store image.") try: (handle, path) = tempfile.mkstemp(dir=self._image.tmpdir) try: os.close(handle) except: pass image.writeImage(path) logger.debug("Completed creating image [%s].", path) return path except Exception as e: if path is not None and os.path.exists(path): try: os.unlink(path) except: pass raise e def _writeImage(self, imagePath, 
writeMulti, newDisc): """ Writes an ISO image to disc using cdrecord. The disc is blanked first if C{newDisc} is C{True}. @param imagePath: Path to an ISO image on disk @param writeMulti: Indicates whether a multisession disc should be written, if possible. @param newDisc: Indicates whether the entire disc will be overwritten. """ if newDisc: self._blankMedia() args = CdWriter._buildWriteArgs(self.hardwareId, imagePath, self._driveSpeed, writeMulti and self._deviceSupportsMulti) command = resolveCommand(CDRECORD_COMMAND) result = executeCommand(command, args)[0] if result != 0: raise IOError("Error (%d) executing command to write disc." % result) self.refreshMedia() def _blankMedia(self): """ Blanks the media in the device, if the media is rewritable. @raise IOError: If the media could not be written to for some reason. """ if self.isRewritable(): args = CdWriter._buildBlankArgs(self.hardwareId) command = resolveCommand(CDRECORD_COMMAND) result = executeCommand(command, args)[0] if result != 0: raise IOError("Error (%d) executing command to blank disc." % result) self.refreshMedia() ####################################### # Methods used to parse command output ####################################### @staticmethod def _parsePropertiesOutput(output): """ Parses the output from a C{cdrecord} properties command. The C{output} parameter should be a list of strings as returned from C{executeCommand} for a C{cdrecord} command with arguments as from C{_buildPropertiesArgs}. The list of strings will be parsed to yield information about the properties of the device. The output is expected to be a huge long list of strings. Unfortunately, the strings aren't in a completely regular format. However, the format of individual lines seems to be regular enough that we can look for specific values. Two kinds of parsing take place: one kind of parsing picks out specific values like the device id, device vendor, etc. 
The other kind of parsing just sets a boolean flag C{True} if a matching line is found. All of the parsing is done with regular expressions. Right now, pretty much nothing in the output is required and we should parse an empty document successfully (albeit resulting in a device that can't eject, doesn't have a tray and doesn't support multisession discs). I had briefly considered erroring out if certain lines weren't found or couldn't be parsed, but that seems like a bad idea given that most of the information is just for reference. The results are returned as a tuple of the object device attributes: C{(deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject)}. @param output: Output from a C{cdrecord -prcap} command. @return: Results tuple as described above. @raise IOError: If there is a problem parsing the output. """ deviceType = None deviceVendor = None deviceId = None deviceBufferSize = None deviceSupportsMulti = False deviceHasTray = False deviceCanEject = False typePattern = re.compile(r"(^Device type\s*:\s*)(.*)(\s*)(.*$)") vendorPattern = re.compile(r"(^Vendor_info\s*:\s*'\s*)(.*?)(\s*')(.*$)") idPattern = re.compile(r"(^Identifikation\s*:\s*'\s*)(.*?)(\s*')(.*$)") bufferPattern = re.compile(r"(^\s*Buffer size in KB:\s*)(.*?)(\s*$)") multiPattern = re.compile(r"^\s*Does read multi-session.*$") trayPattern = re.compile(r"^\s*Loading mechanism type: tray.*$") ejectPattern = re.compile(r"^\s*Does support ejection.*$") for line in output: if typePattern.search(line): deviceType = typePattern.search(line).group(2) logger.info("Device type is [%s].", deviceType) elif vendorPattern.search(line): deviceVendor = vendorPattern.search(line).group(2) logger.info("Device vendor is [%s].", deviceVendor) elif idPattern.search(line): deviceId = idPattern.search(line).group(2) logger.info("Device id is [%s].", deviceId) elif bufferPattern.search(line): try: sectors = int(bufferPattern.search(line).group(2)) deviceBufferSize = 
convertSize(sectors, UNIT_KBYTES, UNIT_BYTES) logger.info("Device buffer size is [%d] bytes.", deviceBufferSize) except (TypeError, ValueError): pass elif multiPattern.search(line): deviceSupportsMulti = True logger.info("Device does support multisession discs.") elif trayPattern.search(line): deviceHasTray = True logger.info("Device has a tray.") elif ejectPattern.search(line): deviceCanEject = True logger.info("Device can eject its media.") return (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) @staticmethod def _parseBoundariesOutput(output): """ Parses the output from a C{cdrecord} capacity command. The C{output} parameter should be a list of strings as returned from C{executeCommand} for a C{cdrecord} command with arguments as from C{_buildBoundariesArgs}. The list of strings will be parsed to yield information about the capacity of the media in the device. Basically, we expect the list of strings to include just one line containing a pair of comma-separated values. There isn't supposed to be whitespace, but we allow it anyway in the regular expression. Any lines below the one line we parse are completely ignored. It would be a good idea to ignore C{stderr} when executing the C{cdrecord} command that generates output for this method, because sometimes C{cdrecord} spits out kernel warnings mixed in with the actual output. The results are returned as a tuple of (lower, upper) as needed by the C{IsoImage} class. Note that these values are in terms of ISO sectors, not bytes. Clients should generally consider the boundaries value opaque, however. @note: If the boundaries output is empty (for instance, because the disc could not be read), we return C{None} rather than raising an exception. @param output: Output from a C{cdrecord -msinfo} command. @return: Boundaries tuple as described above. @raise IOError: If there is a problem parsing the output.
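As a concrete illustration of the boundaries parsing described above, here is a standalone sketch. It assumes the usual "lower,upper" first-line format produced by C{cdrecord -msinfo}; the free-function name and the broadened C{except} clause are illustrative, not part of the class:

```python
import re

# Standalone sketch of the boundaries parsing described above; the real
# logic lives in _parseBoundariesOutput().  Assumes a "lower,upper" pair
# on the first output line, as produced by 'cdrecord -msinfo'.
def parse_boundaries(output):
    """Parse ['0,11840']-style output into an (int, int) tuple, or None."""
    if len(output) < 1:
        return None  # unreadable disc: caller falls back to full capacity
    pattern = re.compile(r"(^\s*)([0-9]*)(\s*,\s*)([0-9]*)(\s*$)")
    parsed = pattern.search(output[0])
    if not parsed:
        raise IOError("Unable to parse output of boundaries command.")
    try:
        # int('') raises ValueError, so catch it alongside TypeError
        return (int(parsed.group(2)), int(parsed.group(4)))
    except (TypeError, ValueError):
        raise IOError("Unable to parse output of boundaries command.")

print(parse_boundaries(["0,11840"]))  # (0, 11840)
print(parse_boundaries([]))           # None
```

Note that the regular expression tolerates surrounding whitespace, matching the behavior described in the docstring, and that garbage input (no comma, or empty digit groups) ends up as an C{IOError}.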
""" if len(output) < 1: logger.warning("Unable to read disc (might not be initialized); returning full capacity.") return None boundaryPattern = re.compile(r"(^\s*)([0-9]*)(\s*,\s*)([0-9]*)(\s*$)") parsed = boundaryPattern.search(output[0]) if not parsed: raise IOError("Unable to parse output of boundaries command.") try: boundaries = ( int(parsed.group(2)), int(parsed.group(4)) ) except TypeError: raise IOError("Unable to parse output of boundaries command.") return boundaries ################################# # Methods used to build commands ################################# @staticmethod def _buildOpenTrayArgs(device): """ Builds a list of arguments to be passed to a C{eject} command. The arguments will cause the C{eject} command to open the tray and eject the media. No validation is done by this method as to whether this action actually makes sense. @param device: Filesystem device name for this writer, i.e. C{/dev/cdrw}. @return: List suitable for passing to L{util.executeCommand} as C{args}. """ args = [] args.append(device) return args @staticmethod def _buildUnlockTrayArgs(device): """ Builds a list of arguments to be passed to a C{eject} command. The arguments will cause the C{eject} command to unlock the tray. @param device: Filesystem device name for this writer, i.e. C{/dev/cdrw}. @return: List suitable for passing to L{util.executeCommand} as C{args}. """ args = [] args.append("-i") args.append("off") args.append(device) return args @staticmethod def _buildCloseTrayArgs(device): """ Builds a list of arguments to be passed to a C{eject} command. The arguments will cause the C{eject} command to close the tray and reload the media. No validation is done by this method as to whether this action actually makes sense. @param device: Filesystem device name for this writer, i.e. C{/dev/cdrw}. @return: List suitable for passing to L{util.executeCommand} as C{args}. 
""" args = [] args.append("-t") args.append(device) return args @staticmethod def _buildPropertiesArgs(hardwareId): """ Builds a list of arguments to be passed to a C{cdrecord} command. The arguments will cause the C{cdrecord} command to ask the device for a list of its capacities via the C{-prcap} switch. @param hardwareId: Hardware id for the device (either SCSI id or device path) @return: List suitable for passing to L{util.executeCommand} as C{args}. """ args = [] args.append("-prcap") args.append("dev=%s" % hardwareId) return args @staticmethod def _buildBoundariesArgs(hardwareId): """ Builds a list of arguments to be passed to a C{cdrecord} command. The arguments will cause the C{cdrecord} command to ask the device for the current multisession boundaries of the media using the C{-msinfo} switch. @param hardwareId: Hardware id for the device (either SCSI id or device path) @return: List suitable for passing to L{util.executeCommand} as C{args}. """ args = [] args.append("-msinfo") args.append("dev=%s" % hardwareId) return args @staticmethod def _buildBlankArgs(hardwareId, driveSpeed=None): """ Builds a list of arguments to be passed to a C{cdrecord} command. The arguments will cause the C{cdrecord} command to blank the media in the device identified by C{hardwareId}. No validation is done by this method as to whether the action makes sense (i.e. to whether the media even can be blanked). @param hardwareId: Hardware id for the device (either SCSI id or device path) @param driveSpeed: Speed at which the drive writes. @return: List suitable for passing to L{util.executeCommand} as C{args}. """ args = [] args.append("-v") args.append("blank=fast") if driveSpeed is not None: args.append("speed=%d" % driveSpeed) args.append("dev=%s" % hardwareId) return args @staticmethod def _buildWriteArgs(hardwareId, imagePath, driveSpeed=None, writeMulti=True): """ Builds a list of arguments to be passed to a C{cdrecord} command. 
The arguments will cause the C{cdrecord} command to write the indicated ISO image (C{imagePath}) to the media in the device identified by C{hardwareId}. The C{writeMulti} argument controls whether to write a multisession disc. No validation is done by this method as to whether the action makes sense (i.e. whether the device can even write multisession discs). @param hardwareId: Hardware id for the device (either SCSI id or device path) @param imagePath: Path to an ISO image on disk. @param driveSpeed: Speed at which the drive writes. @param writeMulti: Indicates whether to write a multisession disc. @return: List suitable for passing to L{util.executeCommand} as C{args}. """ args = [] args.append("-v") if driveSpeed is not None: args.append("speed=%d" % driveSpeed) args.append("dev=%s" % hardwareId) if writeMulti: args.append("-multi") args.append("-data") args.append(imagePath) return args CedarBackup3-3.1.6/CedarBackup3/writers/__init__.py0000664000175000017500000000243312560007327023535 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Official Cedar Backup Extensions # Purpose : Provides package initialization # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Cedar Backup writers. This package consolidates all of the modules that implement "image writer" functionality, including utilities and specific writer implementations. @author: Kenneth J.
Pronovici """ ######################################################################## # Package initialization ######################################################################## # Using 'from CedarBackup3.writers import *' will just import the modules listed # in the __all__ variable. __all__ = [ 'util', 'cdwriter', 'dvdwriter', ] CedarBackup3-3.1.6/CedarBackup3/writers/dvdwriter.py0000664000175000017500000012001012642031005023767 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2007-2008,2010,2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Provides functionality related to DVD writer devices. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides functionality related to DVD writer devices. 
@sort: MediaDefinition, DvdWriter, MEDIA_DVDPLUSR, MEDIA_DVDPLUSRW @var MEDIA_DVDPLUSR: Constant representing DVD+R media. @var MEDIA_DVDPLUSRW: Constant representing DVD+RW media. @author: Kenneth J. Pronovici @author: Dmitry Rutsky """ ######################################################################## # Imported modules ######################################################################## # System modules import os import re import logging import tempfile import time # Cedar Backup modules from CedarBackup3.writers.util import IsoImage from CedarBackup3.util import resolveCommand, executeCommand from CedarBackup3.util import convertSize, displayBytes, encodePath from CedarBackup3.util import UNIT_SECTORS, UNIT_BYTES, UNIT_GBYTES from CedarBackup3.writers.util import validateDevice, validateDriveSpeed ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup3.log.writers.dvdwriter") MEDIA_DVDPLUSR = 1 MEDIA_DVDPLUSRW = 2 GROWISOFS_COMMAND = [ "growisofs", ] EJECT_COMMAND = [ "eject", ] ######################################################################## # MediaDefinition class definition ######################################################################## class MediaDefinition(object): """ Class encapsulating information about DVD media definitions. The following media types are accepted: - C{MEDIA_DVDPLUSR}: DVD+R media (4.4 GB capacity) - C{MEDIA_DVDPLUSRW}: DVD+RW media (4.4 GB capacity) Note that the capacity attribute returns capacity in terms of ISO sectors (C{util.ISO_SECTOR_SIZE}). This is for compatibility with the CD writer functionality. The capacities are 4.4 GB because Cedar Backup deals in "true" gigabytes of 1024*1024*1024 bytes per gigabyte.
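The 4.4 GB capacity figure can be sanity-checked with quick arithmetic. A sketch, reimplementing the relevant unit conversion locally (C{convertSize} and the C{UNIT_*} constants belong to C{CedarBackup3.util} and are not used here):

```python
SECTOR_SIZE = 2048    # bytes per ISO-9660 sector
GIBIBYTE = 1024 ** 3  # "true" gigabyte, as used by Cedar Backup

def gb_to_sectors(gb):
    """Convert "true" gigabytes to 2048-byte ISO sectors."""
    return gb * GIBIBYTE / SECTOR_SIZE

capacity = gb_to_sectors(4.4)
print(capacity)                            # ~2306867.2 sectors
print(capacity * SECTOR_SIZE / 1000 ** 3)  # ~4.72 "marketing" GB
```

So a 4.4 "true" GB disc is the same physical thing as the 4.7 GB printed on DVD+R/DVD+RW packaging, which uses decimal gigabytes.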
@sort: __init__, mediaType, rewritable, capacity """ def __init__(self, mediaType): """ Creates a media definition for the indicated media type. @param mediaType: Type of the media, as discussed above. @raise ValueError: If the media type is unknown or unsupported. """ self._mediaType = None self._rewritable = False self._capacity = 0.0 self._setValues(mediaType) def _setValues(self, mediaType): """ Sets values based on media type. @param mediaType: Type of the media, as discussed above. @raise ValueError: If the media type is unknown or unsupported. """ if mediaType not in [MEDIA_DVDPLUSR, MEDIA_DVDPLUSRW, ]: raise ValueError("Invalid media type %d." % mediaType) self._mediaType = mediaType if self._mediaType == MEDIA_DVDPLUSR: self._rewritable = False self._capacity = convertSize(4.4, UNIT_GBYTES, UNIT_SECTORS) # 4.4 "true" GB = 4.7 "marketing" GB elif self._mediaType == MEDIA_DVDPLUSRW: self._rewritable = True self._capacity = convertSize(4.4, UNIT_GBYTES, UNIT_SECTORS) # 4.4 "true" GB = 4.7 "marketing" GB def _getMediaType(self): """ Property target used to get the media type value. """ return self._mediaType def _getRewritable(self): """ Property target used to get the rewritable flag value. """ return self._rewritable def _getCapacity(self): """ Property target used to get the capacity value. """ return self._capacity mediaType = property(_getMediaType, None, None, doc="Configured media type.") rewritable = property(_getRewritable, None, None, doc="Boolean indicating whether the media is rewritable.") capacity = property(_getCapacity, None, None, doc="Total capacity of media in 2048-byte sectors.") ######################################################################## # MediaCapacity class definition ######################################################################## class MediaCapacity(object): """ Class encapsulating information about DVD media capacity. 
Space used and space available do not include any information about media lead-in or other overhead. @sort: __init__, bytesUsed, bytesAvailable, totalCapacity, utilized """ def __init__(self, bytesUsed, bytesAvailable): """ Initializes a capacity object. @raise ValueError: If the bytes used and available values are not floats. """ self._bytesUsed = float(bytesUsed) self._bytesAvailable = float(bytesAvailable) def __str__(self): """ Informal string representation for class instance. """ return "utilized %s of %s (%.2f%%)" % (displayBytes(self.bytesUsed), displayBytes(self.totalCapacity), self.utilized) def _getBytesUsed(self): """ Property target used to get the bytes-used value. """ return self._bytesUsed def _getBytesAvailable(self): """ Property target used to get the bytes-available value. """ return self._bytesAvailable def _getTotalCapacity(self): """ Property target to get the total capacity (used + available). """ return self.bytesUsed + self.bytesAvailable def _getUtilized(self): """ Property target to get the percent of capacity which is utilized. """ if self.bytesAvailable <= 0.0: return 100.0 elif self.bytesUsed <= 0.0: return 0.0 return (self.bytesUsed / self.totalCapacity) * 100.0 bytesUsed = property(_getBytesUsed, None, None, doc="Space used on disc, in bytes.") bytesAvailable = property(_getBytesAvailable, None, None, doc="Space available on disc, in bytes.") totalCapacity = property(_getTotalCapacity, None, None, doc="Total capacity of the disc, in bytes.") utilized = property(_getUtilized, None, None, doc="Percentage of the total capacity which is utilized.") ######################################################################## # _ImageProperties class definition ######################################################################## class _ImageProperties(object): """ Simple value object to hold image properties for C{DvdWriter}.
""" def __init__(self): self.newDisc = False self.tmpdir = None self.mediaLabel = None self.entries = None # dict mapping path to graft point ######################################################################## # DvdWriter class definition ######################################################################## class DvdWriter(object): ###################### # Class documentation ###################### """ Class representing a device that knows how to write some kinds of DVD media. Summary ======= This is a class representing a device that knows how to write some kinds of DVD media. It provides common operations for the device, such as ejecting the media and writing data to the media. This class is implemented in terms of the C{eject} and C{growisofs} utilities, all of which should be available on most UN*X platforms. Image Writer Interface ====================== The following methods make up the "image writer" interface shared with other kinds of writers:: __init__ initializeImage() addImageEntry() writeImage() setImageNewDisc() retrieveCapacity() getEstimatedImageSize() Only these methods will be used by other Cedar Backup functionality that expects a compatible image writer. The media attribute is also assumed to be available. Unlike the C{CdWriter}, the C{DvdWriter} can only operate in terms of filesystem devices, not SCSI devices. So, although the constructor interface accepts a SCSI device parameter for the sake of compatibility, it's not used. Media Types =========== This class knows how to write to DVD+R and DVD+RW media, represented by the following constants: - C{MEDIA_DVDPLUSR}: DVD+R media (4.4 GB capacity) - C{MEDIA_DVDPLUSRW}: DVD+RW media (4.4 GB capacity) The difference is that DVD+RW media can be rewritten, while DVD+R media cannot be (although at present, C{DvdWriter} does not really differentiate between rewritable and non-rewritable media). 
The capacities are 4.4 GB because Cedar Backup deals in "true" gigabytes of 1024*1024*1024 bytes per gigabyte. The underlying C{growisofs} utility does support other kinds of media (including DVD-R, DVD-RW and Blu-ray) which work somewhat differently than standard DVD+R and DVD+RW media. I don't support these other kinds of media because I haven't had any opportunity to work with them. The same goes for dual-layer media of any type. Device Attributes vs. Media Attributes ====================================== As with the cdwriter functionality, a given dvdwriter instance has two different kinds of attributes associated with it. I call these device attributes and media attributes. Device attributes are things which can be determined without looking at the media. Media attributes are attributes which vary depending on the state of the media. In general, device attributes are available via instance variables and are constant over the life of an object, while media attributes can be retrieved through method calls. Compared to cdwriters, dvdwriters have very few attributes. This is due to differences between the way C{growisofs} works relative to C{cdrecord}. Media Capacity ============== One major difference between the C{cdrecord}/C{mkisofs} utilities used by the cdwriter class and the C{growisofs} utility used here is that the process of estimating remaining capacity and image size is more straightforward with C{cdrecord}/C{mkisofs} than with C{growisofs}. In this class, remaining capacity is calculated by doing a dry run of C{growisofs} and grabbing some information from the output of that command. Image size is estimated by asking the C{IsoImage} class for an estimate and then adding on a "fudge factor" determined through experimentation. Testing ======= It's rather difficult to test this code in an automated fashion, even if you have access to a physical DVD writer drive.
It's even more difficult to test it if you are running on some build daemon (think of a Debian autobuilder) which can't be expected to have any hardware or any media that you could write to. Because of this, some of the implementation below is in terms of static methods that are supposed to take defined actions based on their arguments. Public methods are then implemented in terms of a series of calls to simplistic static methods. This way, we can test as much as possible of the "difficult" functionality via testing the static methods, while hoping that if the static methods are called appropriately, things will work properly. It's not perfect, but it's much better than no testing at all. @sort: __init__, isRewritable, retrieveCapacity, openTray, closeTray, refreshMedia, initializeImage, addImageEntry, writeImage, setImageNewDisc, getEstimatedImageSize, _writeImage, _getEstimatedImageSize, _searchForOverburn, _buildWriteArgs, device, scsiId, hardwareId, driveSpeed, media, deviceHasTray, deviceCanEject """ ############## # Constructor ############## def __init__(self, device, scsiId=None, driveSpeed=None, mediaType=MEDIA_DVDPLUSRW, noEject=False, refreshMediaDelay=0, ejectDelay=0, unittest=False): """ Initializes a DVD writer object. Since C{growisofs} can only address devices using the device path (i.e. C{/dev/dvd}), the hardware id will always be set based on the device. If passed in, it will be saved for reference purposes only. We have no way to query the device to ask whether it has a tray or can be safely opened and closed. So, the C{noEject} flag is used to set these values. If C{noEject=False}, then we assume a tray exists and open/close is safe. If C{noEject=True}, then we assume that there is no tray and open/close is not safe. @note: The C{unittest} parameter should never be set to C{True} outside of Cedar Backup code. It is intended for use in unit testing Cedar Backup internals and has no other sensible purpose. 
@param device: Filesystem device associated with this writer. @type device: Absolute path to a filesystem device, i.e. C{/dev/dvd} @param scsiId: SCSI id for the device (optional, for reference only). @type scsiId: If provided, SCSI id in the form C{[:]scsibus,target,lun} @param driveSpeed: Speed at which the drive writes. @type driveSpeed: Use C{2} for 2x device, etc. or C{None} to use device default. @param mediaType: Type of the media that is assumed to be in the drive. @type mediaType: One of the valid media type as discussed above. @param noEject: Tells Cedar Backup that the device cannot safely be ejected @type noEject: Boolean true/false @param refreshMediaDelay: Refresh media delay to use, if any @type refreshMediaDelay: Number of seconds, an integer >= 0 @param ejectDelay: Eject delay to use, if any @type ejectDelay: Number of seconds, an integer >= 0 @param unittest: Turns off certain validations, for use in unit testing. @type unittest: Boolean true/false @raise ValueError: If the device is not valid for some reason. @raise ValueError: If the SCSI id is not in a valid form. @raise ValueError: If the drive speed is not an integer >= 1. """ if scsiId is not None: logger.warning("SCSI id [%s] will be ignored by DvdWriter.", scsiId) self._image = None # optionally filled in by initializeImage() self._device = validateDevice(device, unittest) self._scsiId = scsiId # not validated, because it's just for reference self._driveSpeed = validateDriveSpeed(driveSpeed) self._media = MediaDefinition(mediaType) self._refreshMediaDelay = refreshMediaDelay self._ejectDelay = ejectDelay if noEject: self._deviceHasTray = False self._deviceCanEject = False else: self._deviceHasTray = True # just assume self._deviceCanEject = True # just assume ############# # Properties ############# def _getDevice(self): """ Property target used to get the device value. """ return self._device def _getScsiId(self): """ Property target used to get the SCSI id value. 
""" return self._scsiId def _getHardwareId(self): """ Property target used to get the hardware id value. """ return self._device def _getDriveSpeed(self): """ Property target used to get the drive speed. """ return self._driveSpeed def _getMedia(self): """ Property target used to get the media description. """ return self._media def _getDeviceHasTray(self): """ Property target used to get the device-has-tray flag. """ return self._deviceHasTray def _getDeviceCanEject(self): """ Property target used to get the device-can-eject flag. """ return self._deviceCanEject def _getRefreshMediaDelay(self): """ Property target used to get the configured refresh media delay, in seconds. """ return self._refreshMediaDelay def _getEjectDelay(self): """ Property target used to get the configured eject delay, in seconds. """ return self._ejectDelay device = property(_getDevice, None, None, doc="Filesystem device name for this writer.") scsiId = property(_getScsiId, None, None, doc="SCSI id for the device (saved for reference only).") hardwareId = property(_getHardwareId, None, None, doc="Hardware id for this writer (always the device path).") driveSpeed = property(_getDriveSpeed, None, None, doc="Speed at which the drive writes.") media = property(_getMedia, None, None, doc="Definition of media that is expected to be in the device.") deviceHasTray = property(_getDeviceHasTray, None, None, doc="Indicates whether the device has a media tray.") deviceCanEject = property(_getDeviceCanEject, None, None, doc="Indicates whether the device supports ejecting its media.") refreshMediaDelay = property(_getRefreshMediaDelay, None, None, doc="Refresh media delay, in seconds.") ejectDelay = property(_getEjectDelay, None, None, doc="Eject delay, in seconds.") ################################################# # Methods related to device and media attributes ################################################# def isRewritable(self): """Indicates whether the media is rewritable per configuration.""" 
return self._media.rewritable def retrieveCapacity(self, entireDisc=False): """ Retrieves capacity for the current media in terms of a C{MediaCapacity} object. If C{entireDisc} is passed in as C{True}, the capacity will be for the entire disc, as if it were to be rewritten from scratch. The same will happen if the disc can't be read for some reason. Otherwise, the capacity will be calculated by subtracting the sectors currently used on the disc, as reported by C{growisofs} itself. @param entireDisc: Indicates whether to return capacity for entire disc. @type entireDisc: Boolean true/false @return: C{MediaCapacity} object describing the capacity of the media. @raise ValueError: If there is a problem parsing the C{growisofs} output @raise IOError: If the media could not be read for some reason. """ sectorsUsed = 0.0 if not entireDisc: sectorsUsed = self._retrieveSectorsUsed() sectorsAvailable = self._media.capacity - sectorsUsed # both are in sectors bytesUsed = convertSize(sectorsUsed, UNIT_SECTORS, UNIT_BYTES) bytesAvailable = convertSize(sectorsAvailable, UNIT_SECTORS, UNIT_BYTES) return MediaCapacity(bytesUsed, bytesAvailable) ####################################################### # Methods used for working with the internal ISO image ####################################################### def initializeImage(self, newDisc, tmpdir, mediaLabel=None): """ Initializes the writer's associated ISO image. This method initializes the C{image} instance variable so that the caller can use the C{addImageEntry} method. Once entries have been added, the C{writeImage} method can be called with no arguments. 
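The arithmetic in C{retrieveCapacity} reduces to sector/byte conversion plus the utilization percentage from C{MediaCapacity}. A standalone sketch, assuming the 4.4 GB DVD+RW capacity of 2306867.2 sectors (names illustrative; the edge-case handling is simplified relative to the real class):

```python
SECTOR_SIZE = 2048  # bytes per ISO-9660 sector

# Standalone sketch of the retrieveCapacity()/MediaCapacity arithmetic
# above.  Takes the media's total capacity and the used-sector count (as
# growisofs would report it) and returns byte figures plus utilization.
def capacity_report(total_sectors, sectors_used):
    bytes_used = sectors_used * SECTOR_SIZE
    bytes_available = (total_sectors - sectors_used) * SECTOR_SIZE
    total = bytes_used + bytes_available
    utilized = 0.0 if total <= 0 else (bytes_used / total) * 100.0
    return bytes_used, bytes_available, utilized

# A half-full 4.4 GB DVD+RW disc.
used, available, pct = capacity_report(2306867.2, 1153433.6)
print("utilized %.2f%%" % pct)  # utilized 50.00%
```

This mirrors the flow of the real method: compute used sectors (zero when C{entireDisc=True}), subtract from the media capacity, and convert both figures to bytes before wrapping them in a capacity object.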
@param newDisc: Indicates whether the disc should be re-initialized @type newDisc: Boolean true/false @param tmpdir: Temporary directory to use if needed @type tmpdir: String representing a directory path on disk @param mediaLabel: Media label to be applied to the image, if any @type mediaLabel: String, no more than 25 characters long """ self._image = _ImageProperties() self._image.newDisc = newDisc self._image.tmpdir = encodePath(tmpdir) self._image.mediaLabel = mediaLabel self._image.entries = {} # mapping from path to graft point (if any) def addImageEntry(self, path, graftPoint): """ Adds a filepath entry to the writer's associated ISO image. The contents of the filepath -- but not the path itself -- will be added to the image at the indicated graft point. If you don't want to use a graft point, just pass C{None}. @note: Before calling this method, you must call L{initializeImage}. @param path: File or directory to be added to the image @type path: String representing a path on disk @param graftPoint: Graft point to be used when adding this entry @type graftPoint: String representing a graft point path, as described above @raise ValueError: If initializeImage() was not previously called @raise ValueError: If the path is not a valid file or directory """ if self._image is None: raise ValueError("Must call initializeImage() before using this method.") if not os.path.exists(path): raise ValueError("Path [%s] does not exist." % path) self._image.entries[path] = graftPoint def setImageNewDisc(self, newDisc): """ Resets (overrides) the newDisc flag on the internal image. @param newDisc: New disc flag to set @raise ValueError: If initializeImage() was not previously called """ if self._image is None: raise ValueError("Must call initializeImage() before using this method.") self._image.newDisc = newDisc def getEstimatedImageSize(self): """ Gets the estimated size of the image associated with the writer. This is an estimate and is conservative. 
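The bookkeeping behind C{initializeImage()} and C{addImageEntry()} is essentially a dict mapping each path to its graft point (or C{None}). A standalone sketch with illustrative names; the injectable C{exists} hook stands in for the real C{os.path.exists} check so the example runs anywhere:

```python
import os

# Standalone sketch of the image bookkeeping described above: entries are
# kept as a dict mapping path -> graft point (or None).  Names are
# illustrative; the real state lives in _ImageProperties on the writer.
class ImageState:
    def __init__(self, new_disc, tmpdir, media_label=None):
        self.new_disc = new_disc
        self.tmpdir = tmpdir
        self.media_label = media_label
        self.entries = {}  # path -> graft point (or None)

    def add_entry(self, path, graft_point, exists=os.path.exists):
        """Record an entry, validating that the path exists on disk."""
        if not exists(path):
            raise ValueError("Path [%s] does not exist." % path)
        self.entries[path] = graft_point

image = ImageState(new_disc=True, tmpdir="/tmp")
image.add_entry("/data/music", "backup/music", exists=lambda p: True)
image.add_entry("/photos", None, exists=lambda p: True)
print(sorted(image.entries))  # ['/data/music', '/photos']
```

Passing C{None} as the graft point matches the documented behavior: the entry's contents land at the image root rather than under a named directory.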
The actual image could be as much as 450 blocks (sectors) smaller under some circumstances. @return: Estimated size of the image, in bytes. @raise IOError: If there is a problem calling C{mkisofs}. @raise ValueError: If initializeImage() was not previously called """ if self._image is None: raise ValueError("Must call initializeImage() before using this method.") return DvdWriter._getEstimatedImageSize(self._image.entries) ###################################### # Methods which expose device actions ###################################### def openTray(self): """ Opens the device's tray and leaves it open. This only works if the device has a tray and supports ejecting its media. We have no way to know if the tray is currently open or closed, so we just send the appropriate command and hope for the best. If the device does not have a tray or does not support ejecting its media, then we do nothing. Starting with Debian wheezy on my backup hardware, I started seeing consistent problems with the eject command. I couldn't tell whether these problems were due to the device management system or to the new kernel (3.2.0). Initially, I saw simple eject failures, possibly because I was opening and closing the tray too quickly. I worked around that behavior with the new ejectDelay flag. Later, I sometimes ran into issues after writing an image to a disc: eject would give errors like "unable to eject, last error: Inappropriate ioctl for device". Various sources online (like Ubuntu bug #875543) suggested that the drive was being locked somehow, and that the workaround was to run 'eject -i off' to unlock it. Sure enough, that fixed the problem for me, so now it's a normal error-handling strategy. @raise IOError: If there is an error talking to the device.
""" if self._deviceHasTray and self._deviceCanEject: command = resolveCommand(EJECT_COMMAND) args = [ self.device, ] result = executeCommand(command, args)[0] if result != 0: logger.debug("Eject failed; attempting kludge of unlocking the tray before retrying.") self.unlockTray() result = executeCommand(command, args)[0] if result != 0: raise IOError("Error (%d) executing eject command to open tray (failed even after unlocking tray)." % result) logger.debug("Kludge was apparently successful.") if self.ejectDelay is not None: logger.debug("Per configuration, sleeping %d seconds after opening tray.", self.ejectDelay) time.sleep(self.ejectDelay) def unlockTray(self): """ Unlocks the device's tray via 'eject -i off'. @raise IOError: If there is an error talking to the device. """ command = resolveCommand(EJECT_COMMAND) args = [ "-i", "off", self.device, ] result = executeCommand(command, args)[0] if result != 0: raise IOError("Error (%d) executing eject command to unlock tray." % result) def closeTray(self): """ Closes the device's tray. This only works if the device has a tray and supports ejecting its media. We have no way to know if the tray is currently open or closed, so we just send the appropriate command and hope for the best. If the device does not have a tray or does not support ejecting its media, then we do nothing. @raise IOError: If there is an error talking to the device. """ if self._deviceHasTray and self._deviceCanEject: command = resolveCommand(EJECT_COMMAND) args = [ "-t", self.device, ] result = executeCommand(command, args)[0] if result != 0: raise IOError("Error (%d) executing eject command to close tray." % result) def refreshMedia(self): """ Opens and then immediately closes the device's tray, to refresh the device's idea of the media. Sometimes, a device gets confused about the state of its media. Often, all it takes to solve the problem is to eject the media and then immediately reload it. 
(There are also configurable eject and refresh media delays which can be applied, for situations where this makes a difference.) This only works if the device has a tray and supports ejecting its media. We have no way to know if the tray is currently open or closed, so we just send the appropriate command and hope for the best. If the device does not have a tray or does not support ejecting its media, then we do nothing. The configured delays still apply, though. @raise IOError: If there is an error talking to the device. """ self.openTray() self.closeTray() self.unlockTray() # on some systems, writing a disc leaves the tray locked, yikes! if self.refreshMediaDelay is not None: logger.debug("Per configuration, sleeping %d seconds to stabilize media state.", self.refreshMediaDelay) time.sleep(self.refreshMediaDelay) logger.debug("Media refresh complete; hopefully media state is stable now.") def writeImage(self, imagePath=None, newDisc=False, writeMulti=True): """ Writes an ISO image to the media in the device. If C{newDisc} is passed in as C{True}, we assume that the entire disc will be re-created from scratch. Note that unlike C{CdWriter}, C{DvdWriter} does not blank rewritable media before reusing it; however, C{growisofs} is called such that the media will be re-initialized as needed. If C{imagePath} is passed in as C{None}, then the existing image configured with C{initializeImage()} will be used. Under these circumstances, the passed-in C{newDisc} flag will be ignored and the value passed in to C{initializeImage()} will apply instead. The C{writeMulti} argument is ignored. It exists for compatibility with the Cedar Backup image writer interface. @note: The image size indicated in the log ("Image size will be...") is an estimate. The estimate is conservative and is probably larger than the actual space that C{dvdwriter} will use. 
@param imagePath: Path to an ISO image on disk, or C{None} to use writer's image @type imagePath: String representing a path on disk @param newDisc: Indicates whether the disc should be re-initialized @type newDisc: Boolean true/false. @param writeMulti: Unused @type writeMulti: Boolean true/false @raise ValueError: If the image path is not absolute. @raise ValueError: If some path cannot be encoded properly. @raise IOError: If the media could not be written to for some reason. @raise ValueError: If no image is passed in and initializeImage() was not previously called """ if not writeMulti: logger.warning("writeMulti value of [%s] ignored.", writeMulti) if imagePath is None: if self._image is None: raise ValueError("Must call initializeImage() before using this method with no image path.") size = self.getEstimatedImageSize() logger.info("Image size will be %s (estimated).", displayBytes(size)) available = self.retrieveCapacity(entireDisc=self._image.newDisc).bytesAvailable if size > available: logger.error("Image [%s] does not fit in available capacity [%s].", displayBytes(size), displayBytes(available)) raise IOError("Media does not contain enough capacity to store image.") self._writeImage(self._image.newDisc, None, self._image.entries, self._image.mediaLabel) else: if not os.path.isabs(imagePath): raise ValueError("Image path must be absolute.") imagePath = encodePath(imagePath) self._writeImage(newDisc, imagePath, None) ################################################################## # Utility methods for dealing with growisofs and dvd+rw-mediainfo ################################################################## def _writeImage(self, newDisc, imagePath, entries, mediaLabel=None): """ Writes an image to disc using either an entries list or an ISO image on disk. Callers are assumed to have done validation on paths, etc. before calling this method. 
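The capacity guard in C{writeImage()} above reduces to a simple comparison between the estimated image size and the media's available bytes. A standalone sketch (the byte values below are hypothetical, not real media numbers):

```python
# Standalone sketch of the capacity guard in writeImage() above; the
# byte values used here are hypothetical, not real media numbers.
def checkCapacity(imageBytes, availableBytes):
    if imageBytes > availableBytes:
        raise IOError("Media does not contain enough capacity to store image.")

checkCapacity(2_000_000_000, 4_700_000_000)  # fits: returns quietly
```

Calling it with the arguments reversed raises C{IOError}, mirroring the behavior of C{writeImage()}.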
@param newDisc: Indicates whether the disc should be re-initialized @param imagePath: Path to an ISO image on disk, or C{None} to use C{entries} @param entries: Mapping from path to graft point, or C{None} to use C{imagePath} @param mediaLabel: Media label to set on the image, if any @raise IOError: If the media could not be written to for some reason. """ command = resolveCommand(GROWISOFS_COMMAND) args = DvdWriter._buildWriteArgs(newDisc, self.hardwareId, self._driveSpeed, imagePath, entries, mediaLabel, dryRun=False) (result, output) = executeCommand(command, args, returnOutput=True) if result != 0: DvdWriter._searchForOverburn(output) # throws own exception if overburn condition is found raise IOError("Error (%d) executing command to write disc." % result) self.refreshMedia() @staticmethod def _getEstimatedImageSize(entries): """ Gets the estimated size of a set of image entries. This is implemented in terms of the C{IsoImage} class. The returned value is calculated by adding a "fudge factor" to the value from C{IsoImage}. This fudge factor was determined by experimentation and is conservative -- the actual image could be as much as 450 blocks smaller under some circumstances. @param entries: Dictionary mapping path to graft point. @return: Total estimated size of image, in bytes. @raise ValueError: If there are no entries in the dictionary @raise ValueError: If any path in the dictionary does not exist @raise IOError: If there is a problem calling C{mkisofs}. """ fudgeFactor = convertSize(2500.0, UNIT_SECTORS, UNIT_BYTES) # determined through experimentation if len(list(entries.keys())) == 0: raise ValueError("Must add at least one entry with addImageEntry().") image = IsoImage() for path in list(entries.keys()): image.addEntry(path, entries[path], override=False, contentsOnly=True) estimatedSize = image.getEstimatedSize() + fudgeFactor return estimatedSize def _retrieveSectorsUsed(self): """ Retrieves the number of sectors used on the current media. This is a little ugly.
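The fudge factor used by C{_getEstimatedImageSize()} above, C{convertSize(2500.0, UNIT_SECTORS, UNIT_BYTES)}, works out to a fixed byte count, computed here directly:

```python
# 2500 ISO sectors at 2048 bytes per sector, the fudge factor added on
# top of the IsoImage estimate by _getEstimatedImageSize() above.
ISO_SECTOR_SIZE = 2048.0
fudgeFactor = 2500.0 * ISO_SECTOR_SIZE  # 5120000.0 bytes of headroom
```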
We need to call growisofs in "dry-run" mode and parse some information from its output. However, to do that, we need to create a dummy file that we can pass to the command -- and we have to make sure to remove it later. Once growisofs has been run, then we call C{_parseSectorsUsed} to parse the output and calculate the number of sectors used on the media. @return: Number of sectors used on the media """ tempdir = tempfile.mkdtemp() try: entries = { tempdir: None } args = DvdWriter._buildWriteArgs(False, self.hardwareId, self.driveSpeed, None, entries, None, dryRun=True) command = resolveCommand(GROWISOFS_COMMAND) (result, output) = executeCommand(command, args, returnOutput=True) if result != 0: logger.debug("Error (%d) calling growisofs to read sectors used.", result) logger.warning("Unable to read disc (might not be initialized); returning zero sectors used.") return 0.0 sectorsUsed = DvdWriter._parseSectorsUsed(output) logger.debug("Determined sectors used as %s", sectorsUsed) return sectorsUsed finally: if os.path.exists(tempdir): try: os.rmdir(tempdir) except: pass @staticmethod def _parseSectorsUsed(output): """ Parse sectors used information out of C{growisofs} output. The first line of a growisofs run looks something like this:: Executing 'mkisofs -C 973744,1401056 -M /dev/fd/3 -r -graft-points music4/=music | builtin_dd of=/dev/cdrom obs=32k seek=87566' Dmitry has determined that the seek value in this line gives us information about how much data has previously been written to the media. That value multiplied by 16 yields the number of sectors used. If the seek line cannot be found in the output, then sectors used of zero is assumed. @return: Sectors used on the media, as a floating point number. @raise ValueError: If the output cannot be parsed properly. 
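The seek-value trick described above can be exercised against the sample line from the docstring. This self-contained sketch mirrors the parsing done by C{_parseSectorsUsed()}:

```python
import re

def parseSectorsUsed(output):
    # Mirror of _parseSectorsUsed() above: find the seek= value in the
    # growisofs invocation line and multiply by 16 to get sectors used.
    pattern = re.compile(r"(^)(.*)(seek=)(.*)('$)")
    for line in output:
        match = pattern.search(line)
        if match is not None:
            return float(match.group(4).strip()) * 16.0
    return 0.0

sample = ["Executing 'mkisofs -C 973744,1401056 -M /dev/fd/3 -r "
          "-graft-points music4/=music | builtin_dd of=/dev/cdrom "
          "obs=32k seek=87566'"]
parseSectorsUsed(sample)  # 87566 * 16 = 1401056.0 sectors
```

Note that 1401056 also appears as the second value of the C{-C} option in the sample line, which is consistent with Dmitry's observation.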
""" if output is not None: pattern = re.compile(r"(^)(.*)(seek=)(.*)('$)") for line in output: match = pattern.search(line) if match is not None: try: return float(match.group(4).strip()) * 16.0 except ValueError: raise ValueError("Unable to parse sectors used out of growisofs output.") logger.warning("Unable to read disc (might not be initialized); returning zero sectors used.") return 0.0 @staticmethod def _searchForOverburn(output): """ Search for an "overburn" error message in C{growisofs} output. The C{growisofs} command returns a non-zero exit code and puts a message into the output -- even on a dry run -- if there is not enough space on the media. This is called an "overburn" condition. The error message looks like this:: :-( /dev/cdrom: 894048 blocks are free, 2033746 to be written! This method looks for the overburn error message anywhere in the output. If a matching error message is found, an C{IOError} exception is raised containing relevant information about the problem. Otherwise, the method call returns normally. @param output: List of output lines to search, as from C{executeCommand} @raise IOError: If an overburn condition is found. 
""" if output is None: return pattern = re.compile(r"(^)(:-[(])(\s*.*:\s*)(.* )(blocks are free, )(.* )(to be written!)") for line in output: match = pattern.search(line) if match is not None: try: available = convertSize(float(match.group(4).strip()), UNIT_SECTORS, UNIT_BYTES) size = convertSize(float(match.group(6).strip()), UNIT_SECTORS, UNIT_BYTES) logger.error("Image [%s] does not fit in available capacity [%s].", displayBytes(size), displayBytes(available)) except ValueError: logger.error("Image does not fit in available capacity (no useful capacity info available).") raise IOError("Media does not contain enough capacity to store image.") @staticmethod def _buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel=None, dryRun=False): """ Builds a list of arguments to be passed to a C{growisofs} command. The arguments will either cause C{growisofs} to write the indicated image file to disc, or will pass C{growisofs} a list of directories or files that should be written to disc. If a new image is created, it will always be created with Rock Ridge extensions (-r). A volume name will be applied (-V) if C{mediaLabel} is not C{None}. @param newDisc: Indicates whether the disc should be re-initialized @param hardwareId: Hardware id for the device @param driveSpeed: Speed at which the drive writes. @param imagePath: Path to an ISO image on disk, or c{None} to use C{entries} @param entries: Mapping from path to graft point, or C{None} to use C{imagePath} @param mediaLabel: Media label to set on the image, if any @param dryRun: Says whether to make this a dry run (for checking capacity) @note: If we write an existing image to disc, then the mediaLabel is ignored. The media label is an attribute of the image, and should be set on the image when it is created. @note: We always pass the undocumented option C{-use-the-force-like=tty} to growisofs. Without this option, growisofs will refuse to execute certain actions when running from cron. 
A good example is -Z, which happily overwrites an existing DVD from the command-line, but fails when run from cron. It took a while to figure that out, since it worked every time I tested it by hand. :( @return: List suitable for passing to L{util.executeCommand} as C{args}. @raise ValueError: If caller does not pass one or the other of imagePath or entries. """ args = [] if (imagePath is None and entries is None) or (imagePath is not None and entries is not None): raise ValueError("Must use either imagePath or entries.") args.append("-use-the-force-luke=tty") # tell growisofs to let us run from cron if dryRun: args.append("-dry-run") if driveSpeed is not None: args.append("-speed=%d" % driveSpeed) if newDisc: args.append("-Z") else: args.append("-M") if imagePath is not None: args.append("%s=%s" % (hardwareId, imagePath)) else: args.append(hardwareId) if mediaLabel is not None: args.append("-V") args.append(mediaLabel) args.append("-r") # Rock Ridge extensions with sane ownership and permissions args.append("-graft-points") keys = list(entries.keys()) keys.sort() # just so we get consistent results for key in keys: # Same syntax as when calling mkisofs in IsoImage if entries[key] is None: args.append(key) else: args.append("%s/=%s" % (entries[key].strip("/"), key)) return args CedarBackup3-3.1.6/CedarBackup3/release.py0000664000175000017500000000224012657665514021732 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Provides location to maintain release information. 
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # """ Provides location to maintain version information. @sort: AUTHOR, EMAIL, COPYRIGHT, VERSION, DATE, URL @var AUTHOR: Author of software. @var EMAIL: Email address of author. @var COPYRIGHT: Copyright date. @var VERSION: Software version. @var DATE: Software release date. @var URL: URL of Cedar Backup webpage. @author: Kenneth J. Pronovici """ AUTHOR = "Kenneth J. Pronovici" EMAIL = "pronovic@ieee.org" COPYRIGHT = "2004-2011,2013-2016" VERSION = "3.1.6" DATE = "13 Feb 2016" URL = "https://bitbucket.org/cedarsolutions/cedar-backup3" CedarBackup3-3.1.6/CedarBackup3/knapsack.py0000664000175000017500000003207512560007327022077 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2005,2010,2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. 
Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Provides knapsack algorithms used for "fit" decisions # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######## # Notes ######## """ Provides the implementation for various knapsack algorithms. Knapsack algorithms are "fit" algorithms, used to take a set of "things" and decide on the optimal way to fit them into some container. The focus of this code is to fit files onto a disc, although the interface (in terms of item, item size and capacity size, with no units) is generic enough that it can be applied to items other than files. All of the algorithms implemented below assume that "optimal" means "use up as much of the disc's capacity as possible", but each produces slightly different results. For instance, the best fit and first fit algorithms tend to include fewer files than the worst fit and alternate fit algorithms, even if they use the disc space more efficiently. Usually, for a given set of circumstances, it will be obvious to a human which algorithm is the right one to use, based on trade-offs between number of files included and ideal space utilization. It's a little more difficult to do this programmatically. For Cedar Backup's purposes (i.e. trying to fit a small number of collect-directory tarfiles onto a disc), worst-fit is probably the best choice if the goal is to include as many of the collect directories as possible. @sort: firstFit, bestFit, worstFit, alternateFit @author: Kenneth J. Pronovici """ ####################################################################### # Public functions ####################################################################### ###################### # firstFit() function ###################### def firstFit(items, capacity): """ Implements the first-fit knapsack algorithm. The first-fit algorithm proceeds through an unsorted list of items until running out of items or meeting capacity exactly. 
If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. This algorithm generally performs more poorly than the other algorithms both in terms of capacity utilization and item utilization, but can be as much as an order of magnitude faster on large lists of items because it doesn't require any sorting. The "size" values in the items and capacity arguments must be comparable, but they are unitless from the perspective of this function. Zero-sized items and capacity are considered degenerate cases. If capacity is zero, no items fit, period, even if the items list contains zero-sized items. The dictionary is indexed by its key, and then includes its key. This seems kind of strange on first glance. It works this way to facilitate easy sorting of the list on key if needed. The function assumes that the list of items may be used destructively, if needed. This avoids the overhead of having the function make a copy of the list, if this is not required. Callers should pass C{items.copy()} if they do not want their version of the list modified. The function returns a list of chosen items and the unitless amount of capacity used by the items. @param items: Items to operate on @type items: dictionary, keyed on item, of C{(item, size)} tuples, item as string and size as integer @param capacity: Capacity of container to fit to @type capacity: integer @returns: Tuple C{(items, used)} as described above """ # Use dict since insert into dict is faster than list append included = { } # Search the list as it stands (arbitrary order) used = 0 remaining = capacity for key in list(items.keys()): if remaining == 0: break if remaining - items[key][1] >= 0: included[key] = None used += items[key][1] remaining -= items[key][1] # Return results return (list(included.keys()), used) ##################### # bestFit() function ##################### def bestFit(items, capacity): """ Implements the best-fit knapsack algorithm. 
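To make the documented input and output shapes concrete, here is the first-fit loop applied to a tiny item dictionary. This is a self-contained mirror of C{firstFit()} above; the file names and sizes are hypothetical:

```python
def firstFit(items, capacity):
    # Mirror of firstFit() above: greedy pass in dictionary order.
    included = {}
    used = 0
    remaining = capacity
    for key in list(items.keys()):
        if remaining == 0:
            break
        if remaining - items[key][1] >= 0:
            included[key] = None
            used += items[key][1]
            remaining -= items[key][1]
    return (list(included.keys()), used)

# Keyed on item, values are (item, size) tuples, exactly as documented.
items = {"a.tar": ("a.tar", 100), "b.tar": ("b.tar", 250), "c.tar": ("c.tar", 500)}
chosen, used = firstFit(items, 600)
# chosen == ["a.tar", "b.tar"], used == 350: c.tar no longer fits, and
# the unsorted pass never reconsiders, so capacity utilization suffers
```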
The best-fit algorithm proceeds through a sorted list of items (sorted from largest to smallest) until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the minimum number of items possible in its search for optimal capacity utilization. For large lists of mixed-size items, it's not unusual to see the algorithm achieve 100% capacity utilization by including fewer than 1% of the items. Probably because it often has to look at fewer of the items before completing, it tends to be a little faster than the worst-fit or alternate-fit algorithms. The "size" values in the items and capacity arguments must be comparable, but they are unitless from the perspective of this function. Zero-sized items and capacity are considered degenerate cases. If capacity is zero, no items fit, period, even if the items list contains zero-sized items. The dictionary is indexed by its key, and then includes its key. This seems kind of strange on first glance. It works this way to facilitate easy sorting of the list on key if needed. The function assumes that the list of items may be used destructively, if needed. This avoids the overhead of having the function make a copy of the list, if this is not required. Callers should pass C{items.copy()} if they do not want their version of the list modified. The function returns a list of chosen items and the unitless amount of capacity used by the items.
@param items: Items to operate on @type items: dictionary, keyed on item, of C{(item, size)} tuples, item as string and size as integer @param capacity: Capacity of container to fit to @type capacity: integer @returns: Tuple C{(items, used)} as described above """ # Use dict since insert into dict is faster than list append included = { } # Sort the list from largest to smallest itemlist = list(items.items()) itemlist.sort(key=lambda x: x[1][1], reverse=True) # sort descending keys = [] for item in itemlist: keys.append(item[0]) # Search the list used = 0 remaining = capacity for key in keys: if remaining == 0: break if remaining - items[key][1] >= 0: included[key] = None used += items[key][1] remaining -= items[key][1] # Return the results return (list(included.keys()), used) ###################### # worstFit() function ###################### def worstFit(items, capacity): """ Implements the worst-fit knapsack algorithm. The worst-fit algorithm proceeds through a sorted list of items (sorted from smallest to largest) until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the maximum number of items possible in its search for optimal capacity utilization. It tends to be somewhat slower than either the best-fit or alternate-fit algorithm, probably because on average it has to look at more items before completing. The "size" values in the items and capacity arguments must be comparable, but they are unitless from the perspective of this function. Zero-sized items and capacity are considered degenerate cases. If capacity is zero, no items fit, period, even if the items list contains zero-sized items. The dictionary is indexed by its key, and then includes its key. This seems kind of strange on first glance. It works this way to facilitate easy sorting of the list on key if needed.
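The best-fit pass implemented above, mirrored as a self-contained sketch on a tiny dictionary (hypothetical file names and sizes), shows the largest-first behavior:

```python
def bestFit(items, capacity):
    # Mirror of bestFit() above: largest-first greedy pass.
    included = {}
    keys = [k for k, _ in sorted(items.items(), key=lambda x: x[1][1], reverse=True)]
    used = 0
    remaining = capacity
    for key in keys:
        if remaining == 0:
            break
        if remaining - items[key][1] >= 0:
            included[key] = None
            used += items[key][1]
            remaining -= items[key][1]
    return (list(included.keys()), used)

items = {"a.tar": ("a.tar", 100), "b.tar": ("b.tar", 250), "c.tar": ("c.tar", 500)}
chosen, used = bestFit(items, 600)
# c.tar (500) is taken first, b.tar (250) is thrown away, then a.tar (100)
# fits: used == 600, a perfect fit with only two items included
```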
The function assumes that the list of items may be used destructively, if needed. This avoids the overhead of having the function make a copy of the list, if this is not required. Callers should pass C{items.copy()} if they do not want their version of the list modified. The function returns a list of chosen items and the unitless amount of capacity used by the items. @param items: Items to operate on @type items: dictionary, keyed on item, of C{(item, size)} tuples, item as string and size as integer @param capacity: Capacity of container to fit to @type capacity: integer @returns: Tuple C{(items, used)} as described above """ # Use dict since insert into dict is faster than list append included = { } # Sort the list from smallest to largest itemlist = list(items.items()) itemlist.sort(key=lambda x: x[1][1]) # sort ascending keys = [] for item in itemlist: keys.append(item[0]) # Search the list used = 0 remaining = capacity for key in keys: if remaining == 0: break if remaining - items[key][1] >= 0: included[key] = None used += items[key][1] remaining -= items[key][1] # Return results return (list(included.keys()), used) ########################## # alternateFit() function ########################## def alternateFit(items, capacity): """ Implements the alternate-fit knapsack algorithm. This algorithm (which I'm calling "alternate-fit" as in "alternate from one to the other") tries to balance small and large items to achieve better end-of-disk performance. Instead of just working one direction through a list, it alternately works from the start and end of a sorted list (sorted from smallest to largest), throwing away any item which causes capacity to be exceeded. The algorithm tends to be slower than the best-fit and first-fit algorithms, and slightly faster than the worst-fit algorithm, probably because of the number of items it considers on average before completing. 
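The worst-fit pass described above, sketched on a slightly larger set, shows it favoring as many small items as possible:

```python
def worstFit(items, capacity):
    # Mirror of worstFit() above: smallest-first greedy pass.
    included = {}
    keys = [k for k, _ in sorted(items.items(), key=lambda x: x[1][1])]
    used = 0
    remaining = capacity
    for key in keys:
        if remaining == 0:
            break
        if remaining - items[key][1] >= 0:
            included[key] = None
            used += items[key][1]
            remaining -= items[key][1]
    return (list(included.keys()), used)

# Hypothetical unitless sizes: 100, 200, 300, 450 against capacity 600.
items = {"w": ("w", 100), "x": ("x", 200), "y": ("y", 300), "z": ("z", 450)}
chosen, used = worstFit(items, 600)
# The three smallest items are included (used == 600); best-fit on the
# same input would take z (450) and w (100), using only 550
```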
It often achieves slightly better capacity utilization than the worst-fit algorithm, while including slightly fewer items. The "size" values in the items and capacity arguments must be comparable, but they are unitless from the perspective of this function. Zero-sized items and capacity are considered degenerate cases. If capacity is zero, no items fit, period, even if the items list contains zero-sized items. The dictionary is indexed by its key, and then includes its key. This seems kind of strange on first glance. It works this way to facilitate easy sorting of the list on key if needed. The function assumes that the list of items may be used destructively, if needed. This avoids the overhead of having the function make a copy of the list, if this is not required. Callers should pass C{items.copy()} if they do not want their version of the list modified. The function returns a list of chosen items and the unitless amount of capacity used by the items. @param items: Items to operate on @type items: dictionary, keyed on item, of C{(item, size)} tuples, item as string and size as integer @param capacity: Capacity of container to fit to @type capacity: integer @returns: Tuple C{(items, used)} as described above """ # Use dict since insert into dict is faster than list append included = { } # Sort the list from smallest to largest itemlist = list(items.items()) itemlist.sort(key=lambda x: x[1][1]) # sort ascending keys = [] for item in itemlist: keys.append(item[0]) # Search the list used = 0 remaining = capacity front = keys[0:len(keys)//2] back = keys[len(keys)//2:len(keys)] back.reverse() i = 0 j = 0 while remaining > 0 and (i < len(front) or j < len(back)): if i < len(front): if remaining - items[front[i]][1] >= 0: included[front[i]] = None used += items[front[i]][1] remaining -= items[front[i]][1] i += 1 if j < len(back): if remaining - items[back[j]][1] >= 0: included[back[j]] = None used += items[back[j]][1] remaining -= items[back[j]][1] j += 1 # Return
results return (list(included.keys()), used) CedarBackup3-3.1.6/CedarBackup3/util.py0000664000175000017500000021155212642030302021246 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2008,2010,2015 Kenneth J. Pronovici. # All rights reserved. # # Portions copyright (c) 2001, 2002 Python Software Foundation. # All Rights Reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Provides general-purpose utilities. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides general-purpose utilities. 
@sort: AbsolutePathList, ObjectTypeList, RestrictedContentList, RegexMatchList, RegexList, _Vertex, DirectedGraph, PathResolverSingleton, sortDict, convertSize, getUidGid, changeOwnership, splitCommandLine, resolveCommand, executeCommand, calculateFileAge, encodePath, nullDevice, deriveDayOfWeek, isStartOfWeek, buildNormalizedPath, ISO_SECTOR_SIZE, BYTES_PER_SECTOR, BYTES_PER_KBYTE, BYTES_PER_MBYTE, BYTES_PER_GBYTE, KBYTES_PER_MBYTE, MBYTES_PER_GBYTE, SECONDS_PER_MINUTE, MINUTES_PER_HOUR, HOURS_PER_DAY, SECONDS_PER_DAY, UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES, UNIT_SECTORS @var ISO_SECTOR_SIZE: Size of an ISO image sector, in bytes. @var BYTES_PER_SECTOR: Number of bytes (B) per ISO sector. @var BYTES_PER_KBYTE: Number of bytes (B) per kilobyte (kB). @var BYTES_PER_MBYTE: Number of bytes (B) per megabyte (MB). @var BYTES_PER_GBYTE: Number of bytes (B) per gigabyte (GB). @var KBYTES_PER_MBYTE: Number of kilobytes (kB) per megabyte (MB). @var MBYTES_PER_GBYTE: Number of megabytes (MB) per gigabyte (GB). @var SECONDS_PER_MINUTE: Number of seconds per minute. @var MINUTES_PER_HOUR: Number of minutes per hour. @var HOURS_PER_DAY: Number of hours per day. @var SECONDS_PER_DAY: Number of seconds per day. @var UNIT_BYTES: Constant representing the byte (B) unit for conversion. @var UNIT_KBYTES: Constant representing the kilobyte (kB) unit for conversion. @var UNIT_MBYTES: Constant representing the megabyte (MB) unit for conversion. @var UNIT_GBYTES: Constant representing the gigabyte (GB) unit for conversion. @var UNIT_SECTORS: Constant representing the ISO sector unit for conversion. @author: Kenneth J.
Pronovici """ ######################################################################## # Imported modules ######################################################################## import sys import math import os import re import time import logging from subprocess import Popen, STDOUT, PIPE from functools import total_ordering from numbers import Real from decimal import Decimal import collections try: import pwd import grp _UID_GID_AVAILABLE = True except ImportError: _UID_GID_AVAILABLE = False from CedarBackup3.release import VERSION, DATE ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup3.log.util") outputLogger = logging.getLogger("CedarBackup3.output") ISO_SECTOR_SIZE = 2048.0 # in bytes BYTES_PER_SECTOR = ISO_SECTOR_SIZE BYTES_PER_KBYTE = 1024.0 KBYTES_PER_MBYTE = 1024.0 MBYTES_PER_GBYTE = 1024.0 BYTES_PER_MBYTE = BYTES_PER_KBYTE * KBYTES_PER_MBYTE BYTES_PER_GBYTE = BYTES_PER_MBYTE * MBYTES_PER_GBYTE SECONDS_PER_MINUTE = 60.0 MINUTES_PER_HOUR = 60.0 HOURS_PER_DAY = 24.0 SECONDS_PER_DAY = SECONDS_PER_MINUTE * MINUTES_PER_HOUR * HOURS_PER_DAY UNIT_BYTES = 0 UNIT_KBYTES = 1 UNIT_MBYTES = 2 UNIT_GBYTES = 4 UNIT_SECTORS = 3 MTAB_FILE = "/etc/mtab" MOUNT_COMMAND = [ "mount", ] UMOUNT_COMMAND = [ "umount", ] DEFAULT_LANGUAGE = "C" LANG_VAR = "LANG" LOCALE_VARS = [ "LC_ADDRESS", "LC_ALL", "LC_COLLATE", "LC_CTYPE", "LC_IDENTIFICATION", "LC_MEASUREMENT", "LC_MESSAGES", "LC_MONETARY", "LC_NAME", "LC_NUMERIC", "LC_PAPER", "LC_TELEPHONE", "LC_TIME", ] ######################################################################## # UnorderedList class definition ######################################################################## class UnorderedList(list): """ Class representing an "unordered list". 
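A quick standalone check of the derived unit constants defined above:

```python
# The derived byte-quantity constants above, recomputed for illustration.
BYTES_PER_KBYTE = 1024.0
KBYTES_PER_MBYTE = 1024.0
MBYTES_PER_GBYTE = 1024.0
BYTES_PER_MBYTE = BYTES_PER_KBYTE * KBYTES_PER_MBYTE
BYTES_PER_GBYTE = BYTES_PER_MBYTE * MBYTES_PER_GBYTE
SECONDS_PER_DAY = 60.0 * 60.0 * 24.0
# BYTES_PER_MBYTE == 1048576.0, BYTES_PER_GBYTE == 1073741824.0,
# SECONDS_PER_DAY == 86400.0
```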
An "unordered list" is a list in which only the contents matter, not the order in which the contents appear in the list. For instance, we might be keeping track of set of paths in a list, because it's convenient to have them in that form. However, for comparison purposes, we would only care that the lists contain exactly the same contents, regardless of order. I have come up with two reasonable ways of doing this, plus a couple more that would work but would be a pain to implement. My first method is to copy and sort each list, comparing the sorted versions. This will only work if two lists with exactly the same members are guaranteed to sort in exactly the same order. The second way would be to create two Sets and then compare the sets. However, this would lose information about any duplicates in either list. I've decided to go with option #1 for now. I'll modify this code if I run into problems in the future. We override the original C{__eq__}, C{__ne__}, C{__ge__}, C{__gt__}, C{__le__} and C{__lt__} list methods to change the definition of the various comparison operators. In all cases, the comparison is changed to return the result of the original operation I{but instead comparing sorted lists}. This is going to be quite a bit slower than a normal list, so you probably only want to use it on small lists. """ def __eq__(self, other): """ Definition of C{==} operator for this class. @param other: Other object to compare to. @return: True/false depending on whether C{self == other}. """ if other is None: return False selfSorted = UnorderedList.mixedsort(self[:]) otherSorted = UnorderedList.mixedsort(other[:]) return selfSorted.__eq__(otherSorted) def __ne__(self, other): """ Definition of C{!=} operator for this class. @param other: Other object to compare to. @return: True/false depending on whether C{self != other}. 
""" if other is None: return True selfSorted = UnorderedList.mixedsort(self[:]) otherSorted = UnorderedList.mixedsort(other[:]) return selfSorted.__ne__(otherSorted) def __ge__(self, other): """ Definition of S{>=} operator for this class. @param other: Other object to compare to. @return: True/false depending on whether C{self >= other}. """ if other is None: return True selfSorted = UnorderedList.mixedsort(self[:]) otherSorted = UnorderedList.mixedsort(other[:]) return selfSorted.__ge__(otherSorted) def __gt__(self, other): """ Definition of C{>} operator for this class. @param other: Other object to compare to. @return: True/false depending on whether C{self > other}. """ if other is None: return True selfSorted = UnorderedList.mixedsort(self[:]) otherSorted = UnorderedList.mixedsort(other[:]) return selfSorted.__gt__(otherSorted) def __le__(self, other): """ Definition of S{<=} operator for this class. @param other: Other object to compare to. @return: True/false depending on whether C{self <= other}. """ if other is None: return False selfSorted = UnorderedList.mixedsort(self[:]) otherSorted = UnorderedList.mixedsort(other[:]) return selfSorted.__le__(otherSorted) def __lt__(self, other): """ Definition of C{<} operator for this class. @param other: Other object to compare to. @return: True/false depending on whether C{self < other}. """ if other is None: return False selfSorted = UnorderedList.mixedsort(self[:]) otherSorted = UnorderedList.mixedsort(other[:]) return selfSorted.__lt__(otherSorted) @staticmethod def mixedsort(value): """ Sort a list, making sure we don't blow up if the list happens to include mixed values. 
@see: http://stackoverflow.com/questions/26575183/how-can-i-get-2-x-like-sorting-behaviour-in-python-3-x """ return sorted(value, key=UnorderedList.mixedkey) @staticmethod #pylint: disable=R0204 def mixedkey(value): """Provide a key for use by mixedsort()""" numeric = Real, Decimal if isinstance(value, numeric): typeinfo = numeric else: typeinfo = type(value) try: x = value < value except TypeError: value = repr(value) return repr(typeinfo), value ######################################################################## # AbsolutePathList class definition ######################################################################## class AbsolutePathList(UnorderedList): """ Class representing a list of absolute paths. This is an unordered list. We override the C{append}, C{insert} and C{extend} methods to ensure that any item added to the list is an absolute path. Each item added to the list is encoded using L{encodePath}. If we don't do this, we have problems trying certain operations between strings and unicode objects, particularly for "odd" filenames that can't be encoded in standard ASCII. """ def append(self, item): """ Overrides the standard C{append} method. @raise ValueError: If item is not an absolute path. """ if not os.path.isabs(item): raise ValueError("Not an absolute path: [%s]" % item) list.append(self, encodePath(item)) def insert(self, index, item): """ Overrides the standard C{insert} method. @raise ValueError: If item is not an absolute path. """ if not os.path.isabs(item): raise ValueError("Not an absolute path: [%s]" % item) list.insert(self, index, encodePath(item)) def extend(self, seq): """ Overrides the standard C{insert} method. @raise ValueError: If any item is not an absolute path. 
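The "compare sorted copies" idea behind C{UnorderedList} can be sketched in a
standalone form.  This is an illustrative rewrite, not the module's API: the
names C{mixedkey} and C{unordered_equal} are hypothetical, but the key-building
logic mirrors the code above (group numeric types under one tag, fall back to
C{repr()} for unorderable values).

```python
from decimal import Decimal
from numbers import Real

def mixedkey(value):
    """Sort key tolerant of mixed types (hypothetical helper name)."""
    numeric = (Real, Decimal)
    # Group all numeric types under one type tag so ints and floats compare directly.
    typeinfo = numeric if isinstance(value, numeric) else type(value)
    try:
        value < value  # probe: unorderable types raise TypeError
    except TypeError:
        value = repr(value)
    return (repr(typeinfo), value)

def unordered_equal(a, b):
    """Order-insensitive list equality via sorted copies (keeps duplicates)."""
    return sorted(a, key=mixedkey) == sorted(b, key=mixedkey)
```

Note that, unlike a set-based comparison, duplicates still matter: `[1, 2]`
and `[1, 2, 2]` compare unequal, which is exactly the behavior the docstring
argues for.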
      """
      for item in seq:
         if not os.path.isabs(item):
            raise ValueError("Not an absolute path: [%s]" % item)
      for item in seq:
         list.append(self, encodePath(item))


########################################################################
# ObjectTypeList class definition
########################################################################

class ObjectTypeList(UnorderedList):

   """
   Class representing a list containing only objects with a certain type.

   This is an unordered list.

   We override the C{append}, C{insert} and C{extend} methods to ensure that
   any item added to the list matches the type that is requested.  The
   comparison uses the built-in C{isinstance}, which should allow subclasses
   of the requested type to be added to the list as well.

   The C{objectName} value will be used in exceptions, i.e. C{"Item must be
   a CollectDir object."} if C{objectName} is C{"CollectDir"}.
   """

   def __init__(self, objectType, objectName):
      """
      Initializes a typed list for a particular type.
      @param objectType: Type that the list elements must match.
      @param objectName: Short string containing the "name" of the type.
      """
      super(ObjectTypeList, self).__init__()
      self.objectType = objectType
      self.objectName = objectName

   def append(self, item):
      """
      Overrides the standard C{append} method.
      @raise ValueError: If item does not match requested type.
      """
      if not isinstance(item, self.objectType):
         raise ValueError("Item must be a %s object." % self.objectName)
      list.append(self, item)

   def insert(self, index, item):
      """
      Overrides the standard C{insert} method.
      @raise ValueError: If item does not match requested type.
      """
      if not isinstance(item, self.objectType):
         raise ValueError("Item must be a %s object." % self.objectName)
      list.insert(self, index, item)

   def extend(self, seq):
      """
      Overrides the standard C{extend} method.
      @raise ValueError: If any item does not match requested type.
      """
      for item in seq:
         if not isinstance(item, self.objectType):
            raise ValueError("All items must be %s objects." % self.objectName)
      list.extend(self, seq)


########################################################################
# RestrictedContentList class definition
########################################################################

class RestrictedContentList(UnorderedList):

   """
   Class representing a list containing only objects with certain values.

   This is an unordered list.

   We override the C{append}, C{insert} and C{extend} methods to ensure that
   any item added to the list is among the valid values.  We use a standard
   comparison, so pretty much anything can be in the list of valid values.

   The C{valuesDescr} value will be used in exceptions, i.e. C{"Item must be
   one of values in VALID_ACTIONS"} if C{valuesDescr} is C{"VALID_ACTIONS"}.

   @note: This class doesn't make any attempt to trap for nonsensical
   arguments.  All of the values in the values list should be of the same
   type (i.e. strings).  Then, all list operations also need to be of that
   type (i.e. you should always insert or append just strings).  If you mix
   types -- for instance lists and strings -- you will likely see
   AttributeError exceptions or other problems.
   """

   def __init__(self, valuesList, valuesDescr, prefix=None):
      """
      Initializes a list restricted to containing certain values.
      @param valuesList: List of valid values.
      @param valuesDescr: Short string describing list of values.
      @param prefix: Prefix to use in error messages (None results in prefix "Item")
      """
      super(RestrictedContentList, self).__init__()
      self.prefix = "Item"
      if prefix is not None:
         self.prefix = prefix
      self.valuesList = valuesList
      self.valuesDescr = valuesDescr

   def append(self, item):
      """
      Overrides the standard C{append} method.
      @raise ValueError: If item is not in the values list.
      """
      if item not in self.valuesList:
         raise ValueError("%s must be one of the values in %s." % (self.prefix, self.valuesDescr))
      list.append(self, item)

   def insert(self, index, item):
      """
      Overrides the standard C{insert} method.
      @raise ValueError: If item is not in the values list.
      """
      if item not in self.valuesList:
         raise ValueError("%s must be one of the values in %s." % (self.prefix, self.valuesDescr))
      list.insert(self, index, item)

   def extend(self, seq):
      """
      Overrides the standard C{extend} method.
      @raise ValueError: If any item is not in the values list.
      """
      for item in seq:
         if item not in self.valuesList:
            raise ValueError("%s must be one of the values in %s." % (self.prefix, self.valuesDescr))
      list.extend(self, seq)


########################################################################
# RegexMatchList class definition
########################################################################

class RegexMatchList(UnorderedList):

   """
   Class representing a list containing only strings that match a regular
   expression.

   If C{emptyAllowed} is passed in as C{False}, then empty strings are
   explicitly disallowed, even if they happen to match the regular
   expression.  (C{None} values are always disallowed, since string
   operations are not permitted on C{None}.)

   This is an unordered list.

   We override the C{append}, C{insert} and C{extend} methods to ensure that
   any item added to the list matches the indicated regular expression.

   @note: If you try to put values that are not strings into the list, you
   will likely get either TypeError or AttributeError exceptions as a
   result.
   """

   def __init__(self, valuesRegex, emptyAllowed=True, prefix=None):
      """
      Initializes a list restricted to containing certain values.
      @param valuesRegex: Regular expression that must be matched, as a string
      @param emptyAllowed: Indicates whether empty or None values are allowed.
      @param prefix: Prefix to use in error messages (None results in prefix "Item")
      """
      super(RegexMatchList, self).__init__()
      self.prefix = "Item"
      if prefix is not None:
         self.prefix = prefix
      self.valuesRegex = valuesRegex
      self.emptyAllowed = emptyAllowed
      self.pattern = re.compile(self.valuesRegex)

   def append(self, item):
      """
      Overrides the standard C{append} method.
      @raise ValueError: If item is None
      @raise ValueError: If item is empty and empty values are not allowed
      @raise ValueError: If item does not match the configured regular expression
      """
      if item is None or (not self.emptyAllowed and item == ""):
         raise ValueError("%s cannot be empty." % self.prefix)
      if not self.pattern.search(item):
         raise ValueError("%s is not valid: [%s]" % (self.prefix, item))
      list.append(self, item)

   def insert(self, index, item):
      """
      Overrides the standard C{insert} method.
      @raise ValueError: If item is None
      @raise ValueError: If item is empty and empty values are not allowed
      @raise ValueError: If item does not match the configured regular expression
      """
      if item is None or (not self.emptyAllowed and item == ""):
         raise ValueError("%s cannot be empty." % self.prefix)
      if not self.pattern.search(item):
         raise ValueError("%s is not valid: [%s]" % (self.prefix, item))
      list.insert(self, index, item)

   def extend(self, seq):
      """
      Overrides the standard C{extend} method.
      @raise ValueError: If any item is None
      @raise ValueError: If any item is empty and empty values are not allowed
      @raise ValueError: If any item does not match the configured regular expression
      """
      for item in seq:
         if item is None or (not self.emptyAllowed and item == ""):
            raise ValueError("%s cannot be empty." % self.prefix)
         if not self.pattern.search(item):
            raise ValueError("%s is not valid: [%s]" % (self.prefix, item))
      list.extend(self, seq)


########################################################################
# RegexList class definition
########################################################################

class RegexList(UnorderedList):

   """
   Class representing a list of valid regular expression strings.

   This is an unordered list.

   We override the C{append}, C{insert} and C{extend} methods to ensure that
   any item added to the list is a valid regular expression.
   """

   def append(self, item):
      """
      Overrides the standard C{append} method.
      @raise ValueError: If item is not a valid regular expression.
      """
      try:
         re.compile(item)
      except re.error:
         raise ValueError("Not a valid regular expression: [%s]" % item)
      list.append(self, item)

   def insert(self, index, item):
      """
      Overrides the standard C{insert} method.
      @raise ValueError: If item is not a valid regular expression.
      """
      try:
         re.compile(item)
      except re.error:
         raise ValueError("Not a valid regular expression: [%s]" % item)
      list.insert(self, index, item)

   def extend(self, seq):
      """
      Overrides the standard C{extend} method.
      @raise ValueError: If any item is not a valid regular expression.
      """
      for item in seq:
         try:
            re.compile(item)
         except re.error:
            raise ValueError("Not a valid regular expression: [%s]" % item)
      for item in seq:
         list.append(self, item)


########################################################################
# Directed graph implementation
########################################################################

class _Vertex(object):

   """
   Represents a vertex (or node) in a directed graph.
   """

   def __init__(self, name):
      """
      Constructor.
      @param name: Name of this graph vertex.
      @type name: String value.
      """
      self.name = name
      self.endpoints = []
      self.state = None


@total_ordering
class DirectedGraph(object):

   """
   Represents a directed graph.

   A graph B{G=(V,E)} consists of a set of vertices B{V} together with a set
   B{E} of vertex pairs or edges.
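All of the validating list classes above share one pattern: every mutator
checks the item before delegating to the plain C{list} implementation.  A
minimal standalone sketch of that pattern, using a hypothetical class name and
only the absolute-path check (the real classes also encode paths via
C{encodePath}):

```python
import os

class AbsolutePathListSketch(list):
    """Illustrative sketch: reject non-absolute paths at mutation time."""

    def append(self, item):
        # Validate before delegating to the built-in list behavior.
        if not os.path.isabs(item):
            raise ValueError("Not an absolute path: [%s]" % item)
        list.append(self, item)

lst = AbsolutePathListSketch()
lst.append("/tmp/file")

try:
    lst.append("relative/path")
    rejected = False
except ValueError:
    rejected = True
```

The same shape repeats for type checks (C{ObjectTypeList}), value membership
(C{RestrictedContentList}), and regex matching (C{RegexMatchList},
C{RegexList}); only the predicate changes.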
   In a directed graph, each edge also has an associated direction (from
   vertex B{v1} to vertex B{v2}).  A C{DirectedGraph} object provides a way
   to construct a directed graph and execute a depth-first search.

   This data structure was designed based on the graphing chapter in
   U{The Algorithm Design Manual}, by Steven S. Skiena.

   This class is intended to be used by Cedar Backup for dependency
   ordering.  Because of this, it's not quite general-purpose.  Unlike a
   "general" graph, every vertex in this graph has at least one edge
   pointing to it, from a special "start" vertex.  This is so no vertices
   get "lost" either because they have no dependencies or because nothing
   depends on them.
   """

   _UNDISCOVERED = 0
   _DISCOVERED = 1
   _EXPLORED = 2

   def __init__(self, name):
      """
      Directed graph constructor.
      @param name: Name of this graph.
      @type name: String value.
      """
      if name is None or name == "":
         raise ValueError("Graph name must be non-empty.")
      self._name = name
      self._vertices = {}
      self._startVertex = _Vertex(None)  # start vertex is the only vertex with no name

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "DirectedGraph(%s)" % self.name

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __eq__(self, other):
      """Equals operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) == 0

   def __lt__(self, other):
      """Less-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) < 0

   def __gt__(self, other):
      """Greater-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) > 0

   def __cmp__(self, other):
      """
      Original Python 2 comparison operator.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      # pylint: disable=W0212
      if other is None:
         return 1
      if self.name != other.name:
         if str(self.name or "") < str(other.name or ""):
            return -1
         else:
            return 1
      if self._vertices != other._vertices:
         if self._vertices < other._vertices:
            return -1
         else:
            return 1
      return 0

   def _getName(self):
      """
      Property target used to get the graph name.
      """
      return self._name

   name = property(_getName, None, None, "Name of the graph.")

   def createVertex(self, name):
      """
      Creates a named vertex.
      @param name: vertex name
      @raise ValueError: If the vertex name is C{None} or empty.
      """
      if name is None or name == "":
         raise ValueError("Vertex name must be non-empty.")
      vertex = _Vertex(name)
      self._startVertex.endpoints.append(vertex)  # so every vertex is connected at least once
      self._vertices[name] = vertex

   def createEdge(self, start, finish):
      """
      Adds an edge with an associated direction, from C{start} vertex to
      C{finish} vertex.
      @param start: Name of start vertex.
      @param finish: Name of finish vertex.
      @raise ValueError: If one of the named vertices is unknown.
      """
      try:
         startVertex = self._vertices[start]
         finishVertex = self._vertices[finish]
         startVertex.endpoints.append(finishVertex)
      except KeyError as e:
         raise ValueError("Vertex [%s] could not be found." % e)

   def topologicalSort(self):
      """
      Implements a topological sort of the graph.

      This method also enforces that the graph is a directed acyclic graph,
      which is a requirement of a topological sort.

      A directed acyclic graph (or "DAG") is a directed graph with no
      directed cycles.  A topological sort of a DAG is an ordering on the
      vertices such that all edges go from left to right.  Only an acyclic
      graph can have a topological sort, and any DAG has at least one.

      Since a topological sort only makes sense for an acyclic graph, this
      method throws an exception if a cycle is found: if the graph contains
      any cycles, it is not possible to determine a consistent ordering for
      the vertices.

      @note: If a particular vertex has no edges, then its position in the
      final list depends on the order in which the vertices were created in
      the graph.  If you're using this method to determine a dependency
      order, this makes sense: a vertex with no dependencies can go anywhere
      (and will).

      @return: Ordering on the vertices so that all edges go from left to right.

      @raise ValueError: If a cycle is found in the graph.
      """
      ordering = []
      for key in self._vertices:
         vertex = self._vertices[key]
         vertex.state = self._UNDISCOVERED
      for key in self._vertices:
         vertex = self._vertices[key]
         if vertex.state == self._UNDISCOVERED:
            self._topologicalSort(self._startVertex, ordering)
      return ordering

   def _topologicalSort(self, vertex, ordering):
      """
      Recursive depth-first search function implementing topological sort.
      @param vertex: Vertex to search
      @param ordering: List of vertices in proper order
      """
      vertex.state = self._DISCOVERED
      for endpoint in vertex.endpoints:
         if endpoint.state == self._UNDISCOVERED:
            self._topologicalSort(endpoint, ordering)
         elif endpoint.state != self._EXPLORED:
            raise ValueError("Cycle found in graph (found '%s' while searching '%s')." % (vertex.name, endpoint.name))
      if vertex.name is not None:
         ordering.insert(0, vertex.name)
      vertex.state = self._EXPLORED


########################################################################
# PathResolverSingleton class definition
########################################################################

class PathResolverSingleton(object):

   """
   Singleton used for resolving executable paths.

   Various functions throughout Cedar Backup (including extensions) need a
   way to resolve the path of executables that they use.  For instance, the
   image functionality needs to find the C{mkisofs} executable, and the
   Subversion extension needs to find the C{svnlook} executable.
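The DFS-based topological sort above can be sketched standalone.  This is an
illustrative reduction, not the module's API: graphs are plain dicts rather
than C{_Vertex} objects, and the function name is hypothetical, but the
three-state marking and the prepend-on-completion step mirror
C{_topologicalSort} exactly.

```python
UNDISCOVERED, DISCOVERED, EXPLORED = 0, 1, 2

def topo_sort(edges, vertices):
    """DFS topological sort over {vertex: [successors]}; raises on cycles."""
    state = {v: UNDISCOVERED for v in vertices}
    ordering = []
    def visit(v):
        state[v] = DISCOVERED
        for w in edges.get(v, []):
            if state[w] == UNDISCOVERED:
                visit(w)
            elif state[w] != EXPLORED:
                # A back-edge to a DISCOVERED vertex means we found a cycle.
                raise ValueError("Cycle found in graph at '%s'." % w)
        ordering.insert(0, v)  # prepend once all successors are explored
        state[v] = EXPLORED
    for v in vertices:
        if state[v] == UNDISCOVERED:
            visit(v)
    return ordering

order = topo_sort({"a": ["b"], "b": ["c"]}, ["a", "b", "c"])
try:
    topo_sort({"a": ["b"], "b": ["a"]}, ["a", "b"])
    cycle_detected = False
except ValueError:
    cycle_detected = True
```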
   Cedar Backup's original behavior was to assume that the simple name
   (C{"svnlook"} or whatever) was available on the caller's C{$PATH}, and to
   fail otherwise.  However, this turns out to be less than ideal, since for
   instance the root user might not always have executables like C{svnlook}
   in its path.

   One solution is to specify a path (either via an absolute path or some
   sort of path insertion or path appending mechanism) that would apply to
   the C{executeCommand()} function.  This is not difficult to implement,
   but it seems like kind of a "big hammer" solution.  Besides that, it
   might also represent a security flaw (for instance, I prefer not to mess
   with root's C{$PATH} on the application level if I don't have to).

   The alternative is to set up some sort of configuration for the path to
   certain executables, i.e. "find C{svnlook} in C{/usr/local/bin/svnlook}"
   or whatever.  This PathResolverSingleton aims to provide a good solution
   to the mapping problem.  Callers of all sorts (extensions or not) can get
   an instance of the singleton.  Then, they call the C{lookup} method to
   try and resolve the executable they are looking for.  Through the
   C{lookup} method, the caller can also specify a default to use if a
   mapping is not found.  This way, with no real effort on the part of the
   caller, behavior can neatly degrade to something equivalent to the
   current behavior if there is no special mapping or if the singleton was
   never initialized in the first place.

   Even better, extensions automagically get access to the same resolver
   functionality, and they don't even need to understand how the mapping
   happens.  All extension authors need to do is document what executables
   their code requires, and the standard resolver configuration section will
   meet their needs.

   The class should be initialized once through the constructor somewhere in
   the main routine.  Then, the main routine should call the L{fill} method
   to fill in the resolver's internal structures.

   Everyone else who needs to resolve a path will get an instance of the
   class using L{getInstance} and will then just call the L{lookup} method.

   @cvar _instance: Holds a reference to the singleton
   @ivar _mapping: Internal mapping from resource name to path.
   """

   _instance = None  # Holds a reference to singleton instance

   class _Helper:
      """Helper class to provide a singleton factory method."""
      def __init__(self):
         pass
      def __call__(self, *args, **kw):
         # pylint: disable=W0212,R0201
         if PathResolverSingleton._instance is None:
            obj = PathResolverSingleton()
            PathResolverSingleton._instance = obj
         return PathResolverSingleton._instance

   getInstance = _Helper()  # Method that callers will use to get an instance

   def __init__(self):
      """Singleton constructor, which just creates the singleton instance."""
      PathResolverSingleton._instance = self
      self._mapping = { }

   def lookup(self, name, default=None):
      """
      Looks up name and returns the resolved path associated with the name.
      @param name: Name of the path resource to resolve.
      @param default: Default to return if resource cannot be resolved.
      @return: Resolved path associated with name, or default if name can't be resolved.
      """
      value = default
      if name in list(self._mapping.keys()):
         value = self._mapping[name]
      logger.debug("Resolved command [%s] to [%s].", name, value)
      return value

   def fill(self, mapping):
      """
      Fills in the singleton's internal mapping from name to resource.
      @param mapping: Mapping from resource name to path.
      @type mapping: Dictionary mapping name to path, both as strings.
      """
      self._mapping = { }
      for key in list(mapping.keys()):
         self._mapping[key] = mapping[key]


########################################################################
# Pipe class definition
########################################################################

class Pipe(Popen):

   """
   Specialized pipe class for use by C{executeCommand}.

   The L{executeCommand} function needs a specialized way of interacting
   with a pipe.
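The lookup-with-default behavior described above can be shown with a minimal
standalone sketch.  The class name C{ResolverSketch} is hypothetical and the
singleton machinery is omitted; only the C{fill}/C{lookup} semantics from the
real class are reproduced (a mapping consulted first, falling back to the
caller-supplied default).

```python
class ResolverSketch:
    """Illustrative sketch of fill()/lookup() with graceful fallback."""

    def __init__(self):
        self._mapping = {}

    def fill(self, mapping):
        # Copy the caller's mapping so later mutations don't leak in.
        self._mapping = dict(mapping)

    def lookup(self, name, default=None):
        # Return the configured path, or the default when no mapping exists.
        return self._mapping.get(name, default)

resolver = ResolverSketch()
resolver.fill({"svnlook": "/usr/local/bin/svnlook"})
```

This is what lets callers degrade gracefully: passing the bare command name as
the default means an unconfigured resolver simply leaves commands unchanged.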
   First, C{executeCommand} only reads from the pipe, and never writes to
   it.  Second, C{executeCommand} needs a way to discard all output written
   to C{stderr}, as a means of simulating the shell C{2>/dev/null}
   construct.
   """

   def __init__(self, cmd, bufsize=-1, ignoreStderr=False):
      stderr = STDOUT
      if ignoreStderr:
         devnull = nullDevice()
         stderr = os.open(devnull, os.O_RDWR)
      Popen.__init__(self, shell=False, args=cmd, bufsize=bufsize, stdin=None, stdout=PIPE, stderr=stderr)


########################################################################
# Diagnostics class definition
########################################################################

class Diagnostics(object):

   """
   Class holding runtime diagnostic information.

   Diagnostic information is information that is useful to get from users
   for debugging purposes.  I'm consolidating it all here into one object.

   @sort: __init__, __repr__, __str__
   """
   # pylint: disable=R0201

   def __init__(self):
      """
      Constructor for the C{Diagnostics} class.
      """

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "Diagnostics()"

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def getValues(self):
      """
      Get a map containing all of the diagnostic values.
      @return: Map from diagnostic name to diagnostic value.
      """
      values = {}
      values['version'] = self.version
      values['interpreter'] = self.interpreter
      values['platform'] = self.platform
      values['encoding'] = self.encoding
      values['locale'] = self.locale
      values['timestamp'] = self.timestamp
      return values

   def printDiagnostics(self, fd=sys.stdout, prefix=""):
      """
      Pretty-print diagnostic information to a file descriptor.
      @param fd: File descriptor used to print information.
      @param prefix: Prefix string (if any) to place onto printed lines
      @note: The C{fd} is used rather than C{print} to facilitate unit testing.
      """
      lines = self._buildDiagnosticLines(prefix)
      for line in lines:
         fd.write("%s\n" % line)

   def logDiagnostics(self, method, prefix=""):
      """
      Pretty-print diagnostic information using a logger method.
      @param method: Logger method to use for logging (i.e. logger.info)
      @param prefix: Prefix string (if any) to place onto printed lines
      """
      lines = self._buildDiagnosticLines(prefix)
      for line in lines:
         method("%s" % line)

   def _buildDiagnosticLines(self, prefix=""):
      """
      Build a set of pretty-printed diagnostic lines.
      @param prefix: Prefix string (if any) to place onto printed lines
      @return: List of strings, not terminated by newlines.
      """
      values = self.getValues()
      keys = list(values.keys())
      keys.sort()
      tmax = Diagnostics._getMaxLength(keys) + 3  # three extra dots in output
      lines = []
      for key in keys:
         title = key.title()
         title += (tmax - len(title)) * '.'
         value = values[key]
         line = "%s%s: %s" % (prefix, title, value)
         lines.append(line)
      return lines

   @staticmethod
   def _getMaxLength(values):
      """
      Get the maximum length from among a list of strings.
      """
      tmax = 0
      for value in values:
         if len(value) > tmax:
            tmax = len(value)
      return tmax

   def _getVersion(self):
      """
      Property target to get the Cedar Backup version.
      """
      return "Cedar Backup %s (%s)" % (VERSION, DATE)

   def _getInterpreter(self):
      """
      Property target to get the Python interpreter version.
      """
      version = sys.version_info
      return "Python %d.%d.%d (%s)" % (version[0], version[1], version[2], version[3])

   def _getEncoding(self):
      """
      Property target to get the filesystem encoding.
      """
      return sys.getfilesystemencoding() or sys.getdefaultencoding()

   def _getPlatform(self):
      """
      Property target to get the operating system platform.
      """
      try:
         uname = os.uname()
         sysname = uname[0]  # i.e. Linux
         release = uname[2]  # i.e. 2.16.18-2
         machine = uname[4]  # i.e. i686
         return "%s (%s %s %s)" % (sys.platform, sysname, release, machine)
      except:
         return sys.platform

   def _getLocale(self):
      """
      Property target to get the default locale that is in effect.
      """
      try:
         import locale
         return locale.getdefaultlocale()[0]
      except:
         return "(unknown)"

   def _getTimestamp(self):
      """
      Property target to get a current date/time stamp.
      """
      try:
         import datetime
         return datetime.datetime.utcnow().ctime() + " UTC"
      except:
         return "(unknown)"

   version = property(_getVersion, None, None, "Cedar Backup version.")
   interpreter = property(_getInterpreter, None, None, "Python interpreter version.")
   platform = property(_getPlatform, None, None, "Platform identifying information.")
   encoding = property(_getEncoding, None, None, "Filesystem encoding that is in effect.")
   locale = property(_getLocale, None, None, "Locale that is in effect.")
   timestamp = property(_getTimestamp, None, None, "Current timestamp.")


########################################################################
# General utility functions
########################################################################

######################
# sortDict() function
######################

def sortDict(d):
   """
   Returns the keys of the dictionary sorted by value.
   @param d: Dictionary to operate on
   @return: List of dictionary keys sorted in order by dictionary value.
   """
   items = list(d.items())
   items.sort(key=lambda x: (x[1], x[0]))  # sort by value and then by key
   return [key for key, value in items]


########################
# removeKeys() function
########################

def removeKeys(d, keys):
   """
   Removes all of the keys from the dictionary.

   The dictionary is altered in-place.  Each key must exist in the
   dictionary.

   @param d: Dictionary to operate on
   @param keys: List of keys to remove
   @raise KeyError: If one of the keys does not exist
   """
   for key in keys:
      del d[key]


#########################
# convertSize() function
#########################

def convertSize(size, fromUnit, toUnit):
   """
   Converts a size in one unit to a size in another unit.

   This is just a convenience function so that the functionality can be
   implemented in just one place.
Internally, we convert values to bytes and then to the final unit. The available units are: - C{UNIT_BYTES} - Bytes - C{UNIT_KBYTES} - Kilobytes, where 1 kB = 1024 B - C{UNIT_MBYTES} - Megabytes, where 1 MB = 1024 kB - C{UNIT_GBYTES} - Gigabytes, where 1 GB = 1024 MB - C{UNIT_SECTORS} - Sectors, where 1 sector = 2048 B @param size: Size to convert @type size: Integer or float value in units of C{fromUnit} @param fromUnit: Unit to convert from @type fromUnit: One of the units listed above @param toUnit: Unit to convert to @type toUnit: One of the units listed above @return: Number converted to new unit, as a float. @raise ValueError: If one of the units is invalid. """ if size is None: raise ValueError("Cannot convert size of None.") if fromUnit == UNIT_BYTES: byteSize = float(size) elif fromUnit == UNIT_KBYTES: byteSize = float(size) * BYTES_PER_KBYTE elif fromUnit == UNIT_MBYTES: byteSize = float(size) * BYTES_PER_MBYTE elif fromUnit == UNIT_GBYTES: byteSize = float(size) * BYTES_PER_GBYTE elif fromUnit == UNIT_SECTORS: byteSize = float(size) * BYTES_PER_SECTOR else: raise ValueError("Unknown 'from' unit %s." % fromUnit) if toUnit == UNIT_BYTES: return byteSize elif toUnit == UNIT_KBYTES: return byteSize / BYTES_PER_KBYTE elif toUnit == UNIT_MBYTES: return byteSize / BYTES_PER_MBYTE elif toUnit == UNIT_GBYTES: return byteSize / BYTES_PER_GBYTE elif toUnit == UNIT_SECTORS: return byteSize / BYTES_PER_SECTOR else: raise ValueError("Unknown 'to' unit %s." % toUnit) ########################## # displayBytes() function ########################## def displayBytes(bytes, digits=2): # pylint: disable=W0622 """ Format a byte quantity so it can be sensibly displayed. It's rather difficult to look at a number like "72372224 bytes" and get any meaningful information out of it. It would be more useful to see something like "69.02 MB". That's what this function does. 
Any time you want to display a byte value, i.e.:: print "Size: %s bytes" % bytes Call this function instead:: print "Size: %s" % displayBytes(bytes) What comes out will be sensibly formatted. The indicated number of digits will be listed after the decimal point, rounded based on whatever rules are used by Python's standard C{%f} string format specifier. (Values less than 1 kB will be listed in bytes and will not have a decimal point, since the concept of a fractional byte is nonsensical.) @param bytes: Byte quantity. @type bytes: Integer number of bytes. @param digits: Number of digits to display after the decimal point. @type digits: Integer value, typically 2-5. @return: String, formatted for sensible display. """ if bytes is None: raise ValueError("Cannot display byte value of None.") bytes = float(bytes) if math.fabs(bytes) < BYTES_PER_KBYTE: fmt = "%.0f bytes" value = bytes elif math.fabs(bytes) < BYTES_PER_MBYTE: fmt = "%." + "%d" % digits + "f kB" value = bytes / BYTES_PER_KBYTE elif math.fabs(bytes) < BYTES_PER_GBYTE: fmt = "%." + "%d" % digits + "f MB" value = bytes / BYTES_PER_MBYTE else: fmt = "%." + "%d" % digits + "f GB" value = bytes / BYTES_PER_GBYTE return fmt % value ################################## # getFunctionReference() function ################################## def getFunctionReference(module, function): """ Gets a reference to a named function. This does some hokey-pokey to get back a reference to a dynamically named function. For instance, say you wanted to get a reference to the C{os.path.isdir} function. You could use:: myfunc = getFunctionReference("os.path", "isdir") Although we won't bomb out directly, behavior is pretty much undefined if you pass in C{None} or C{""} for either C{module} or C{function}. The only validation we enforce is that whatever we get back must be callable. I derived this code based on the internals of the Python unittest implementation. I don't claim to completely understand how it works. 
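The unit-thresholding inside C{displayBytes} can be condensed into a
standalone sketch.  The function name C{display_bytes} and the tuple-driven
loop are illustrative rewrites, but the thresholds and formats match the code
above: whole bytes below 1 kB, otherwise the largest binary unit that fits,
with the requested number of decimal digits.

```python
import math

def display_bytes(n, digits=2):
    """Pick the largest binary unit at or below the value's magnitude."""
    n = float(n)
    for limit, suffix in ((1024.0 ** 3, "GB"), (1024.0 ** 2, "MB"), (1024.0, "kB")):
        if math.fabs(n) >= limit:
            # "%.*f" consumes digits first, then the scaled value.
            return "%.*f %s" % (digits, n / limit, suffix)
    return "%.0f bytes" % n  # fractional bytes are nonsensical
```

This reproduces the docstring's own example: 72372224 bytes formats as
"69.02 MB".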
@param module: Name of module associated with function. @type module: Something like "os.path" or "CedarBackup3.util" @param function: Name of function @type function: Something like "isdir" or "getUidGid" @return: Reference to function associated with name. @raise ImportError: If the function cannot be found. @raise ValueError: If the resulting reference is not callable. @copyright: Some of this code, prior to customization, was originally part of the Python 2.3 codebase. Python code is copyright (c) 2001, 2002 Python Software Foundation; All Rights Reserved. """ parts = [] if module is not None and module != "": parts = module.split(".") if function is not None and function != "": parts.append(function) copy = parts[:] while copy: try: module = __import__(".".join(copy)) break except ImportError: del copy[-1] if not copy: raise parts = parts[1:] obj = module for part in parts: obj = getattr(obj, part) if not isinstance(obj, collections.Callable): raise ValueError("Reference to %s.%s is not callable." % (module, function)) return obj ####################### # getUidGid() function ####################### def getUidGid(user, group): """ Get the uid/gid associated with a user/group pair This is a no-op if user/group functionality is not available on the platform. @param user: User name @type user: User name as a string @param group: Group name @type group: Group name as a string @return: Tuple C{(uid, gid)} matching passed-in user and group. 
@raise ValueError: If the ownership user/group values are invalid """ if _UID_GID_AVAILABLE: try: uid = pwd.getpwnam(user)[2] gid = grp.getgrnam(group)[2] return (uid, gid) except Exception as e: logger.debug("Error looking up uid and gid for [%s:%s]: %s", user, group, e) raise ValueError("Unable to look up uid and gid for passed-in user/group.") else: return (0, 0) ############################# # changeOwnership() function ############################# def changeOwnership(path, user, group): """ Changes ownership of path to match the user and group. This is a no-op if user/group functionality is not available on the platform, or if either the passed-in user or group is C{None}. Further, we won't even try to do it unless running as root, since it's unlikely to work. @param path: Path whose ownership to change. @param user: User which owns file. @param group: Group which owns file. """ if _UID_GID_AVAILABLE: if user is None or group is None: logger.debug("User or group is None, so not attempting to change owner on [%s].", path) elif not isRunningAsRoot(): logger.debug("Not root, so not attempting to change owner on [%s].", path) else: try: (uid, gid) = getUidGid(user, group) os.chown(path, uid, gid) except Exception as e: logger.error("Error changing ownership of [%s]: %s", path, e) ############################# # isRunningAsRoot() function ############################# def isRunningAsRoot(): """ Indicates whether the program is running as the root user. """ return os.getuid() == 0 ############################## # splitCommandLine() function ############################## def splitCommandLine(commandLine): """ Splits a command line string into a list of arguments. Unfortunately, there is no "standard" way to parse a command line string, and it's actually not an easy problem to solve portably (essentially, we have to emulate the shell argument-processing logic). This code only respects double quotes (C{"}) for grouping arguments, not single quotes (C{'}). 
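The double-quote-only grouping just described can be demonstrated with a standalone sketch of the regex technique used by splitCommandLine():

```python
import re

def split_command_line_sketch(command_line):
    """Sketch of the double-quote-aware splitting used by splitCommandLine().

    The pattern matches either a run of non-space, non-quote characters,
    or a complete double-quoted group; quotes are then stripped.
    """
    if command_line is None:
        raise ValueError("Cannot split command line of None.")
    fields = re.findall(r'[^ "]+|"[^"]+"', command_line)
    return [field.replace('"', '') for field in fields]

print(split_command_line_sketch('cback3 --verbose stage store'))
# ['cback3', '--verbose', 'stage', 'store']
print(split_command_line_sketch('scp -B "a file.txt" host:/tmp'))
# ['scp', '-B', 'a file.txt', 'host:/tmp']
```

As the docstring warns, single quotes get no special treatment: 'a b' would split into two fields with the quote characters left in place.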
Make sure you take this into account when building your command line. Incidentally, I found this particular parsing method while digging around in Google Groups, and I tweaked it for my own use. @param commandLine: Command line string @type commandLine: String, i.e. "cback3 --verbose stage store" @return: List of arguments, suitable for passing to C{popen2}. @raise ValueError: If the command line is None. """ if commandLine is None: raise ValueError("Cannot split command line of None.") fields = re.findall('[^ "]+|"[^"]+"', commandLine) fields = [field.replace('"', '') for field in fields] return fields ############################ # resolveCommand() function ############################ def resolveCommand(command): """ Resolves the real path to a command through the path resolver mechanism. Both extensions and standard Cedar Backup functionality need a way to resolve the "real" location of various executables. Normally, they assume that these executables are on the system path, but some callers need to specify an alternate location. Ideally, we want to handle this configuration in a central location. The Cedar Backup path resolver mechanism (a singleton called L{PathResolverSingleton}) provides the central location to store the mappings. This function wraps access to the singleton, and is what all functions (extensions or standard functionality) should call if they need to find a command. The passed-in command must actually be a list, in the standard form used by all existing Cedar Backup code (something like C{["svnlook", ]}). The lookup will actually be done on the first element in the list, and the returned command will always be in list form as well. If the passed-in command can't be resolved or no mapping exists, then the command itself will be returned unchanged. This way, we neatly fall back on default behavior if we have no sensible alternative. @param command: Command to resolve. @type command: List form of command, i.e. C{["svnlook", ]}. 
@return: Path to command or just command itself if no mapping exists. """ singleton = PathResolverSingleton.getInstance() name = command[0] result = command[:] result[0] = singleton.lookup(name, name) return result ############################ # executeCommand() function ############################ def executeCommand(command, args, returnOutput=False, ignoreStderr=False, doNotLog=False, outputFile=None): """ Executes a shell command, hopefully in a safe way. This function exists to replace direct calls to C{os.popen} in the Cedar Backup code. It's not safe to call a function such as C{os.popen()} with untrusted arguments, since that can cause problems if the string contains non-safe variables or other constructs (imagine that the argument is C{$WHATEVER}, but C{$WHATEVER} contains something like C{"; rm -fR ~/; echo"} in the current environment). Instead, it's safer to pass a list of arguments in the style supported by C{popen2} or C{popen4}. This function actually uses a specialized C{Pipe} class implemented using either C{subprocess.Popen} or C{popen2.Popen4}. Under the normal case, this function will return a tuple of C{(status, None)} where the status is the wait-encoded return status of the call per the C{popen2.Popen4} documentation. If C{returnOutput} is passed in as C{True}, the function will return a tuple of C{(status, output)} where C{output} is a list of strings, one entry per line in the output from the command. Output is always logged to the C{outputLogger.info()} target, regardless of whether it's returned. By default, C{stdout} and C{stderr} will be intermingled in the output. However, if you pass in C{ignoreStderr=True}, then only C{stdout} will be included in the output. The C{doNotLog} parameter exists so that callers can force the function to not log command output to the debug log. Normally, you would want to log. However, if you're using this function to write huge output files (i.e. 
database backups written to C{stdout}) then you might want to avoid putting all that information into the debug log. The C{outputFile} parameter exists to make it easier for a caller to push output into a file, i.e. as a substitute for redirection to a file. If this value is passed in, each time a line of output is generated, it will be written to the file using C{outputFile.write()}. At the end, the file descriptor will be flushed using C{outputFile.flush()}. The caller maintains responsibility for closing the file object appropriately. @note: I know that it's a bit confusing that the command and the arguments are both lists. I could have just required the caller to pass in one big list. However, I think it makes some sense to keep the command (the constant part of what we're executing, i.e. C{"scp -B"}) separate from its arguments, even if they both end up looking kind of similar. @note: You cannot redirect output via shell constructs (i.e. C{>file}, C{2>/dev/null}, etc.) using this function. The redirection string would be passed to the command just like any other argument. However, you can implement the equivalent to redirection using C{ignoreStderr} and C{outputFile}, as discussed above. @note: The operating system environment is partially sanitized before the command is invoked. See L{sanitizeEnvironment} for details. @param command: Shell command to execute @type command: List of individual arguments that make up the command @param args: List of arguments to the command @type args: List of additional arguments to the command @param returnOutput: Indicates whether to return the output of the command @type returnOutput: Boolean C{True} or C{False} @param ignoreStderr: Whether stderr should be discarded @type ignoreStderr: Boolean True or False @param doNotLog: Indicates that output should not be logged. @type doNotLog: Boolean C{True} or C{False} @param outputFile: File object that all output should be written to. 
@type outputFile: File object as returned from C{open()} or C{file()}, configured for binary write @return: Tuple of C{(result, output)} as described above. """ logger.debug("Executing command %s with args %s.", command, args) outputLogger.info("Executing command %s with args %s.", command, args) if doNotLog: logger.debug("Note: output will not be logged, per the doNotLog flag.") outputLogger.info("Note: output will not be logged, per the doNotLog flag.") output = [] fields = command[:] # make sure to copy it so we don't destroy it fields.extend(args) try: sanitizeEnvironment() # make sure we have a consistent environment try: pipe = Pipe(fields, ignoreStderr=ignoreStderr) except OSError: # On some platforms (i.e. Cygwin) this intermittently fails the first time we do it. # So, we attempt it a second time and if that works, we just go on as usual. # The problem appears to be that we sometimes get a bad stderr file descriptor. pipe = Pipe(fields, ignoreStderr=ignoreStderr) while True: line = pipe.stdout.readline() if not line: break if returnOutput: output.append(line.decode("utf-8")) if outputFile is not None: outputFile.write(line) if not doNotLog: outputLogger.info(line.decode("utf-8")[:-1]) # this way the log will (hopefully) get updated in realtime if outputFile is not None: try: # note, not every file-like object can be flushed outputFile.flush() except: pass if returnOutput: return (pipe.wait(), output) else: return (pipe.wait(), None) except OSError as e: try: if returnOutput: if output != []: return (pipe.wait(), output) else: return (pipe.wait(), [ e, ]) else: return (pipe.wait(), None) except UnboundLocalError: # pipe not set if returnOutput: return (256, []) else: return (256, None) ############################## # calculateFileAge() function ############################## def calculateFileAge(path): """ Calculates the age (in days) of a file. 
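The read-loop behavior described above can be sketched with subprocess.Popen directly, in place of the module's Pipe class. This is a deliberate simplification (no retry, no outputFile, no logging), and note one difference: it returns the plain returncode rather than the wait-encoded status that executeCommand() documents:

```python
import subprocess
import sys

def execute_command_sketch(fields, return_output=False, ignore_stderr=False):
    """Simplified sketch of the executeCommand() read loop via subprocess."""
    stderr = subprocess.DEVNULL if ignore_stderr else subprocess.STDOUT
    proc = subprocess.Popen(fields, stdout=subprocess.PIPE, stderr=stderr)
    output = []
    while True:
        line = proc.stdout.readline()   # read line-by-line, as the real loop does
        if not line:
            break
        if return_output:
            output.append(line.decode("utf-8"))
    status = proc.wait()
    return (status, output if return_output else None)

# Run a trivial command via the current interpreter, for portability
status, output = execute_command_sketch(
    [sys.executable, "-c", "print('hello')"], return_output=True)
```

Passing the command as a list of fields, never as one shell string, is exactly the injection-avoidance point made in the docstring above.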
The "age" of a file is the amount of time since the file was last used, per the most recent of the file's C{st_atime} and C{st_mtime} values. Technically, we only intend this function to work with files, but it will probably work with anything on the filesystem. @param path: Path to a file on disk. @return: Age of the file in days (possibly fractional). @raise OSError: If the file doesn't exist. """ currentTime = int(time.time()) fileStats = os.stat(path) lastUse = max(fileStats.st_atime, fileStats.st_mtime) # "most recent" is "largest" ageInSeconds = currentTime - lastUse ageInDays = ageInSeconds / SECONDS_PER_DAY return ageInDays ################### # mount() function ################### def mount(devicePath, mountPoint, fsType): """ Mounts the indicated device at the indicated mount point. For instance, to mount a CD, you might use device path C{/dev/cdrw}, mount point C{/media/cdrw} and filesystem type C{iso9660}. You can safely use any filesystem type that is supported by C{mount} on your platform. If the type is C{None}, we'll attempt to let C{mount} auto-detect it. This may or may not work on all systems. @note: This only works on platforms that have a concept of "mounting" a filesystem through a command-line C{"mount"} command, like UNIXes. It won't work on Windows. @param devicePath: Path of device to be mounted. @param mountPoint: Path that device should be mounted at. @param fsType: Type of the filesystem assumed to be available via the device. @raise IOError: If the device cannot be mounted. """ if fsType is None: args = [ devicePath, mountPoint ] else: args = [ "-t", fsType, devicePath, mountPoint ] command = resolveCommand(MOUNT_COMMAND) result = executeCommand(command, args, returnOutput=False, ignoreStderr=True)[0] if result != 0: raise IOError("Error [%d] mounting [%s] at [%s] as [%s]." 
% (result, devicePath, mountPoint, fsType)) ##################### # unmount() function ##################### def unmount(mountPoint, removeAfter=False, attempts=1, waitSeconds=0): """ Unmounts whatever device is mounted at the indicated mount point. Sometimes, it might not be possible to unmount the mount point immediately, if there are still files open there. Use the C{attempts} and C{waitSeconds} arguments to indicate how many unmount attempts to make and how many seconds to wait between attempts. If you pass in zero attempts, no attempts will be made (duh). If the indicated mount point is not really a mount point per C{os.path.ismount()}, then it will be ignored. This seems to be a safer check than looking through C{/etc/mtab}, since C{ismount()} is already in the Python standard library and is documented as working on all POSIX systems. If C{removeAfter} is C{True}, then the mount point will be removed using C{os.rmdir()} after the unmount action succeeds. If for some reason the mount point is not a directory, then it will not be removed. @note: This only works on platforms that have a concept of "mounting" a filesystem through a command-line C{"mount"} command, like UNIXes. It won't work on Windows. @param mountPoint: Mount point to be unmounted. @param removeAfter: Remove the mount point after unmounting it. @param attempts: Number of times to attempt the unmount. @param waitSeconds: Number of seconds to wait between repeated attempts. @raise IOError: If the mount point is still mounted after attempts are exhausted. 
""" if os.path.ismount(mountPoint): for attempt in range(0, attempts): logger.debug("Making attempt %d to unmount [%s].", attempt, mountPoint) command = resolveCommand(UMOUNT_COMMAND) result = executeCommand(command, [ mountPoint, ], returnOutput=False, ignoreStderr=True)[0] if result != 0: logger.error("Error [%d] unmounting [%s] on attempt %d.", result, mountPoint, attempt) elif os.path.ismount(mountPoint): logger.error("After attempt %d, [%s] is still mounted.", attempt, mountPoint) else: logger.debug("Successfully unmounted [%s] on attempt %d.", mountPoint, attempt) break # this will cause us to skip the loop else: clause if attempt+1 < attempts: # i.e. this isn't the last attempt if waitSeconds > 0: logger.info("Sleeping %d second(s) before next unmount attempt.", waitSeconds) time.sleep(waitSeconds) else: if os.path.ismount(mountPoint): raise IOError("Unable to unmount [%s] after %d attempts.", mountPoint, attempts) logger.info("Mount point [%s] seems to have finally gone away.", mountPoint) if os.path.isdir(mountPoint) and removeAfter: logger.debug("Removing mount point [%s].", mountPoint) os.rmdir(mountPoint) ########################### # deviceMounted() function ########################### def deviceMounted(devicePath): """ Indicates whether a specific filesystem device is currently mounted. We determine whether the device is mounted by looking through the system's C{mtab} file. This file shows every currently-mounted filesystem, ordered by device. We only do the check if the C{mtab} file exists and is readable. Otherwise, we assume that the device is not mounted. @note: This only works on platforms that have a concept of an mtab file to show mounted volumes, like UNIXes. It won't work on Windows. @param devicePath: Path of device to be checked @return: True if device is mounted, false otherwise. 
""" if os.path.exists(MTAB_FILE) and os.access(MTAB_FILE, os.R_OK): realPath = os.path.realpath(devicePath) with open(MTAB_FILE) as f: lines = f.readlines() for line in lines: (mountDevice, mountPoint, remainder) = line.split(None, 2) if mountDevice in [ devicePath, realPath, ]: logger.debug("Device [%s] is mounted at [%s].", devicePath, mountPoint) return True return False ######################## # encodePath() function ######################## def encodePath(path): """ Safely encodes a filesystem path as a Unicode string, converting bytes to fileystem encoding if necessary. @param path: Path to encode @return: Path, as a string, encoded appropriately @raise ValueError: If the path cannot be encoded properly. @see: http://lucumr.pocoo.org/2013/7/2/the-updated-guide-to-unicode/ """ if path is None: return path try: if isinstance(path, bytes): encoding = sys.getfilesystemencoding() or sys.getdefaultencoding() path = path.decode(encoding, "surrogateescape") # to match what os.listdir() does return path except UnicodeError as e: raise ValueError("Path could not be safely encoded as %s: %s" % (encoding, str(e))) ######################## # nullDevice() function ######################## def nullDevice(): """ Attempts to portably return the null device on this system. The null device is something like C{/dev/null} on a UNIX system. The name varies on other platforms. """ return os.devnull ############################## # deriveDayOfWeek() function ############################## def deriveDayOfWeek(dayName): """ Converts English day name to numeric day of week as from C{time.localtime}. For instance, the day C{monday} would be converted to the number C{0}. @param dayName: Day of week to convert @type dayName: string, i.e. C{"monday"}, C{"tuesday"}, etc. @returns: Integer, where Monday is 0 and Sunday is 6; or -1 if no conversion is possible. 
""" if dayName.lower() == "monday": return 0 elif dayName.lower() == "tuesday": return 1 elif dayName.lower() == "wednesday": return 2 elif dayName.lower() == "thursday": return 3 elif dayName.lower() == "friday": return 4 elif dayName.lower() == "saturday": return 5 elif dayName.lower() == "sunday": return 6 else: return -1 # What else can we do?? Thrown an exception, I guess. ########################### # isStartOfWeek() function ########################### def isStartOfWeek(startingDay): """ Indicates whether "today" is the backup starting day per configuration. If the current day's English name matches the indicated starting day, then today is a starting day. @param startingDay: Configured starting day. @type startingDay: string, i.e. C{"monday"}, C{"tuesday"}, etc. @return: Boolean indicating whether today is the starting day. """ value = time.localtime().tm_wday == deriveDayOfWeek(startingDay) if value: logger.debug("Today is the start of the week.") else: logger.debug("Today is NOT the start of the week.") return value ################################# # buildNormalizedPath() function ################################# def buildNormalizedPath(path): """ Returns a "normalized" path based on a path name. A normalized path is a representation of a path that is also a valid file name. To make a valid file name out of a complete path, we have to convert or remove some characters that are significant to the filesystem -- in particular, the path separator and any leading C{'.'} character (which would cause the file to be hidden in a file listing). Note that this is a one-way transformation -- you can't safely derive the original path from the normalized path. To normalize a path, we begin by looking at the first character. If the first character is C{'/'} or C{'\\'}, it gets removed. If the first character is C{'.'}, it gets converted to C{'_'}. 
Then, we look through the rest of the path and convert all remaining C{'/'} or C{'\\'} characters to C{'-'}, and all remaining whitespace characters to C{'_'}. As a special case, a path consisting only of a single C{'/'} or C{'\\'} character will be converted to C{'-'}. @param path: Path to normalize @return: Normalized path as described above. @raise ValueError: If the path is None """ if path is None: raise ValueError("Cannot normalize path None.") elif len(path) == 0: return path elif path == "/" or path == "\\": return "-" else: normalized = path normalized = re.sub(r"^\/", "", normalized) # remove leading '/' normalized = re.sub(r"^\\", "", normalized) # remove leading '\' normalized = re.sub(r"^\.", "_", normalized) # convert leading '.' to '_' so file won't be hidden normalized = re.sub(r"\/", "-", normalized) # convert all '/' characters to '-' normalized = re.sub(r"\\", "-", normalized) # convert all '\' characters to '-' normalized = re.sub(r"\s", "_", normalized) # convert all whitespace to '_' return normalized ################################# # sanitizeEnvironment() function ################################# def sanitizeEnvironment(): """ Sanitizes the operating system environment. The operating system environment is contained in C{os.environ}. This method sanitizes the contents of that dictionary. Currently, all it does is reset the locale (removing C{$LC_*}) and set the default language (C{$LANG}) to L{DEFAULT_LANGUAGE}. This way, we can count on consistent localization regardless of what the end-user has configured. This is important for code that needs to parse program output. The C{os.environ} dictionary is modified in-place. If C{$LANG} is already set to the proper value, it is not re-set, so we can avoid the memory leaks that are documented to occur on BSD-based systems. @return: Copy of the sanitized environment. 
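The substitution sequence in buildNormalizedPath() can be exercised with a standalone sketch, demonstrating the one-way transformation described above:

```python
import re

def build_normalized_path_sketch(path):
    """Sketch of buildNormalizedPath(): turn a path into a safe file name."""
    if path is None:
        raise ValueError("Cannot normalize path None.")
    if len(path) == 0:
        return path
    if path in ("/", "\\"):
        return "-"                                  # special single-separator case
    normalized = re.sub(r"^/", "", path)            # strip leading '/'
    normalized = re.sub(r"^\\", "", normalized)     # strip leading '\'
    normalized = re.sub(r"^\.", "_", normalized)    # leading '.' would hide the file
    normalized = re.sub(r"/", "-", normalized)      # remaining '/' becomes '-'
    normalized = re.sub(r"\\", "-", normalized)     # remaining '\' becomes '-'
    normalized = re.sub(r"\s", "_", normalized)     # whitespace becomes '_'
    return normalized

print(build_normalized_path_sketch("/var/log/my file"))  # var-log-my_file
print(build_normalized_path_sketch(".hidden"))           # _hidden
print(build_normalized_path_sketch("/"))                 # -
```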
""" for var in LOCALE_VARS: if var in os.environ: del os.environ[var] if LANG_VAR in os.environ: if os.environ[LANG_VAR] != DEFAULT_LANGUAGE: # no need to reset if it exists (avoid leaks on BSD systems) os.environ[LANG_VAR] = DEFAULT_LANGUAGE return os.environ.copy() ############################# # dereferenceLink() function ############################# def dereferenceLink(path, absolute=True): """ Deference a soft link, optionally normalizing it to an absolute path. @param path: Path of link to dereference @param absolute: Whether to normalize the result to an absolute path @return: Dereferenced path, or original path if original is not a link. """ if os.path.islink(path): result = os.readlink(path) if absolute and not os.path.isabs(result): result = os.path.abspath(os.path.join(os.path.dirname(path), result)) return result return path ######################### # checkUnique() function ######################### def checkUnique(prefix, values): """ Checks that all values are unique. The values list is checked for duplicate values. If there are duplicates, an exception is thrown. All duplicate values are listed in the exception. @param prefix: Prefix to use in the thrown exception @param values: List of values to check @raise ValueError: If there are duplicates in the list """ values.sort() duplicates = [] for i in range(1, len(values)): if values[i-1] == values[i]: duplicates.append(values[i]) if duplicates: raise ValueError("%s %s" % (prefix, duplicates)) ####################################### # parseCommaSeparatedString() function ####################################### def parseCommaSeparatedString(commaString): """ Parses a list of values out of a comma-separated string. The items in the list are split by comma, and then have whitespace stripped. As a special case, if C{commaString} is C{None}, then C{None} will be returned. @param commaString: List of values in comma-separated string format. @return: Values from commaString split into a list, or C{None}. 
""" if commaString is None: return None else: pass1 = commaString.split(",") pass2 = [] for item in pass1: item = item.strip() if len(item) > 0: pass2.append(item) return pass2 CedarBackup3-3.1.6/CedarBackup3/peer.py0000664000175000017500000015255212560007327021242 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2008,2010,2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Provides backup peer-related objects. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides backup peer-related objects and utility functions. @sort: LocalPeer, RemotePeer @var DEF_COLLECT_INDICATOR: Name of the default collect indicator file. @var DEF_STAGE_INDICATOR: Name of the default stage indicator file. @author: Kenneth J. 
Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import logging import shutil # Cedar Backup modules from CedarBackup3.filesystem import FilesystemList from CedarBackup3.util import resolveCommand, executeCommand, isRunningAsRoot from CedarBackup3.util import splitCommandLine, encodePath from CedarBackup3.config import VALID_FAILURE_MODES ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup3.log.peer") DEF_RCP_COMMAND = [ "/usr/bin/scp", "-B", "-q", "-C" ] DEF_RSH_COMMAND = [ "/usr/bin/ssh", ] DEF_CBACK_COMMAND = "/usr/bin/cback3" DEF_COLLECT_INDICATOR = "cback.collect" DEF_STAGE_INDICATOR = "cback.stage" SU_COMMAND = [ "su" ] ######################################################################## # LocalPeer class definition ######################################################################## class LocalPeer(object): ###################### # Class documentation ###################### """ Backup peer representing a local peer in a backup pool. This is a class representing a local (non-network) peer in a backup pool. Local peers are backed up by simple filesystem copy operations. A local peer has associated with it a name (typically, but not necessarily, a hostname) and a collect directory. The public methods other than the constructor are part of a "backup peer" interface shared with the C{RemotePeer} class. @sort: __init__, stagePeer, checkCollectIndicator, writeStageIndicator, _copyLocalDir, _copyLocalFile, name, collectDir """ ############## # Constructor ############## def __init__(self, name, collectDir, ignoreFailureMode=None): """ Initializes a local backup peer. 
Note that the collect directory must be an absolute path, but does not have to exist when the object is instantiated. We do a lazy validation on this value since we could (potentially) be creating peer objects before an ongoing backup completed. @param name: Name of the backup peer @type name: String, typically a hostname @param collectDir: Path to the peer's collect directory @type collectDir: String representing an absolute local path on disk @param ignoreFailureMode: Ignore failure mode for this peer @type ignoreFailureMode: One of VALID_FAILURE_MODES @raise ValueError: If the name is empty. @raise ValueError: If collect directory is not an absolute path. """ self._name = None self._collectDir = None self._ignoreFailureMode = None self.name = name self.collectDir = collectDir self.ignoreFailureMode = ignoreFailureMode ############# # Properties ############# def _setName(self, value): """ Property target used to set the peer name. The value must be a non-empty string and cannot be C{None}. @raise ValueError: If the value is an empty string or C{None}. """ if value is None or len(value) < 1: raise ValueError("Peer name must be a non-empty string.") self._name = value def _getName(self): """ Property target used to get the peer name. """ return self._name def _setCollectDir(self, value): """ Property target used to set the collect directory. The value must be an absolute path and cannot be C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is C{None} or is not an absolute path. @raise ValueError: If a path cannot be encoded properly. """ if value is None or not os.path.isabs(value): raise ValueError("Collect directory must be an absolute path.") self._collectDir = encodePath(value) def _getCollectDir(self): """ Property target used to get the collect directory. """ return self._collectDir def _setIgnoreFailureMode(self, value): """ Property target used to set the ignoreFailure mode. 
If not C{None}, the mode must be one of the values in L{VALID_FAILURE_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_FAILURE_MODES: raise ValueError("Ignore failure mode must be one of %s." % VALID_FAILURE_MODES) self._ignoreFailureMode = value def _getIgnoreFailureMode(self): """ Property target used to get the ignoreFailure mode. """ return self._ignoreFailureMode name = property(_getName, _setName, None, "Name of the peer.") collectDir = property(_getCollectDir, _setCollectDir, None, "Path to the peer's collect directory (an absolute local path).") ignoreFailureMode = property(_getIgnoreFailureMode, _setIgnoreFailureMode, None, "Ignore failure mode for peer.") ################# # Public methods ################# def stagePeer(self, targetDir, ownership=None, permissions=None): """ Stages data from the peer into the indicated local target directory. The collect and target directories must both already exist before this method is called. If passed in, ownership and permissions will be applied to the files that are copied. @note: The caller is responsible for checking that the indicator exists, if they care. This function only stages the files within the directory. @note: If you have user/group as strings, call the L{util.getUidGid} function to get the associated uid/gid as an ownership tuple. @param targetDir: Target directory to write data into @type targetDir: String representing a directory on disk @param ownership: Owner and group that the staged files should have @type ownership: Tuple of numeric ids C{(uid, gid)} @param permissions: Permissions that the staged files should have @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}). @return: Number of files copied from the source directory to the target directory. @raise ValueError: If collect directory is not a directory or does not exist @raise ValueError: If target directory is not a directory, does not exist or is not absolute. 
@raise ValueError: If a path cannot be encoded properly. @raise IOError: If there were no files to stage (i.e. the directory was empty) @raise IOError: If there is an IO error copying a file. @raise OSError: If there is an OS error copying or changing permissions on a file """ targetDir = encodePath(targetDir) if not os.path.isabs(targetDir): logger.debug("Target directory [%s] not an absolute path.", targetDir) raise ValueError("Target directory must be an absolute path.") if not os.path.exists(self.collectDir) or not os.path.isdir(self.collectDir): logger.debug("Collect directory [%s] is not a directory or does not exist on disk.", self.collectDir) raise ValueError("Collect directory is not a directory or does not exist on disk.") if not os.path.exists(targetDir) or not os.path.isdir(targetDir): logger.debug("Target directory [%s] is not a directory or does not exist on disk.", targetDir) raise ValueError("Target directory is not a directory or does not exist on disk.") count = LocalPeer._copyLocalDir(self.collectDir, targetDir, ownership, permissions) if count == 0: raise IOError("Did not copy any files from local peer.") return count def checkCollectIndicator(self, collectIndicator=None): """ Checks the collect indicator in the peer's staging directory. When a peer has completed collecting its backup files, it will write an empty indicator file into its collect directory. This method checks to see whether that indicator has been written. We're "stupid" here - if the collect directory doesn't exist, you'll naturally get back C{False}. If you need to, you can override the name of the collect indicator file by passing in a different name. @param collectIndicator: Name of the collect indicator file to check @type collectIndicator: String representing name of a file in the collect directory @return: Boolean true/false depending on whether the indicator exists. @raise ValueError: If a path cannot be encoded properly. 
""" collectIndicator = encodePath(collectIndicator) if collectIndicator is None: return os.path.exists(os.path.join(self.collectDir, DEF_COLLECT_INDICATOR)) else: return os.path.exists(os.path.join(self.collectDir, collectIndicator)) def writeStageIndicator(self, stageIndicator=None, ownership=None, permissions=None): """ Writes the stage indicator in the peer's staging directory. When the master has completed collecting its backup files, it will write an empty indicator file into the peer's collect directory. The presence of this file implies that the staging process is complete. If you need to, you can override the name of the stage indicator file by passing in a different name. @note: If you have user/group as strings, call the L{util.getUidGid} function to get the associated uid/gid as an ownership tuple. @param stageIndicator: Name of the indicator file to write @type stageIndicator: String representing name of a file in the collect directory @param ownership: Owner and group that the indicator file should have @type ownership: Tuple of numeric ids C{(uid, gid)} @param permissions: Permissions that the indicator file should have @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}). @raise ValueError: If collect directory is not a directory or does not exist @raise ValueError: If a path cannot be encoded properly. @raise IOError: If there is an IO error creating the file. 
@raise OSError: If there is an OS error creating or changing permissions on the file """ stageIndicator = encodePath(stageIndicator) if not os.path.exists(self.collectDir) or not os.path.isdir(self.collectDir): logger.debug("Collect directory [%s] is not a directory or does not exist on disk.", self.collectDir) raise ValueError("Collect directory is not a directory or does not exist on disk.") if stageIndicator is None: fileName = os.path.join(self.collectDir, DEF_STAGE_INDICATOR) else: fileName = os.path.join(self.collectDir, stageIndicator) LocalPeer._copyLocalFile(None, fileName, ownership, permissions) # None for sourceFile results in an empty target ################## # Private methods ################## @staticmethod def _copyLocalDir(sourceDir, targetDir, ownership=None, permissions=None): """ Copies files from the source directory to the target directory. This function is not recursive. Only the files in the directory will be copied. Ownership and permissions will be left at their default values if new values are not specified. The source and target directories are allowed to be soft links to a directory, but besides that soft links are ignored. @note: If you have user/group as strings, call the L{util.getUidGid} function to get the associated uid/gid as an ownership tuple. @param sourceDir: Source directory @type sourceDir: String representing a directory on disk @param targetDir: Target directory @type targetDir: String representing a directory on disk @param ownership: Owner and group that the copied files should have @type ownership: Tuple of numeric ids C{(uid, gid)} @param permissions: Permissions that the staged files should have @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}). @return: Number of files copied from the source directory to the target directory. @raise ValueError: If source or target is not a directory or does not exist. @raise ValueError: If a path cannot be encoded properly. 
@raise IOError: If there is an IO error copying the files. @raise OSError: If there is an OS error copying or changing permissions on a file """ filesCopied = 0 sourceDir = encodePath(sourceDir) targetDir = encodePath(targetDir) for fileName in os.listdir(sourceDir): sourceFile = os.path.join(sourceDir, fileName) targetFile = os.path.join(targetDir, fileName) LocalPeer._copyLocalFile(sourceFile, targetFile, ownership, permissions) filesCopied += 1 return filesCopied @staticmethod def _copyLocalFile(sourceFile=None, targetFile=None, ownership=None, permissions=None, overwrite=True): """ Copies a source file to a target file. If the source file is C{None} then the target file will be created or overwritten as an empty file. If the target file is C{None}, this method is a no-op. Attempting to copy a soft link or a directory will result in an exception. @note: If you have user/group as strings, call the L{util.getUidGid} function to get the associated uid/gid as an ownership tuple. @note: If C{overwrite} is C{False}, we will not overwrite a target file that exists when this method is invoked; if the target already exists, we'll raise an exception. @param sourceFile: Source file to copy @type sourceFile: String representing a file on disk, as an absolute path @param targetFile: Target file to create @type targetFile: String representing a file on disk, as an absolute path @param ownership: Owner and group that the copied file should have @type ownership: Tuple of numeric ids C{(uid, gid)} @param permissions: Permissions that the staged files should have @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}). @param overwrite: Indicates whether it's OK to overwrite the target file. @type overwrite: Boolean true/false. @raise ValueError: If the passed-in source file is not a regular file. @raise ValueError: If a path cannot be encoded properly. @raise IOError: If the target file already exists.
@raise IOError: If there is an IO error copying the file @raise OSError: If there is an OS error copying or changing permissions on a file """ targetFile = encodePath(targetFile) sourceFile = encodePath(sourceFile) if targetFile is None: return if not overwrite: if os.path.exists(targetFile): raise IOError("Target file [%s] already exists." % targetFile) if sourceFile is None: with open(targetFile, "w") as f: f.write("") else: if os.path.isfile(sourceFile) and not os.path.islink(sourceFile): shutil.copy(sourceFile, targetFile) else: logger.debug("Source [%s] is not a regular file.", sourceFile) raise ValueError("Source is not a regular file.") if ownership is not None: os.chown(targetFile, ownership[0], ownership[1]) if permissions is not None: os.chmod(targetFile, permissions) ######################################################################## # RemotePeer class definition ######################################################################## class RemotePeer(object): ###################### # Class documentation ###################### """ Backup peer representing a remote peer in a backup pool. This is a class representing a remote (networked) peer in a backup pool. Remote peers are backed up using an rcp-compatible copy command. A remote peer has associated with it a name (which must be a valid hostname), a collect directory, a working directory and a copy method (an rcp-compatible command). You can also set an optional local user value. This username will be used as the local user for any remote copies that are required. It can only be used if the root user is executing the backup. The root user will C{su} to the local user and execute the remote copies as that user. The copy method is associated with the peer and not with the actual request to copy, because we can envision that each remote host might have a different connect method. The public methods other than the constructor are part of a "backup peer" interface shared with the C{LocalPeer} class. 
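A hypothetical usage sketch (the host name, paths, and user name below are illustrative only; any rcp-compatible command may be configured)::

      peer = RemotePeer(name="client.example.com",
                        collectDir="/opt/backup/collect",
                        workingDir="/tmp",
                        remoteUser="backup",
                        rcpCommand="/usr/bin/scp -B")
      if peer.checkCollectIndicator():
         peer.stagePeer(targetDir="/opt/backup/staging/client.example.com")
         peer.writeStageIndicator()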
@sort: __init__, stagePeer, checkCollectIndicator, writeStageIndicator, executeRemoteCommand, executeManagedAction, _getDirContents, _copyRemoteDir, _copyRemoteFile, _pushLocalFile, name, collectDir, remoteUser, rcpCommand, rshCommand, cbackCommand """ ############## # Constructor ############## def __init__(self, name=None, collectDir=None, workingDir=None, remoteUser=None, rcpCommand=None, localUser=None, rshCommand=None, cbackCommand=None, ignoreFailureMode=None): """ Initializes a remote backup peer. @note: If provided, each command will eventually be parsed into a list of strings suitable for passing to C{util.executeCommand} in order to avoid security holes related to shell interpolation. This parsing will be done by the L{util.splitCommandLine} function. See the documentation for that function for some important notes about its limitations. @param name: Name of the backup peer @type name: String, must be a valid DNS hostname @param collectDir: Path to the peer's collect directory @type collectDir: String representing an absolute path on the remote peer @param workingDir: Working directory that can be used to create temporary files, etc. @type workingDir: String representing an absolute path on the current host. 
@param remoteUser: Name of the Cedar Backup user on the remote peer @type remoteUser: String representing a username, valid via remote shell to the peer @param localUser: Name of the Cedar Backup user on the current host @type localUser: String representing a username, valid on the current host @param rcpCommand: An rcp-compatible copy command to use for copying files from the peer @type rcpCommand: String representing a system command including required arguments @param rshCommand: An rsh-compatible command to use for remote shells to the peer @type rshCommand: String representing a system command including required arguments @param cbackCommand: A cback-compatible command to use for executing managed actions @type cbackCommand: String representing a system command including required arguments @param ignoreFailureMode: Ignore failure mode for this peer @type ignoreFailureMode: One of VALID_FAILURE_MODES @raise ValueError: If collect directory is not an absolute path """ self._name = None self._collectDir = None self._workingDir = None self._remoteUser = None self._localUser = None self._rcpCommand = None self._rcpCommandList = None self._rshCommand = None self._rshCommandList = None self._cbackCommand = None self._ignoreFailureMode = None self.name = name self.collectDir = collectDir self.workingDir = workingDir self.remoteUser = remoteUser self.localUser = localUser self.rcpCommand = rcpCommand self.rshCommand = rshCommand self.cbackCommand = cbackCommand self.ignoreFailureMode = ignoreFailureMode ############# # Properties ############# def _setName(self, value): """ Property target used to set the peer name. The value must be a non-empty string and cannot be C{None}. @raise ValueError: If the value is an empty string or C{None}. """ if value is None or len(value) < 1: raise ValueError("Peer name must be a non-empty string.") self._name = value def _getName(self): """ Property target used to get the peer name.
""" return self._name def _setCollectDir(self, value): """ Property target used to set the collect directory. The value must be an absolute path and cannot be C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is C{None} or is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Collect directory must be an absolute path.") self._collectDir = encodePath(value) def _getCollectDir(self): """ Property target used to get the collect directory. """ return self._collectDir def _setWorkingDir(self, value): """ Property target used to set the working directory. The value must be an absolute path and cannot be C{None}. @raise ValueError: If the value is C{None} or is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Working directory must be an absolute path.") self._workingDir = encodePath(value) def _getWorkingDir(self): """ Property target used to get the working directory. """ return self._workingDir def _setRemoteUser(self, value): """ Property target used to set the remote user. The value must be a non-empty string and cannot be C{None}. @raise ValueError: If the value is an empty string or C{None}. """ if value is None or len(value) < 1: raise ValueError("Peer remote user must be a non-empty string.") self._remoteUser = value def _getRemoteUser(self): """ Property target used to get the remote user. """ return self._remoteUser def _setLocalUser(self, value): """ Property target used to set the local user. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. 
""" if value is not None: if len(value) < 1: raise ValueError("Peer local user must be a non-empty string.") self._localUser = value def _getLocalUser(self): """ Property target used to get the local user. """ return self._localUser def _setRcpCommand(self, value): """ Property target to set the rcp command. The value must be a non-empty string or C{None}. Its value is stored in the two forms: "raw" as provided by the client, and "parsed" into a list suitable for being passed to L{util.executeCommand} via L{util.splitCommandLine}. However, all the caller will ever see via the property is the actual value they set (which includes seeing C{None}, even if we translate that internally to C{DEF_RCP_COMMAND}). Internally, we should always use C{self._rcpCommandList} if we want the actual command list. @raise ValueError: If the value is an empty string. """ if value is None: self._rcpCommand = None self._rcpCommandList = DEF_RCP_COMMAND else: if len(value) >= 1: self._rcpCommand = value self._rcpCommandList = splitCommandLine(self._rcpCommand) else: raise ValueError("The rcp command must be a non-empty string.") def _getRcpCommand(self): """ Property target used to get the rcp command. """ return self._rcpCommand def _setRshCommand(self, value): """ Property target to set the rsh command. The value must be a non-empty string or C{None}. Its value is stored in the two forms: "raw" as provided by the client, and "parsed" into a list suitable for being passed to L{util.executeCommand} via L{util.splitCommandLine}. However, all the caller will ever see via the property is the actual value they set (which includes seeing C{None}, even if we translate that internally to C{DEF_RSH_COMMAND}). Internally, we should always use C{self._rshCommandList} if we want the actual command list. @raise ValueError: If the value is an empty string. 
""" if value is None: self._rshCommand = None self._rshCommandList = DEF_RSH_COMMAND else: if len(value) >= 1: self._rshCommand = value self._rshCommandList = splitCommandLine(self._rshCommand) else: raise ValueError("The rsh command must be a non-empty string.") def _getRshCommand(self): """ Property target used to get the rsh command. """ return self._rshCommand def _setCbackCommand(self, value): """ Property target to set the cback command. The value must be a non-empty string or C{None}. Unlike the other command, this value is only stored in the "raw" form provided by the client. @raise ValueError: If the value is an empty string. """ if value is None: self._cbackCommand = None else: if len(value) >= 1: self._cbackCommand = value else: raise ValueError("The cback command must be a non-empty string.") def _getCbackCommand(self): """ Property target used to get the cback command. """ return self._cbackCommand def _setIgnoreFailureMode(self, value): """ Property target used to set the ignoreFailure mode. If not C{None}, the mode must be one of the values in L{VALID_FAILURE_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_FAILURE_MODES: raise ValueError("Ignore failure mode must be one of %s." % VALID_FAILURE_MODES) self._ignoreFailureMode = value def _getIgnoreFailureMode(self): """ Property target used to get the ignoreFailure mode. 
""" return self._ignoreFailureMode name = property(_getName, _setName, None, "Name of the peer (a valid DNS hostname).") collectDir = property(_getCollectDir, _setCollectDir, None, "Path to the peer's collect directory (an absolute local path).") workingDir = property(_getWorkingDir, _setWorkingDir, None, "Path to the peer's working directory (an absolute local path).") remoteUser = property(_getRemoteUser, _setRemoteUser, None, "Name of the Cedar Backup user on the remote peer.") localUser = property(_getLocalUser, _setLocalUser, None, "Name of the Cedar Backup user on the current host.") rcpCommand = property(_getRcpCommand, _setRcpCommand, None, "An rcp-compatible copy command to use for copying files.") rshCommand = property(_getRshCommand, _setRshCommand, None, "An rsh-compatible command to use for remote shells to the peer.") cbackCommand = property(_getCbackCommand, _setCbackCommand, None, "A chack-compatible command to use for executing managed actions.") ignoreFailureMode = property(_getIgnoreFailureMode, _setIgnoreFailureMode, None, "Ignore failure mode for peer.") ################# # Public methods ################# def stagePeer(self, targetDir, ownership=None, permissions=None): """ Stages data from the peer into the indicated local target directory. The target directory must already exist before this method is called. If passed in, ownership and permissions will be applied to the files that are copied. @note: The returned count of copied files might be inaccurate if some of the copied files already existed in the staging directory prior to the copy taking place. We don't clear the staging directory first, because some extension might also be using it. @note: If you have user/group as strings, call the L{util.getUidGid} function to get the associated uid/gid as an ownership tuple. @note: Unlike the local peer version of this method, an I/O error might or might not be raised if the directory is empty. 
Since we're using a remote copy method, we just don't have the fine-grained control over our exceptions that's available when we can look directly at the filesystem, and we can't control whether the remote copy method thinks an empty directory is an error. @param targetDir: Target directory to write data into @type targetDir: String representing a directory on disk @param ownership: Owner and group that the staged files should have @type ownership: Tuple of numeric ids C{(uid, gid)} @param permissions: Permissions that the staged files should have @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}). @return: Number of files copied from the source directory to the target directory. @raise ValueError: If target directory is not a directory, does not exist or is not absolute. @raise ValueError: If a path cannot be encoded properly. @raise IOError: If there were no files to stage (i.e. the directory was empty) @raise IOError: If there is an IO error copying a file. @raise OSError: If there is an OS error copying or changing permissions on a file """ targetDir = encodePath(targetDir) if not os.path.isabs(targetDir): logger.debug("Target directory [%s] not an absolute path.", targetDir) raise ValueError("Target directory must be an absolute path.") if not os.path.exists(targetDir) or not os.path.isdir(targetDir): logger.debug("Target directory [%s] is not a directory or does not exist on disk.", targetDir) raise ValueError("Target directory is not a directory or does not exist on disk.") count = RemotePeer._copyRemoteDir(self.remoteUser, self.localUser, self.name, self._rcpCommand, self._rcpCommandList, self.collectDir, targetDir, ownership, permissions) if count == 0: raise IOError("Did not copy any files from remote peer.") return count def checkCollectIndicator(self, collectIndicator=None): """ Checks the collect indicator in the peer's staging directory.
When a peer has completed collecting its backup files, it will write an empty indicator file into its collect directory. This method checks to see whether that indicator has been written. If the remote copy command fails, we return C{False} as if the file weren't there. If you need to, you can override the name of the collect indicator file by passing in a different name. @note: Apparently, we can't count on all rcp-compatible implementations to return sensible errors for some error conditions. As an example, the C{scp} command in Debian 'woody' returns a zero (normal) status even when it can't find a host or if the login or path is invalid. Because of this, the implementation of this method is rather convoluted. @param collectIndicator: Name of the collect indicator file to check @type collectIndicator: String representing name of a file in the collect directory @return: Boolean true/false depending on whether the indicator exists. @raise ValueError: If a path cannot be encoded properly. """ try: if collectIndicator is None: sourceFile = os.path.join(self.collectDir, DEF_COLLECT_INDICATOR) targetFile = os.path.join(self.workingDir, DEF_COLLECT_INDICATOR) else: collectIndicator = encodePath(collectIndicator) sourceFile = os.path.join(self.collectDir, collectIndicator) targetFile = os.path.join(self.workingDir, collectIndicator) logger.debug("Fetch remote [%s] into [%s].", sourceFile, targetFile) if os.path.exists(targetFile): try: os.remove(targetFile) except OSError: raise Exception("Error: unable to remove existing collect indicator [%s]!"
% targetFile) try: RemotePeer._copyRemoteFile(self.remoteUser, self.localUser, self.name, self._rcpCommand, self._rcpCommandList, sourceFile, targetFile, overwrite=False) if os.path.exists(targetFile): return True else: return False except Exception as e: logger.info("Failed looking for collect indicator: %s", e) return False finally: if os.path.exists(targetFile): try: os.remove(targetFile) except OSError: pass def writeStageIndicator(self, stageIndicator=None): """ Writes the stage indicator in the peer's staging directory. When the master has completed collecting its backup files, it will write an empty indicator file into the peer's collect directory. The presence of this file implies that the staging process is complete. If you need to, you can override the name of the stage indicator file by passing in a different name. @param stageIndicator: Name of the indicator file to write @type stageIndicator: String representing name of a file in the collect directory @raise ValueError: If a path cannot be encoded properly. @raise IOError: If there is an IO error creating the file.
@raise OSError: If there is an OS error creating or changing permissions on the file """ stageIndicator = encodePath(stageIndicator) if stageIndicator is None: sourceFile = os.path.join(self.workingDir, DEF_STAGE_INDICATOR) targetFile = os.path.join(self.collectDir, DEF_STAGE_INDICATOR) else: sourceFile = os.path.join(self.workingDir, DEF_STAGE_INDICATOR) targetFile = os.path.join(self.collectDir, stageIndicator) try: if not os.path.exists(sourceFile): with open(sourceFile, "w") as f: f.write("") RemotePeer._pushLocalFile(self.remoteUser, self.localUser, self.name, self._rcpCommand, self._rcpCommandList, sourceFile, targetFile) finally: if os.path.exists(sourceFile): try: os.remove(sourceFile) except OSError: pass def executeRemoteCommand(self, command): """ Executes a command on the peer via remote shell. @param command: Command to execute @type command: String command-line suitable for use with rsh. @raise IOError: If there is an error executing the command on the remote peer. """ RemotePeer._executeRemoteCommand(self.remoteUser, self.localUser, self.name, self._rshCommand, self._rshCommandList, command) def executeManagedAction(self, action, fullBackup): """ Executes a managed action on this peer. @param action: Name of the action to execute. @param fullBackup: Whether a full backup should be executed. @raise IOError: If there is an error executing the action on the remote peer. """ try: command = RemotePeer._buildCbackCommand(self.cbackCommand, action, fullBackup) self.executeRemoteCommand(command) except IOError as e: logger.info(e) raise IOError("Failed to execute action [%s] on managed client [%s]." % (action, self.name)) ################## # Private methods ################## @staticmethod def _getDirContents(path): """ Returns the contents of a directory as a set. The directory's contents are read as a L{FilesystemList} containing only files, and then the list is converted into a set object for later use.
@param path: Directory path to get contents for @type path: String representing a path on disk @return: Set of files in the directory @raise ValueError: If path is not a directory or does not exist. """ contents = FilesystemList() contents.excludeDirs = True contents.excludeLinks = True contents.addDirContents(path) return set(contents) @staticmethod def _copyRemoteDir(remoteUser, localUser, remoteHost, rcpCommand, rcpCommandList, sourceDir, targetDir, ownership=None, permissions=None): """ Copies files from the source directory to the target directory. This function is not recursive. Only the files in the directory will be copied. Ownership and permissions will be left at their default values if new values are not specified. Behavior when copying soft links from the collect directory is dependent on the behavior of the specified rcp command. @note: The returned count of copied files might be inaccurate if some of the copied files already existed in the staging directory prior to the copy taking place. We don't clear the staging directory first, because some extension might also be using it. @note: If you have user/group as strings, call the L{util.getUidGid} function to get the associated uid/gid as an ownership tuple. @note: We don't have a good way of knowing exactly what files we copied down from the remote peer, unless we want to parse the output of the rcp command (ugh). We could change permissions on everything in the target directory, but that's kind of ugly too. Instead, we use Python's set functionality to figure out what files were added while we executed the rcp command. This isn't perfect - for instance, it's not correct if someone else is messing with the directory at the same time we're doing the remote copy - but it's about as good as we're going to get. @note: Apparently, we can't count on all rcp-compatible implementations to return sensible errors for some error conditions. 
As an example, the C{scp} command in Debian 'woody' returns a zero (normal) status even when it can't find a host or if the login or path is invalid. We try to work around this by issuing C{IOError} if we don't copy any files from the remote host. @param remoteUser: Name of the Cedar Backup user on the remote peer @type remoteUser: String representing a username, valid via the copy command @param localUser: Name of the Cedar Backup user on the current host @type localUser: String representing a username, valid on the current host @param remoteHost: Hostname of the remote peer @type remoteHost: String representing a hostname, accessible via the copy command @param rcpCommand: An rcp-compatible copy command to use for copying files from the peer @type rcpCommand: String representing a system command including required arguments @param rcpCommandList: An rcp-compatible copy command to use for copying files @type rcpCommandList: Command as a list to be passed to L{util.executeCommand} @param sourceDir: Source directory @type sourceDir: String representing a directory on disk @param targetDir: Target directory @type targetDir: String representing a directory on disk @param ownership: Owner and group that the copied files should have @type ownership: Tuple of numeric ids C{(uid, gid)} @param permissions: Permissions that the staged files should have @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}). @return: Number of files copied from the source directory to the target directory. @raise ValueError: If source or target is not a directory or does not exist. @raise IOError: If there is an IO error copying the files. 
""" beforeSet = RemotePeer._getDirContents(targetDir) if localUser is not None: try: if not isRunningAsRoot(): raise IOError("Only root can remote copy as another user.") except AttributeError: pass actualCommand = "%s %s@%s:%s/* %s" % (rcpCommand, remoteUser, remoteHost, sourceDir, targetDir) command = resolveCommand(SU_COMMAND) result = executeCommand(command, [localUser, "-c", actualCommand])[0] if result != 0: raise IOError("Error (%d) copying files from remote host as local user [%s]." % (result, localUser)) else: copySource = "%s@%s:%s/*" % (remoteUser, remoteHost, sourceDir) command = resolveCommand(rcpCommandList) result = executeCommand(command, [copySource, targetDir])[0] if result != 0: raise IOError("Error (%d) copying files from remote host." % result) afterSet = RemotePeer._getDirContents(targetDir) if len(afterSet) == 0: raise IOError("Did not copy any files from remote peer.") differenceSet = afterSet.difference(beforeSet) # files we added as part of copy if len(differenceSet) == 0: raise IOError("Apparently did not copy any new files from remote peer.") for targetFile in differenceSet: if ownership is not None: os.chown(targetFile, ownership[0], ownership[1]) if permissions is not None: os.chmod(targetFile, permissions) return len(differenceSet) @staticmethod def _copyRemoteFile(remoteUser, localUser, remoteHost, rcpCommand, rcpCommandList, sourceFile, targetFile, ownership=None, permissions=None, overwrite=True): """ Copies a remote source file to a target file. @note: Internally, we have to go through and escape any spaces in the source path with double-backslash, otherwise things get screwed up. It doesn't seem to be required in the target path. I hope this is portable to various different rcp methods, but I guess it might not be (all I have to test with is OpenSSH). @note: If you have user/group as strings, call the L{util.getUidGid} function to get the associated uid/gid as an ownership tuple. 
@note: If C{overwrite} is C{False}, we will not overwrite a target file that exists when this method is invoked; if the target already exists, we'll raise an exception. @note: Apparently, we can't count on all rcp-compatible implementations to return sensible errors for some error conditions. As an example, the C{scp} command in Debian 'woody' returns a zero (normal) status even when it can't find a host or if the login or path is invalid. We try to work around this by issuing C{IOError} if the target file does not exist when we're done. @param remoteUser: Name of the Cedar Backup user on the remote peer @type remoteUser: String representing a username, valid via the copy command @param remoteHost: Hostname of the remote peer @type remoteHost: String representing a hostname, accessible via the copy command @param localUser: Name of the Cedar Backup user on the current host @type localUser: String representing a username, valid on the current host @param rcpCommand: An rcp-compatible copy command to use for copying files from the peer @type rcpCommand: String representing a system command including required arguments @param rcpCommandList: An rcp-compatible copy command to use for copying files @type rcpCommandList: Command as a list to be passed to L{util.executeCommand} @param sourceFile: Source file to copy @type sourceFile: String representing a file on disk, as an absolute path @param targetFile: Target file to create @type targetFile: String representing a file on disk, as an absolute path @param ownership: Owner and group that the copied file should have @type ownership: Tuple of numeric ids C{(uid, gid)} @param permissions: Permissions that the staged files should have @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}). @param overwrite: Indicates whether it's OK to overwrite the target file. @type overwrite: Boolean true/false. @raise IOError: If the target file already exists.
@raise IOError: If there is an IO error copying the file @raise OSError: If there is an OS error changing permissions on the file """ if not overwrite: if os.path.exists(targetFile): raise IOError("Target file [%s] already exists." % targetFile) if localUser is not None: try: if not isRunningAsRoot(): raise IOError("Only root can remote copy as another user.") except AttributeError: pass actualCommand = "%s %s@%s:%s %s" % (rcpCommand, remoteUser, remoteHost, sourceFile.replace(" ", "\\ "), targetFile) command = resolveCommand(SU_COMMAND) result = executeCommand(command, [localUser, "-c", actualCommand])[0] if result != 0: raise IOError("Error (%d) copying [%s] from remote host as local user [%s]." % (result, sourceFile, localUser)) else: copySource = "%s@%s:%s" % (remoteUser, remoteHost, sourceFile.replace(" ", "\\ ")) command = resolveCommand(rcpCommandList) result = executeCommand(command, [copySource, targetFile])[0] if result != 0: raise IOError("Error (%d) copying [%s] from remote host." % (result, sourceFile)) if not os.path.exists(targetFile): raise IOError("Apparently unable to copy file from remote host.") if ownership is not None: os.chown(targetFile, ownership[0], ownership[1]) if permissions is not None: os.chmod(targetFile, permissions) @staticmethod def _pushLocalFile(remoteUser, localUser, remoteHost, rcpCommand, rcpCommandList, sourceFile, targetFile, overwrite=True): """ Copies a local source file to a remote host. @note: If C{overwrite} is C{False}, we will not overwrite a target file that exists when this method is invoked; if the target already exists, we'll raise an exception. @note: Internally, we have to go through and escape any spaces in the source and target paths with double-backslash, otherwise things get screwed up. I hope this is portable to various different rcp methods, but I guess it might not be (all I have to test with is OpenSSH).
@note: If you have user/group as strings, call the L{util.getUidGid} function to get the associated uid/gid as an ownership tuple. @param remoteUser: Name of the Cedar Backup user on the remote peer @type remoteUser: String representing a username, valid via the copy command @param localUser: Name of the Cedar Backup user on the current host @type localUser: String representing a username, valid on the current host @param remoteHost: Hostname of the remote peer @type remoteHost: String representing a hostname, accessible via the copy command @param rcpCommand: An rcp-compatible copy command to use for copying files from the peer @type rcpCommand: String representing a system command including required arguments @param rcpCommandList: An rcp-compatible copy command to use for copying files @type rcpCommandList: Command as a list to be passed to L{util.executeCommand} @param sourceFile: Source file to copy @type sourceFile: String representing a file on disk, as an absolute path @param targetFile: Target file to create @type targetFile: String representing a file on disk, as an absolute path @param overwrite: Indicates whether it's OK to overwrite the target file. @type overwrite: Boolean true/false. @raise IOError: If there is an IO error copying the file @raise OSError: If there is an OS error changing permissions on the file """ if not overwrite: if os.path.exists(targetFile): raise IOError("Target file [%s] already exists." % targetFile) if localUser is not None: try: if not isRunningAsRoot(): raise IOError("Only root can remote copy as another user.") except AttributeError: pass actualCommand = '%s "%s" "%s@%s:%s"' % (rcpCommand, sourceFile, remoteUser, remoteHost, targetFile) command = resolveCommand(SU_COMMAND) result = executeCommand(command, [localUser, "-c", actualCommand])[0] if result != 0: raise IOError("Error (%d) copying [%s] to remote host as local user [%s]." 
% (result, sourceFile, localUser)) else: copyTarget = "%s@%s:%s" % (remoteUser, remoteHost, targetFile.replace(" ", "\\ ")) command = resolveCommand(rcpCommandList) result = executeCommand(command, [sourceFile.replace(" ", "\\ "), copyTarget])[0] if result != 0: raise IOError("Error (%d) copying [%s] to remote host." % (result, sourceFile)) @staticmethod def _executeRemoteCommand(remoteUser, localUser, remoteHost, rshCommand, rshCommandList, remoteCommand): """ Executes a command on the peer via remote shell. @param remoteUser: Name of the Cedar Backup user on the remote peer @type remoteUser: String representing a username, valid on the remote host @param localUser: Name of the Cedar Backup user on the current host @type localUser: String representing a username, valid on the current host @param remoteHost: Hostname of the remote peer @type remoteHost: String representing a hostname, accessible via the copy command @param rshCommand: An rsh-compatible copy command to use for remote shells to the peer @type rshCommand: String representing a system command including required arguments @param rshCommandList: An rsh-compatible copy command to use for remote shells to the peer @type rshCommandList: Command as a list to be passed to L{util.executeCommand} @param remoteCommand: The command to be executed on the remote host @type remoteCommand: String command-line, with no special shell characters ($, <, etc.) 
@raise IOError: If there is an error executing the remote command """ actualCommand = "%s %s@%s '%s'" % (rshCommand, remoteUser, remoteHost, remoteCommand) if localUser is not None: try: if not isRunningAsRoot(): raise IOError("Only root can remote shell as another user.") except AttributeError: pass command = resolveCommand(SU_COMMAND) result = executeCommand(command, [localUser, "-c", actualCommand])[0] if result != 0: raise IOError("Command failed [su -c %s \"%s\"]" % (localUser, actualCommand)) else: command = resolveCommand(rshCommandList) result = executeCommand(command, ["%s@%s" % (remoteUser, remoteHost), "%s" % remoteCommand])[0] if result != 0: raise IOError("Command failed [%s]" % (actualCommand)) @staticmethod def _buildCbackCommand(cbackCommand, action, fullBackup): """ Builds a Cedar Backup command line for the named action. @note: If the cback command is None, then DEF_CBACK_COMMAND is used. @param cbackCommand: cback command to execute, including required options @param action: Name of the action to execute. @param fullBackup: Whether a full backup should be executed. @return: String suitable for passing to L{_executeRemoteCommand} as remoteCommand. @raise ValueError: If action is None. """ if action is None: raise ValueError("Action cannot be None.") if cbackCommand is None: cbackCommand = DEF_CBACK_COMMAND if fullBackup: return "%s --full %s" % (cbackCommand, action) else: return "%s %s" % (cbackCommand, action) CedarBackup3-3.1.6/CedarBackup3/xmlutil.py0000664000175000017500000006357712560007327022015 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2006,2010,2015 Kenneth J. Pronovici. # All rights reserved. 
# # Portions Copyright (c) 2000 Fourthought Inc, USA. # All Rights Reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Provides general XML-related functionality. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides general XML-related functionality. What I'm trying to do here is abstract much of the functionality that directly accesses the DOM tree. This is not so much to "protect" the other code from the DOM, but to standardize the way it's used. It will also help extension authors write code that easily looks more like the rest of Cedar Backup. @sort: createInputDom, createOutputDom, serializeDom, isElement, readChildren, readFirstChild, readStringList, readString, readInteger, readBoolean, addContainerNode, addStringNode, addIntegerNode, addBooleanNode, TRUE_BOOLEAN_VALUES, FALSE_BOOLEAN_VALUES, VALID_BOOLEAN_VALUES @var TRUE_BOOLEAN_VALUES: List of boolean values in XML representing C{True}. @var FALSE_BOOLEAN_VALUES: List of boolean values in XML representing C{False}. @var VALID_BOOLEAN_VALUES: List of valid boolean values in XML. @author: Kenneth J. 
Pronovici """ # pylint: disable=C0111,C0103,W0511,W0104,W0106 ######################################################################## # Imported modules ######################################################################## # System modules import sys import re import logging from io import StringIO # XML-related modules from xml.parsers.expat import ExpatError from xml.dom.minidom import Node from xml.dom.minidom import getDOMImplementation from xml.dom.minidom import parseString ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup3.log.xml") TRUE_BOOLEAN_VALUES = [ "Y", "y", ] FALSE_BOOLEAN_VALUES = [ "N", "n", ] VALID_BOOLEAN_VALUES = TRUE_BOOLEAN_VALUES + FALSE_BOOLEAN_VALUES ######################################################################## # Functions for creating and parsing DOM trees ######################################################################## def createInputDom(xmlData, name="cb_config"): """ Creates a DOM tree based on reading an XML string. @param name: Assumed base name of the document (root node name). @return: Tuple (xmlDom, parentNode) for the parsed document @raise ValueError: If the document can't be parsed. """ try: xmlDom = parseString(xmlData) parentNode = readFirstChild(xmlDom, name) return (xmlDom, parentNode) except (IOError, ExpatError) as e: raise ValueError("Unable to parse XML document: %s" % e) def createOutputDom(name="cb_config"): """ Creates a DOM tree used for writing an XML document. @param name: Base name of the document (root node name). 
@return: Tuple (xmlDom, parentNode) for the new document """ impl = getDOMImplementation() xmlDom = impl.createDocument(None, name, None) return (xmlDom, xmlDom.documentElement) ######################################################################## # Functions for reading values out of XML documents ######################################################################## def isElement(node): """ Returns True or False depending on whether the XML node is an element node. """ return node.nodeType == Node.ELEMENT_NODE def readChildren(parent, name): """ Returns a list of nodes with a given name immediately beneath the parent. By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node. Underneath, we use the Python C{getElementsByTagName} method, which is pretty cool, but which (surprisingly?) returns a list of all children with a given name below the parent, at any level. We just prune that list to include only children whose C{parentNode} matches the passed-in parent. @param parent: Parent node to search beneath. @param name: Name of nodes to search for. @return: List of child nodes with correct parent, or an empty list if no matching nodes are found. """ lst = [] if parent is not None: result = parent.getElementsByTagName(name) for entry in result: if entry.parentNode is parent: lst.append(entry) return lst def readFirstChild(parent, name): """ Returns the first child with a given name immediately beneath the parent. By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node. @param parent: Parent node to search beneath. @param name: Name of node to search for. @return: First properly-named child of parent, or C{None} if no matching nodes are found. 
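The pruning that C{readChildren} performs can be demonstrated standalone. This is a minimal sketch using only C{xml.dom.minidom} with made-up tag names (not Cedar Backup configuration), showing why C{getElementsByTagName} alone is not enough:

```python
from xml.dom.minidom import parseString

xml = "<root><item>a</item><nested><item>b</item></nested></root>"
dom = parseString(xml)
root = dom.documentElement

# getElementsByTagName finds matching elements at ANY depth below root...
all_items = root.getElementsByTagName("item")

# ...so prune to direct children only, exactly as readChildren does
direct = [n for n in all_items if n.parentNode is root]

print(len(all_items))  # 2: both <item> elements, at any depth
print(len(direct))     # 1: only the immediate child of <root>
```
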
""" result = readChildren(parent, name) if result is None or result == []: return None return result[0] def readStringList(parent, name): """ Returns a list of the string contents associated with nodes with a given name immediately beneath the parent. By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node. First, we find all of the nodes using L{readChildren}, and then we retrieve the "string contents" of each of those nodes. The returned list has one entry per matching node. We assume that string contents of a given node belong to the first C{TEXT_NODE} child of that node. Nodes which have no C{TEXT_NODE} children are not represented in the returned list. @param parent: Parent node to search beneath. @param name: Name of node to search for. @return: List of strings as described above, or C{None} if no matching nodes are found. """ lst = [] result = readChildren(parent, name) for entry in result: if entry.hasChildNodes(): for child in entry.childNodes: if child.nodeType == Node.TEXT_NODE: lst.append(child.nodeValue) break if lst == []: lst = None return lst def readString(parent, name): """ Returns string contents of the first child with a given name immediately beneath the parent. By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node. We assume that string contents of a given node belong to the first C{TEXT_NODE} child of that node. @param parent: Parent node to search beneath. @param name: Name of node to search for. @return: String contents of node or C{None} if no matching nodes are found. """ result = readStringList(parent, name) if result is None: return None return result[0] def readInteger(parent, name): """ Returns integer contents of the first child with a given name immediately beneath the parent. By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node. 
@param parent: Parent node to search beneath. @param name: Name of node to search for. @return: Integer contents of node or C{None} if no matching nodes are found. @raise ValueError: If the string at the location can't be converted to an integer. """ result = readString(parent, name) if result is None: return None else: return int(result) def readLong(parent, name): """ Returns long integer contents of the first child with a given name immediately beneath the parent. By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node. @param parent: Parent node to search beneath. @param name: Name of node to search for. @return: Long integer contents of node or C{None} if no matching nodes are found. @raise ValueError: If the string at the location can't be converted to an integer. """ result = readString(parent, name) if result is None: return None else: return int(result) def readFloat(parent, name): """ Returns float contents of the first child with a given name immediately beneath the parent. By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node. @param parent: Parent node to search beneath. @param name: Name of node to search for. @return: Float contents of node or C{None} if no matching nodes are found. @raise ValueError: If the string at the location can't be converted to a float value. """ result = readString(parent, name) if result is None: return None else: return float(result) def readBoolean(parent, name): """ Returns boolean contents of the first child with a given name immediately beneath the parent. By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node. The string value of the node must be one of the values in L{VALID_BOOLEAN_VALUES}. @param parent: Parent node to search beneath. @param name: Name of node to search for. 
@return: Boolean contents of node or C{None} if no matching nodes are found. @raise ValueError: If the string at the location can't be converted to a boolean. """ result = readString(parent, name) if result is None: return None else: if result in TRUE_BOOLEAN_VALUES: return True elif result in FALSE_BOOLEAN_VALUES: return False else: raise ValueError("Boolean values must be one of %s." % VALID_BOOLEAN_VALUES) ######################################################################## # Functions for writing values into XML documents ######################################################################## def addContainerNode(xmlDom, parentNode, nodeName): """ Adds a container node as the next child of a parent node. @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent node to create child for. @param nodeName: Name of the new container node. @return: Reference to the newly-created node. """ containerNode = xmlDom.createElement(nodeName) parentNode.appendChild(containerNode) return containerNode def addStringNode(xmlDom, parentNode, nodeName, nodeValue): """ Adds a text node as the next child of a parent, to contain a string. If the C{nodeValue} is None, then the node will be created, but will be empty (i.e. will contain no text node child). @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent node to create child for. @param nodeName: Name of the new container node. @param nodeValue: The value to put into the node. @return: Reference to the newly-created node. """ containerNode = addContainerNode(xmlDom, parentNode, nodeName) if nodeValue is not None: textNode = xmlDom.createTextNode(nodeValue) containerNode.appendChild(textNode) return containerNode def addIntegerNode(xmlDom, parentNode, nodeName, nodeValue): """ Adds a text node as the next child of a parent, to contain an integer. If the C{nodeValue} is None, then the node will be created, but will be empty (i.e. will contain no text node child). 
The integer will be converted to a string using "%d". The result will be added to the document via L{addStringNode}. @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent node to create child for. @param nodeName: Name of the new container node. @param nodeValue: The value to put into the node. @return: Reference to the newly-created node. """ if nodeValue is None: return addStringNode(xmlDom, parentNode, nodeName, None) else: return addStringNode(xmlDom, parentNode, nodeName, "%d" % nodeValue) # %d works for both int and long def addLongNode(xmlDom, parentNode, nodeName, nodeValue): """ Adds a text node as the next child of a parent, to contain a long integer. If the C{nodeValue} is None, then the node will be created, but will be empty (i.e. will contain no text node child). The integer will be converted to a string using "%d". The result will be added to the document via L{addStringNode}. @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent node to create child for. @param nodeName: Name of the new container node. @param nodeValue: The value to put into the node. @return: Reference to the newly-created node. """ if nodeValue is None: return addStringNode(xmlDom, parentNode, nodeName, None) else: return addStringNode(xmlDom, parentNode, nodeName, "%d" % nodeValue) # %d works for both int and long def addBooleanNode(xmlDom, parentNode, nodeName, nodeValue): """ Adds a text node as the next child of a parent, to contain a boolean. If the C{nodeValue} is None, then the node will be created, but will be empty (i.e. will contain no text node child). Boolean C{True}, or anything else interpreted as C{True} by Python, will be converted to a string "Y". Anything else will be converted to a string "N". The result is added to the document via L{addStringNode}. @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent node to create child for. 
@param nodeName: Name of the new container node. @param nodeValue: The value to put into the node. @return: Reference to the newly-created node. """ if nodeValue is None: return addStringNode(xmlDom, parentNode, nodeName, None) else: if nodeValue: return addStringNode(xmlDom, parentNode, nodeName, "Y") else: return addStringNode(xmlDom, parentNode, nodeName, "N") ######################################################################## # Functions for serializing DOM trees ######################################################################## def serializeDom(xmlDom, indent=3): """ Serializes a DOM tree and returns the result in a string. @param xmlDom: XML DOM tree to serialize @param indent: Number of spaces to indent, as an integer @return: String form of DOM tree, pretty-printed. """ xmlBuffer = StringIO() serializer = Serializer(xmlBuffer, "UTF-8", indent=indent) serializer.serialize(xmlDom) xmlData = xmlBuffer.getvalue() xmlBuffer.close() return xmlData class Serializer(object): """ XML serializer class. This is a customized serializer that I hacked together based on what I found in the PyXML distribution. Basically, around release 2.7.0, the only reason I still had around a dependency on PyXML was for the PrettyPrint functionality, and that seemed pointless. So, I stripped the PrettyPrint code out of PyXML and hacked bits of it off until it did just what I needed and no more. This code started out being called PrintVisitor, but I decided it makes more sense just calling it a serializer. I've made nearly all of the methods private, and I've added a new high-level serialize() method rather than having clients call C{visit()}. Anyway, as a consequence of my hacking with it, this can't quite be called a complete XML serializer any more. I ripped out support for HTML and XHTML, and there is also no longer any support for namespaces (which I took out because this dragged along a lot of extra code, and Cedar Backup doesn't use namespaces). 
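As a self-contained sketch of the round trip these helpers provide — build an output DOM, add a string node, serialize — here is an approximate equivalent using only the standard library. The child element name is illustrative, and the stdlib C{toprettyxml} merely stands in for the custom serializer described below:

```python
from xml.dom.minidom import getDOMImplementation

# Build an output DOM the way createOutputDom does; "cb_config" is the
# documented default root name, "peer_name" is a hypothetical child node
impl = getDOMImplementation()
xmlDom = impl.createDocument(None, "cb_config", None)
parent = xmlDom.documentElement

# Equivalent of addStringNode: a container element plus one text child
node = xmlDom.createElement("peer_name")
node.appendChild(xmlDom.createTextNode("machine1"))
parent.appendChild(node)

# The stdlib pretty-printer stands in for the custom Serializer class
print(xmlDom.toprettyxml(indent="   "))
```
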
However, everything else should pretty much work as expected. @copyright: This code, prior to customization, was part of the PyXML codebase, and before that was part of the 4DOM suite developed by Fourthought, Inc. In its original form, it was Copyright (c) 2000 Fourthought Inc, USA; All Rights Reserved. """ def __init__(self, stream=sys.stdout, encoding="UTF-8", indent=3): """ Initialize a serializer. @param stream: Stream to write output to. @param encoding: Output encoding. @param indent: Number of spaces to indent, as an integer """ self.stream = stream self.encoding = encoding self._indent = indent * " " self._depth = 0 self._inText = 0 def serialize(self, xmlDom): """ Serialize the passed-in XML document. @param xmlDom: XML DOM tree to serialize @raise ValueError: If there's an unknown node type in the document. """ self._visit(xmlDom) self.stream.write("\n") def _write(self, text): obj = _encodeText(text, self.encoding) self.stream.write(obj) return def _tryIndent(self): if not self._inText and self._indent: self._write('\n' + self._indent*self._depth) return def _visit(self, node): """ @raise ValueError: If there's an unknown node type in the document.
""" if node.nodeType == Node.ELEMENT_NODE: return self._visitElement(node) elif node.nodeType == Node.ATTRIBUTE_NODE: return self._visitAttr(node) elif node.nodeType == Node.TEXT_NODE: return self._visitText(node) elif node.nodeType == Node.CDATA_SECTION_NODE: return self._visitCDATASection(node) elif node.nodeType == Node.ENTITY_REFERENCE_NODE: return self._visitEntityReference(node) elif node.nodeType == Node.ENTITY_NODE: return self._visitEntity(node) elif node.nodeType == Node.PROCESSING_INSTRUCTION_NODE: return self._visitProcessingInstruction(node) elif node.nodeType == Node.COMMENT_NODE: return self._visitComment(node) elif node.nodeType == Node.DOCUMENT_NODE: return self._visitDocument(node) elif node.nodeType == Node.DOCUMENT_TYPE_NODE: return self._visitDocumentType(node) elif node.nodeType == Node.DOCUMENT_FRAGMENT_NODE: return self._visitDocumentFragment(node) elif node.nodeType == Node.NOTATION_NODE: return self._visitNotation(node) # It has a node type, but we don't know how to handle it raise ValueError("Unknown node type: %s" % repr(node)) def _visitNodeList(self, node, exclude=None): for curr in node: curr is not exclude and self._visit(curr) return def _visitNamedNodeMap(self, node): for item in list(node.values()): self._visit(item) return def _visitAttr(self, node): self._write(' ' + node.name) value = node.value text = _translateCDATA(value, self.encoding) text, delimiter = _translateCDATAAttr(text) self.stream.write("=%s%s%s" % (delimiter, text, delimiter)) return def _visitProlog(self): self._write("" % (self.encoding or 'utf-8')) self._inText = 0 return def _visitDocument(self, node): self._visitProlog() node.doctype and self._visitDocumentType(node.doctype) self._visitNodeList(node.childNodes, exclude=node.doctype) return def _visitDocumentFragment(self, node): self._visitNodeList(node.childNodes) return def _visitElement(self, node): self._tryIndent() self._write('<%s' % node.tagName) for attr in list(node.attributes.values()): 
self._visitAttr(attr) if len(node.childNodes): self._write('>') self._depth = self._depth + 1 self._visitNodeList(node.childNodes) self._depth = self._depth - 1 not (self._inText) and self._tryIndent() self._write('' % node.tagName) else: self._write('/>') self._inText = 0 return def _visitText(self, node): text = node.data if self._indent: text.strip() if text: text = _translateCDATA(text, self.encoding) self.stream.write(text) self._inText = 1 return def _visitDocumentType(self, doctype): if not doctype.systemId and not doctype.publicId: return self._tryIndent() self._write(' | | | # [a-zA-Z0-9] | [-'()+,./:=?;!*#@$_%] public = "'%s'" % doctype.publicId else: public = '"%s"' % doctype.publicId if doctype.publicId and doctype.systemId: self._write(' PUBLIC %s %s' % (public, system)) elif doctype.systemId: self._write(' SYSTEM %s' % system) if doctype.entities or doctype.notations: self._write(' [') self._depth = self._depth + 1 self._visitNamedNodeMap(doctype.entities) self._visitNamedNodeMap(doctype.notations) self._depth = self._depth - 1 self._tryIndent() self._write(']>') else: self._write('>') self._inText = 0 return def _visitEntity(self, node): """Visited from a NamedNodeMap in DocumentType""" self._tryIndent() self._write('') return def _visitNotation(self, node): """Visited from a NamedNodeMap in DocumentType""" self._tryIndent() self._write('') return def _visitCDATASection(self, node): self._tryIndent() self._write('' % (node.data)) self._inText = 0 return def _visitComment(self, node): self._tryIndent() self._write('' % (node.data)) self._inText = 0 return def _visitEntityReference(self, node): self._write('&%s;' % node.nodeName) self._inText = 1 return def _visitProcessingInstruction(self, node): self._tryIndent() self._write('' % (node.target, node.data)) self._inText = 0 return def _encodeText(text, encoding): """Safely encodes the passed-in text as a Unicode string, converting bytes to UTF-8 if necessary.""" if text is None: return text try: if 
isinstance(text, bytes): text = str(text, "utf-8") return text except UnicodeError: raise ValueError("Path could not be safely encoded as utf-8.") def _translateCDATAAttr(characters): """ Handles normalization and some intelligence about quoting. @copyright: This code, prior to customization, was part of the PyXML codebase, and before that was part of the 4DOM suite developed by Fourthought, Inc. In its original form, it was Copyright (c) 2000 Fourthought Inc, USA; All Rights Reserved. """ if not characters: return '', "'" if "'" in characters: delimiter = '"' new_chars = re.sub('"', '&quot;', characters) else: delimiter = "'" new_chars = re.sub("'", '&apos;', characters) #FIXME: There's more to normalization #Convert attribute new-lines to character entity # characters is possibly shorter than new_chars (no entities) if "\n" in characters: new_chars = re.sub('\n', '&#10;', new_chars) return new_chars, delimiter #Note: Unicode object only for now def _translateCDATA(characters, encoding='UTF-8', prev_chars='', markupSafe=0): """ @copyright: This code, prior to customization, was part of the PyXML codebase, and before that was part of the 4DOM suite developed by Fourthought, Inc. In its original form, it was Copyright (c) 2000 Fourthought Inc, USA; All Rights Reserved.
""" CDATA_CHAR_PATTERN = re.compile('[&<]|]]>') CHAR_TO_ENTITY = { '&': '&', '<': '<', ']]>': ']]>', } ILLEGAL_LOW_CHARS = '[\x01-\x08\x0B-\x0C\x0E-\x1F]' ILLEGAL_HIGH_CHARS = '\xEF\xBF[\xBE\xBF]' XML_ILLEGAL_CHAR_PATTERN = re.compile('%s|%s'%(ILLEGAL_LOW_CHARS, ILLEGAL_HIGH_CHARS)) if not characters: return '' if not markupSafe: if CDATA_CHAR_PATTERN.search(characters): new_string = CDATA_CHAR_PATTERN.subn(lambda m, d=CHAR_TO_ENTITY: d[m.group()], characters)[0] else: new_string = characters if prev_chars[-2:] == ']]' and characters[0] == '>': new_string = '>' + new_string[1:] else: new_string = characters #Note: use decimal char entity rep because some browsers are broken #FIXME: This will bomb for high characters. Should, for instance, detect #The UTF-8 for 0xFFFE and put out ￾ if XML_ILLEGAL_CHAR_PATTERN.search(new_string): new_string = XML_ILLEGAL_CHAR_PATTERN.subn(lambda m: '&#%i;' % ord(m.group()), new_string)[0] new_string = _encodeText(new_string, encoding) return new_string CedarBackup3-3.1.6/CedarBackup3/extend/0002775000175000017500000000000012657665551021234 5ustar pronovicpronovic00000000000000CedarBackup3-3.1.6/CedarBackup3/extend/mbox.py0000664000175000017500000016041112560007327022534 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2006-2007,2010,2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. 
# # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Official Cedar Backup Extensions # Purpose : Provides an extension to back up mbox email files. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides an extension to back up mbox email files. Backing up email ================ Email folders (often stored as mbox flatfiles) are not well-suited being backed up with an incremental backup like the one offered by Cedar Backup. This is because mbox files often change on a daily basis, forcing the incremental backup process to back them up every day in order to avoid losing data. This can result in quite a bit of wasted space when backing up large folders. (Note that the alternative maildir format does not share this problem, since it typically uses one file per message.) One solution to this problem is to design a smarter incremental backup process, which backs up baseline content on the first day of the week, and then backs up only new messages added to that folder on every other day of the week. This way, the backup for any single day is only as large as the messages placed into the folder on that day. The backup isn't as "perfect" as the incremental backup process, because it doesn't preserve information about messages deleted from the backed-up folder. 
However, it should be much more space-efficient, and in a recovery situation, it seems better to restore too much data rather than too little. What is this extension? ======================= This is a Cedar Backup extension used to back up mbox email files via the Cedar Backup command line. Individual mbox files or directories containing mbox files can be backed up using the same collect modes allowed for filesystems in the standard Cedar Backup collect action: weekly, daily, incremental. It implements the "smart" incremental backup process discussed above, using functionality provided by the C{grepmail} utility. This extension requires a new configuration section and is intended to be run either immediately before or immediately after the standard collect action. Aside from its own configuration, it requires the options and collect configuration sections in the standard Cedar Backup configuration file. The mbox action is conceptually similar to the standard collect action, except that mbox directories are not collected recursively. This implies some configuration changes (i.e. there's no need for global exclusions or an ignore file). If you back up a directory, all of the mbox files in that directory are backed up into a single tar file using the indicated compression method. @author: Kenneth J. 
Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import logging import datetime import pickle import tempfile from bz2 import BZ2File from gzip import GzipFile from functools import total_ordering # Cedar Backup modules from CedarBackup3.filesystem import FilesystemList, BackupFileList from CedarBackup3.xmlutil import createInputDom, addContainerNode, addStringNode from CedarBackup3.xmlutil import isElement, readChildren, readFirstChild, readString, readStringList from CedarBackup3.config import VALID_COLLECT_MODES, VALID_COMPRESS_MODES from CedarBackup3.util import isStartOfWeek, buildNormalizedPath from CedarBackup3.util import resolveCommand, executeCommand from CedarBackup3.util import ObjectTypeList, UnorderedList, RegexList, encodePath, changeOwnership ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup3.log.extend.mbox") GREPMAIL_COMMAND = [ "grepmail", ] REVISION_PATH_EXTENSION = "mboxlast" ######################################################################## # MboxFile class definition ######################################################################## @total_ordering class MboxFile(object): """ Class representing mbox file configuration.. The following restrictions exist on data in this class: - The absolute path must be absolute. - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, absolutePath, collectMode, compressMode """ def __init__(self, absolutePath=None, collectMode=None, compressMode=None): """ Constructor for the C{MboxFile} class. 
You should never directly instantiate this class. @param absolutePath: Absolute path to an mbox file on disk. @param collectMode: Overridden collect mode for this mbox file. @param compressMode: Overridden compression mode for this mbox file. """ self._absolutePath = None self._collectMode = None self._compressMode = None self.absolutePath = absolutePath self.collectMode = collectMode self.compressMode = compressMode def __repr__(self): """ Official string representation for class instance. """ return "MboxFile(%s, %s, %s)" % (self.absolutePath, self.collectMode, self.compressMode) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __eq__(self, other): """Equals operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.absolutePath != other.absolutePath: if str(self.absolutePath or "") < str(other.absolutePath or ""): return -1 else: return 1 if self.collectMode != other.collectMode: if str(self.collectMode or "") < str(other.collectMode or ""): return -1 else: return 1 if self.compressMode != other.compressMode: if str(self.compressMode or "") < str(other.compressMode or ""): return -1 else: return 1 return 0 def _setAbsolutePath(self, value): """ Property target used to set the absolute path. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path.
@raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Absolute path must be, er, an absolute path.") self._absolutePath = encodePath(value) def _getAbsolutePath(self): """ Property target used to get the absolute path. """ return self._absolutePath def _setCollectMode(self, value): """ Property target used to set the collect mode. If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COLLECT_MODES: raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES) self._collectMode = value def _getCollectMode(self): """ Property target used to get the collect mode. """ return self._collectMode def _setCompressMode(self, value): """ Property target used to set the compress mode. If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COMPRESS_MODES: raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) self._compressMode = value def _getCompressMode(self): """ Property target used to get the compress mode. """ return self._compressMode absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, doc="Absolute path to the mbox file.") collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this mbox file.") compressMode = property(_getCompressMode, _setCompressMode, None, doc="Overridden compress mode for this mbox file.") ######################################################################## # MboxDir class definition ######################################################################## @total_ordering class MboxDir(object): """ Class representing mbox directory configuration. The following restrictions exist on data in this class: - The absolute path must be absolute.
- The collect mode must be one of the values in L{VALID_COLLECT_MODES}. - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. Unlike collect directory configuration, this is the only place exclusions are allowed (no global exclusions at the configuration level). Also, we only allow relative exclusions and there is no configured ignore file. This is because mbox directory backups are not recursive. @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, absolutePath, collectMode, compressMode, relativeExcludePaths, excludePatterns """ def __init__(self, absolutePath=None, collectMode=None, compressMode=None, relativeExcludePaths=None, excludePatterns=None): """ Constructor for the C{MboxDir} class. You should never directly instantiate this class. @param absolutePath: Absolute path to an mbox directory on disk. @param collectMode: Overridden collect mode for this directory. @param compressMode: Overridden compression mode for this directory. @param relativeExcludePaths: List of relative paths to exclude. @param excludePatterns: List of regular expression patterns to exclude. """ self._absolutePath = None self._collectMode = None self._compressMode = None self._relativeExcludePaths = None self._excludePatterns = None self.absolutePath = absolutePath self.collectMode = collectMode self.compressMode = compressMode self.relativeExcludePaths = relativeExcludePaths self.excludePatterns = excludePatterns def __repr__(self): """ Official string representation for class instance. """ return "MboxDir(%s, %s, %s, %s, %s)" % (self.absolutePath, self.collectMode, self.compressMode, self.relativeExcludePaths, self.excludePatterns) def __str__(self): """ Informal string representation for class instance.
""" return self.__repr__() def __eq__(self, other): """Equals operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.absolutePath != other.absolutePath: if str(self.absolutePath or "") < str(other.absolutePath or ""): return -1 else: return 1 if self.collectMode != other.collectMode: if str(self.collectMode or "") < str(other.collectMode or ""): return -1 else: return 1 if self.compressMode != other.compressMode: if str(self.compressMode or "") < str(other.compressMode or ""): return -1 else: return 1 if self.relativeExcludePaths != other.relativeExcludePaths: if self.relativeExcludePaths < other.relativeExcludePaths: return -1 else: return 1 if self.excludePatterns != other.excludePatterns: if self.excludePatterns < other.excludePatterns: return -1 else: return 1 return 0 def _setAbsolutePath(self, value): """ Property target used to set the absolute path. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Absolute path must be, er, an absolute path.") self._absolutePath = encodePath(value) def _getAbsolutePath(self): """ Property target used to get the absolute path.
""" return self._absolutePath def _setCollectMode(self, value): """ Property target used to set the collect mode. If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COLLECT_MODES: raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES) self._collectMode = value def _getCollectMode(self): """ Property target used to get the collect mode. """ return self._collectMode def _setCompressMode(self, value): """ Property target used to set the compress mode. If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COMPRESS_MODES: raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) self._compressMode = value def _getCompressMode(self): """ Property target used to get the compress mode. """ return self._compressMode def _setRelativeExcludePaths(self, value): """ Property target used to set the relative exclude paths list. Elements do not have to exist on disk at the time of assignment. """ if value is None: self._relativeExcludePaths = None else: try: saved = self._relativeExcludePaths self._relativeExcludePaths = UnorderedList() self._relativeExcludePaths.extend(value) except Exception as e: self._relativeExcludePaths = saved raise e def _getRelativeExcludePaths(self): """ Property target used to get the relative exclude paths list. """ return self._relativeExcludePaths def _setExcludePatterns(self, value): """ Property target used to set the exclude patterns list. """ if value is None: self._excludePatterns = None else: try: saved = self._excludePatterns self._excludePatterns = RegexList() self._excludePatterns.extend(value) except Exception as e: self._excludePatterns = saved raise e def _getExcludePatterns(self): """ Property target used to get the exclude patterns list. 
""" return self._excludePatterns absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, doc="Absolute path to the mbox directory.") collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this mbox directory.") compressMode = property(_getCompressMode, _setCompressMode, None, doc="Overridden compress mode for this mbox directory.") relativeExcludePaths = property(_getRelativeExcludePaths, _setRelativeExcludePaths, None, "List of relative paths to exclude.") excludePatterns = property(_getExcludePatterns, _setExcludePatterns, None, "List of regular expression patterns to exclude.") ######################################################################## # MboxConfig class definition ######################################################################## @total_ordering class MboxConfig(object): """ Class representing mbox configuration. Mbox configuration is used for backing up mbox email files. The following restrictions exist on data in this class: - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. - The C{mboxFiles} list must be a list of C{MboxFile} objects - The C{mboxDirs} list must be a list of C{MboxDir} objects For the C{mboxFiles} and C{mboxDirs} lists, validation is accomplished through the L{util.ObjectTypeList} list implementation that overrides common list methods and transparently ensures that each element is of the proper type. Unlike collect configuration, no global exclusions are allowed on this level. We only allow relative exclusions at the mbox directory level. Also, there is no configured ignore file. This is because mbox directory backups are not recursive. @note: Lists within this class are "unordered" for equality comparisons. 
@sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, collectMode, compressMode, mboxFiles, mboxDirs """ def __init__(self, collectMode=None, compressMode=None, mboxFiles=None, mboxDirs=None): """ Constructor for the C{MboxConfig} class. @param collectMode: Default collect mode. @param compressMode: Default compress mode. @param mboxFiles: List of mbox files to back up. @param mboxDirs: List of mbox directories to back up. @raise ValueError: If one of the values is invalid. """ self._collectMode = None self._compressMode = None self._mboxFiles = None self._mboxDirs = None self.collectMode = collectMode self.compressMode = compressMode self.mboxFiles = mboxFiles self.mboxDirs = mboxDirs def __repr__(self): """ Official string representation for class instance. """ return "MboxConfig(%s, %s, %s, %s)" % (self.collectMode, self.compressMode, self.mboxFiles, self.mboxDirs) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __eq__(self, other): """Equals operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
""" if other is None: return 1 if self.collectMode != other.collectMode: if str(self.collectMode or "") < str(other.collectMode or ""): return -1 else: return 1 if self.compressMode != other.compressMode: if str(self.compressMode or "") < str(other.compressMode or ""): return -1 else: return 1 if self.mboxFiles != other.mboxFiles: if self.mboxFiles < other.mboxFiles: return -1 else: return 1 if self.mboxDirs != other.mboxDirs: if self.mboxDirs < other.mboxDirs: return -1 else: return 1 return 0 def _setCollectMode(self, value): """ Property target used to set the collect mode. If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COLLECT_MODES: raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES) self._collectMode = value def _getCollectMode(self): """ Property target used to get the collect mode. """ return self._collectMode def _setCompressMode(self, value): """ Property target used to set the compress mode. If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COMPRESS_MODES: raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) self._compressMode = value def _getCompressMode(self): """ Property target used to get the compress mode. """ return self._compressMode def _setMboxFiles(self, value): """ Property target used to set the mboxFiles list. Either the value must be C{None} or each element must be an C{MboxFile}. @raise ValueError: If the value is not an C{MboxFile} """ if value is None: self._mboxFiles = None else: try: saved = self._mboxFiles self._mboxFiles = ObjectTypeList(MboxFile, "MboxFile") self._mboxFiles.extend(value) except Exception as e: self._mboxFiles = saved raise e def _getMboxFiles(self): """ Property target used to get the mboxFiles list. 
""" return self._mboxFiles def _setMboxDirs(self, value): """ Property target used to set the mboxDirs list. Either the value must be C{None} or each element must be an C{MboxDir}. @raise ValueError: If the value is not an C{MboxDir} """ if value is None: self._mboxDirs = None else: try: saved = self._mboxDirs self._mboxDirs = ObjectTypeList(MboxDir, "MboxDir") self._mboxDirs.extend(value) except Exception as e: self._mboxDirs = saved raise e def _getMboxDirs(self): """ Property target used to get the mboxDirs list. """ return self._mboxDirs collectMode = property(_getCollectMode, _setCollectMode, None, doc="Default collect mode.") compressMode = property(_getCompressMode, _setCompressMode, None, doc="Default compress mode.") mboxFiles = property(_getMboxFiles, _setMboxFiles, None, doc="List of mbox files to back up.") mboxDirs = property(_getMboxDirs, _setMboxDirs, None, doc="List of mbox directories to back up.") ######################################################################## # LocalConfig class definition ######################################################################## @total_ordering class LocalConfig(object): """ Class representing this extension's configuration document. This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit Mbox-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, C{validate} and C{addConfig} methods. @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, mbox, validate, addConfig """ def __init__(self, xmlData=None, xmlPath=None, validate=True): """ Initializes a configuration object. If you initialize the object without passing either C{xmlData} or C{xmlPath} then configuration will be empty and will be invalid until it is filled in properly. 
No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded. Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if C{validate} is C{False}, it might not be possible to parse the passed-in XML document if lower-level validations fail. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to read in invalid configuration from disk. @param xmlData: XML data representing configuration. @type xmlData: String data. @param xmlPath: Path to an XML file on disk. @type xmlPath: Absolute path to a file on disk. @param validate: Validate the document after parsing it. @type validate: Boolean true/false. @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. @raise ValueError: If the parsed configuration document is not valid. """ self._mbox = None self.mbox = None if xmlData is not None and xmlPath is not None: raise ValueError("Use either xmlData or xmlPath, but not both.") if xmlData is not None: self._parseXmlData(xmlData) if validate: self.validate() elif xmlPath is not None: with open(xmlPath) as f: xmlData = f.read() self._parseXmlData(xmlData) if validate: self.validate() def __repr__(self): """ Official string representation for class instance. """ return "LocalConfig(%s)" % (self.mbox) def __str__(self): """ Informal string representation for class instance. 
""" return self.__repr__() def __eq__(self, other): """Equals operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.mbox != other.mbox: if self.mbox < other.mbox: return -1 else: return 1 return 0 def _setMbox(self, value): """ Property target used to set the mbox configuration value. If not C{None}, the value must be a C{MboxConfig} object. @raise ValueError: If the value is not a C{MboxConfig} """ if value is None: self._mbox = None else: if not isinstance(value, MboxConfig): raise ValueError("Value must be a C{MboxConfig} object.") self._mbox = value def _getMbox(self): """ Property target used to get the mbox configuration value. """ return self._mbox mbox = property(_getMbox, _setMbox, None, "Mbox configuration in terms of a C{MboxConfig} object.") def validate(self): """ Validates configuration represented by the object. Mbox configuration must be filled in. Within that, the collect mode and compress mode are both optional, but at least one mbox file or directory must be configured. Each configured file or directory must contain an absolute path, and then must either be able to take collect mode and compress mode configuration from the parent C{MboxConfig} object, or must set each value on its own. @raise ValueError: If one of the validations fails.
""" if self.mbox is None: raise ValueError("Mbox section is required.") if (self.mbox.mboxFiles is None or len(self.mbox.mboxFiles) < 1) and \ (self.mbox.mboxDirs is None or len(self.mbox.mboxDirs) < 1): raise ValueError("At least one mbox file or directory must be configured.") if self.mbox.mboxFiles is not None: for mboxFile in self.mbox.mboxFiles: if mboxFile.absolutePath is None: raise ValueError("Each mbox file must set an absolute path.") if self.mbox.collectMode is None and mboxFile.collectMode is None: raise ValueError("Collect mode must either be set in parent mbox section or individual mbox file.") if self.mbox.compressMode is None and mboxFile.compressMode is None: raise ValueError("Compress mode must either be set in parent mbox section or individual mbox file.") if self.mbox.mboxDirs is not None: for mboxDir in self.mbox.mboxDirs: if mboxDir.absolutePath is None: raise ValueError("Each mbox directory must set an absolute path.") if self.mbox.collectMode is None and mboxDir.collectMode is None: raise ValueError("Collect mode must either be set in parent mbox section or individual mbox directory.") if self.mbox.compressMode is None and mboxDir.compressMode is None: raise ValueError("Compress mode must either be set in parent mbox section or individual mbox directory.") def addConfig(self, xmlDom, parentNode): """ Adds a configuration section as the next child of a parent. Third parties should use this function to write configuration related to this extension. We add the following fields to the document:: collectMode //cb_config/mbox/collect_mode compressMode //cb_config/mbox/compress_mode We also add groups of the following items, one list element per item:: mboxFiles //cb_config/mbox/file mboxDirs //cb_config/mbox/dir The mbox files and mbox directories are added by L{_addMboxFile} and L{_addMboxDir}. @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent that the section should be appended to.
""" if self.mbox is not None: sectionNode = addContainerNode(xmlDom, parentNode, "mbox") addStringNode(xmlDom, sectionNode, "collect_mode", self.mbox.collectMode) addStringNode(xmlDom, sectionNode, "compress_mode", self.mbox.compressMode) if self.mbox.mboxFiles is not None: for mboxFile in self.mbox.mboxFiles: LocalConfig._addMboxFile(xmlDom, sectionNode, mboxFile) if self.mbox.mboxDirs is not None: for mboxDir in self.mbox.mboxDirs: LocalConfig._addMboxDir(xmlDom, sectionNode, mboxDir) def _parseXmlData(self, xmlData): """ Internal method to parse an XML string into the object. This method parses the XML document into a DOM tree (C{xmlDom}) and then calls a static method to parse the mbox configuration section. @param xmlData: XML data to be parsed @type xmlData: String data @raise ValueError: If the XML cannot be successfully parsed. """ (xmlDom, parentNode) = createInputDom(xmlData) self._mbox = LocalConfig._parseMbox(parentNode) @staticmethod def _parseMbox(parent): """ Parses an mbox configuration section. We read the following individual fields:: collectMode //cb_config/mbox/collect_mode compressMode //cb_config/mbox/compress_mode We also read groups of the following item, one list element per item:: mboxFiles //cb_config/mbox/file mboxDirs //cb_config/mbox/dir The mbox files are parsed by L{_parseMboxFiles} and the mbox directories are parsed by L{_parseMboxDirs}. @param parent: Parent node to search beneath. @return: C{MboxConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. 
""" mbox = None section = readFirstChild(parent, "mbox") if section is not None: mbox = MboxConfig() mbox.collectMode = readString(section, "collect_mode") mbox.compressMode = readString(section, "compress_mode") mbox.mboxFiles = LocalConfig._parseMboxFiles(section) mbox.mboxDirs = LocalConfig._parseMboxDirs(section) return mbox @staticmethod def _parseMboxFiles(parent): """ Reads a list of C{MboxFile} objects from immediately beneath the parent. We read the following individual fields:: absolutePath abs_path collectMode collect_mode compressMode compress_mode @param parent: Parent node to search beneath. @return: List of C{MboxFile} objects or C{None} if none are found. @raise ValueError: If some filled-in value is invalid. """ lst = [] for entry in readChildren(parent, "file"): if isElement(entry): mboxFile = MboxFile() mboxFile.absolutePath = readString(entry, "abs_path") mboxFile.collectMode = readString(entry, "collect_mode") mboxFile.compressMode = readString(entry, "compress_mode") lst.append(mboxFile) if lst == []: lst = None return lst @staticmethod def _parseMboxDirs(parent): """ Reads a list of C{MboxDir} objects from immediately beneath the parent. We read the following individual fields:: absolutePath abs_path collectMode collect_mode compressMode compress_mode We also read groups of the following items, one list element per item:: relativeExcludePaths exclude/rel_path excludePatterns exclude/pattern The exclusions are parsed by L{_parseExclusions}. @param parent: Parent node to search beneath. @return: List of C{MboxDir} objects or C{None} if none are found. @raise ValueError: If some filled-in value is invalid.
""" lst = [] for entry in readChildren(parent, "dir"): if isElement(entry): mboxDir = MboxDir() mboxDir.absolutePath = readString(entry, "abs_path") mboxDir.collectMode = readString(entry, "collect_mode") mboxDir.compressMode = readString(entry, "compress_mode") (mboxDir.relativeExcludePaths, mboxDir.excludePatterns) = LocalConfig._parseExclusions(entry) lst.append(mboxDir) if lst == []: lst = None return lst @staticmethod def _parseExclusions(parentNode): """ Reads exclusions data from immediately beneath the parent. We read groups of the following items, one list element per item:: relative exclude/rel_path patterns exclude/pattern If there are none of some pattern (i.e. no relative path items) then C{None} will be returned for that item in the tuple. @param parentNode: Parent node to search beneath. @return: Tuple of (relative, patterns) exclusions. """ section = readFirstChild(parentNode, "exclude") if section is None: return (None, None) else: relative = readStringList(section, "rel_path") patterns = readStringList(section, "pattern") return (relative, patterns) @staticmethod def _addMboxFile(xmlDom, parentNode, mboxFile): """ Adds an mbox file container as the next child of a parent. We add the following fields to the document:: absolutePath file/abs_path collectMode file/collect_mode compressMode file/compress_mode The node itself is created as the next child of the parent node. This method only adds one mbox file node. The parent must loop for each mbox file in the C{MboxConfig} object. If C{mboxFile} is C{None}, this method call will be a no-op. @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent that the section should be appended to. @param mboxFile: MboxFile to be added to the document. 
""" if mboxFile is not None: sectionNode = addContainerNode(xmlDom, parentNode, "file") addStringNode(xmlDom, sectionNode, "abs_path", mboxFile.absolutePath) addStringNode(xmlDom, sectionNode, "collect_mode", mboxFile.collectMode) addStringNode(xmlDom, sectionNode, "compress_mode", mboxFile.compressMode) @staticmethod def _addMboxDir(xmlDom, parentNode, mboxDir): """ Adds an mbox directory container as the next child of a parent. We add the following fields to the document:: absolutePath dir/abs_path collectMode dir/collect_mode compressMode dir/compress_mode We also add groups of the following items, one list element per item:: relativeExcludePaths dir/exclude/rel_path excludePatterns dir/exclude/pattern The node itself is created as the next child of the parent node. This method only adds one mbox directory node. The parent must loop for each mbox directory in the C{MboxConfig} object. If C{mboxDir} is C{None}, this method call will be a no-op. @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent that the section should be appended to. @param mboxDir: MboxDir to be added to the document. 
""" if mboxDir is not None: sectionNode = addContainerNode(xmlDom, parentNode, "dir") addStringNode(xmlDom, sectionNode, "abs_path", mboxDir.absolutePath) addStringNode(xmlDom, sectionNode, "collect_mode", mboxDir.collectMode) addStringNode(xmlDom, sectionNode, "compress_mode", mboxDir.compressMode) if ((mboxDir.relativeExcludePaths is not None and mboxDir.relativeExcludePaths != []) or (mboxDir.excludePatterns is not None and mboxDir.excludePatterns != [])): excludeNode = addContainerNode(xmlDom, sectionNode, "exclude") if mboxDir.relativeExcludePaths is not None: for relativePath in mboxDir.relativeExcludePaths: addStringNode(xmlDom, excludeNode, "rel_path", relativePath) if mboxDir.excludePatterns is not None: for pattern in mboxDir.excludePatterns: addStringNode(xmlDom, excludeNode, "pattern", pattern) ######################################################################## # Public functions ######################################################################## ########################### # executeAction() function ########################### def executeAction(configPath, options, config): """ Executes the mbox backup action. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. @raise ValueError: Under many generic error conditions @raise IOError: If a backup could not be written for some reason. 
""" logger.debug("Executing mbox extended action.") newRevision = datetime.datetime.today() # mark here so all actions are after this date/time if config.options is None or config.collect is None: raise ValueError("Cedar Backup configuration is not properly filled in.") local = LocalConfig(xmlPath=configPath) todayIsStart = isStartOfWeek(config.options.startingDay) fullBackup = options.full or todayIsStart logger.debug("Full backup flag is [%s]", fullBackup) if local.mbox.mboxFiles is not None: for mboxFile in local.mbox.mboxFiles: logger.debug("Working with mbox file [%s]", mboxFile.absolutePath) collectMode = _getCollectMode(local, mboxFile) compressMode = _getCompressMode(local, mboxFile) lastRevision = _loadLastRevision(config, mboxFile, fullBackup, collectMode) if fullBackup or (collectMode in ['daily', 'incr', ]) or (collectMode == 'weekly' and todayIsStart): logger.debug("Mbox file meets criteria to be backed up today.") _backupMboxFile(config, mboxFile.absolutePath, fullBackup, collectMode, compressMode, lastRevision, newRevision) else: logger.debug("Mbox file will not be backed up, per collect mode.") if collectMode == 'incr': _writeNewRevision(config, mboxFile, newRevision) if local.mbox.mboxDirs is not None: for mboxDir in local.mbox.mboxDirs: logger.debug("Working with mbox directory [%s]", mboxDir.absolutePath) collectMode = _getCollectMode(local, mboxDir) compressMode = _getCompressMode(local, mboxDir) lastRevision = _loadLastRevision(config, mboxDir, fullBackup, collectMode) (excludePaths, excludePatterns) = _getExclusions(mboxDir) if fullBackup or (collectMode in ['daily', 'incr', ]) or (collectMode == 'weekly' and todayIsStart): logger.debug("Mbox directory meets criteria to be backed up today.") _backupMboxDir(config, mboxDir.absolutePath, fullBackup, collectMode, compressMode, lastRevision, newRevision, excludePaths, excludePatterns) else: logger.debug("Mbox directory will not be backed up, per collect mode.") if collectMode == 'incr': 
_writeNewRevision(config, mboxDir, newRevision) logger.info("Executed the mbox extended action successfully.") def _getCollectMode(local, item): """ Gets the collect mode that should be used for an mbox file or directory. Use file- or directory-specific value if possible, otherwise take from mbox section. @param local: LocalConfig object. @param item: Mbox file or directory @return: Collect mode to use. """ if item.collectMode is None: collectMode = local.mbox.collectMode else: collectMode = item.collectMode logger.debug("Collect mode is [%s]", collectMode) return collectMode def _getCompressMode(local, item): """ Gets the compress mode that should be used for an mbox file or directory. Use file- or directory-specific value if possible, otherwise take from mbox section. @param local: LocalConfig object. @param item: Mbox file or directory @return: Compress mode to use. """ if item.compressMode is None: compressMode = local.mbox.compressMode else: compressMode = item.compressMode logger.debug("Compress mode is [%s]", compressMode) return compressMode def _getRevisionPath(config, item): """ Gets the path to the revision file associated with a repository. @param config: Cedar Backup configuration. @param item: Mbox file or directory @return: Absolute path to the revision file associated with the repository. """ normalized = buildNormalizedPath(item.absolutePath) filename = "%s.%s" % (normalized, REVISION_PATH_EXTENSION) revisionPath = os.path.join(config.options.workingDir, filename) logger.debug("Revision file path is [%s]", revisionPath) return revisionPath def _loadLastRevision(config, item, fullBackup, collectMode): """ Loads the last revision date for this item from disk and returns it. If this is a full backup, or if the revision file cannot be loaded for some reason, then C{None} is returned. This indicates that there is no previous revision, so the entire mail file or directory should be backed up. 
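The revision handling described above persists the datetime object with pickle protocol 0 and `fix_imports=True` (for Python 2 compatibility), and treats a missing or unreadable revision file as "no previous revision". A minimal stand-alone sketch of that round trip, using a temporary directory in place of the configured working directory (the file name here is a hypothetical normalized path):

```python
import datetime
import os
import pickle
import tempfile

def write_revision(revision_path, revision):
    """Persist a revision datetime with protocol 0, as the extension does."""
    with open(revision_path, "wb") as f:
        pickle.dump(revision, f, 0, fix_imports=True)  # be compatible with Python 2

def load_revision(revision_path):
    """Load the last revision, or None if the file is missing or unreadable."""
    if not os.path.isfile(revision_path):
        return None
    try:
        with open(revision_path, "rb") as f:
            return pickle.load(f, fix_imports=True)
    except Exception:
        return None

workdir = tempfile.mkdtemp()
path = os.path.join(workdir, "home-user-mail.mboxlast")  # hypothetical name
new_revision = datetime.datetime(2016, 2, 13, 4, 30, 0)
write_revision(path, new_revision)
print(load_revision(path))  # the same datetime comes back
```

Because the whole object is pickled, no datetime precision or format decisions are needed: whatever was written is what comes back.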
@note: We write the actual revision object to disk via pickle, so we don't deal with the datetime precision or format at all. Whatever's in the object is what we write. @param config: Cedar Backup configuration. @param item: Mbox file or directory @param fullBackup: Indicates whether this is a full backup @param collectMode: Indicates the collect mode for this item @return: Revision date as a datetime.datetime object or C{None}. """ revisionPath = _getRevisionPath(config, item) if fullBackup: revisionDate = None logger.debug("Revision file ignored because this is a full backup.") elif collectMode in ['weekly', 'daily']: revisionDate = None logger.debug("No revision file based on collect mode [%s].", collectMode) else: logger.debug("Revision file will be used for non-full incremental backup.") if not os.path.isfile(revisionPath): revisionDate = None logger.debug("Revision file [%s] does not exist on disk.", revisionPath) else: try: with open(revisionPath, "rb") as f: revisionDate = pickle.load(f, fix_imports=True) # be compatible with Python 2 logger.debug("Loaded revision file [%s] from disk: [%s]", revisionPath, revisionDate) except Exception as e: revisionDate = None logger.error("Failed loading revision file [%s] from disk: %s", revisionPath, e) return revisionDate def _writeNewRevision(config, item, newRevision): """ Writes new revision information to disk. If we can't write the revision file successfully for any reason, we'll log the condition but won't throw an exception. @note: We write the actual revision object to disk via pickle, so we don't deal with the datetime precision or format at all. Whatever's in the object is what we write. @param config: Cedar Backup configuration. @param item: Mbox file or directory @param newRevision: Revision date as a datetime.datetime object. 
""" revisionPath = _getRevisionPath(config, item) try: with open(revisionPath, "wb") as f: pickle.dump(newRevision, f, 0, fix_imports=True) # be compatible with Python 2 changeOwnership(revisionPath, config.options.backupUser, config.options.backupGroup) logger.debug("Wrote new revision file [%s] to disk: [%s]", revisionPath, newRevision) except Exception as e: logger.error("Failed to write revision file [%s] to disk: %s", revisionPath, e) def _getExclusions(mboxDir): """ Gets exclusions (file and patterns) associated with an mbox directory. The returned files value is a list of absolute paths to be excluded from the backup for a given directory. It is derived from the mbox directory's relative exclude paths. The returned patterns value is a list of patterns to be excluded from the backup for a given directory. It is derived from the mbox directory's list of patterns. @param mboxDir: Mbox directory object. @return: Tuple (files, patterns) indicating what to exclude. """ paths = [] if mboxDir.relativeExcludePaths is not None: for relativePath in mboxDir.relativeExcludePaths: paths.append(os.path.join(mboxDir.absolutePath, relativePath)) patterns = [] if mboxDir.excludePatterns is not None: patterns.extend(mboxDir.excludePatterns) logger.debug("Exclude paths: %s", paths) logger.debug("Exclude patterns: %s", patterns) return(paths, patterns) def _getBackupPath(config, mboxPath, compressMode, newRevision, targetDir=None): """ Gets the backup file path (including correct extension) associated with an mbox path. We assume that if the target directory is passed in, that we're backing up a directory. Under these circumstances, we'll just use the basename of the individual path as the output file. @note: The backup path only contains the current date in YYYYMMDD format, but that's OK because the index information (stored elsewhere) is the actual date object. @param config: Cedar Backup configuration. 
@param mboxPath: Path to the indicated mbox file or directory @param compressMode: Compress mode to use for this mbox path @param newRevision: Revision this backup path represents @param targetDir: Target directory in which the path should exist @return: Absolute path to the backup file associated with the repository. """ if targetDir is None: normalizedPath = buildNormalizedPath(mboxPath) revisionDate = newRevision.strftime("%Y%m%d") filename = "mbox-%s-%s" % (revisionDate, normalizedPath) else: filename = os.path.basename(mboxPath) if compressMode == 'gzip': filename = "%s.gz" % filename elif compressMode == 'bzip2': filename = "%s.bz2" % filename if targetDir is None: backupPath = os.path.join(config.collect.targetDir, filename) else: backupPath = os.path.join(targetDir, filename) logger.debug("Backup file path is [%s]", backupPath) return backupPath def _getTarfilePath(config, mboxPath, compressMode, newRevision): """ Gets the tarfile backup file path (including correct extension) associated with an mbox path. Along with the path, the tar archive mode is returned in a form that can be used with L{BackupFileList.generateTarfile}. @note: The tarfile path only contains the current date in YYYYMMDD format, but that's OK because the index information (stored elsewhere) is the actual date object. @param config: Cedar Backup configuration. 
@param mboxPath: Path to the indicated mbox file or directory @param compressMode: Compress mode to use for this mbox path @param newRevision: Revision this backup path represents @return: Tuple of (absolute path to tarfile, tar archive mode) """ normalizedPath = buildNormalizedPath(mboxPath) revisionDate = newRevision.strftime("%Y%m%d") filename = "mbox-%s-%s.tar" % (revisionDate, normalizedPath) if compressMode == 'gzip': filename = "%s.gz" % filename archiveMode = "targz" elif compressMode == 'bzip2': filename = "%s.bz2" % filename archiveMode = "tarbz2" else: archiveMode = "tar" tarfilePath = os.path.join(config.collect.targetDir, filename) logger.debug("Tarfile path is [%s]", tarfilePath) return (tarfilePath, archiveMode) def _getOutputFile(backupPath, compressMode): """ Opens the output file used for saving backup information. If the compress mode is "gzip", we'll open a C{GzipFile}, and if the compress mode is "bzip2", we'll open a C{BZ2File}. Otherwise, we'll just return an object from the normal C{open()} method. @param backupPath: Path to file to open. @param compressMode: Compress mode of file ("none", "gzip", "bzip"). @return: Output file object, opened in binary mode for use with executeCommand() """ if compressMode == "gzip": return GzipFile(backupPath, "wb") elif compressMode == "bzip2": return BZ2File(backupPath, "wb") else: return open(backupPath, "wb") def _backupMboxFile(config, absolutePath, fullBackup, collectMode, compressMode, lastRevision, newRevision, targetDir=None): """ Backs up an individual mbox file. @param config: Cedar Backup configuration. @param absolutePath: Path to mbox file to back up. @param fullBackup: Indicates whether this should be a full backup. 
@param collectMode: Indicates the collect mode for this item @param compressMode: Compress mode of file ("none", "gzip", "bzip") @param lastRevision: Date of last backup as datetime.datetime @param newRevision: Date of new (current) backup as datetime.datetime @param targetDir: Target directory to write the backed-up file into @raise ValueError: If some value is missing or invalid. @raise IOError: If there is a problem backing up the mbox file. """ if fullBackup or collectMode != "incr" or lastRevision is None: args = [ "-a", "-u", absolutePath, ] # remove duplicates but fetch entire mailbox else: revisionDate = lastRevision.strftime("%Y-%m-%dT%H:%M:%S") # ISO-8601 format; grepmail calls Date::Parse::str2time() args = [ "-a", "-u", "-d", "since %s" % revisionDate, absolutePath, ] command = resolveCommand(GREPMAIL_COMMAND) backupPath = _getBackupPath(config, absolutePath, compressMode, newRevision, targetDir=targetDir) with _getOutputFile(backupPath, compressMode) as outputFile: result = executeCommand(command, args, returnOutput=False, ignoreStderr=True, doNotLog=True, outputFile=outputFile)[0] if result != 0: raise IOError("Error [%d] executing grepmail on [%s]." % (result, absolutePath)) logger.debug("Completed backing up mailbox [%s].", absolutePath) return backupPath def _backupMboxDir(config, absolutePath, fullBackup, collectMode, compressMode, lastRevision, newRevision, excludePaths, excludePatterns): """ Backs up a directory containing mbox files. @param config: Cedar Backup configuration. @param absolutePath: Path to mbox directory to back up. @param fullBackup: Indicates whether this should be a full backup. @param collectMode: Indicates the collect mode for this item @param compressMode: Compress mode of file ("none", "gzip", "bzip") @param lastRevision: Date of last backup as datetime.datetime @param newRevision: Date of new (current) backup as datetime.datetime @param excludePaths: List of absolute paths to exclude. 
   @param excludePatterns: List of patterns to exclude.

   @raise ValueError: If some value is missing or invalid.
   @raise IOError: If there is a problem backing up the mbox directory.
   """
   try:
      tmpdir = tempfile.mkdtemp(dir=config.options.workingDir)
      mboxList = FilesystemList()
      mboxList.excludeDirs = True
      mboxList.excludePaths = excludePaths
      mboxList.excludePatterns = excludePatterns
      mboxList.addDirContents(absolutePath, recursive=False)
      tarList = BackupFileList()
      for item in mboxList:
         backupPath = _backupMboxFile(config, item, fullBackup, collectMode,
                                      "none",  # no need to compress inside compressed tar
                                      lastRevision, newRevision, targetDir=tmpdir)
         tarList.addFile(backupPath)
      (tarfilePath, archiveMode) = _getTarfilePath(config, absolutePath, compressMode, newRevision)
      tarList.generateTarfile(tarfilePath, archiveMode, ignore=True, flat=True)
      changeOwnership(tarfilePath, config.options.backupUser, config.options.backupGroup)
      logger.debug("Completed backing up directory [%s].", absolutePath)
   finally:
      try:
         for cleanitem in tarList:
            if os.path.exists(cleanitem):
               try:
                  os.remove(cleanitem)
               except: pass
      except: pass
      try:
         os.rmdir(tmpdir)
      except: pass

CedarBackup3-3.1.6/CedarBackup3/extend/sysinfo.py

# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Copyright (c) 2005,2010,2015 Kenneth J. Pronovici.
# All rights reserved.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
# # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Official Cedar Backup Extensions # Purpose : Provides an extension to save off important system recovery information. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides an extension to save off important system recovery information. This is a simple Cedar Backup extension used to save off important system recovery information. It saves off three types of information: - Currently-installed Debian packages via C{dpkg --get-selections} - Disk partition information via C{fdisk -l} - System-wide mounted filesystem contents, via C{ls -laR} The saved-off information is placed into the collect directory and is compressed using C{bzip2} to save space. This extension relies on the options and collect configurations in the standard Cedar Backup configuration file, but requires no new configuration of its own. No public functions other than the action are exposed since all of this is pretty simple. @note: If the C{dpkg} or C{fdisk} commands cannot be found in their normal locations or executed by the current user, those steps will be skipped and a note will be logged at the INFO level. @author: Kenneth J. 
Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import logging from bz2 import BZ2File # Cedar Backup modules from CedarBackup3.util import resolveCommand, executeCommand, changeOwnership ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup3.log.extend.sysinfo") DPKG_PATH = "/usr/bin/dpkg" FDISK_PATH = "/sbin/fdisk" DPKG_COMMAND = [ DPKG_PATH, "--get-selections", ] FDISK_COMMAND = [ FDISK_PATH, "-l", ] LS_COMMAND = [ "ls", "-laR", "/", ] ######################################################################## # Public functions ######################################################################## ########################### # executeAction() function ########################### def executeAction(configPath, options, config): """ Executes the sysinfo backup action. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. @raise ValueError: Under many generic error conditions @raise IOError: If the backup process fails for some reason. 
""" logger.debug("Executing sysinfo extended action.") if config.options is None or config.collect is None: raise ValueError("Cedar Backup configuration is not properly filled in.") _dumpDebianPackages(config.collect.targetDir, config.options.backupUser, config.options.backupGroup) _dumpPartitionTable(config.collect.targetDir, config.options.backupUser, config.options.backupGroup) _dumpFilesystemContents(config.collect.targetDir, config.options.backupUser, config.options.backupGroup) logger.info("Executed the sysinfo extended action successfully.") def _dumpDebianPackages(targetDir, backupUser, backupGroup, compress=True): """ Dumps a list of currently installed Debian packages via C{dpkg}. @param targetDir: Directory to write output file into. @param backupUser: User which should own the resulting file. @param backupGroup: Group which should own the resulting file. @param compress: Indicates whether to compress the output file. @raise IOError: If the dump fails for some reason. """ if not os.path.exists(DPKG_PATH): logger.info("Not executing Debian package dump since %s doesn't seem to exist.", DPKG_PATH) elif not os.access(DPKG_PATH, os.X_OK): logger.info("Not executing Debian package dump since %s cannot be executed.", DPKG_PATH) else: (outputFile, filename) = _getOutputFile(targetDir, "dpkg-selections", compress) with outputFile: command = resolveCommand(DPKG_COMMAND) result = executeCommand(command, [], returnOutput=False, ignoreStderr=True, doNotLog=True, outputFile=outputFile)[0] if result != 0: raise IOError("Error [%d] executing Debian package dump." % result) if not os.path.exists(filename): raise IOError("File [%s] does not seem to exist after Debian package dump finished." % filename) changeOwnership(filename, backupUser, backupGroup) def _dumpPartitionTable(targetDir, backupUser, backupGroup, compress=True): """ Dumps information about the partition table via C{fdisk}. @param targetDir: Directory to write output file into. 
@param backupUser: User which should own the resulting file. @param backupGroup: Group which should own the resulting file. @param compress: Indicates whether to compress the output file. @raise IOError: If the dump fails for some reason. """ if not os.path.exists(FDISK_PATH): logger.info("Not executing partition table dump since %s doesn't seem to exist.", FDISK_PATH) elif not os.access(FDISK_PATH, os.X_OK): logger.info("Not executing partition table dump since %s cannot be executed.", FDISK_PATH) else: (outputFile, filename) = _getOutputFile(targetDir, "fdisk-l", compress) with outputFile: command = resolveCommand(FDISK_COMMAND) result = executeCommand(command, [], returnOutput=False, ignoreStderr=True, outputFile=outputFile)[0] if result != 0: raise IOError("Error [%d] executing partition table dump." % result) if not os.path.exists(filename): raise IOError("File [%s] does not seem to exist after partition table dump finished." % filename) changeOwnership(filename, backupUser, backupGroup) def _dumpFilesystemContents(targetDir, backupUser, backupGroup, compress=True): """ Dumps complete listing of filesystem contents via C{ls -laR}. @param targetDir: Directory to write output file into. @param backupUser: User which should own the resulting file. @param backupGroup: Group which should own the resulting file. @param compress: Indicates whether to compress the output file. @raise IOError: If the dump fails for some reason. """ (outputFile, filename) = _getOutputFile(targetDir, "ls-laR", compress) with outputFile: # Note: can't count on return status from 'ls', so we don't check it. command = resolveCommand(LS_COMMAND) executeCommand(command, [], returnOutput=False, ignoreStderr=True, doNotLog=True, outputFile=outputFile) if not os.path.exists(filename): raise IOError("File [%s] does not seem to exist after filesystem contents dump finished." 
                    % filename)
   changeOwnership(filename, backupUser, backupGroup)

def _getOutputFile(targetDir, name, compress=True):
   """
   Opens the output file used for saving a dump to the filesystem.

   The filename will be C{name.txt} (or C{name.txt.bz2} if C{compress} is
   C{True}), written in the target directory.

   @param targetDir: Target directory to write file in.
   @param name: Name of the file to create.
   @param compress: Indicates whether to write compressed output.

   @return: Tuple of (output file object, filename), file opened in binary
   mode for use with executeCommand()
   """
   filename = os.path.join(targetDir, "%s.txt" % name)
   if compress:
      filename = "%s.bz2" % filename
   logger.debug("Dump file will be [%s].", filename)
   if compress:
      outputFile = BZ2File(filename, "wb")
   else:
      outputFile = open(filename, "wb")
   return (outputFile, filename)

CedarBackup3-3.1.6/CedarBackup3/extend/subversion.py

# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Copyright (c) 2005,2007,2010,2015 Kenneth J. Pronovici.
# All rights reserved.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Language : Python 3 (>= 3.4)
# Project  : Official Cedar Backup Extensions
# Purpose  : Provides an extension to back up Subversion repositories.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Provides an extension to back up Subversion repositories.

This is a Cedar Backup extension used to back up Subversion repositories via
the Cedar Backup command line.  Each Subversion repository can be backed up
using the same collect modes allowed for filesystems in the standard Cedar
Backup collect action: weekly, daily, incremental.

This extension requires a new configuration section and is intended to be run
either immediately before or immediately after the standard collect action.
Aside from its own configuration, it requires the options and collect
configuration sections in the standard Cedar Backup configuration file.

There are two different kinds of Subversion repositories at this writing: BDB
(Berkeley Database) and FSFS (a "filesystem within a filesystem").  Although
the repository type can be specified in configuration, that information is
just kept around for reference.  It doesn't affect the backup.  Both kinds of
repositories are backed up in the same way, using C{svnadmin dump} in an
incremental mode.

It turns out that FSFS repositories can also be backed up just like any other
filesystem directory.  If you would rather do that, then use the normal
collect action.  This is probably simpler, although it carries its own
advantages and disadvantages (plus you will have to be careful to exclude the
working directories Subversion uses when building an update to commit).
Check the Subversion documentation for more information.

@author: Kenneth J.
Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import logging import pickle from bz2 import BZ2File from gzip import GzipFile from functools import total_ordering # Cedar Backup modules from CedarBackup3.xmlutil import createInputDom, addContainerNode, addStringNode from CedarBackup3.xmlutil import isElement, readChildren, readFirstChild, readString, readStringList from CedarBackup3.config import VALID_COLLECT_MODES, VALID_COMPRESS_MODES from CedarBackup3.filesystem import FilesystemList from CedarBackup3.util import UnorderedList, RegexList from CedarBackup3.util import isStartOfWeek, buildNormalizedPath from CedarBackup3.util import resolveCommand, executeCommand from CedarBackup3.util import ObjectTypeList, encodePath, changeOwnership ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup3.log.extend.subversion") SVNLOOK_COMMAND = [ "svnlook", ] SVNADMIN_COMMAND = [ "svnadmin", ] REVISION_PATH_EXTENSION = "svnlast" ######################################################################## # RepositoryDir class definition ######################################################################## @total_ordering class RepositoryDir(object): """ Class representing Subversion repository directory. A repository directory is a directory that contains one or more Subversion repositories. The following restrictions exist on data in this class: - The directory path must be absolute. - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. The repository type value is kept around just for reference. It doesn't affect the behavior of the backup. 
Relative exclusions are allowed here. However, there is no configured ignore file, because repository dir backups are not recursive. @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, directoryPath, collectMode, compressMode """ def __init__(self, repositoryType=None, directoryPath=None, collectMode=None, compressMode=None, relativeExcludePaths=None, excludePatterns=None): """ Constructor for the C{RepositoryDir} class. @param repositoryType: Type of repository, for reference @param directoryPath: Absolute path of the Subversion parent directory @param collectMode: Overridden collect mode for this directory. @param compressMode: Overridden compression mode for this directory. @param relativeExcludePaths: List of relative paths to exclude. @param excludePatterns: List of regular expression patterns to exclude """ self._repositoryType = None self._directoryPath = None self._collectMode = None self._compressMode = None self._relativeExcludePaths = None self._excludePatterns = None self.repositoryType = repositoryType self.directoryPath = directoryPath self.collectMode = collectMode self.compressMode = compressMode self.relativeExcludePaths = relativeExcludePaths self.excludePatterns = excludePatterns def __repr__(self): """ Official string representation for class instance. """ return "RepositoryDir(%s, %s, %s, %s, %s, %s)" % (self.repositoryType, self.directoryPath, self.collectMode, self.compressMode, self.relativeExcludePaths, self.excludePatterns) def __str__(self): """ Informal string representation for class instance. 
""" return self.__repr__() def __eq__(self, other): """Equals operator, iplemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, iplemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, iplemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.repositoryType != other.repositoryType: if str(self.repositoryType or "") < str(other.repositoryType or ""): return -1 else: return 1 if self.directoryPath != other.directoryPath: if str(self.directoryPath or "") < str(other.directoryPath or ""): return -1 else: return 1 if self.collectMode != other.collectMode: if str(self.collectMode or "") < str(other.collectMode or ""): return -1 else: return 1 if self.compressMode != other.compressMode: if str(self.compressMode or "") < str(other.compressMode or ""): return -1 else: return 1 if self.relativeExcludePaths != other.relativeExcludePaths: if self.relativeExcludePaths < other.relativeExcludePaths: return -1 else: return 1 if self.excludePatterns != other.excludePatterns: if self.excludePatterns < other.excludePatterns: return -1 else: return 1 return 0 def _setRepositoryType(self, value): """ Property target used to set the repository type. There is no validation; this value is kept around just for reference. """ self._repositoryType = value def _getRepositoryType(self): """ Property target used to get the repository type. """ return self._repositoryType def _setDirectoryPath(self, value): """ Property target used to set the directory path. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. 
@raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Repository path must be an absolute path.") self._directoryPath = encodePath(value) def _getDirectoryPath(self): """ Property target used to get the repository path. """ return self._directoryPath def _setCollectMode(self, value): """ Property target used to set the collect mode. If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COLLECT_MODES: raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES) self._collectMode = value def _getCollectMode(self): """ Property target used to get the collect mode. """ return self._collectMode def _setCompressMode(self, value): """ Property target used to set the compress mode. If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COMPRESS_MODES: raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) self._compressMode = value def _getCompressMode(self): """ Property target used to get the compress mode. """ return self._compressMode def _setRelativeExcludePaths(self, value): """ Property target used to set the relative exclude paths list. Elements do not have to exist on disk at the time of assignment. """ if value is None: self._relativeExcludePaths = None else: try: saved = self._relativeExcludePaths self._relativeExcludePaths = UnorderedList() self._relativeExcludePaths.extend(value) except Exception as e: self._relativeExcludePaths = saved raise e def _getRelativeExcludePaths(self): """ Property target used to get the relative exclude paths list. 
""" return self._relativeExcludePaths def _setExcludePatterns(self, value): """ Property target used to set the exclude patterns list. """ if value is None: self._excludePatterns = None else: try: saved = self._excludePatterns self._excludePatterns = RegexList() self._excludePatterns.extend(value) except Exception as e: self._excludePatterns = saved raise e def _getExcludePatterns(self): """ Property target used to get the exclude patterns list. """ return self._excludePatterns repositoryType = property(_getRepositoryType, _setRepositoryType, None, doc="Type of this repository, for reference.") directoryPath = property(_getDirectoryPath, _setDirectoryPath, None, doc="Absolute path of the Subversion parent directory.") collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this repository.") compressMode = property(_getCompressMode, _setCompressMode, None, doc="Overridden compress mode for this repository.") relativeExcludePaths = property(_getRelativeExcludePaths, _setRelativeExcludePaths, None, "List of relative paths to exclude.") excludePatterns = property(_getExcludePatterns, _setExcludePatterns, None, "List of regular expression patterns to exclude.") ######################################################################## # Repository class definition ######################################################################## @total_ordering class Repository(object): """ Class representing generic Subversion repository configuration.. The following restrictions exist on data in this class: - The respository path must be absolute. - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. The repository type value is kept around just for reference. It doesn't affect the behavior of the backup. 
   @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__,
          repositoryPath, collectMode, compressMode
   """

   def __init__(self, repositoryType=None, repositoryPath=None, collectMode=None, compressMode=None):
      """
      Constructor for the C{Repository} class.

      @param repositoryType: Type of repository, for reference
      @param repositoryPath: Absolute path to a Subversion repository on disk.
      @param collectMode: Overridden collect mode for this directory.
      @param compressMode: Overridden compression mode for this directory.
      """
      self._repositoryType = None
      self._repositoryPath = None
      self._collectMode = None
      self._compressMode = None
      self.repositoryType = repositoryType
      self.repositoryPath = repositoryPath
      self.collectMode = collectMode
      self.compressMode = compressMode

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "Repository(%s, %s, %s, %s)" % (self.repositoryType, self.repositoryPath, self.collectMode, self.compressMode)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __eq__(self, other):
      """Equals operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) == 0

   def __lt__(self, other):
      """Less-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) < 0

   def __gt__(self, other):
      """Greater-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) > 0

   def __cmp__(self, other):
      """
      Original Python 2 comparison operator.

      @param other: Other object to compare to.

      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
""" if other is None: return 1 if self.repositoryType != other.repositoryType: if str(self.repositoryType or "") < str(other.repositoryType or ""): return -1 else: return 1 if self.repositoryPath != other.repositoryPath: if str(self.repositoryPath or "") < str(other.repositoryPath or ""): return -1 else: return 1 if self.collectMode != other.collectMode: if str(self.collectMode or "") < str(other.collectMode or ""): return -1 else: return 1 if self.compressMode != other.compressMode: if str(self.compressMode or "") < str(other.compressMode or ""): return -1 else: return 1 return 0 def _setRepositoryType(self, value): """ Property target used to set the repository type. There is no validation; this value is kept around just for reference. """ self._repositoryType = value def _getRepositoryType(self): """ Property target used to get the repository type. """ return self._repositoryType def _setRepositoryPath(self, value): """ Property target used to set the repository path. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Repository path must be an absolute path.") self._repositoryPath = encodePath(value) def _getRepositoryPath(self): """ Property target used to get the repository path. """ return self._repositoryPath def _setCollectMode(self, value): """ Property target used to set the collect mode. If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COLLECT_MODES: raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES) self._collectMode = value def _getCollectMode(self): """ Property target used to get the collect mode. 
""" return self._collectMode def _setCompressMode(self, value): """ Property target used to set the compress mode. If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COMPRESS_MODES: raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) self._compressMode = value def _getCompressMode(self): """ Property target used to get the compress mode. """ return self._compressMode repositoryType = property(_getRepositoryType, _setRepositoryType, None, doc="Type of this repository, for reference.") repositoryPath = property(_getRepositoryPath, _setRepositoryPath, None, doc="Path to the repository to collect.") collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this repository.") compressMode = property(_getCompressMode, _setCompressMode, None, doc="Overridden compress mode for this repository.") ######################################################################## # SubversionConfig class definition ######################################################################## @total_ordering class SubversionConfig(object): """ Class representing Subversion configuration. Subversion configuration is used for backing up Subversion repositories. The following restrictions exist on data in this class: - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. - The repositories list must be a list of C{Repository} objects. - The repositoryDirs list must be a list of C{RepositoryDir} objects. For the two lists, validation is accomplished through the L{util.ObjectTypeList} list implementation that overrides common list methods and transparently ensures that each element has the correct type. @note: Lists within this class are "unordered" for equality comparisons. 
   @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__,
          collectMode, compressMode, repositories
   """

   def __init__(self, collectMode=None, compressMode=None, repositories=None, repositoryDirs=None):
      """
      Constructor for the C{SubversionConfig} class.

      @param collectMode: Default collect mode.
      @param compressMode: Default compress mode.
      @param repositories: List of Subversion repositories to back up.
      @param repositoryDirs: List of Subversion parent directories to back up.

      @raise ValueError: If one of the values is invalid.
      """
      self._collectMode = None
      self._compressMode = None
      self._repositories = None
      self._repositoryDirs = None
      self.collectMode = collectMode
      self.compressMode = compressMode
      self.repositories = repositories
      self.repositoryDirs = repositoryDirs

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "SubversionConfig(%s, %s, %s, %s)" % (self.collectMode, self.compressMode, self.repositories, self.repositoryDirs)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __eq__(self, other):
      """Equals operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) == 0

   def __lt__(self, other):
      """Less-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) < 0

   def __gt__(self, other):
      """Greater-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) > 0

   def __cmp__(self, other):
      """
      Original Python 2 comparison operator.

      Lists within this class are "unordered" for equality comparisons.

      @param other: Other object to compare to.

      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
""" if other is None: return 1 if self.collectMode != other.collectMode: if str(self.collectMode or "") < str(other.collectMode or ""): return -1 else: return 1 if self.compressMode != other.compressMode: if str(self.compressMode or "") < str(other.compressMode or ""): return -1 else: return 1 if self.repositories != other.repositories: if self.repositories < other.repositories: return -1 else: return 1 if self.repositoryDirs != other.repositoryDirs: if self.repositoryDirs < other.repositoryDirs: return -1 else: return 1 return 0 def _setCollectMode(self, value): """ Property target used to set the collect mode. If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COLLECT_MODES: raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES) self._collectMode = value def _getCollectMode(self): """ Property target used to get the collect mode. """ return self._collectMode def _setCompressMode(self, value): """ Property target used to set the compress mode. If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COMPRESS_MODES: raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) self._compressMode = value def _getCompressMode(self): """ Property target used to get the compress mode. """ return self._compressMode def _setRepositories(self, value): """ Property target used to set the repositories list. Either the value must be C{None} or each element must be a C{Repository}. 
@raise ValueError: If the value is not a C{Repository} """ if value is None: self._repositories = None else: try: saved = self._repositories self._repositories = ObjectTypeList(Repository, "Repository") self._repositories.extend(value) except Exception as e: self._repositories = saved raise e def _getRepositories(self): """ Property target used to get the repositories list. """ return self._repositories def _setRepositoryDirs(self, value): """ Property target used to set the repositoryDirs list. Either the value must be C{None} or each element must be a C{Repository}. @raise ValueError: If the value is not a C{Repository} """ if value is None: self._repositoryDirs = None else: try: saved = self._repositoryDirs self._repositoryDirs = ObjectTypeList(RepositoryDir, "RepositoryDir") self._repositoryDirs.extend(value) except Exception as e: self._repositoryDirs = saved raise e def _getRepositoryDirs(self): """ Property target used to get the repositoryDirs list. """ return self._repositoryDirs collectMode = property(_getCollectMode, _setCollectMode, None, doc="Default collect mode.") compressMode = property(_getCompressMode, _setCompressMode, None, doc="Default compress mode.") repositories = property(_getRepositories, _setRepositories, None, doc="List of Subversion repositories to back up.") repositoryDirs = property(_getRepositoryDirs, _setRepositoryDirs, None, doc="List of Subversion parent directories to back up.") ######################################################################## # LocalConfig class definition ######################################################################## @total_ordering class LocalConfig(object): """ Class representing this extension's configuration document. This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit Subversion-specific configuration values. 
Third parties who need to read and write configuration related to this extension should access it through the constructor, C{validate} and C{addConfig} methods. @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, subversion, validate, addConfig """ def __init__(self, xmlData=None, xmlPath=None, validate=True): """ Initializes a configuration object. If you initialize the object without passing either C{xmlData} or C{xmlPath} then configuration will be empty and will be invalid until it is filled in properly. No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded. Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if C{validate} is C{False}, it might not be possible to parse the passed-in XML document if lower-level validations fail. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to read in invalid configuration from disk. @param xmlData: XML data representing configuration. @type xmlData: String data. @param xmlPath: Path to an XML file on disk. @type xmlPath: Absolute path to a file on disk. @param validate: Validate the document after parsing it. @type validate: Boolean true/false. @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. @raise ValueError: If the parsed configuration document is not valid. 
""" self._subversion = None self.subversion = None if xmlData is not None and xmlPath is not None: raise ValueError("Use either xmlData or xmlPath, but not both.") if xmlData is not None: self._parseXmlData(xmlData) if validate: self.validate() elif xmlPath is not None: with open(xmlPath) as f: xmlData = f.read() self._parseXmlData(xmlData) if validate: self.validate() def __repr__(self): """ Official string representation for class instance. """ return "LocalConfig(%s)" % (self.subversion) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __eq__(self, other): """Equals operator, iplemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, iplemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, iplemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.subversion != other.subversion: if self.subversion < other.subversion: return -1 else: return 1 return 0 def _setSubversion(self, value): """ Property target used to set the subversion configuration value. If not C{None}, the value must be a C{SubversionConfig} object. @raise ValueError: If the value is not a C{SubversionConfig} """ if value is None: self._subversion = None else: if not isinstance(value, SubversionConfig): raise ValueError("Value must be a C{SubversionConfig} object.") self._subversion = value def _getSubversion(self): """ Property target used to get the subversion configuration value. 
""" return self._subversion subversion = property(_getSubversion, _setSubversion, None, "Subversion configuration in terms of a C{SubversionConfig} object.") def validate(self): """ Validates configuration represented by the object. Subversion configuration must be filled in. Within that, the collect mode and compress mode are both optional, but the list of repositories must contain at least one entry. Each repository must contain a repository path, and then must be either able to take collect mode and compress mode configuration from the parent C{SubversionConfig} object, or must set each value on its own. @raise ValueError: If one of the validations fails. """ if self.subversion is None: raise ValueError("Subversion section is required.") if ((self.subversion.repositories is None or len(self.subversion.repositories) < 1) and (self.subversion.repositoryDirs is None or len(self.subversion.repositoryDirs) <1)): raise ValueError("At least one Subversion repository must be configured.") if self.subversion.repositories is not None: for repository in self.subversion.repositories: if repository.repositoryPath is None: raise ValueError("Each repository must set a repository path.") if self.subversion.collectMode is None and repository.collectMode is None: raise ValueError("Collect mode must either be set in parent section or individual repository.") if self.subversion.compressMode is None and repository.compressMode is None: raise ValueError("Compress mode must either be set in parent section or individual repository.") if self.subversion.repositoryDirs is not None: for repositoryDir in self.subversion.repositoryDirs: if repositoryDir.directoryPath is None: raise ValueError("Each repository directory must set a directory path.") if self.subversion.collectMode is None and repositoryDir.collectMode is None: raise ValueError("Collect mode must either be set in parent section or repository directory.") if self.subversion.compressMode is None and repositoryDir.compressMode is 
None: raise ValueError("Compress mode must either be set in parent section or repository directory.") def addConfig(self, xmlDom, parentNode): """ Adds a configuration section as the next child of a parent. Third parties should use this function to write configuration related to this extension. We add the following fields to the document:: collectMode //cb_config/subversion/collectMode compressMode //cb_config/subversion/compressMode We also add groups of the following items, one list element per item:: repository //cb_config/subversion/repository repository_dir //cb_config/subversion/repository_dir @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent that the section should be appended to. """ if self.subversion is not None: sectionNode = addContainerNode(xmlDom, parentNode, "subversion") addStringNode(xmlDom, sectionNode, "collect_mode", self.subversion.collectMode) addStringNode(xmlDom, sectionNode, "compress_mode", self.subversion.compressMode) if self.subversion.repositories is not None: for repository in self.subversion.repositories: LocalConfig._addRepository(xmlDom, sectionNode, repository) if self.subversion.repositoryDirs is not None: for repositoryDir in self.subversion.repositoryDirs: LocalConfig._addRepositoryDir(xmlDom, sectionNode, repositoryDir) def _parseXmlData(self, xmlData): """ Internal method to parse an XML string into the object. This method parses the XML document into a DOM tree (C{xmlDom}) and then calls a static method to parse the subversion configuration section. @param xmlData: XML data to be parsed @type xmlData: String data @raise ValueError: If the XML cannot be successfully parsed. """ (xmlDom, parentNode) = createInputDom(xmlData) self._subversion = LocalConfig._parseSubversion(parentNode) @staticmethod def _parseSubversion(parent): """ Parses a subversion configuration section. 
      We read the following individual fields::

         collectMode    //cb_config/subversion/collect_mode
         compressMode   //cb_config/subversion/compress_mode

      We also read groups of the following item, one list element per item::

         repositories      //cb_config/subversion/repository
         repository_dirs   //cb_config/subversion/repository_dir

      The repositories are parsed by L{_parseRepositories}, and the repository
      dirs are parsed by L{_parseRepositoryDirs}.

      @param parent: Parent node to search beneath.

      @return: C{SubversionConfig} object or C{None} if the section does not exist.
      @raise ValueError: If some filled-in value is invalid.
      """
      subversion = None
      section = readFirstChild(parent, "subversion")
      if section is not None:
         subversion = SubversionConfig()
         subversion.collectMode = readString(section, "collect_mode")
         subversion.compressMode = readString(section, "compress_mode")
         subversion.repositories = LocalConfig._parseRepositories(section)
         subversion.repositoryDirs = LocalConfig._parseRepositoryDirs(section)
      return subversion

   @staticmethod
   def _parseRepositories(parent):
      """
      Reads a list of C{Repository} objects from immediately beneath the parent.

      We read the following individual fields::

         repositoryType   type
         repositoryPath   abs_path
         collectMode      collect_mode
         compressMode     compress_mode

      The type field is optional, and its value is kept around only for
      reference.

      @param parent: Parent node to search beneath.

      @return: List of C{Repository} objects or C{None} if none are found.
      @raise ValueError: If some filled-in value is invalid.
""" lst = [] for entry in readChildren(parent, "repository"): if isElement(entry): repository = Repository() repository.repositoryType = readString(entry, "type") repository.repositoryPath = readString(entry, "abs_path") repository.collectMode = readString(entry, "collect_mode") repository.compressMode = readString(entry, "compress_mode") lst.append(repository) if lst == []: lst = None return lst @staticmethod def _addRepository(xmlDom, parentNode, repository): """ Adds a repository container as the next child of a parent. We add the following fields to the document:: repositoryType repository/type repositoryPath repository/abs_path collectMode repository/collect_mode compressMode repository/compress_mode The node itself is created as the next child of the parent node. This method only adds one repository node. The parent must loop for each repository in the C{SubversionConfig} object. If C{repository} is C{None}, this method call will be a no-op. @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent that the section should be appended to. @param repository: Repository to be added to the document. """ if repository is not None: sectionNode = addContainerNode(xmlDom, parentNode, "repository") addStringNode(xmlDom, sectionNode, "type", repository.repositoryType) addStringNode(xmlDom, sectionNode, "abs_path", repository.repositoryPath) addStringNode(xmlDom, sectionNode, "collect_mode", repository.collectMode) addStringNode(xmlDom, sectionNode, "compress_mode", repository.compressMode) @staticmethod def _parseRepositoryDirs(parent): """ Reads a list of C{RepositoryDir} objects from immediately beneath the parent. We read the following individual fields:: repositoryType type directoryPath abs_path collectMode collect_mode compressMode compess_mode We also read groups of the following items, one list element per item:: relativeExcludePaths exclude/rel_path excludePatterns exclude/pattern The exclusions are parsed by L{_parseExclusions}. 
The type field is optional, and its value is kept around only for reference. @param parent: Parent node to search beneath. @return: List of C{RepositoryDir} objects or C{None} if none are found. @raise ValueError: If some filled-in value is invalid. """ lst = [] for entry in readChildren(parent, "repository_dir"): if isElement(entry): repositoryDir = RepositoryDir() repositoryDir.repositoryType = readString(entry, "type") repositoryDir.directoryPath = readString(entry, "abs_path") repositoryDir.collectMode = readString(entry, "collect_mode") repositoryDir.compressMode = readString(entry, "compress_mode") (repositoryDir.relativeExcludePaths, repositoryDir.excludePatterns) = LocalConfig._parseExclusions(entry) lst.append(repositoryDir) if lst == []: lst = None return lst @staticmethod def _parseExclusions(parentNode): """ Reads exclusions data from immediately beneath the parent. We read groups of the following items, one list element per item:: relative exclude/rel_path patterns exclude/pattern If there are none of some pattern (i.e. no relative path items) then C{None} will be returned for that item in the tuple. @param parentNode: Parent node to search beneath. @return: Tuple of (relative, patterns) exclusions. """ section = readFirstChild(parentNode, "exclude") if section is None: return (None, None) else: relative = readStringList(section, "rel_path") patterns = readStringList(section, "pattern") return (relative, patterns) @staticmethod def _addRepositoryDir(xmlDom, parentNode, repositoryDir): """ Adds a repository dir container as the next child of a parent. 
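The XML shape consumed by the parser methods above can be illustrated with the standard library alone. This is a hedged sketch: it uses C{xml.dom.minidom} rather than Cedar Backup's own DOM helpers (C{readFirstChild}, C{readString}, C{readStringList}), and the paths and mode values below are made up for illustration.

```python
# Illustrative sketch of the <subversion> configuration shape read by
# _parseSubversion(), _parseRepositories() and _parseRepositoryDirs().
# Parsed here with the standard library; all values are hypothetical.
from xml.dom.minidom import parseString

xml_data = """<cb_config>
  <subversion>
    <collect_mode>incr</collect_mode>
    <compress_mode>gzip</compress_mode>
    <repository>
      <type>BDB</type>
      <abs_path>/opt/svn/repo1</abs_path>
    </repository>
    <repository_dir>
      <abs_path>/opt/svn/parent</abs_path>
      <exclude>
        <rel_path>scratch</rel_path>
        <pattern>.*[.]bak</pattern>
      </exclude>
    </repository_dir>
  </subversion>
</cb_config>"""

dom = parseString(xml_data)
section = dom.getElementsByTagName("subversion")[0]
print(section.getElementsByTagName("collect_mode")[0].firstChild.data)  # incr
print(section.getElementsByTagName("abs_path")[0].firstChild.data)      # /opt/svn/repo1
```

The optional C{<type>} element maps to C{repositoryType} and is kept for reference only; C{<exclude>} children apply only to C{repository_dir} entries.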
We add the following fields to the document:: repositoryType repository_dir/type directoryPath repository_dir/abs_path collectMode repository_dir/collect_mode compressMode repository_dir/compress_mode We also add groups of the following items, one list element per item:: relativeExcludePaths dir/exclude/rel_path excludePatterns dir/exclude/pattern The node itself is created as the next child of the parent node. This method only adds one repository node. The parent must loop for each repository dir in the C{SubversionConfig} object. If C{repositoryDir} is C{None}, this method call will be a no-op. @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent that the section should be appended to. @param repositoryDir: Repository dir to be added to the document. """ if repositoryDir is not None: sectionNode = addContainerNode(xmlDom, parentNode, "repository_dir") addStringNode(xmlDom, sectionNode, "type", repositoryDir.repositoryType) addStringNode(xmlDom, sectionNode, "abs_path", repositoryDir.directoryPath) addStringNode(xmlDom, sectionNode, "collect_mode", repositoryDir.collectMode) addStringNode(xmlDom, sectionNode, "compress_mode", repositoryDir.compressMode) if ((repositoryDir.relativeExcludePaths is not None and repositoryDir.relativeExcludePaths != []) or (repositoryDir.excludePatterns is not None and repositoryDir.excludePatterns != [])): excludeNode = addContainerNode(xmlDom, sectionNode, "exclude") if repositoryDir.relativeExcludePaths is not None: for relativePath in repositoryDir.relativeExcludePaths: addStringNode(xmlDom, excludeNode, "rel_path", relativePath) if repositoryDir.excludePatterns is not None: for pattern in repositoryDir.excludePatterns: addStringNode(xmlDom, excludeNode, "pattern", pattern) ######################################################################## # Public functions ######################################################################## ########################### # executeAction() function 
###########################

def executeAction(configPath, options, config):
   """
   Executes the Subversion backup action.

   @param configPath: Path to configuration file on disk.
   @type configPath: String representing a path on disk.
   @param options: Program command-line options.
   @type options: Options object.
   @param config: Program configuration.
   @type config: Config object.

   @raise ValueError: Under many generic error conditions
   @raise IOError: If a backup could not be written for some reason.
   """
   logger.debug("Executing Subversion extended action.")
   if config.options is None or config.collect is None:
      raise ValueError("Cedar Backup configuration is not properly filled in.")
   local = LocalConfig(xmlPath=configPath)
   todayIsStart = isStartOfWeek(config.options.startingDay)
   fullBackup = options.full or todayIsStart
   logger.debug("Full backup flag is [%s]", fullBackup)
   if local.subversion.repositories is not None:
      for repository in local.subversion.repositories:
         _backupRepository(config, local, todayIsStart, fullBackup, repository)
   if local.subversion.repositoryDirs is not None:
      for repositoryDir in local.subversion.repositoryDirs:
         logger.debug("Working with repository directory [%s].", repositoryDir.directoryPath)
         for repositoryPath in _getRepositoryPaths(repositoryDir):
            repository = Repository(repositoryDir.repositoryType, repositoryPath, repositoryDir.collectMode, repositoryDir.compressMode)
            _backupRepository(config, local, todayIsStart, fullBackup, repository)
         logger.info("Completed backing up Subversion repository directory [%s].", repositoryDir.directoryPath)
   logger.info("Executed the Subversion extended action successfully.")

def _getCollectMode(local, repository):
   """
   Gets the collect mode that should be used for a repository.

   Use repository's if possible, otherwise take from subversion section.

   @param local: LocalConfig object.
   @param repository: Repository object.

   @return: Collect mode to use.
""" if repository.collectMode is None: collectMode = local.subversion.collectMode else: collectMode = repository.collectMode logger.debug("Collect mode is [%s]", collectMode) return collectMode def _getCompressMode(local, repository): """ Gets the compress mode that should be used for a repository. Use repository's if possible, otherwise take from subversion section. @param local: LocalConfig object. @param repository: Repository object. @return: Compress mode to use. """ if repository.compressMode is None: compressMode = local.subversion.compressMode else: compressMode = repository.compressMode logger.debug("Compress mode is [%s]", compressMode) return compressMode def _getRevisionPath(config, repository): """ Gets the path to the revision file associated with a repository. @param config: Config object. @param repository: Repository object. @return: Absolute path to the revision file associated with the repository. """ normalized = buildNormalizedPath(repository.repositoryPath) filename = "%s.%s" % (normalized, REVISION_PATH_EXTENSION) revisionPath = os.path.join(config.options.workingDir, filename) logger.debug("Revision file path is [%s]", revisionPath) return revisionPath def _getBackupPath(config, repositoryPath, compressMode, startRevision, endRevision): """ Gets the backup file path (including correct extension) associated with a repository. @param config: Config object. @param repositoryPath: Path to the indicated repository @param compressMode: Compress mode to use for this repository. @param startRevision: Starting repository revision. @param endRevision: Ending repository revision. @return: Absolute path to the backup file associated with the repository. 
""" normalizedPath = buildNormalizedPath(repositoryPath) filename = "svndump-%d:%d-%s.txt" % (startRevision, endRevision, normalizedPath) if compressMode == 'gzip': filename = "%s.gz" % filename elif compressMode == 'bzip2': filename = "%s.bz2" % filename backupPath = os.path.join(config.collect.targetDir, filename) logger.debug("Backup file path is [%s]", backupPath) return backupPath def _getRepositoryPaths(repositoryDir): """ Gets a list of child repository paths within a repository directory. @param repositoryDir: RepositoryDirectory """ (excludePaths, excludePatterns) = _getExclusions(repositoryDir) fsList = FilesystemList() fsList.excludeFiles = True fsList.excludeLinks = True fsList.excludePaths = excludePaths fsList.excludePatterns = excludePatterns fsList.addDirContents(path=repositoryDir.directoryPath, recursive=False, addSelf=False) return fsList def _getExclusions(repositoryDir): """ Gets exclusions (file and patterns) associated with an repository directory. The returned files value is a list of absolute paths to be excluded from the backup for a given directory. It is derived from the repository directory's relative exclude paths. The returned patterns value is a list of patterns to be excluded from the backup for a given directory. It is derived from the repository directory's list of patterns. @param repositoryDir: Repository directory object. @return: Tuple (files, patterns) indicating what to exclude. """ paths = [] if repositoryDir.relativeExcludePaths is not None: for relativePath in repositoryDir.relativeExcludePaths: paths.append(os.path.join(repositoryDir.directoryPath, relativePath)) patterns = [] if repositoryDir.excludePatterns is not None: patterns.extend(repositoryDir.excludePatterns) logger.debug("Exclude paths: %s", paths) logger.debug("Exclude patterns: %s", patterns) return(paths, patterns) def _backupRepository(config, local, todayIsStart, fullBackup, repository): """ Backs up an individual Subversion repository. 
   This internal method wraps the public methods and adds some functionality
   to work better with the extended action itself.

   @param config: Cedar Backup configuration.
   @param local: Local configuration
   @param todayIsStart: Indicates whether today is start of week
   @param fullBackup: Full backup flag
   @param repository: Repository to operate on

   @raise ValueError: If some value is missing or invalid.
   @raise IOError: If there is a problem executing the Subversion dump.
   """
   logger.debug("Working with repository [%s]", repository.repositoryPath)
   logger.debug("Repository type is [%s]", repository.repositoryType)
   collectMode = _getCollectMode(local, repository)
   compressMode = _getCompressMode(local, repository)
   revisionPath = _getRevisionPath(config, repository)
   if not (fullBackup or (collectMode in ['daily', 'incr', ]) or (collectMode == 'weekly' and todayIsStart)):
      logger.debug("Repository will not be backed up, per collect mode.")
      return
   logger.debug("Repository meets criteria to be backed up today.")
   if collectMode != "incr" or fullBackup:
      startRevision = 0
      endRevision = getYoungestRevision(repository.repositoryPath)
      logger.debug("Using full backup, revision: (%d, %d).", startRevision, endRevision)
   else:
      startRevision = _loadLastRevision(revisionPath) + 1
      endRevision = getYoungestRevision(repository.repositoryPath)
      if startRevision > endRevision:
         logger.info("No need to back up repository [%s]; no new revisions.", repository.repositoryPath)
         return
      logger.debug("Using incremental backup, revision: (%d, %d).", startRevision, endRevision)
   backupPath = _getBackupPath(config, repository.repositoryPath, compressMode, startRevision, endRevision)
   with _getOutputFile(backupPath, compressMode) as outputFile:
      backupRepository(repository.repositoryPath, outputFile, startRevision, endRevision)
   if not os.path.exists(backupPath):
      raise IOError("Dump file [%s] does not seem to exist after backup completed." % backupPath)
   changeOwnership(backupPath, config.options.backupUser, config.options.backupGroup)
   if collectMode == "incr":
      _writeLastRevision(config, revisionPath, endRevision)
   logger.info("Completed backing up Subversion repository [%s].", repository.repositoryPath)


def _getOutputFile(backupPath, compressMode):
   """
   Opens the output file used for saving the Subversion dump.

   If the compress mode is "gzip", we'll open a C{GzipFile}, and if the
   compress mode is "bzip2", we'll open a C{BZ2File}.  Otherwise, we'll just
   return an object from the normal C{open()} method.

   @param backupPath: Path to file to open.
   @param compressMode: Compress mode of file ("none", "gzip", "bzip2").

   @return: Output file object, opened in binary mode for use with executeCommand()
   """
   if compressMode == "gzip":
      return GzipFile(backupPath, "wb")
   elif compressMode == "bzip2":
      return BZ2File(backupPath, "wb")
   else:
      return open(backupPath, "wb")


def _loadLastRevision(revisionPath):
   """
   Loads the indicated revision file from disk into an integer.

   If we can't load the revision file successfully (either because it doesn't
   exist or for some other reason), then a revision of -1 will be returned -
   but the condition will be logged.  This way, we err on the side of backing
   up too much, because anyone using this will presumably be adding 1 to the
   revision, so they don't duplicate any backups.

   @param revisionPath: Path to the revision file on disk.

   @return: Integer representing last backed-up revision, -1 on error or if none can be read.
""" if not os.path.isfile(revisionPath): startRevision = -1 logger.debug("Revision file [%s] does not exist on disk.", revisionPath) else: try: with open(revisionPath, "rb") as f: startRevision = pickle.load(f, fix_imports=True) # be compatible with Python 2 logger.debug("Loaded revision file [%s] from disk: %d.", revisionPath, startRevision) except Exception as e: startRevision = -1 logger.error("Failed loading revision file [%s] from disk: %s", revisionPath, e) return startRevision def _writeLastRevision(config, revisionPath, endRevision): """ Writes the end revision to the indicated revision file on disk. If we can't write the revision file successfully for any reason, we'll log the condition but won't throw an exception. @param config: Config object. @param revisionPath: Path to the revision file on disk. @param endRevision: Last revision backed up on this run. """ try: with open(revisionPath, "wb") as f: pickle.dump(endRevision, f, 0, fix_imports=True) changeOwnership(revisionPath, config.options.backupUser, config.options.backupGroup) logger.debug("Wrote new revision file [%s] to disk: %d.", revisionPath, endRevision) except Exception as e: logger.error("Failed to write revision file [%s] to disk: %s", revisionPath, e) ############################## # backupRepository() function ############################## def backupRepository(repositoryPath, backupFile, startRevision=None, endRevision=None): """ Backs up an individual Subversion repository. The starting and ending revision values control an incremental backup. If the starting revision is not passed in, then revision zero (the start of the repository) is assumed. If the ending revision is not passed in, then the youngest revision in the database will be used as the endpoint. The backup data will be written into the passed-in back file. Normally, this would be an object as returned from C{open}, but it is possible to use something like a C{GzipFile} to write compressed output. 
The caller is responsible for closing the passed-in backup file. @note: This function should either be run as root or as the owner of the Subversion repository. @note: It is apparently I{not} a good idea to interrupt this function. Sometimes, this leaves the repository in a "wedged" state, which requires recovery using C{svnadmin recover}. @param repositoryPath: Path to Subversion repository to back up @type repositoryPath: String path representing Subversion repository on disk. @param backupFile: Python file object to use for writing backup. @type backupFile: Python file object as from C{open()} or C{file()}. @param startRevision: Starting repository revision to back up (for incremental backups) @type startRevision: Integer value >= 0. @param endRevision: Ending repository revision to back up (for incremental backups) @type endRevision: Integer value >= 0. @raise ValueError: If some value is missing or invalid. @raise IOError: If there is a problem executing the Subversion dump. """ if startRevision is None: startRevision = 0 if endRevision is None: endRevision = getYoungestRevision(repositoryPath) if int(startRevision) < 0: raise ValueError("Start revision must be >= 0.") if int(endRevision) < 0: raise ValueError("End revision must be >= 0.") if startRevision > endRevision: raise ValueError("Start revision must be <= end revision.") args = [ "dump", "--quiet", "-r%s:%s" % (startRevision, endRevision), "--incremental", repositoryPath, ] command = resolveCommand(SVNADMIN_COMMAND) result = executeCommand(command, args, returnOutput=False, ignoreStderr=True, doNotLog=True, outputFile=backupFile)[0] if result != 0: raise IOError("Error [%d] executing Subversion dump for repository [%s]." 
% (result, repositoryPath)) logger.debug("Completed dumping subversion repository [%s].", repositoryPath) ################################# # getYoungestRevision() function ################################# def getYoungestRevision(repositoryPath): """ Gets the youngest (newest) revision in a Subversion repository using C{svnlook}. @note: This function should either be run as root or as the owner of the Subversion repository. @param repositoryPath: Path to Subversion repository to look in. @type repositoryPath: String path representing Subversion repository on disk. @return: Youngest revision as an integer. @raise ValueError: If there is a problem parsing the C{svnlook} output. @raise IOError: If there is a problem executing the C{svnlook} command. """ args = [ 'youngest', repositoryPath, ] command = resolveCommand(SVNLOOK_COMMAND) (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True) if result != 0: raise IOError("Error [%d] executing 'svnlook youngest' for repository [%s]." % (result, repositoryPath)) if len(output) != 1: raise ValueError("Unable to parse 'svnlook youngest' output.") return int(output[0]) ######################################################################## # Deprecated functionality ######################################################################## class BDBRepository(Repository): """ Class representing Subversion BDB (Berkeley Database) repository configuration. This object is deprecated. Use a simple L{Repository} instead. """ def __init__(self, repositoryPath=None, collectMode=None, compressMode=None): """ Constructor for the C{BDBRepository} class. """ super(BDBRepository, self).__init__("BDB", repositoryPath, collectMode, compressMode) def __repr__(self): """ Official string representation for class instance. 
""" return "BDBRepository(%s, %s, %s)" % (self.repositoryPath, self.collectMode, self.compressMode) class FSFSRepository(Repository): """ Class representing Subversion FSFS repository configuration. This object is deprecated. Use a simple L{Repository} instead. """ def __init__(self, repositoryPath=None, collectMode=None, compressMode=None): """ Constructor for the C{FSFSRepository} class. """ super(FSFSRepository, self).__init__("FSFS", repositoryPath, collectMode, compressMode) def __repr__(self): """ Official string representation for class instance. """ return "FSFSRepository(%s, %s, %s)" % (self.repositoryPath, self.collectMode, self.compressMode) def backupBDBRepository(repositoryPath, backupFile, startRevision=None, endRevision=None): """ Backs up an individual Subversion BDB repository. This function is deprecated. Use L{backupRepository} instead. """ return backupRepository(repositoryPath, backupFile, startRevision, endRevision) def backupFSFSRepository(repositoryPath, backupFile, startRevision=None, endRevision=None): """ Backs up an individual Subversion FSFS repository. This function is deprecated. Use L{backupRepository} instead. """ return backupRepository(repositoryPath, backupFile, startRevision, endRevision) CedarBackup3-3.1.6/CedarBackup3/extend/encrypt.py0000664000175000017500000005071212560007327023255 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2007,2010,2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. 
# # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Official Cedar Backup Extensions # Purpose : Provides an extension to encrypt staging directories. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides an extension to encrypt staging directories. When this extension is executed, all backed-up files in the configured Cedar Backup staging directory will be encrypted using gpg. Any directory which has already been encrypted (as indicated by the C{cback.encrypt} file) will be ignored. This extension requires a new configuration section and is intended to be run immediately after the standard stage action or immediately before the standard store action. Aside from its own configuration, it requires the options and staging configuration sections in the standard Cedar Backup configuration file. @author: Kenneth J. 
Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import logging from functools import total_ordering # Cedar Backup modules from CedarBackup3.util import resolveCommand, executeCommand, changeOwnership from CedarBackup3.xmlutil import createInputDom, addContainerNode, addStringNode from CedarBackup3.xmlutil import readFirstChild, readString from CedarBackup3.actions.util import findDailyDirs, writeIndicatorFile, getBackupFiles ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup3.log.extend.encrypt") GPG_COMMAND = [ "gpg", ] VALID_ENCRYPT_MODES = [ "gpg", ] ENCRYPT_INDICATOR = "cback.encrypt" ######################################################################## # EncryptConfig class definition ######################################################################## @total_ordering class EncryptConfig(object): """ Class representing encrypt configuration. Encrypt configuration is used for encrypting staging directories. The following restrictions exist on data in this class: - The encrypt mode must be one of the values in L{VALID_ENCRYPT_MODES} - The encrypt target value must be a non-empty string @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, encryptMode, encryptTarget """ def __init__(self, encryptMode=None, encryptTarget=None): """ Constructor for the C{EncryptConfig} class. @param encryptMode: Encryption mode @param encryptTarget: Encryption target (for instance, GPG recipient) @raise ValueError: If one of the values is invalid. 
""" self._encryptMode = None self._encryptTarget = None self.encryptMode = encryptMode self.encryptTarget = encryptTarget def __repr__(self): """ Official string representation for class instance. """ return "EncryptConfig(%s, %s)" % (self.encryptMode, self.encryptTarget) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __eq__(self, other): """Equals operator, iplemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, iplemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, iplemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.encryptMode != other.encryptMode: if str(self.encryptMode or "") < str(other.encryptMode or ""): return -1 else: return 1 if self.encryptTarget != other.encryptTarget: if str(self.encryptTarget or "") < str(other.encryptTarget or ""): return -1 else: return 1 return 0 def _setEncryptMode(self, value): """ Property target used to set the encrypt mode. If not C{None}, the mode must be one of the values in L{VALID_ENCRYPT_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_ENCRYPT_MODES: raise ValueError("Encrypt mode must be one of %s." % VALID_ENCRYPT_MODES) self._encryptMode = value def _getEncryptMode(self): """ Property target used to get the encrypt mode. """ return self._encryptMode def _setEncryptTarget(self, value): """ Property target used to set the encrypt target. 
""" if value is not None: if len(value) < 1: raise ValueError("Encrypt target must be non-empty string.") self._encryptTarget = value def _getEncryptTarget(self): """ Property target used to get the encrypt target. """ return self._encryptTarget encryptMode = property(_getEncryptMode, _setEncryptMode, None, doc="Encrypt mode.") encryptTarget = property(_getEncryptTarget, _setEncryptTarget, None, doc="Encrypt target (i.e. GPG recipient).") ######################################################################## # LocalConfig class definition ######################################################################## @total_ordering class LocalConfig(object): """ Class representing this extension's configuration document. This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit encrypt-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, C{validate} and C{addConfig} methods. @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, encrypt, validate, addConfig """ def __init__(self, xmlData=None, xmlPath=None, validate=True): """ Initializes a configuration object. If you initialize the object without passing either C{xmlData} or C{xmlPath} then configuration will be empty and will be invalid until it is filled in properly. No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded. Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. 
      Keep in mind that even if C{validate} is C{False}, it might not be
      possible to parse the passed-in XML document if lower-level validations
      fail.

      @note: It is strongly suggested that the C{validate} option always be set
      to C{True} (the default) unless there is a specific need to read in
      invalid configuration from disk.

      @param xmlData: XML data representing configuration.
      @type xmlData: String data.

      @param xmlPath: Path to an XML file on disk.
      @type xmlPath: Absolute path to a file on disk.

      @param validate: Validate the document after parsing it.
      @type validate: Boolean true/false.

      @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in.
      @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed.
      @raise ValueError: If the parsed configuration document is not valid.
      """
      self._encrypt = None
      self.encrypt = None
      if xmlData is not None and xmlPath is not None:
         raise ValueError("Use either xmlData or xmlPath, but not both.")
      if xmlData is not None:
         self._parseXmlData(xmlData)
         if validate:
            self.validate()
      elif xmlPath is not None:
         with open(xmlPath) as f:
            xmlData = f.read()
         self._parseXmlData(xmlData)
         if validate:
            self.validate()

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "LocalConfig(%s)" % (self.encrypt)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __eq__(self, other):
      """Equals operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) == 0

   def __lt__(self, other):
      """Less-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) < 0

   def __gt__(self, other):
      """Greater-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) > 0

   def __cmp__(self, other):
      """
      Original Python 2 comparison operator.
      Lists within this class are "unordered" for equality comparisons.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.encrypt != other.encrypt:
         if self.encrypt < other.encrypt:
            return -1
         else:
            return 1
      return 0

   def _setEncrypt(self, value):
      """
      Property target used to set the encrypt configuration value.
      If not C{None}, the value must be a C{EncryptConfig} object.
      @raise ValueError: If the value is not a C{EncryptConfig}
      """
      if value is None:
         self._encrypt = None
      else:
         if not isinstance(value, EncryptConfig):
            raise ValueError("Value must be a C{EncryptConfig} object.")
         self._encrypt = value

   def _getEncrypt(self):
      """
      Property target used to get the encrypt configuration value.
      """
      return self._encrypt

   encrypt = property(_getEncrypt, _setEncrypt, None, "Encrypt configuration in terms of a C{EncryptConfig} object.")

   def validate(self):
      """
      Validates configuration represented by the object.
      Encrypt configuration must be filled in.  Within that, both the encrypt
      mode and encrypt target must be filled in.
      @raise ValueError: If one of the validations fails.
      """
      if self.encrypt is None:
         raise ValueError("Encrypt section is required.")
      if self.encrypt.encryptMode is None:
         raise ValueError("Encrypt mode must be set.")
      if self.encrypt.encryptTarget is None:
         raise ValueError("Encrypt target must be set.")

   def addConfig(self, xmlDom, parentNode):
      """
      Adds a configuration section as the next child of a parent.

      Third parties should use this function to write configuration related to
      this extension.  We add the following fields to the document::

         encryptMode    //cb_config/encrypt/encrypt_mode
         encryptTarget  //cb_config/encrypt/encrypt_target

      @param xmlDom: DOM tree as from C{impl.createDocument()}.
      @param parentNode: Parent that the section should be appended to.
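The addConfig() method above emits a section shaped like <encrypt><encrypt_mode>...</encrypt_mode><encrypt_target>...</encrypt_target></encrypt> beneath //cb_config, and the parser reads the same fields back.  The round trip can be sketched with only the standard library (Cedar Backup's own xmlutil helpers are not required here; the values are illustrative):

```python
from xml.dom.minidom import parseString

# The section shape that addConfig() emits beneath //cb_config.
xmlData = (
   "<cb_config>"
   "  <encrypt>"
   "    <encrypt_mode>gpg</encrypt_mode>"
   "    <encrypt_target>Backup User</encrypt_target>"
   "  </encrypt>"
   "</cb_config>"
)

dom = parseString(xmlData)
section = dom.getElementsByTagName("encrypt")[0]

def readText(parent, name):
   """Return the text content of the first child element with this name."""
   nodes = parent.getElementsByTagName(name)
   return nodes[0].firstChild.nodeValue.strip() if nodes else None

encryptMode = readText(section, "encrypt_mode")
encryptTarget = readText(section, "encrypt_target")
```

The readText() helper here is a stand-in for the real readString() utility from CedarBackup3.xmlutil.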
""" if self.encrypt is not None: sectionNode = addContainerNode(xmlDom, parentNode, "encrypt") addStringNode(xmlDom, sectionNode, "encrypt_mode", self.encrypt.encryptMode) addStringNode(xmlDom, sectionNode, "encrypt_target", self.encrypt.encryptTarget) def _parseXmlData(self, xmlData): """ Internal method to parse an XML string into the object. This method parses the XML document into a DOM tree (C{xmlDom}) and then calls a static method to parse the encrypt configuration section. @param xmlData: XML data to be parsed @type xmlData: String data @raise ValueError: If the XML cannot be successfully parsed. """ (xmlDom, parentNode) = createInputDom(xmlData) self._encrypt = LocalConfig._parseEncrypt(parentNode) @staticmethod def _parseEncrypt(parent): """ Parses an encrypt configuration section. We read the following individual fields:: encryptMode //cb_config/encrypt/encrypt_mode encryptTarget //cb_config/encrypt/encrypt_target @param parent: Parent node to search beneath. @return: C{EncryptConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. """ encrypt = None section = readFirstChild(parent, "encrypt") if section is not None: encrypt = EncryptConfig() encrypt.encryptMode = readString(section, "encrypt_mode") encrypt.encryptTarget = readString(section, "encrypt_target") return encrypt ######################################################################## # Public functions ######################################################################## ########################### # executeAction() function ########################### def executeAction(configPath, options, config): """ Executes the encrypt backup action. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. 
@raise ValueError: Under many generic error conditions @raise IOError: If there are I/O problems reading or writing files """ logger.debug("Executing encrypt extended action.") if config.options is None or config.stage is None: raise ValueError("Cedar Backup configuration is not properly filled in.") local = LocalConfig(xmlPath=configPath) if local.encrypt.encryptMode not in ["gpg", ]: raise ValueError("Unknown encrypt mode [%s]" % local.encrypt.encryptMode) if local.encrypt.encryptMode == "gpg": _confirmGpgRecipient(local.encrypt.encryptTarget) dailyDirs = findDailyDirs(config.stage.targetDir, ENCRYPT_INDICATOR) for dailyDir in dailyDirs: _encryptDailyDir(dailyDir, local.encrypt.encryptMode, local.encrypt.encryptTarget, config.options.backupUser, config.options.backupGroup) writeIndicatorFile(dailyDir, ENCRYPT_INDICATOR, config.options.backupUser, config.options.backupGroup) logger.info("Executed the encrypt extended action successfully.") ############################## # _encryptDailyDir() function ############################## def _encryptDailyDir(dailyDir, encryptMode, encryptTarget, backupUser, backupGroup): """ Encrypts the contents of a daily staging directory. Indicator files are ignored. All other files are encrypted. The only valid encrypt mode is C{"gpg"}. @param dailyDir: Daily directory to encrypt @param encryptMode: Encryption mode (only "gpg" is allowed) @param encryptTarget: Encryption target (GPG recipient for "gpg" mode) @param backupUser: User that target files should be owned by @param backupGroup: Group that target files should be owned by @raise ValueError: If the encrypt mode is not supported. @raise ValueError: If the daily staging directory does not exist. 
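The gpg invocation used later in this module (_encryptFileWithGpg) runs gpg --batch --yes -e -r <recipient> -o <out> <in>, writing binary output alongside the source file.  The argument construction can be sketched in isolation; nothing is executed here, and the paths and recipient are illustrative:

```python
def buildGpgArgs(sourcePath, recipient):
   """Build the gpg argument list the way _encryptFileWithGpg() does:
   batch mode, overwrite allowed, encrypt to one recipient, output written
   alongside the source with a .gpg extension."""
   encryptedPath = "%s.gpg" % sourcePath
   args = ["--batch", "--yes", "-e", "-r", recipient, "-o", encryptedPath, sourcePath]
   return (encryptedPath, args)

# Illustrative source path and recipient; no gpg binary is required.
encryptedPath, args = buildGpgArgs("/tmp/staging/file.tar.gz", "backup@example.com")
```

In the real extension, resolveCommand(GPG_COMMAND) supplies the executable and executeCommand() runs it, then the code verifies that the .gpg output actually exists before changing its ownership.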
""" logger.debug("Begin encrypting contents of [%s].", dailyDir) fileList = getBackupFiles(dailyDir) # ignores indicator files for path in fileList: _encryptFile(path, encryptMode, encryptTarget, backupUser, backupGroup, removeSource=True) logger.debug("Completed encrypting contents of [%s].", dailyDir) ########################## # _encryptFile() function ########################## def _encryptFile(sourcePath, encryptMode, encryptTarget, backupUser, backupGroup, removeSource=False): """ Encrypts the source file using the indicated mode. The encrypted file will be owned by the indicated backup user and group. If C{removeSource} is C{True}, then the source file will be removed after it is successfully encrypted. Currently, only the C{"gpg"} encrypt mode is supported. @param sourcePath: Absolute path of the source file to encrypt @param encryptMode: Encryption mode (only "gpg" is allowed) @param encryptTarget: Encryption target (GPG recipient) @param backupUser: User that target files should be owned by @param backupGroup: Group that target files should be owned by @param removeSource: Indicates whether to remove the source file @return: Path to the newly-created encrypted file. @raise ValueError: If an invalid encrypt mode is passed in. @raise IOError: If there is a problem accessing, encrypting or removing the source file. """ if not os.path.exists(sourcePath): raise ValueError("Source path [%s] does not exist." % sourcePath) if encryptMode == 'gpg': encryptedPath = _encryptFileWithGpg(sourcePath, recipient=encryptTarget) else: raise ValueError("Unknown encrypt mode [%s]" % encryptMode) changeOwnership(encryptedPath, backupUser, backupGroup) if removeSource: if os.path.exists(sourcePath): try: os.remove(sourcePath) logger.debug("Completed removing old file [%s].", sourcePath) except: raise IOError("Failed to remove file [%s] after encrypting it." 
                          % (sourcePath))
   return encryptedPath


#################################
# _encryptFileWithGpg() function
#################################

def _encryptFileWithGpg(sourcePath, recipient):
   """
   Encrypts the indicated source file using GPG.

   The encrypted file will be in GPG's binary output format and will have the
   same name as the source file plus a C{".gpg"} extension.  The source file
   will not be modified or removed by this function call.

   @param sourcePath: Absolute path of file to be encrypted.
   @param recipient: Recipient name to be passed to GPG's C{"-r"} option

   @return: Path to the newly-created encrypted file.

   @raise IOError: If there is a problem encrypting the file.
   """
   encryptedPath = "%s.gpg" % sourcePath
   command = resolveCommand(GPG_COMMAND)
   args = [ "--batch", "--yes", "-e", "-r", recipient, "-o", encryptedPath, sourcePath, ]
   result = executeCommand(command, args)[0]
   if result != 0:
      raise IOError("Error [%d] calling gpg to encrypt [%s]." % (result, sourcePath))
   if not os.path.exists(encryptedPath):
      raise IOError("After call to [%s], encrypted file [%s] does not exist." % (command, encryptedPath))
   logger.debug("Completed encrypting file [%s] to [%s].", sourcePath, encryptedPath)
   return encryptedPath


##################################
# _confirmGpgRecipient() function
##################################

def _confirmGpgRecipient(recipient):
   """
   Confirms that a recipient's public key is known to GPG.
   Throws an exception if there is a problem, or returns normally otherwise.
   @param recipient: Recipient name
   @raise IOError: If the recipient's public key is not known to GPG.
   """
   command = resolveCommand(GPG_COMMAND)
   args = [ "--batch", "-k", recipient, ]  # should use --with-colons if the output will be parsed
   result = executeCommand(command, args)[0]
   if result != 0:
      raise IOError("GPG unable to find public key for [%s]."
% recipient) CedarBackup3-3.1.6/CedarBackup3/extend/postgresql.py0000664000175000017500000006030112642030433023762 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2006,2010,2015 Kenneth J. Pronovici. # Copyright (c) 2006 Antoine Beaupre. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Antoine Beaupre # Language : Python 3 (>= 3.4) # Project : Official Cedar Backup Extensions # Purpose : Provides an extension to back up PostgreSQL databases. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # This file was created with a width of 132 characters, and NO tabs. ######################################################################## # Module documentation ######################################################################## """ Provides an extension to back up PostgreSQL databases. This is a Cedar Backup extension used to back up PostgreSQL databases via the Cedar Backup command line. It requires a new configurations section and is intended to be run either immediately before or immediately after the standard collect action. 
Aside from its own configuration, it requires the options and collect
configuration sections in the standard Cedar Backup configuration file.

The backup is done via the C{pg_dump} or C{pg_dumpall} commands included with
the PostgreSQL product.  Output can be compressed using C{gzip} or C{bzip2}.
Administrators can configure the extension either to back up all databases or
to back up only specific databases.

The extension assumes that the current user has passwordless access to the
database, since there is no easy way to pass a password to the C{pg_dump}
client.  This can be accomplished using appropriate voodoo in the
C{pg_hba.conf} file.

Note that this code always produces a full backup.  There is currently no
facility for making incremental backups.

You should always make C{/etc/cback3.conf} unreadable to non-root users once
you place postgresql configuration into it, since postgresql configuration
will contain information about available PostgreSQL databases and usernames.

Use of this extension I{may} expose usernames in the process listing (via
C{ps}) when the backup is running if the username is specified in the
configuration.

@author: Kenneth J.
Pronovici @author: Antoine Beaupre """ ######################################################################## # Imported modules ######################################################################## # System modules import os import logging from gzip import GzipFile from bz2 import BZ2File from functools import total_ordering # Cedar Backup modules from CedarBackup3.xmlutil import createInputDom, addContainerNode, addStringNode, addBooleanNode from CedarBackup3.xmlutil import readFirstChild, readString, readStringList, readBoolean from CedarBackup3.config import VALID_COMPRESS_MODES from CedarBackup3.util import resolveCommand, executeCommand from CedarBackup3.util import ObjectTypeList, changeOwnership ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup3.log.extend.postgresql") POSTGRESQLDUMP_COMMAND = [ "pg_dump", ] POSTGRESQLDUMPALL_COMMAND = [ "pg_dumpall", ] ######################################################################## # PostgresqlConfig class definition ######################################################################## @total_ordering class PostgresqlConfig(object): """ Class representing PostgreSQL configuration. The PostgreSQL configuration information is used for backing up PostgreSQL databases. The following restrictions exist on data in this class: - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. - The 'all' flag must be 'Y' if no databases are defined. - The 'all' flag must be 'N' if any databases are defined. - Any values in the databases list must be strings. @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, user, all, databases """ def __init__(self, user=None, compressMode=None, all=None, databases=None): # pylint: disable=W0622 """ Constructor for the C{PostgresqlConfig} class. 
@param user: User to execute backup as. @param compressMode: Compress mode for backed-up files. @param all: Indicates whether to back up all databases. @param databases: List of databases to back up. """ self._user = None self._compressMode = None self._all = None self._databases = None self.user = user self.compressMode = compressMode self.all = all self.databases = databases def __repr__(self): """ Official string representation for class instance. """ return "PostgresqlConfig(%s, %s, %s)" % (self.user, self.all, self.databases) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __eq__(self, other): """Equals operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.user != other.user: if str(self.user or "") < str(other.user or ""): return -1 else: return 1 if self.compressMode != other.compressMode: if str(self.compressMode or "") < str(other.compressMode or ""): return -1 else: return 1 if self.all != other.all: if self.all < other.all: return -1 else: return 1 if self.databases != other.databases: if self.databases < other.databases: return -1 else: return 1 return 0 def _setUser(self, value): """ Property target used to set the user value. """ if value is not None: if len(value) < 1: raise ValueError("User must be non-empty string.") self._user = value def _getUser(self): """ Property target used to get the user value.
""" return self._user def _setCompressMode(self, value): """ Property target used to set the compress mode. If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COMPRESS_MODES: raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) self._compressMode = value def _getCompressMode(self): """ Property target used to get the compress mode. """ return self._compressMode def _setAll(self, value): """ Property target used to set the 'all' flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._all = True else: self._all = False def _getAll(self): """ Property target used to get the 'all' flag. """ return self._all def _setDatabases(self, value): """ Property target used to set the databases list. Either the value must be C{None} or each element must be a string. @raise ValueError: If the value is not a string. """ if value is None: self._databases = None else: for database in value: if len(database) < 1: raise ValueError("Each database must be a non-empty string.") try: saved = self._databases self._databases = ObjectTypeList(str, "string") self._databases.extend(value) except Exception as e: self._databases = saved raise e def _getDatabases(self): """ Property target used to get the databases list. 
""" return self._databases user = property(_getUser, _setUser, None, "User to execute backup as.") compressMode = property(_getCompressMode, _setCompressMode, None, "Compress mode to be used for backed-up files.") all = property(_getAll, _setAll, None, "Indicates whether to back up all databases.") databases = property(_getDatabases, _setDatabases, None, "List of databases to back up.") ######################################################################## # LocalConfig class definition ######################################################################## @total_ordering class LocalConfig(object): """ Class representing this extension's configuration document. This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit PostgreSQL-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, C{validate} and C{addConfig} methods. @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, postgresql, validate, addConfig """ def __init__(self, xmlData=None, xmlPath=None, validate=True): """ Initializes a configuration object. If you initialize the object without passing either C{xmlData} or C{xmlPath} then configuration will be empty and will be invalid until it is filled in properly. No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded. Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if C{validate} is C{False}, it might not be possible to parse the passed-in XML document if lower-level validations fail. 
@note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to read in invalid configuration from disk. @param xmlData: XML data representing configuration. @type xmlData: String data. @param xmlPath: Path to an XML file on disk. @type xmlPath: Absolute path to a file on disk. @param validate: Validate the document after parsing it. @type validate: Boolean true/false. @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. @raise ValueError: If the parsed configuration document is not valid. """ self._postgresql = None self.postgresql = None if xmlData is not None and xmlPath is not None: raise ValueError("Use either xmlData or xmlPath, but not both.") if xmlData is not None: self._parseXmlData(xmlData) if validate: self.validate() elif xmlPath is not None: with open(xmlPath) as f: xmlData = f.read() self._parseXmlData(xmlData) if validate: self.validate() def __repr__(self): """ Official string representation for class instance. """ return "LocalConfig(%s)" % (self.postgresql) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __eq__(self, other): """Equals operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
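Both configuration classes in this file follow the same pattern: a Python 2 style C{__cmp__} does the field-by-field work, C{__eq__} and C{__lt__} (and C{__gt__}) are thin wrappers over it, and C{functools.total_ordering} derives the remaining rich comparisons. A minimal standalone sketch of that pattern (the C{Pair} class is hypothetical, not part of this module):

```python
from functools import total_ordering

@total_ordering
class Pair(object):
    """Toy class (hypothetical) ordered the same way as the config classes."""

    def __init__(self, a=None, b=None):
        self.a = a
        self.b = b

    def __cmp__(self, other):
        # None-safe, field-by-field comparison, mirroring the config classes
        if other is None:
            return 1
        for mine, theirs in ((self.a, other.a), (self.b, other.b)):
            if mine != theirs:
                return -1 if str(mine or "") < str(theirs or "") else 1
        return 0

    def __eq__(self, other):
        return self.__cmp__(other) == 0

    def __lt__(self, other):
        return self.__cmp__(other) < 0
```

Defining C{__eq__} plus one ordering operator is all C{total_ordering} needs; the config classes additionally spell out C{__gt__} explicitly, and the decorator fills in whatever is left (C{__le__}, C{__ge__}).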
""" if other is None: return 1 if self.postgresql != other.postgresql: if self.postgresql < other.postgresql: return -1 else: return 1 return 0 def _setPostgresql(self, value): """ Property target used to set the postgresql configuration value. If not C{None}, the value must be a C{PostgresqlConfig} object. @raise ValueError: If the value is not a C{PostgresqlConfig} """ if value is None: self._postgresql = None else: if not isinstance(value, PostgresqlConfig): raise ValueError("Value must be a C{PostgresqlConfig} object.") self._postgresql = value def _getPostgresql(self): """ Property target used to get the postgresql configuration value. """ return self._postgresql postgresql = property(_getPostgresql, _setPostgresql, None, "Postgresql configuration in terms of a C{PostgresqlConfig} object.") def validate(self): """ Validates configuration represented by the object. The compress mode must be filled in. Then, if the 'all' flag I{is} set, no databases are allowed, and if the 'all' flag is I{not} set, at least one database is required. @raise ValueError: If one of the validations fails. """ if self.postgresql is None: raise ValueError("PostgreSQL section is required.") if self.postgresql.compressMode is None: raise ValueError("Compress mode value is required.") if self.postgresql.all: if self.postgresql.databases is not None and self.postgresql.databases != []: raise ValueError("Databases cannot be specified if 'all' flag is set.") else: if self.postgresql.databases is None or len(self.postgresql.databases) < 1: raise ValueError("At least one PostgreSQL database must be indicated if 'all' flag is not set.") def addConfig(self, xmlDom, parentNode): """ Adds a configuration section as the next child of a parent. Third parties should use this function to write configuration related to this extension. 
We add the following fields to the document:: user //cb_config/postgresql/user compressMode //cb_config/postgresql/compress_mode all //cb_config/postgresql/all We also add groups of the following items, one list element per item:: database //cb_config/postgresql/database @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent that the section should be appended to. """ if self.postgresql is not None: sectionNode = addContainerNode(xmlDom, parentNode, "postgresql") addStringNode(xmlDom, sectionNode, "user", self.postgresql.user) addStringNode(xmlDom, sectionNode, "compress_mode", self.postgresql.compressMode) addBooleanNode(xmlDom, sectionNode, "all", self.postgresql.all) if self.postgresql.databases is not None: for database in self.postgresql.databases: addStringNode(xmlDom, sectionNode, "database", database) def _parseXmlData(self, xmlData): """ Internal method to parse an XML string into the object. This method parses the XML document into a DOM tree (C{xmlDom}) and then calls a static method to parse the postgresql configuration section. @param xmlData: XML data to be parsed @type xmlData: String data @raise ValueError: If the XML cannot be successfully parsed. """ (xmlDom, parentNode) = createInputDom(xmlData) self._postgresql = LocalConfig._parsePostgresql(parentNode) @staticmethod def _parsePostgresql(parent): """ Parses a postgresql configuration section. We read the following fields:: user //cb_config/postgresql/user compressMode //cb_config/postgresql/compress_mode all //cb_config/postgresql/all We also read groups of the following item, one list element per item:: databases //cb_config/postgresql/database @param parent: Parent node to search beneath. @return: C{PostgresqlConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. 
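Taken together, C{addConfig} and C{_parsePostgresql} round-trip an XML fragment shaped like the sketch below. Here it is parsed with the standard library C{xml.etree} instead of the module's own C{xmlutil} helpers, and the values (C{backup}, C{bzip2}, C{db1}, C{db2}) are sample data; only the element names come from the docstrings above:

```python
import xml.etree.ElementTree as ET

# Sample fragment using the element names documented for this extension:
# //cb_config/postgresql/{user,compress_mode,all,database}
FRAGMENT = """
<cb_config>
   <postgresql>
      <user>backup</user>
      <compress_mode>bzip2</compress_mode>
      <all>N</all>
      <database>db1</database>
      <database>db2</database>
   </postgresql>
</cb_config>
"""

def parse_postgresql(xml_data):
    """Parse the postgresql section into a plain dict (stand-in for PostgresqlConfig)."""
    section = ET.fromstring(xml_data).find("postgresql")
    if section is None:
        return None  # mirrors _parsePostgresql returning None when absent
    return {
        "user": section.findtext("user"),
        "compressMode": section.findtext("compress_mode"),
        "all": section.findtext("all") == "Y",
        "databases": [e.text for e in section.findall("database")],
    }
```

The 'Y'/'N' spelling of the boolean follows the flag restrictions stated in the class docstring; the real module reads it through its own C{readBoolean} helper.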
""" postgresql = None section = readFirstChild(parent, "postgresql") if section is not None: postgresql = PostgresqlConfig() postgresql.user = readString(section, "user") postgresql.compressMode = readString(section, "compress_mode") postgresql.all = readBoolean(section, "all") postgresql.databases = readStringList(section, "database") return postgresql ######################################################################## # Public functions ######################################################################## ########################### # executeAction() function ########################### def executeAction(configPath, options, config): """ Executes the PostgreSQL backup action. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. @raise ValueError: Under many generic error conditions @raise IOError: If a backup could not be written for some reason. 
""" logger.debug("Executing PostgreSQL extended action.") if config.options is None or config.collect is None: raise ValueError("Cedar Backup configuration is not properly filled in.") local = LocalConfig(xmlPath=configPath) if local.postgresql.all: logger.info("Backing up all databases.") _backupDatabase(config.collect.targetDir, local.postgresql.compressMode, local.postgresql.user, config.options.backupUser, config.options.backupGroup, None) if local.postgresql.databases is not None and local.postgresql.databases != []: logger.debug("Backing up %d individual databases.", len(local.postgresql.databases)) for database in local.postgresql.databases: logger.info("Backing up database [%s].", database) _backupDatabase(config.collect.targetDir, local.postgresql.compressMode, local.postgresql.user, config.options.backupUser, config.options.backupGroup, database) logger.info("Executed the PostgreSQL extended action successfully.") def _backupDatabase(targetDir, compressMode, user, backupUser, backupGroup, database=None): """ Backs up an individual PostgreSQL database, or all databases. This internal method wraps the public method and adds some functionality, like figuring out a filename, etc. @param targetDir: Directory into which backups should be written. @param compressMode: Compress mode to be used for backed-up files. @param user: User to use for connecting to the database. @param backupUser: User to own resulting file. @param backupGroup: Group to own resulting file. @param database: Name of database, or C{None} for all databases. @return: Name of the generated backup file. @raise ValueError: If some value is missing or invalid. @raise IOError: If there is a problem executing the PostgreSQL dump. """ (outputFile, filename) = _getOutputFile(targetDir, database, compressMode) with outputFile: backupDatabase(user, outputFile, database) if not os.path.exists(filename): raise IOError("Dump file [%s] does not seem to exist after backup completed." 
% filename) changeOwnership(filename, backupUser, backupGroup) #pylint: disable=R0204 def _getOutputFile(targetDir, database, compressMode): """ Opens the output file used for saving the PostgreSQL dump. The filename is either C{"postgresqldump.txt"} or C{"postgresqldump-<database>.txt"}. The C{".gz"} or C{".bz2"} extension is added when C{compressMode} is C{"gzip"} or C{"bzip2"}, respectively. @param targetDir: Target directory to write file in. @param database: Name of the database (if any) @param compressMode: Compress mode to be used for backed-up files. @return: Tuple of (Output file object, filename), file opened in binary mode for use with executeCommand() """ if database is None: filename = os.path.join(targetDir, "postgresqldump.txt") else: filename = os.path.join(targetDir, "postgresqldump-%s.txt" % database) if compressMode == "gzip": filename = "%s.gz" % filename outputFile = GzipFile(filename, "wb") elif compressMode == "bzip2": filename = "%s.bz2" % filename outputFile = BZ2File(filename, "wb") else: outputFile = open(filename, "wb") logger.debug("PostgreSQL dump file will be [%s].", filename) return (outputFile, filename) ############################ # backupDatabase() function ############################ def backupDatabase(user, backupFile, database=None): """ Backs up an individual PostgreSQL database, or all databases. This function backs up either a named local PostgreSQL database or all local PostgreSQL databases, using the passed-in user for connectivity. This is I{always} a full backup. There is no facility for incremental backups. The backup data will be written into the passed-in backup file. Normally, this would be an object as returned from C{open()}, but it is possible to use something like a C{GzipFile} to write compressed output. The caller is responsible for closing the passed-in backup file. @note: Typically, you would use the C{root} user to back up all databases. @param user: User to use for connecting to the database. @type user: String representing PostgreSQL username.
@param backupFile: File to use for writing backup. @type backupFile: Python file object as from C{open()}. @param database: Name of the database to be backed up. @type database: String representing database name, or C{None} for all databases. @raise ValueError: If some value is missing or invalid. @raise IOError: If there is a problem executing the PostgreSQL dump. """ args = [] if user is not None: args.append('-U') args.append(user) if database is None: command = resolveCommand(POSTGRESQLDUMPALL_COMMAND) else: command = resolveCommand(POSTGRESQLDUMP_COMMAND) args.append(database) result = executeCommand(command, args, returnOutput=False, ignoreStderr=True, doNotLog=True, outputFile=backupFile)[0] if result != 0: if database is None: raise IOError("Error [%d] executing PostgreSQL database dump for all databases." % result) else: raise IOError("Error [%d] executing PostgreSQL database dump for database [%s]." % (result, database)) CedarBackup3-3.1.6/CedarBackup3/extend/__init__.py # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Official Cedar Backup Extensions # Purpose : Provides package initialization # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Official Cedar Backup Extensions This package provides official Cedar Backup extensions.
These are Cedar Backup actions that are not part of the "standard" set of Cedar Backup actions, but are officially supported along with Cedar Backup. @author: Kenneth J. Pronovici """ ######################################################################## # Package initialization ######################################################################## # Using 'from CedarBackup3.extend import *' will just import the modules listed # in the __all__ variable. __all__ = [ 'amazons3', 'encrypt', 'mbox', 'mysql', 'postgresql', 'split', 'subversion', 'sysinfo', ] CedarBackup3-3.1.6/CedarBackup3/extend/mysql.py # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2005,2010,2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Official Cedar Backup Extensions # Purpose : Provides an extension to back up MySQL databases.
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides an extension to back up MySQL databases. This is a Cedar Backup extension used to back up MySQL databases via the Cedar Backup command line. It requires a new configuration section and is intended to be run either immediately before or immediately after the standard collect action. Aside from its own configuration, it requires the options and collect configuration sections in the standard Cedar Backup configuration file. The backup is done via the C{mysqldump} command included with the MySQL product. Output can be compressed using C{gzip} or C{bzip2}. Administrators can configure the extension either to back up all databases or to back up only specific databases. Note that this code always produces a full backup. There is currently no facility for making incremental backups. If/when someone has a need for this and can describe how to do it, I'll update this extension or provide another. The extension assumes that all configured databases can be backed up by a single user. Often, the "root" database user will be used. An alternative is to create a separate MySQL "backup" user and grant that user rights to read (but not write) various databases as needed. This second option is probably the best choice. The extension accepts a username and password in configuration. However, you probably do not want to provide those values in Cedar Backup configuration. This is because Cedar Backup will provide these values to C{mysqldump} via the command-line C{--user} and C{--password} switches, which will be visible to other users in the process listing. Instead, you should configure the username and password in one of MySQL's configuration files. 
Typically, that would be done by putting a stanza like this in C{/root/.my.cnf}:: [mysqldump] user = root password = Regardless of whether you are using C{~/.my.cnf} or C{/etc/cback3.conf} to store database login and password information, you should be careful about who is allowed to view that information. Typically, this means locking down permissions so that only the file owner can read the file contents (i.e. use mode C{0600}). @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import logging from gzip import GzipFile from bz2 import BZ2File from functools import total_ordering # Cedar Backup modules from CedarBackup3.xmlutil import createInputDom, addContainerNode, addStringNode, addBooleanNode from CedarBackup3.xmlutil import readFirstChild, readString, readStringList, readBoolean from CedarBackup3.config import VALID_COMPRESS_MODES from CedarBackup3.util import resolveCommand, executeCommand from CedarBackup3.util import ObjectTypeList, changeOwnership ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup3.log.extend.mysql") MYSQLDUMP_COMMAND = [ "mysqldump", ] ######################################################################## # MysqlConfig class definition ######################################################################## @total_ordering class MysqlConfig(object): """ Class representing MySQL configuration. The MySQL configuration information is used for backing up MySQL databases. The following restrictions exist on data in this class: - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. - The 'all' flag must be 'Y' if no databases are defined. 
- The 'all' flag must be 'N' if any databases are defined. - Any values in the databases list must be strings. @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, user, password, all, databases """ def __init__(self, user=None, password=None, compressMode=None, all=None, databases=None): # pylint: disable=W0622 """ Constructor for the C{MysqlConfig} class. @param user: User to execute backup as. @param password: Password associated with user. @param compressMode: Compress mode for backed-up files. @param all: Indicates whether to back up all databases. @param databases: List of databases to back up. """ self._user = None self._password = None self._compressMode = None self._all = None self._databases = None self.user = user self.password = password self.compressMode = compressMode self.all = all self.databases = databases def __repr__(self): """ Official string representation for class instance. """ return "MysqlConfig(%s, %s, %s, %s)" % (self.user, self.password, self.all, self.databases) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __eq__(self, other): """Equals operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
""" if other is None: return 1 if self.user != other.user: if str(self.user or "") < str(other.user or ""): return -1 else: return 1 if self.password != other.password: if str(self.password or "") < str(other.password or ""): return -1 else: return 1 if self.compressMode != other.compressMode: if str(self.compressMode or "") < str(other.compressMode or ""): return -1 else: return 1 if self.all != other.all: if self.all < other.all: return -1 else: return 1 if self.databases != other.databases: if self.databases < other.databases: return -1 else: return 1 return 0 def _setUser(self, value): """ Property target used to set the user value. """ if value is not None: if len(value) < 1: raise ValueError("User must be non-empty string.") self._user = value def _getUser(self): """ Property target used to get the user value. """ return self._user def _setPassword(self, value): """ Property target used to set the password value. """ if value is not None: if len(value) < 1: raise ValueError("Password must be non-empty string.") self._password = value def _getPassword(self): """ Property target used to get the password value. """ return self._password def _setCompressMode(self, value): """ Property target used to set the compress mode. If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COMPRESS_MODES: raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) self._compressMode = value def _getCompressMode(self): """ Property target used to get the compress mode. """ return self._compressMode def _setAll(self, value): """ Property target used to set the 'all' flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._all = True else: self._all = False def _getAll(self): """ Property target used to get the 'all' flag. 
""" return self._all def _setDatabases(self, value): """ Property target used to set the databases list. Either the value must be C{None} or each element must be a string. @raise ValueError: If the value is not a string. """ if value is None: self._databases = None else: for database in value: if len(database) < 1: raise ValueError("Each database must be a non-empty string.") try: saved = self._databases self._databases = ObjectTypeList(str, "string") self._databases.extend(value) except Exception as e: self._databases = saved raise e def _getDatabases(self): """ Property target used to get the databases list. """ return self._databases user = property(_getUser, _setUser, None, "User to execute backup as.") password = property(_getPassword, _setPassword, None, "Password associated with user.") compressMode = property(_getCompressMode, _setCompressMode, None, "Compress mode to be used for backed-up files.") all = property(_getAll, _setAll, None, "Indicates whether to back up all databases.") databases = property(_getDatabases, _setDatabases, None, "List of databases to back up.") ######################################################################## # LocalConfig class definition ######################################################################## @total_ordering class LocalConfig(object): """ Class representing this extension's configuration document. This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit MySQL-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, C{validate} and C{addConfig} methods. @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, mysql, validate, addConfig """ def __init__(self, xmlData=None, xmlPath=None, validate=True): """ Initializes a configuration object. 
If you initialize the object without passing either C{xmlData} or C{xmlPath} then configuration will be empty and will be invalid until it is filled in properly. No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded. Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if C{validate} is C{False}, it might not be possible to parse the passed-in XML document if lower-level validations fail. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to read in invalid configuration from disk. @param xmlData: XML data representing configuration. @type xmlData: String data. @param xmlPath: Path to an XML file on disk. @type xmlPath: Absolute path to a file on disk. @param validate: Validate the document after parsing it. @type validate: Boolean true/false. @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. @raise ValueError: If the parsed configuration document is not valid. """ self._mysql = None self.mysql = None if xmlData is not None and xmlPath is not None: raise ValueError("Use either xmlData or xmlPath, but not both.") if xmlData is not None: self._parseXmlData(xmlData) if validate: self.validate() elif xmlPath is not None: with open(xmlPath) as f: xmlData = f.read() self._parseXmlData(xmlData) if validate: self.validate() def __repr__(self): """ Official string representation for class instance. """ return "LocalConfig(%s)" % (self.mysql) def __str__(self): """ Informal string representation for class instance. 
""" return self.__repr__() def __eq__(self, other): """Equals operator, iplemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, iplemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, iplemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.mysql != other.mysql: if self.mysql < other.mysql: return -1 else: return 1 return 0 def _setMysql(self, value): """ Property target used to set the mysql configuration value. If not C{None}, the value must be a C{MysqlConfig} object. @raise ValueError: If the value is not a C{MysqlConfig} """ if value is None: self._mysql = None else: if not isinstance(value, MysqlConfig): raise ValueError("Value must be a C{MysqlConfig} object.") self._mysql = value def _getMysql(self): """ Property target used to get the mysql configuration value. """ return self._mysql mysql = property(_getMysql, _setMysql, None, "Mysql configuration in terms of a C{MysqlConfig} object.") def validate(self): """ Validates configuration represented by the object. The compress mode must be filled in. Then, if the 'all' flag I{is} set, no databases are allowed, and if the 'all' flag is I{not} set, at least one database is required. @raise ValueError: If one of the validations fails. 
""" if self.mysql is None: raise ValueError("Mysql section is required.") if self.mysql.compressMode is None: raise ValueError("Compress mode value is required.") if self.mysql.all: if self.mysql.databases is not None and self.mysql.databases != []: raise ValueError("Databases cannot be specified if 'all' flag is set.") else: if self.mysql.databases is None or len(self.mysql.databases) < 1: raise ValueError("At least one MySQL database must be indicated if 'all' flag is not set.") def addConfig(self, xmlDom, parentNode): """ Adds a configuration section as the next child of a parent. Third parties should use this function to write configuration related to this extension. We add the following fields to the document:: user //cb_config/mysql/user password //cb_config/mysql/password compressMode //cb_config/mysql/compress_mode all //cb_config/mysql/all We also add groups of the following items, one list element per item:: database //cb_config/mysql/database @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent that the section should be appended to. """ if self.mysql is not None: sectionNode = addContainerNode(xmlDom, parentNode, "mysql") addStringNode(xmlDom, sectionNode, "user", self.mysql.user) addStringNode(xmlDom, sectionNode, "password", self.mysql.password) addStringNode(xmlDom, sectionNode, "compress_mode", self.mysql.compressMode) addBooleanNode(xmlDom, sectionNode, "all", self.mysql.all) if self.mysql.databases is not None: for database in self.mysql.databases: addStringNode(xmlDom, sectionNode, "database", database) def _parseXmlData(self, xmlData): """ Internal method to parse an XML string into the object. This method parses the XML document into a DOM tree (C{xmlDom}) and then calls a static method to parse the mysql configuration section. @param xmlData: XML data to be parsed @type xmlData: String data @raise ValueError: If the XML cannot be successfully parsed. 
""" (xmlDom, parentNode) = createInputDom(xmlData) self._mysql = LocalConfig._parseMysql(parentNode) @staticmethod def _parseMysql(parentNode): """ Parses a mysql configuration section. We read the following fields:: user //cb_config/mysql/user password //cb_config/mysql/password compressMode //cb_config/mysql/compress_mode all //cb_config/mysql/all We also read groups of the following item, one list element per item:: databases //cb_config/mysql/database @param parentNode: Parent node to search beneath. @return: C{MysqlConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. """ mysql = None section = readFirstChild(parentNode, "mysql") if section is not None: mysql = MysqlConfig() mysql.user = readString(section, "user") mysql.password = readString(section, "password") mysql.compressMode = readString(section, "compress_mode") mysql.all = readBoolean(section, "all") mysql.databases = readStringList(section, "database") return mysql ######################################################################## # Public functions ######################################################################## ########################### # executeAction() function ########################### def executeAction(configPath, options, config): """ Executes the MySQL backup action. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. @raise ValueError: Under many generic error conditions @raise IOError: If a backup could not be written for some reason. 
""" logger.debug("Executing MySQL extended action.") if config.options is None or config.collect is None: raise ValueError("Cedar Backup configuration is not properly filled in.") local = LocalConfig(xmlPath=configPath) if local.mysql.all: logger.info("Backing up all databases.") _backupDatabase(config.collect.targetDir, local.mysql.compressMode, local.mysql.user, local.mysql.password, config.options.backupUser, config.options.backupGroup, None) else: logger.debug("Backing up %d individual databases.", len(local.mysql.databases)) for database in local.mysql.databases: logger.info("Backing up database [%s].", database) _backupDatabase(config.collect.targetDir, local.mysql.compressMode, local.mysql.user, local.mysql.password, config.options.backupUser, config.options.backupGroup, database) logger.info("Executed the MySQL extended action successfully.") def _backupDatabase(targetDir, compressMode, user, password, backupUser, backupGroup, database=None): """ Backs up an individual MySQL database, or all databases. This internal method wraps the public method and adds some functionality, like figuring out a filename, etc. @param targetDir: Directory into which backups should be written. @param compressMode: Compress mode to be used for backed-up files. @param user: User to use for connecting to the database (if any). @param password: Password associated with user (if any). @param backupUser: User to own resulting file. @param backupGroup: Group to own resulting file. @param database: Name of database, or C{None} for all databases. @return: Name of the generated backup file. @raise ValueError: If some value is missing or invalid. @raise IOError: If there is a problem executing the MySQL dump. """ (outputFile, filename) = _getOutputFile(targetDir, database, compressMode) with outputFile: backupDatabase(user, password, outputFile, database) if not os.path.exists(filename): raise IOError("Dump file [%s] does not seem to exist after backup completed." 
                    % filename)
   changeOwnership(filename, backupUser, backupGroup)

#pylint: disable=R0204
def _getOutputFile(targetDir, database, compressMode):
   """
   Opens the output file used for saving the MySQL dump.
   The filename is either C{"mysqldump.txt"} or C{"mysqldump-<database>.txt"}.
   The C{".gz"} or C{".bz2"} extension is added if C{compressMode} is
   C{"gzip"} or C{"bzip2"}, respectively.
   @param targetDir: Target directory to write file in.
   @param database: Name of the database (if any)
   @param compressMode: Compress mode to be used for backed-up files.
   @return: Tuple of (output file object, filename), file opened in binary mode for use with executeCommand()
   """
   if database is None:
      filename = os.path.join(targetDir, "mysqldump.txt")
   else:
      filename = os.path.join(targetDir, "mysqldump-%s.txt" % database)
   if compressMode == "gzip":
      filename = "%s.gz" % filename
      outputFile = GzipFile(filename, "wb")
   elif compressMode == "bzip2":
      filename = "%s.bz2" % filename
      outputFile = BZ2File(filename, "wb")
   else:
      outputFile = open(filename, "wb")
   logger.debug("MySQL dump file will be [%s].", filename)
   return (outputFile, filename)


############################
# backupDatabase() function
############################

def backupDatabase(user, password, backupFile, database=None):
   """
   Backs up an individual MySQL database, or all databases.

   This function backs up either a named local MySQL database or all local
   MySQL databases, using the passed-in user and password (if provided) for
   connectivity.  This function call I{always} results in a full backup.
   There is no facility for incremental backups.

   The backup data will be written into the passed-in backup file.  Normally,
   this would be an object as returned from C{open()}, but it is possible to
   use something like a C{GzipFile} to write compressed output.  The caller
   is responsible for closing the passed-in backup file.

   Often, the "root" database user will be used when backing up all databases.
   An alternative is to create a separate MySQL "backup" user and grant that
   user rights to read (but not write) all of the databases that will be
   backed up.

   This function accepts a username and password.  However, you probably do
   not want to pass those values in.  This is because they will be provided
   to C{mysqldump} via the command-line C{--user} and C{--password} switches,
   which will be visible to other users in the process listing.

   Instead, you should configure the username and password in one of MySQL's
   configuration files.  Typically, this would be done by putting a stanza
   like this in C{/root/.my.cnf}, to provide C{mysqldump} with the root
   database username and its password::

      [mysqldump]
      user     = root
      password =

   If you are executing this function as some system user other than root,
   then the C{.my.cnf} file would be placed in the home directory of that
   user.  In either case, make sure to set restrictive permissions
   (typically, mode C{0600}) on C{.my.cnf} to make sure that other users
   cannot read the file.

   @param user: User to use for connecting to the database (if any)
   @type user: String representing MySQL username, or C{None}

   @param password: Password associated with user (if any)
   @type password: String representing MySQL password, or C{None}

   @param backupFile: File used for writing backup.
   @type backupFile: Python file object as from C{open()} or C{file()}.

   @param database: Name of the database to be backed up.
   @type database: String representing database name, or C{None} for all databases.

   @raise ValueError: If some value is missing or invalid.
   @raise IOError: If there is a problem executing the MySQL dump.
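The C{mysqldump} command line built from these parameters can be sketched as standalone logic (a minimal sketch mirroring the argument handling in this function; the helper name is illustrative, and the actual function passes the result to executeCommand()):

```python
def mysqldumpArgs(user=None, password=None, database=None):
   """Assemble the mysqldump argument list the way backupDatabase() does."""
   args = ["-all", "--flush-logs", "--opt"]
   if user is not None:
      args.append("--user=%s" % user)        # visible in process listing; prefer ~/.my.cnf
   if password is not None:
      args.append("--password=%s" % password)  # visible in process listing; prefer ~/.my.cnf
   if database is None:
      args.insert(0, "--all-databases")      # back up everything
   else:
      args.insert(0, "--databases")          # back up one named database
      args.append(database)
   return args
```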
""" args = [ "-all", "--flush-logs", "--opt", ] if user is not None: logger.warning("Warning: MySQL username will be visible in process listing (consider using ~/.my.cnf).") args.append("--user=%s" % user) if password is not None: logger.warning("Warning: MySQL password will be visible in process listing (consider using ~/.my.cnf).") args.append("--password=%s" % password) if database is None: args.insert(0, "--all-databases") else: args.insert(0, "--databases") args.append(database) command = resolveCommand(MYSQLDUMP_COMMAND) result = executeCommand(command, args, returnOutput=False, ignoreStderr=True, doNotLog=True, outputFile=backupFile)[0] if result != 0: if database is None: raise IOError("Error [%d] executing MySQL database dump for all databases." % result) else: raise IOError("Error [%d] executing MySQL database dump for database [%s]." % (result, database)) CedarBackup3-3.1.6/CedarBackup3/extend/capacity.py0000664000175000017500000005167512560171077023403 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2008,2010,2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. 
Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Provides an extension to check remaining media capacity. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides an extension to check remaining media capacity. Some users have asked for advance warning that their media is beginning to fill up. This is an extension that checks the current capacity of the media in the writer, and prints a warning if the media is more than X% full, or has fewer than X bytes of capacity remaining. @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import logging from functools import total_ordering # Cedar Backup modules from CedarBackup3.util import displayBytes from CedarBackup3.config import ByteQuantity, readByteQuantity, addByteQuantityNode from CedarBackup3.xmlutil import createInputDom, addContainerNode, addStringNode from CedarBackup3.xmlutil import readFirstChild, readString from CedarBackup3.actions.util import createWriter, checkMediaState ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup3.log.extend.capacity") ######################################################################## # Percentage class definition ######################################################################## @total_ordering class PercentageQuantity(object): """ Class representing a percentage quantity. The percentage is maintained internally as a string so that issues of precision can be avoided. 
   It really isn't possible to store a floating point number here while being
   able to losslessly translate back and forth between XML and object
   representations.  (Perhaps the Python 2.4 Decimal class would have been an
   option, but I originally wanted to stay compatible with Python 2.3.)

   Even though the quantity is maintained as a string, the string must
   represent a valid positive floating point number.  Technically, any
   floating point string format supported by Python is allowable.  However,
   it does not make sense to have a negative percentage in this context.

   @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, quantity
   """

   def __init__(self, quantity=None):
      """
      Constructor for the C{PercentageQuantity} class.
      @param quantity: Percentage quantity, as a string (i.e. "99.9" or "12")
      @raise ValueError: If the quantity value is invalid.
      """
      self._quantity = None
      self.quantity = quantity

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "PercentageQuantity(%s)" % (self.quantity)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __eq__(self, other):
      """Equals operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) == 0

   def __lt__(self, other):
      """Less-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) < 0

   def __gt__(self, other):
      """Greater-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) > 0

   def __cmp__(self, other):
      """
      Original Python 2 comparison operator.
      Lists within this class are "unordered" for equality comparisons.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
""" if other is None: return 1 if self.quantity != other.quantity: if float(self.quantity or 0.0) < float(other.quantity or 0.0): return -1 else: return 1 return 0 def _setQuantity(self, value): """ Property target used to set the quantity The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. @raise ValueError: If the value is not a valid floating point number @raise ValueError: If the value is less than zero """ if value is not None: if len(value) < 1: raise ValueError("Percentage must be a non-empty string.") floatValue = float(value) if floatValue < 0.0 or floatValue > 100.0: raise ValueError("Percentage must be a positive value from 0.0 to 100.0") self._quantity = value # keep around string def _getQuantity(self): """ Property target used to get the quantity. """ return self._quantity def _getPercentage(self): """ Property target used to get the quantity as a floating point number. If there is no quantity set, then a value of 0.0 is returned. """ if self.quantity is not None: return float(self.quantity) return 0.0 quantity = property(_getQuantity, _setQuantity, None, doc="Percentage value, as a string") percentage = property(_getPercentage, None, None, "Percentage value, as a floating point number.") ######################################################################## # CapacityConfig class definition ######################################################################## @total_ordering class CapacityConfig(object): """ Class representing capacity configuration. The following restrictions exist on data in this class: - The maximum percentage utilized must be a PercentageQuantity - The minimum bytes remaining must be a ByteQuantity @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, maxPercentage, minBytes """ def __init__(self, maxPercentage=None, minBytes=None): """ Constructor for the C{CapacityConfig} class. 
      @param maxPercentage: Maximum percentage of the media that may be utilized
      @param minBytes: Minimum number of free bytes that must be available
      """
      self._maxPercentage = None
      self._minBytes = None
      self.maxPercentage = maxPercentage
      self.minBytes = minBytes

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "CapacityConfig(%s, %s)" % (self.maxPercentage, self.minBytes)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __eq__(self, other):
      """Equals operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) == 0

   def __lt__(self, other):
      """Less-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) < 0

   def __gt__(self, other):
      """Greater-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) > 0

   def __cmp__(self, other):
      """
      Original Python 2 comparison operator.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.maxPercentage != other.maxPercentage:
         if (self.maxPercentage or PercentageQuantity()) < (other.maxPercentage or PercentageQuantity()):
            return -1
         else:
            return 1
      if self.minBytes != other.minBytes:
         if (self.minBytes or ByteQuantity()) < (other.minBytes or ByteQuantity()):
            return -1
         else:
            return 1
      return 0

   def _setMaxPercentage(self, value):
      """
      Property target used to set the maxPercentage value.
      If not C{None}, the value must be a C{PercentageQuantity} object.
@raise ValueError: If the value is not a C{PercentageQuantity} """ if value is None: self._maxPercentage = None else: if not isinstance(value, PercentageQuantity): raise ValueError("Value must be a C{PercentageQuantity} object.") self._maxPercentage = value def _getMaxPercentage(self): """ Property target used to get the maxPercentage value """ return self._maxPercentage def _setMinBytes(self, value): """ Property target used to set the bytes utilized value. If not C{None}, the value must be a C{ByteQuantity} object. @raise ValueError: If the value is not a C{ByteQuantity} """ if value is None: self._minBytes = None else: if not isinstance(value, ByteQuantity): raise ValueError("Value must be a C{ByteQuantity} object.") self._minBytes = value def _getMinBytes(self): """ Property target used to get the bytes remaining value. """ return self._minBytes maxPercentage = property(_getMaxPercentage, _setMaxPercentage, None, "Maximum percentage of the media that may be utilized.") minBytes = property(_getMinBytes, _setMinBytes, None, "Minimum number of free bytes that must be available.") ######################################################################## # LocalConfig class definition ######################################################################## @total_ordering class LocalConfig(object): """ Class representing this extension's configuration document. This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit specific configuration values to this extension. Third parties who need to read and write configuration related to this extension should access it through the constructor, C{validate} and C{addConfig} methods. @note: Lists within this class are "unordered" for equality comparisons. 
@sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, capacity, validate, addConfig """ def __init__(self, xmlData=None, xmlPath=None, validate=True): """ Initializes a configuration object. If you initialize the object without passing either C{xmlData} or C{xmlPath} then configuration will be empty and will be invalid until it is filled in properly. No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded. Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if C{validate} is C{False}, it might not be possible to parse the passed-in XML document if lower-level validations fail. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to read in invalid configuration from disk. @param xmlData: XML data representing configuration. @type xmlData: String data. @param xmlPath: Path to an XML file on disk. @type xmlPath: Absolute path to a file on disk. @param validate: Validate the document after parsing it. @type validate: Boolean true/false. @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. @raise ValueError: If the parsed configuration document is not valid. """ self._capacity = None self.capacity = None if xmlData is not None and xmlPath is not None: raise ValueError("Use either xmlData or xmlPath, but not both.") if xmlData is not None: self._parseXmlData(xmlData) if validate: self.validate() elif xmlPath is not None: with open(xmlPath) as f: xmlData = f.read() self._parseXmlData(xmlData) if validate: self.validate() def __repr__(self): """ Official string representation for class instance. 
""" return "LocalConfig(%s)" % (self.capacity) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __eq__(self, other): """Equals operator, iplemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, iplemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, iplemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.capacity != other.capacity: if self.capacity < other.capacity: return -1 else: return 1 return 0 def _setCapacity(self, value): """ Property target used to set the capacity configuration value. If not C{None}, the value must be a C{CapacityConfig} object. @raise ValueError: If the value is not a C{CapacityConfig} """ if value is None: self._capacity = None else: if not isinstance(value, CapacityConfig): raise ValueError("Value must be a C{CapacityConfig} object.") self._capacity = value def _getCapacity(self): """ Property target used to get the capacity configuration value. """ return self._capacity capacity = property(_getCapacity, _setCapacity, None, "Capacity configuration in terms of a C{CapacityConfig} object.") def validate(self): """ Validates configuration represented by the object. THere must be either a percentage, or a byte capacity, but not both. @raise ValueError: If one of the validations fails. 
""" if self.capacity is None: raise ValueError("Capacity section is required.") if self.capacity.maxPercentage is None and self.capacity.minBytes is None: raise ValueError("Must provide either max percentage or min bytes.") if self.capacity.maxPercentage is not None and self.capacity.minBytes is not None: raise ValueError("Must provide either max percentage or min bytes, but not both.") def addConfig(self, xmlDom, parentNode): """ Adds a configuration section as the next child of a parent. Third parties should use this function to write configuration related to this extension. We add the following fields to the document:: maxPercentage //cb_config/capacity/max_percentage minBytes //cb_config/capacity/min_bytes @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent that the section should be appended to. """ if self.capacity is not None: sectionNode = addContainerNode(xmlDom, parentNode, "capacity") LocalConfig._addPercentageQuantity(xmlDom, sectionNode, "max_percentage", self.capacity.maxPercentage) if self.capacity.minBytes is not None: # because utility function fills in empty section on None addByteQuantityNode(xmlDom, sectionNode, "min_bytes", self.capacity.minBytes) def _parseXmlData(self, xmlData): """ Internal method to parse an XML string into the object. This method parses the XML document into a DOM tree (C{xmlDom}) and then calls a static method to parse the capacity configuration section. @param xmlData: XML data to be parsed @type xmlData: String data @raise ValueError: If the XML cannot be successfully parsed. """ (xmlDom, parentNode) = createInputDom(xmlData) self._capacity = LocalConfig._parseCapacity(parentNode) @staticmethod def _parseCapacity(parentNode): """ Parses a capacity configuration section. We read the following fields:: maxPercentage //cb_config/capacity/max_percentage minBytes //cb_config/capacity/min_bytes @param parentNode: Parent node to search beneath. 
@return: C{CapacityConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. """ capacity = None section = readFirstChild(parentNode, "capacity") if section is not None: capacity = CapacityConfig() capacity.maxPercentage = LocalConfig._readPercentageQuantity(section, "max_percentage") capacity.minBytes = readByteQuantity(section, "min_bytes") return capacity @staticmethod def _readPercentageQuantity(parent, name): """ Read a percentage quantity value from an XML document. @param parent: Parent node to search beneath. @param name: Name of node to search for. @return: Percentage quantity parsed from XML document """ quantity = readString(parent, name) if quantity is None: return None return PercentageQuantity(quantity) @staticmethod def _addPercentageQuantity(xmlDom, parentNode, nodeName, percentageQuantity): """ Adds a text node as the next child of a parent, to contain a percentage quantity. If the C{percentageQuantity} is None, then no node will be created. @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent node to create child for. @param nodeName: Name of the new container node. @param percentageQuantity: PercentageQuantity object to put into the XML document @return: Reference to the newly-created node. """ if percentageQuantity is not None: addStringNode(xmlDom, parentNode, nodeName, percentageQuantity.quantity) ######################################################################## # Public functions ######################################################################## ########################### # executeAction() function ########################### def executeAction(configPath, options, config): """ Executes the capacity action. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. 
@type config: Config object. @raise ValueError: Under many generic error conditions @raise IOError: If there are I/O problems reading or writing files """ logger.debug("Executing capacity extended action.") if config.options is None or config.store is None: raise ValueError("Cedar Backup configuration is not properly filled in.") local = LocalConfig(xmlPath=configPath) if config.store.checkMedia: checkMediaState(config.store) # raises exception if media is not initialized capacity = createWriter(config).retrieveCapacity() logger.debug("Media capacity: %s", capacity) if local.capacity.maxPercentage is not None: if capacity.utilized > local.capacity.maxPercentage.percentage: logger.error("Media has reached capacity limit of %s%%: %.2f%% utilized", local.capacity.maxPercentage.quantity, capacity.utilized) else: if capacity.bytesAvailable < local.capacity.minBytes: logger.error("Media has reached capacity limit of %s: only %s available", local.capacity.minBytes, displayBytes(capacity.bytesAvailable)) logger.info("Executed the capacity extended action successfully.") CedarBackup3-3.1.6/CedarBackup3/extend/amazons3.py0000664000175000017500000010434012642031022023307 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2014-2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
# # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Official Cedar Backup Extensions # Purpose : "Store" type extension that writes data to Amazon S3. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Store-type extension that writes data to Amazon S3. This extension requires a new configuration section and is intended to be run immediately after the standard stage action, replacing the standard store action. Aside from its own configuration, it requires the options and staging configuration sections in the standard Cedar Backup configuration file. Since it is intended to replace the store action, it does not rely on any store configuration. The underlying functionality relies on the U{AWS CLI interface }. Before you use this extension, you need to set up your Amazon S3 account and configure the AWS CLI connection per Amazon's documentation. The extension assumes that the backup is being executed as root, and switches over to the configured backup user to communicate with AWS. So, make sure you configure AWS CLI as the backup user and not root. You can optionally configure Cedar Backup to encrypt data before sending it to S3. To do that, provide a complete command line using the C{${input}} and C{${output}} variables to represent the original input file and the encrypted output file. This command will be executed as the backup user. 
For instance, you can use something like this with GPG:: /usr/bin/gpg -c --no-use-agent --batch --yes --passphrase-file /home/backup/.passphrase -o ${output} ${input} The GPG mechanism depends on a strong passphrase for security. One way to generate a strong passphrase is using your system random number generator, i.e.:: dd if=/dev/urandom count=20 bs=1 | xxd -ps (See U{StackExchange } for more details about that advice.) If you decide to use encryption, make sure you save off the passphrase in a safe place, so you can get at your backup data later if you need to. And obviously, make sure to set permissions on the passphrase file so it can only be read by the backup user. This extension was written for and tested on Linux. It will throw an exception if run on Windows. @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import sys import os import logging import tempfile import datetime import json import shutil from functools import total_ordering # Cedar Backup modules from CedarBackup3.filesystem import FilesystemList, BackupFileList from CedarBackup3.util import resolveCommand, executeCommand, isRunningAsRoot, changeOwnership, isStartOfWeek from CedarBackup3.util import displayBytes, UNIT_BYTES from CedarBackup3.xmlutil import createInputDom, addContainerNode, addBooleanNode, addStringNode from CedarBackup3.xmlutil import readFirstChild, readString, readBoolean from CedarBackup3.actions.util import writeIndicatorFile from CedarBackup3.actions.constants import DIR_TIME_FORMAT, STAGE_INDICATOR from CedarBackup3.config import ByteQuantity, readByteQuantity, addByteQuantityNode ######################################################################## # Module-wide constants and variables ######################################################################## logger = 
logging.getLogger("CedarBackup3.log.extend.amazons3") SU_COMMAND = [ "su" ] AWS_COMMAND = [ "aws" ] STORE_INDICATOR = "cback.amazons3" ######################################################################## # AmazonS3Config class definition ######################################################################## @total_ordering class AmazonS3Config(object): """ Class representing Amazon S3 configuration. Amazon S3 configuration is used for storing backup data in Amazon's S3 cloud storage using the AWS CLI (C{aws}) tool. The following restrictions exist on data in this class: - The s3Bucket value must be a non-empty string - The encryptCommand value, if set, must be a non-empty string - The full backup size limit, if set, must be a ByteQuantity >= 0 - The incremental backup size limit, if set, must be a ByteQuantity >= 0 @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, warnMidnite, s3Bucket """ def __init__(self, warnMidnite=None, s3Bucket=None, encryptCommand=None, fullBackupSizeLimit=None, incrementalBackupSizeLimit=None): """ Constructor for the C{AmazonS3Config} class. @param warnMidnite: Whether to generate warnings for crossing midnite. @param s3Bucket: Name of the Amazon S3 bucket in which to store the data @param encryptCommand: Command used to encrypt backup data before upload to S3 @param fullBackupSizeLimit: Maximum size of a full backup, a ByteQuantity @param incrementalBackupSizeLimit: Maximum size of an incremental backup, a ByteQuantity @raise ValueError: If one of the values is invalid. """ self._warnMidnite = None self._s3Bucket = None self._encryptCommand = None self._fullBackupSizeLimit = None self._incrementalBackupSizeLimit = None self.warnMidnite = warnMidnite self.s3Bucket = s3Bucket self.encryptCommand = encryptCommand self.fullBackupSizeLimit = fullBackupSizeLimit self.incrementalBackupSizeLimit = incrementalBackupSizeLimit def __repr__(self): """ Official string representation for class instance. 
""" return "AmazonS3Config(%s, %s, %s, %s, %s)" % (self.warnMidnite, self.s3Bucket, self.encryptCommand, self.fullBackupSizeLimit, self.incrementalBackupSizeLimit) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __eq__(self, other): """Equals operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.warnMidnite != other.warnMidnite: if self.warnMidnite < other.warnMidnite: return -1 else: return 1 if self.s3Bucket != other.s3Bucket: if str(self.s3Bucket or "") < str(other.s3Bucket or ""): return -1 else: return 1 if self.encryptCommand != other.encryptCommand: if str(self.encryptCommand or "") < str(other.encryptCommand or ""): return -1 else: return 1 if self.fullBackupSizeLimit != other.fullBackupSizeLimit: if (self.fullBackupSizeLimit or ByteQuantity()) < (other.fullBackupSizeLimit or ByteQuantity()): return -1 else: return 1 if self.incrementalBackupSizeLimit != other.incrementalBackupSizeLimit: if (self.incrementalBackupSizeLimit or ByteQuantity()) < (other.incrementalBackupSizeLimit or ByteQuantity()): return -1 else: return 1 return 0 def _setWarnMidnite(self, value): """ Property target used to set the midnite warning flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._warnMidnite = True else: self._warnMidnite = False def _getWarnMidnite(self): """ Property target used to get the midnite warning flag. 
""" return self._warnMidnite def _setS3Bucket(self, value): """ Property target used to set the S3 bucket. """ if value is not None: if len(value) < 1: raise ValueError("S3 bucket must be non-empty string.") self._s3Bucket = value def _getS3Bucket(self): """ Property target used to get the S3 bucket. """ return self._s3Bucket def _setEncryptCommand(self, value): """ Property target used to set the encrypt command. """ if value is not None: if len(value) < 1: raise ValueError("Encrypt command must be non-empty string.") self._encryptCommand = value def _getEncryptCommand(self): """ Property target used to get the encrypt command. """ return self._encryptCommand def _setFullBackupSizeLimit(self, value): """ Property target used to set the full backup size limit. The value must be a C{ByteQuantity}, or a simple numeric value (interpreted as bytes) that can be converted to one. @raise ValueError: If the value is not valid. """ if value is None: self._fullBackupSizeLimit = None else: if isinstance(value, ByteQuantity): self._fullBackupSizeLimit = value else: self._fullBackupSizeLimit = ByteQuantity(value, UNIT_BYTES) def _getFullBackupSizeLimit(self): """ Property target used to get the full backup size limit. """ return self._fullBackupSizeLimit def _setIncrementalBackupSizeLimit(self, value): """ Property target used to set the incremental backup size limit. The value must be a C{ByteQuantity}, or a simple numeric value (interpreted as bytes) that can be converted to one. @raise ValueError: If the value is not valid. """ if value is None: self._incrementalBackupSizeLimit = None else: if isinstance(value, ByteQuantity): self._incrementalBackupSizeLimit = value else: self._incrementalBackupSizeLimit = ByteQuantity(value, UNIT_BYTES) def _getIncrementalBackupSizeLimit(self): """ Property target used to get the incremental backup size limit. 
""" return self._incrementalBackupSizeLimit warnMidnite = property(_getWarnMidnite, _setWarnMidnite, None, "Whether to generate warnings for crossing midnite.") s3Bucket = property(_getS3Bucket, _setS3Bucket, None, doc="Amazon S3 Bucket in which to store data") encryptCommand = property(_getEncryptCommand, _setEncryptCommand, None, doc="Command used to encrypt data before upload to S3") fullBackupSizeLimit = property(_getFullBackupSizeLimit, _setFullBackupSizeLimit, None, doc="Maximum size of a full backup, as a ByteQuantity") incrementalBackupSizeLimit = property(_getIncrementalBackupSizeLimit, _setIncrementalBackupSizeLimit, None, doc="Maximum size of an incremental backup, as a ByteQuantity") ######################################################################## # LocalConfig class definition ######################################################################## @total_ordering class LocalConfig(object): """ Class representing this extension's configuration document. This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit amazons3-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, C{validate} and C{addConfig} methods. @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, amazons3, validate, addConfig """ def __init__(self, xmlData=None, xmlPath=None, validate=True): """ Initializes a configuration object. If you initialize the object without passing either C{xmlData} or C{xmlPath} then configuration will be empty and will be invalid until it is filled in properly. No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded. 
Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if C{validate} is C{False}, it might not be possible to parse the passed-in XML document if lower-level validations fail. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to read in invalid configuration from disk. @param xmlData: XML data representing configuration. @type xmlData: String data. @param xmlPath: Path to an XML file on disk. @type xmlPath: Absolute path to a file on disk. @param validate: Validate the document after parsing it. @type validate: Boolean true/false. @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. @raise ValueError: If the parsed configuration document is not valid. """ self._amazons3 = None self.amazons3 = None if xmlData is not None and xmlPath is not None: raise ValueError("Use either xmlData or xmlPath, but not both.") if xmlData is not None: self._parseXmlData(xmlData) if validate: self.validate() elif xmlPath is not None: with open(xmlPath) as f: xmlData = f.read() self._parseXmlData(xmlData) if validate: self.validate() def __repr__(self): """ Official string representation for class instance. """ return "LocalConfig(%s)" % (self.amazons3) def __str__(self): """ Informal string representation for class instance. 
""" return self.__repr__() def __eq__(self, other): """Equals operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.amazons3 != other.amazons3: if self.amazons3 < other.amazons3: return -1 else: return 1 return 0 def _setAmazonS3(self, value): """ Property target used to set the amazons3 configuration value. If not C{None}, the value must be a C{AmazonS3Config} object. @raise ValueError: If the value is not a C{AmazonS3Config} """ if value is None: self._amazons3 = None else: if not isinstance(value, AmazonS3Config): raise ValueError("Value must be a C{AmazonS3Config} object.") self._amazons3 = value def _getAmazonS3(self): """ Property target used to get the amazons3 configuration value. """ return self._amazons3 amazons3 = property(_getAmazonS3, _setAmazonS3, None, "AmazonS3 configuration in terms of a C{AmazonS3Config} object.") def validate(self): """ Validates configuration represented by the object. AmazonS3 configuration must be filled in. Within that, the s3Bucket target must be filled in. @raise ValueError: If one of the validations fails. """ if self.amazons3 is None: raise ValueError("AmazonS3 section is required.") if self.amazons3.s3Bucket is None: raise ValueError("AmazonS3 s3Bucket must be set.") def addConfig(self, xmlDom, parentNode): """ Adds a configuration section as the next child of a parent. 
Third parties should use this function to write configuration related to this extension. We add the following fields to the document:: warnMidnite //cb_config/amazons3/warn_midnite s3Bucket //cb_config/amazons3/s3_bucket encryptCommand //cb_config/amazons3/encrypt fullBackupSizeLimit //cb_config/amazons3/full_size_limit incrementalBackupSizeLimit //cb_config/amazons3/incr_size_limit @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent that the section should be appended to. """ if self.amazons3 is not None: sectionNode = addContainerNode(xmlDom, parentNode, "amazons3") addBooleanNode(xmlDom, sectionNode, "warn_midnite", self.amazons3.warnMidnite) addStringNode(xmlDom, sectionNode, "s3_bucket", self.amazons3.s3Bucket) addStringNode(xmlDom, sectionNode, "encrypt", self.amazons3.encryptCommand) addByteQuantityNode(xmlDom, sectionNode, "full_size_limit", self.amazons3.fullBackupSizeLimit) addByteQuantityNode(xmlDom, sectionNode, "incr_size_limit", self.amazons3.incrementalBackupSizeLimit) def _parseXmlData(self, xmlData): """ Internal method to parse an XML string into the object. This method parses the XML document into a DOM tree (C{xmlDom}) and then calls a static method to parse the amazons3 configuration section. @param xmlData: XML data to be parsed @type xmlData: String data @raise ValueError: If the XML cannot be successfully parsed. """ (xmlDom, parentNode) = createInputDom(xmlData) self._amazons3 = LocalConfig._parseAmazonS3(parentNode) @staticmethod def _parseAmazonS3(parent): """ Parses an amazons3 configuration section. We read the following individual fields:: warnMidnite //cb_config/amazons3/warn_midnite s3Bucket //cb_config/amazons3/s3_bucket encryptCommand //cb_config/amazons3/encrypt fullBackupSizeLimit //cb_config/amazons3/full_size_limit incrementalBackupSizeLimit //cb_config/amazons3/incr_size_limit @param parent: Parent node to search beneath. 
@return: C{AmazonS3Config} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. """ amazons3 = None section = readFirstChild(parent, "amazons3") if section is not None: amazons3 = AmazonS3Config() amazons3.warnMidnite = readBoolean(section, "warn_midnite") amazons3.s3Bucket = readString(section, "s3_bucket") amazons3.encryptCommand = readString(section, "encrypt") amazons3.fullBackupSizeLimit = readByteQuantity(section, "full_size_limit") amazons3.incrementalBackupSizeLimit = readByteQuantity(section, "incr_size_limit") return amazons3 ######################################################################## # Public functions ######################################################################## ########################### # executeAction() function ########################### def executeAction(configPath, options, config): """ Executes the amazons3 backup action. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. 
@raise ValueError: Under many generic error conditions @raise IOError: If there are I/O problems reading or writing files """ logger.debug("Executing amazons3 extended action.") if not isRunningAsRoot(): logger.error("Error: the amazons3 extended action must be run as root.") raise ValueError("The amazons3 extended action must be run as root.") if sys.platform == "win32": logger.error("Error: the amazons3 extended action is not supported on Windows.") raise ValueError("The amazons3 extended action is not supported on Windows.") if config.options is None or config.stage is None: raise ValueError("Cedar Backup configuration is not properly filled in.") local = LocalConfig(xmlPath=configPath) stagingDirs = _findCorrectDailyDir(options, config, local) _applySizeLimits(options, config, local, stagingDirs) _writeToAmazonS3(config, local, stagingDirs) _writeStoreIndicator(config, stagingDirs) logger.info("Executed the amazons3 extended action successfully.") ######################################################################## # Private utility functions ######################################################################## ######################### # _findCorrectDailyDir() ######################### def _findCorrectDailyDir(options, config, local): """ Finds the correct daily staging directory to be written to Amazon S3. This is substantially similar to the same function in store.py. The main difference is that it doesn't rely on store configuration at all. @param options: Options object. @param config: Config object. @param local: Local config object. @return: Correct staging dir, as a dict mapping directory to date suffix. @raise IOError: If the staging directory cannot be found. 
""" oneDay = datetime.timedelta(days=1) today = datetime.date.today() yesterday = today - oneDay tomorrow = today + oneDay todayDate = today.strftime(DIR_TIME_FORMAT) yesterdayDate = yesterday.strftime(DIR_TIME_FORMAT) tomorrowDate = tomorrow.strftime(DIR_TIME_FORMAT) todayPath = os.path.join(config.stage.targetDir, todayDate) yesterdayPath = os.path.join(config.stage.targetDir, yesterdayDate) tomorrowPath = os.path.join(config.stage.targetDir, tomorrowDate) todayStageInd = os.path.join(todayPath, STAGE_INDICATOR) yesterdayStageInd = os.path.join(yesterdayPath, STAGE_INDICATOR) tomorrowStageInd = os.path.join(tomorrowPath, STAGE_INDICATOR) todayStoreInd = os.path.join(todayPath, STORE_INDICATOR) yesterdayStoreInd = os.path.join(yesterdayPath, STORE_INDICATOR) tomorrowStoreInd = os.path.join(tomorrowPath, STORE_INDICATOR) if options.full: if os.path.isdir(todayPath) and os.path.exists(todayStageInd): logger.info("Amazon S3 process will use current day's staging directory [%s]", todayPath) return { todayPath:todayDate } raise IOError("Unable to find staging directory to process (only tried today due to full option).") else: if os.path.isdir(todayPath) and os.path.exists(todayStageInd) and not os.path.exists(todayStoreInd): logger.info("Amazon S3 process will use current day's staging directory [%s]", todayPath) return { todayPath:todayDate } elif os.path.isdir(yesterdayPath) and os.path.exists(yesterdayStageInd) and not os.path.exists(yesterdayStoreInd): logger.info("Amazon S3 process will use previous day's staging directory [%s]", yesterdayPath) if local.amazons3.warnMidnite: logger.warning("Warning: Amazon S3 process crossed midnite boundary to find data.") return { yesterdayPath:yesterdayDate } elif os.path.isdir(tomorrowPath) and os.path.exists(tomorrowStageInd) and not os.path.exists(tomorrowStoreInd): logger.info("Amazon S3 process will use next day's staging directory [%s]", tomorrowPath) if local.amazons3.warnMidnite: logger.warning("Warning: Amazon S3 
process crossed midnite boundary to find data.") return { tomorrowPath:tomorrowDate } raise IOError("Unable to find unused staging directory to process (tried today, yesterday, tomorrow).") ############################## # _applySizeLimits() function ############################## def _applySizeLimits(options, config, local, stagingDirs): """ Apply size limits, throwing an exception if any limits are exceeded. Size limits are optional. If a limit is set to None, it does not apply. The full size limit applies if the full option is set or if today is the start of the week. The incremental size limit applies otherwise. Limits are applied to the total size of all the relevant staging directories. @param options: Options object. @param config: Config object. @param local: Local config object. @param stagingDirs: Dictionary mapping directory path to date suffix. @raise ValueError: Under many generic error conditions @raise ValueError: If a size limit has been exceeded """ if options.full or isStartOfWeek(config.options.startingDay): logger.debug("Using Amazon S3 size limit for full backups.") limit = local.amazons3.fullBackupSizeLimit else: logger.debug("Using Amazon S3 size limit for incremental backups.") limit = local.amazons3.incrementalBackupSizeLimit if limit is None: logger.debug("No Amazon S3 size limit will be applied.") else: logger.debug("Amazon S3 size limit is: %s", limit) contents = BackupFileList() for stagingDir in stagingDirs: contents.addDirContents(stagingDir) total = contents.totalSize() logger.debug("Amazon S3 backup size is: %s", displayBytes(total)) if total > limit: logger.error("Amazon S3 size limit exceeded: %s > %s", displayBytes(total), limit) raise ValueError("Amazon S3 size limit exceeded: %s > %s" % (displayBytes(total), limit)) else: logger.info("Total size does not exceed Amazon S3 size limit, so backup can continue.") ############################## # _writeToAmazonS3() function ############################## def _writeToAmazonS3(config, 
local, stagingDirs): """ Writes the indicated staging directories to an Amazon S3 bucket. Each of the staging directories listed in C{stagingDirs} will be written to the configured Amazon S3 bucket from local configuration. The directories will be placed into the image at the root by date, so staging directory C{/opt/stage/2005/02/10} will be placed into the S3 bucket at C{/2005/02/10}. If an encrypt command is provided, the files will be encrypted first. @param config: Config object. @param local: Local config object. @param stagingDirs: Dictionary mapping directory path to date suffix. @raise ValueError: Under many generic error conditions @raise IOError: If there is a problem writing to Amazon S3 """ for stagingDir in list(stagingDirs.keys()): logger.debug("Storing stage directory to Amazon S3 [%s].", stagingDir) dateSuffix = stagingDirs[stagingDir] s3BucketUrl = "s3://%s/%s" % (local.amazons3.s3Bucket, dateSuffix) logger.debug("S3 bucket URL is [%s]", s3BucketUrl) _clearExistingBackup(config, s3BucketUrl) if local.amazons3.encryptCommand is None: logger.debug("Encryption is disabled; files will be uploaded in cleartext.") _uploadStagingDir(config, stagingDir, s3BucketUrl) _verifyUpload(config, stagingDir, s3BucketUrl) else: logger.debug("Encryption is enabled; files will be uploaded after being encrypted.") encryptedDir = tempfile.mkdtemp(dir=config.options.workingDir) changeOwnership(encryptedDir, config.options.backupUser, config.options.backupGroup) try: _encryptStagingDir(config, local, stagingDir, encryptedDir) _uploadStagingDir(config, encryptedDir, s3BucketUrl) _verifyUpload(config, encryptedDir, s3BucketUrl) finally: if os.path.exists(encryptedDir): shutil.rmtree(encryptedDir) ################################## # _writeStoreIndicator() function ################################## def _writeStoreIndicator(config, stagingDirs): """ Writes a store indicator file into staging directories. @param config: Config object. 
@param stagingDirs: Dictionary mapping directory path to date suffix. """ for stagingDir in list(stagingDirs.keys()): writeIndicatorFile(stagingDir, STORE_INDICATOR, config.options.backupUser, config.options.backupGroup) ################################## # _clearExistingBackup() function ################################## def _clearExistingBackup(config, s3BucketUrl): """ Clear any existing backup files for an S3 bucket URL. @param config: Config object. @param s3BucketUrl: S3 bucket URL associated with the staging directory """ suCommand = resolveCommand(SU_COMMAND) awsCommand = resolveCommand(AWS_COMMAND) actualCommand = "%s s3 rm --recursive %s/" % (awsCommand[0], s3BucketUrl) result = executeCommand(suCommand, [config.options.backupUser, "-c", actualCommand])[0] if result != 0: raise IOError("Error [%d] calling AWS CLI to clear existing backup for [%s]." % (result, s3BucketUrl)) logger.debug("Completed clearing any existing backup in S3 for [%s]", s3BucketUrl) ############################### # _uploadStagingDir() function ############################### def _uploadStagingDir(config, stagingDir, s3BucketUrl): """ Upload the contents of a staging directory out to the Amazon S3 cloud. @param config: Config object. @param stagingDir: Staging directory to upload @param s3BucketUrl: S3 bucket URL associated with the staging directory """ suCommand = resolveCommand(SU_COMMAND) awsCommand = resolveCommand(AWS_COMMAND) actualCommand = "%s s3 cp --recursive %s/ %s/" % (awsCommand[0], stagingDir, s3BucketUrl) result = executeCommand(suCommand, [config.options.backupUser, "-c", actualCommand])[0] if result != 0: raise IOError("Error [%d] calling AWS CLI to upload staging directory to [%s]." 
% (result, s3BucketUrl)) logger.debug("Completed uploading staging dir [%s] to [%s]", stagingDir, s3BucketUrl) ########################### # _verifyUpload() function ########################### def _verifyUpload(config, stagingDir, s3BucketUrl): """ Verify that a staging directory was properly uploaded to the Amazon S3 cloud. @param config: Config object. @param stagingDir: Staging directory to verify @param s3BucketUrl: S3 bucket URL associated with the staging directory """ (bucket, prefix) = s3BucketUrl.replace("s3://", "").split("/", 1) suCommand = resolveCommand(SU_COMMAND) awsCommand = resolveCommand(AWS_COMMAND) query = "Contents[].{Key: Key, Size: Size}" actualCommand = "%s s3api list-objects --bucket %s --prefix %s --query '%s'" % (awsCommand[0], bucket, prefix, query) (result, data) = executeCommand(suCommand, [config.options.backupUser, "-c", actualCommand], returnOutput=True) if result != 0: raise IOError("Error [%d] calling AWS CLI to verify upload to [%s]." % (result, s3BucketUrl)) contents = { } for entry in json.loads("".join(data)): key = entry["Key"].replace(prefix, "") size = int(entry["Size"]) contents[key] = size files = FilesystemList() files.addDirContents(stagingDir) for entry in files: if os.path.isfile(entry): key = entry.replace(stagingDir, "") size = int(os.stat(entry).st_size) if key not in contents: raise IOError("File was apparently not uploaded: [%s]" % entry) else: if size != contents[key]: raise IOError("File size differs [%s], expected %s bytes but got %s bytes" % (entry, size, contents[key])) logger.debug("Completed verifying upload from [%s] to [%s].", stagingDir, s3BucketUrl) ################################ # _encryptStagingDir() function ################################ def _encryptStagingDir(config, local, stagingDir, encryptedDir): """ Encrypt a staging directory, creating a new directory in the process. @param config: Config object. 
@param stagingDir: Staging directory to use as source @param encryptedDir: Target directory into which encrypted files should be written """ suCommand = resolveCommand(SU_COMMAND) files = FilesystemList() files.addDirContents(stagingDir) for cleartext in files: if os.path.isfile(cleartext): encrypted = "%s%s" % (encryptedDir, cleartext.replace(stagingDir, "")) subdir = os.path.dirname(encrypted) if not os.path.isdir(subdir): os.makedirs(subdir) changeOwnership(subdir, config.options.backupUser, config.options.backupGroup) if int(os.stat(cleartext).st_size) == 0: with open(encrypted, 'a'): pass # don't bother encrypting empty files else: actualCommand = local.amazons3.encryptCommand.replace("${input}", cleartext).replace("${output}", encrypted) result = executeCommand(suCommand, [config.options.backupUser, "-c", actualCommand])[0] if result != 0: raise IOError("Error [%d] encrypting [%s]." % (result, cleartext)) logger.debug("Completed encrypting staging directory [%s] into [%s]", stagingDir, encryptedDir) CedarBackup3-3.1.6/CedarBackup3/extend/split.py # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2007,2010,2013,2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
# # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Official Cedar Backup Extensions # Purpose : Provides an extension to split up large files in staging directories. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides an extension to split up large files in staging directories. When this extension is executed, it will look through the configured Cedar Backup staging directory for files exceeding a specified size limit, and split them down into smaller files using the 'split' utility. Any directory which has already been split (as indicated by the C{cback.split} file) will be ignored. This extension requires a new configuration section and is intended to be run immediately after the standard stage action or immediately before the standard store action. Aside from its own configuration, it requires the options and staging configuration sections in the standard Cedar Backup configuration file. @author: Kenneth J. 
Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import re import logging from functools import total_ordering # Cedar Backup modules from CedarBackup3.util import resolveCommand, executeCommand, changeOwnership from CedarBackup3.xmlutil import createInputDom, addContainerNode from CedarBackup3.xmlutil import readFirstChild from CedarBackup3.actions.util import findDailyDirs, writeIndicatorFile, getBackupFiles from CedarBackup3.config import ByteQuantity, readByteQuantity, addByteQuantityNode ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup3.log.extend.split") SPLIT_COMMAND = [ "split", ] SPLIT_INDICATOR = "cback.split" ######################################################################## # SplitConfig class definition ######################################################################## @total_ordering class SplitConfig(object): """ Class representing split configuration. Split configuration is used for splitting staging directories. The following restrictions exist on data in this class: - The size limit must be a ByteQuantity - The split size must be a ByteQuantity @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, sizeLimit, splitSize """ def __init__(self, sizeLimit=None, splitSize=None): """ Constructor for the C{SplitConfig} class. @param sizeLimit: Size limit of the files, in bytes @param splitSize: Size that files exceeding the limit will be split into, in bytes @raise ValueError: If one of the values is invalid. """ self._sizeLimit = None self._splitSize = None self.sizeLimit = sizeLimit self.splitSize = splitSize def __repr__(self): """ Official string representation for class instance. 
""" return "SplitConfig(%s, %s)" % (self.sizeLimit, self.splitSize) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __eq__(self, other): """Equals operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.sizeLimit != other.sizeLimit: if (self.sizeLimit or ByteQuantity()) < (other.sizeLimit or ByteQuantity()): return -1 else: return 1 if self.splitSize != other.splitSize: if (self.splitSize or ByteQuantity()) < (other.splitSize or ByteQuantity()): return -1 else: return 1 return 0 def _setSizeLimit(self, value): """ Property target used to set the size limit. If not C{None}, the value must be a C{ByteQuantity} object. @raise ValueError: If the value is not a C{ByteQuantity} """ if value is None: self._sizeLimit = None else: if not isinstance(value, ByteQuantity): raise ValueError("Value must be a C{ByteQuantity} object.") self._sizeLimit = value def _getSizeLimit(self): """ Property target used to get the size limit. """ return self._sizeLimit def _setSplitSize(self, value): """ Property target used to set the split size. If not C{None}, the value must be a C{ByteQuantity} object. 
@raise ValueError: If the value is not a C{ByteQuantity} """ if value is None: self._splitSize = None else: if not isinstance(value, ByteQuantity): raise ValueError("Value must be a C{ByteQuantity} object.") self._splitSize = value def _getSplitSize(self): """ Property target used to get the split size. """ return self._splitSize sizeLimit = property(_getSizeLimit, _setSizeLimit, None, doc="Size limit, as a ByteQuantity") splitSize = property(_getSplitSize, _setSplitSize, None, doc="Split size, as a ByteQuantity") ######################################################################## # LocalConfig class definition ######################################################################## @total_ordering class LocalConfig(object): """ Class representing this extension's configuration document. This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit split-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, C{validate} and C{addConfig} methods. @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, split, validate, addConfig """ def __init__(self, xmlData=None, xmlPath=None, validate=True): """ Initializes a configuration object. If you initialize the object without passing either C{xmlData} or C{xmlPath} then configuration will be empty and will be invalid until it is filled in properly. No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded. Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. 
Keep in mind that even if C{validate} is C{False}, it might not be possible to parse the passed-in XML document if lower-level validations fail. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to read in invalid configuration from disk. @param xmlData: XML data representing configuration. @type xmlData: String data. @param xmlPath: Path to an XML file on disk. @type xmlPath: Absolute path to a file on disk. @param validate: Validate the document after parsing it. @type validate: Boolean true/false. @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. @raise ValueError: If the parsed configuration document is not valid. """ self._split = None self.split = None if xmlData is not None and xmlPath is not None: raise ValueError("Use either xmlData or xmlPath, but not both.") if xmlData is not None: self._parseXmlData(xmlData) if validate: self.validate() elif xmlPath is not None: with open(xmlPath) as f: xmlData = f.read() self._parseXmlData(xmlData) if validate: self.validate() def __repr__(self): """ Official string representation for class instance. """ return "LocalConfig(%s)" % (self.split) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __eq__(self, other): """Equals operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. 
@return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.split != other.split: if self.split < other.split: return -1 else: return 1 return 0 def _setSplit(self, value): """ Property target used to set the split configuration value. If not C{None}, the value must be a C{SplitConfig} object. @raise ValueError: If the value is not a C{SplitConfig} """ if value is None: self._split = None else: if not isinstance(value, SplitConfig): raise ValueError("Value must be a C{SplitConfig} object.") self._split = value def _getSplit(self): """ Property target used to get the split configuration value. """ return self._split split = property(_getSplit, _setSplit, None, "Split configuration in terms of a C{SplitConfig} object.") def validate(self): """ Validates configuration represented by the object. Split configuration must be filled in. Within that, both the size limit and split size must be filled in. @raise ValueError: If one of the validations fails. """ if self.split is None: raise ValueError("Split section is required.") if self.split.sizeLimit is None: raise ValueError("Size limit must be set.") if self.split.splitSize is None: raise ValueError("Split size must be set.") def addConfig(self, xmlDom, parentNode): """ Adds a configuration section as the next child of a parent. Third parties should use this function to write configuration related to this extension. We add the following fields to the document:: sizeLimit //cb_config/split/size_limit splitSize //cb_config/split/split_size @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent that the section should be appended to. 
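The shape of the emitted section can be sketched standalone with plain xml.dom.minidom, as an illustrative stand-in for the C{addContainerNode} and C{addByteQuantityNode} helpers from CedarBackup3.xmlutil (the helper function C{buildSplitSection} below is hypothetical, not part of this module):

```python
# Illustrative sketch only: builds the <split> section shape documented
# above using the standard library instead of CedarBackup3.xmlutil helpers.
from xml.dom.minidom import getDOMImplementation

def buildSplitSection(sizeLimit, splitSize):
    """Build a <cb_config><split>...</split></cb_config> document string."""
    impl = getDOMImplementation()
    xmlDom = impl.createDocument(None, "cb_config", None)
    sectionNode = xmlDom.createElement("split")
    xmlDom.documentElement.appendChild(sectionNode)
    for (name, value) in [("size_limit", sizeLimit), ("split_size", splitSize)]:
        node = xmlDom.createElement(name)               # e.g. <size_limit>
        node.appendChild(xmlDom.createTextNode(value))  # quantity as text
        sectionNode.appendChild(node)
    return xmlDom.toxml()

print(buildSplitSection("2.5 GB", "100 MB"))
```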
""" if self.split is not None: sectionNode = addContainerNode(xmlDom, parentNode, "split") addByteQuantityNode(xmlDom, sectionNode, "size_limit", self.split.sizeLimit) addByteQuantityNode(xmlDom, sectionNode, "split_size", self.split.splitSize) def _parseXmlData(self, xmlData): """ Internal method to parse an XML string into the object. This method parses the XML document into a DOM tree (C{xmlDom}) and then calls a static method to parse the split configuration section. @param xmlData: XML data to be parsed @type xmlData: String data @raise ValueError: If the XML cannot be successfully parsed. """ (xmlDom, parentNode) = createInputDom(xmlData) self._split = LocalConfig._parseSplit(parentNode) @staticmethod def _parseSplit(parent): """ Parses a split configuration section. We read the following individual fields:: sizeLimit //cb_config/split/size_limit splitSize //cb_config/split/split_size @param parent: Parent node to search beneath. @return: C{SplitConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. """ split = None section = readFirstChild(parent, "split") if section is not None: split = SplitConfig() split.sizeLimit = readByteQuantity(section, "size_limit") split.splitSize = readByteQuantity(section, "split_size") return split ######################################################################## # Public functions ######################################################################## ########################### # executeAction() function ########################### def executeAction(configPath, options, config): """ Executes the split backup action. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. 
@raise ValueError: Under many generic error conditions @raise IOError: If there are I/O problems reading or writing files """ logger.debug("Executing split extended action.") if config.options is None or config.stage is None: raise ValueError("Cedar Backup configuration is not properly filled in.") local = LocalConfig(xmlPath=configPath) dailyDirs = findDailyDirs(config.stage.targetDir, SPLIT_INDICATOR) for dailyDir in dailyDirs: _splitDailyDir(dailyDir, local.split.sizeLimit, local.split.splitSize, config.options.backupUser, config.options.backupGroup) writeIndicatorFile(dailyDir, SPLIT_INDICATOR, config.options.backupUser, config.options.backupGroup) logger.info("Executed the split extended action successfully.") ############################## # _splitDailyDir() function ############################## def _splitDailyDir(dailyDir, sizeLimit, splitSize, backupUser, backupGroup): """ Splits large files in a daily staging directory. Files that match INDICATOR_PATTERNS (i.e. C{"cback.store"}, C{"cback.stage"}, etc.) are assumed to be indicator files and are ignored. All other files are split. @param dailyDir: Daily directory whose contents should be split @param sizeLimit: Size limit, in bytes @param splitSize: Split size, in bytes @param backupUser: User that target files should be owned by @param backupGroup: Group that target files should be owned by @raise ValueError: If the daily staging directory does not exist. 
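The trickiest step in the splitting code below is recovering the number of chunks from the verbose output of the C{split} command. That parsing step can be sketched standalone; C{parseLastChunkIndex} is a hypothetical helper that mirrors the regular expression used in C{_splitFile}, with an added C{re.escape} on the prefix (an assumption of this sketch; the module's own code does not escape it):

```python
import re

def parseLastChunkIndex(lastOutputLine, prefix):
    """Parse a line like "creating file 'prefix00006'" (older versions of
    GNU split quote with a backquote instead: `prefix00006') and return
    the numeric suffix of the last chunk created."""
    pattern = re.compile(r"(creating file [`'])(%s)(.*)(')" % re.escape(prefix))
    match = pattern.search(lastOutputLine)
    if match is None:
        raise IOError("Unable to parse output from split command.")
    return int(match.group(3).strip())

# Modern and legacy quoting styles both parse to the same chunk index.
assert parseLastChunkIndex("creating file 'backup.tar.gz_00006'", "backup.tar.gz_") == 6
assert parseLastChunkIndex("creating file `backup.tar.gz_00003'", "backup.tar.gz_") == 3
```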
""" logger.debug("Begin splitting contents of [%s].", dailyDir) fileList = getBackupFiles(dailyDir) # ignores indicator files for path in fileList: size = float(os.stat(path).st_size) if size > sizeLimit: _splitFile(path, splitSize, backupUser, backupGroup, removeSource=True) logger.debug("Completed splitting contents of [%s].", dailyDir) ######################## # _splitFile() function ######################## def _splitFile(sourcePath, splitSize, backupUser, backupGroup, removeSource=False): """ Splits the source file into chunks of the indicated size. The split files will be owned by the indicated backup user and group. If C{removeSource} is C{True}, then the source file will be removed after it is successfully split. @param sourcePath: Absolute path of the source file to split @param splitSize: Split size, in bytes @param backupUser: User that target files should be owned by @param backupGroup: Group that target files should be owned by @param removeSource: Indicates whether to remove the source file @raise IOError: If there is a problem accessing, splitting or removing the source file. """ cwd = os.getcwd() try: if not os.path.exists(sourcePath): raise ValueError("Source path [%s] does not exist." % sourcePath) dirname = os.path.dirname(sourcePath) filename = os.path.basename(sourcePath) prefix = "%s_" % filename bytes = int(splitSize.bytes) # pylint: disable=W0622 os.chdir(dirname) # need to operate from directory that we want files written to command = resolveCommand(SPLIT_COMMAND) args = [ "--verbose", "--numeric-suffixes", "--suffix-length=5", "--bytes=%d" % bytes, filename, prefix, ] (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=False) if result != 0: raise IOError("Error [%d] calling split for [%s]." 
% (result, sourcePath)) pattern = re.compile(r"(creating file [`'])(%s)(.*)(')" % prefix) match = pattern.search(output[-1:][0]) if match is None: raise IOError("Unable to parse output from split command.") value = int(match.group(3).strip()) for index in range(0, value): path = "%s%05d" % (prefix, index) if not os.path.exists(path): raise IOError("After call to split, expected file [%s] does not exist." % path) changeOwnership(path, backupUser, backupGroup) if removeSource: if os.path.exists(sourcePath): try: os.remove(sourcePath) logger.debug("Completed removing old file [%s].", sourcePath) except: raise IOError("Failed to remove file [%s] after splitting it." % (sourcePath)) finally: os.chdir(cwd) CedarBackup3-3.1.6/CedarBackup3/action.py0000664000175000017500000000321412560007327021552 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Provides implementation of various backup-related actions. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides interface backwards compatibility. In Cedar Backup 2.10.0, a refactoring effort took place to reorganize the code for the standard actions. The code formerly in action.py was split into various other files in the CedarBackup3.actions package. This mostly-empty file remains to preserve the Cedar Backup library interface. @author: Kenneth J. 
Pronovici """ ######################################################################## # Imported modules ######################################################################## # pylint: disable=W0611 from CedarBackup3.actions.collect import executeCollect from CedarBackup3.actions.stage import executeStage from CedarBackup3.actions.store import executeStore from CedarBackup3.actions.purge import executePurge from CedarBackup3.actions.rebuild import executeRebuild from CedarBackup3.actions.validate import executeValidate CedarBackup3-3.1.6/CedarBackup3/writer.py0000664000175000017500000000301412560007327021607 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Provides interface backwards compatibility. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides interface backwards compatibility. In Cedar Backup 2.10.0, a refactoring effort took place while adding code to support DVD hardware. All of the writer functionality was moved to the writers/ package. This mostly-empty file remains to preserve the Cedar Backup library interface. @author: Kenneth J. 
Pronovici """ ######################################################################## # Imported modules ######################################################################## # pylint: disable=W0611 from CedarBackup3.writers.util import validateScsiId, validateDriveSpeed from CedarBackup3.writers.cdwriter import MediaDefinition, MediaCapacity, CdWriter from CedarBackup3.writers.cdwriter import MEDIA_CDRW_74, MEDIA_CDR_74, MEDIA_CDRW_80, MEDIA_CDR_80 CedarBackup3-3.1.6/CedarBackup3/__init__.py0000664000175000017500000000404412560007327022036 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Provides package initialization # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Implements local and remote backups to CD or DVD media. Cedar Backup is a software package designed to manage system backups for a pool of local and remote machines. Cedar Backup understands how to back up filesystem data as well as MySQL and PostgreSQL databases and Subversion repositories. It can also be easily extended to support other kinds of data sources. Cedar Backup is focused around weekly backups to a single CD or DVD disc, with the expectation that the disc will be changed or overwritten at the beginning of each week. If your hardware is new enough, Cedar Backup can write multisession discs, allowing you to add incremental data to a disc on a daily basis. 
Besides offering command-line utilities to manage the backup process, Cedar Backup provides a well-organized library of backup-related functionality, written in the Python programming language. @author: Kenneth J. Pronovici """ ######################################################################## # Package initialization ######################################################################## # Using 'from CedarBackup3 import *' will just import the modules listed # in the __all__ variable. __all__ = [ 'actions', 'cli', 'config', 'extend', 'filesystem', 'knapsack', 'peer', 'release', 'tools', 'util', 'writers', ] CedarBackup3-3.1.6/CedarBackup3/filesystem.py0000664000175000017500000017146212562377101022476 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2008,2010,2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Provides filesystem-related objects. 
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides filesystem-related objects. @sort: FilesystemList, BackupFileList, PurgeItemList @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import re import math import logging import tarfile import hashlib # Cedar Backup modules from CedarBackup3.knapsack import firstFit, bestFit, worstFit, alternateFit from CedarBackup3.util import AbsolutePathList, UnorderedList, RegexList from CedarBackup3.util import removeKeys, displayBytes, calculateFileAge, encodePath, dereferenceLink ######################################################################## # Module-wide variables ######################################################################## logger = logging.getLogger("CedarBackup3.log.filesystem") ######################################################################## # FilesystemList class definition ######################################################################## class FilesystemList(list): ###################### # Class documentation ###################### """ Represents a list of filesystem items. This is a generic class that represents a list of filesystem items. Callers can add individual files or directories to the list, or can recursively add the contents of a directory. The class also allows for up-front exclusions in several forms (all files, all directories, all items matching a pattern, all items whose basename matches a pattern, or all directories containing a specific "ignore file"). Symbolic links are typically backed up non-recursively, i.e. 
the link to a directory is backed up, but not the contents of that link (we don't want to deal with recursive loops, etc.). The custom methods such as L{addFile} will only add items if they exist on the filesystem and do not match any exclusions that are already in place. However, since a FilesystemList is a subclass of Python's standard list class, callers can also add items to the list in the usual way, using methods like C{append()} or C{insert()}. No validations apply to items added to the list in this way; however, many list-manipulation methods deal "gracefully" with items that don't exist in the filesystem, often by ignoring them. Once a list has been created, callers can remove individual items from the list using standard methods like C{pop()} or C{remove()} or they can use custom methods to remove specific types of entries or entries which match a particular pattern. @note: Regular expression patterns that apply to paths are assumed to be bounded at front and back by the beginning and end of the string, i.e. they are treated as if they begin with C{^} and end with C{$}. This is true whether we are matching a complete path or a basename. 
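A minimal standalone sketch of the bounded matching described in the note above (C{matchesExclusion} is a hypothetical helper for illustration, not a C{FilesystemList} method):

```python
import re

def matchesExclusion(pattern, path):
    """Match a path the way exclude patterns are applied: the pattern is
    wrapped in ^...$ before matching, so it must cover the whole string."""
    return re.compile(r"^%s$" % pattern).match(path) is not None

# ".*\.tmp" covers the whole path, so it matches; a bare "scratch" does
# not, even though the path contains that substring.
assert matchesExclusion(r".*\.tmp", "/data/scratch/work.tmp")
assert not matchesExclusion(r"scratch", "/data/scratch/work.tmp")
```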
@sort: __init__, addFile, addDir, addDirContents, removeFiles, removeDirs, removeLinks, removeMatch, removeInvalid, normalize, excludeFiles, excludeDirs, excludeLinks, excludePaths, excludePatterns, excludeBasenamePatterns, ignoreFile """ ############## # Constructor ############## def __init__(self): """Initializes a list with no configured exclusions.""" list.__init__(self) self._excludeFiles = False self._excludeDirs = False self._excludeLinks = False self._excludePaths = None self._excludePatterns = None self._excludeBasenamePatterns = None self._ignoreFile = None self.excludeFiles = False self.excludeLinks = False self.excludeDirs = False self.excludePaths = [] self.excludePatterns = RegexList() self.excludeBasenamePatterns = RegexList() self.ignoreFile = None ############# # Properties ############# def _setExcludeFiles(self, value): """ Property target used to set the exclude files flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._excludeFiles = True else: self._excludeFiles = False def _getExcludeFiles(self): """ Property target used to get the exclude files flag. """ return self._excludeFiles def _setExcludeDirs(self, value): """ Property target used to set the exclude directories flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._excludeDirs = True else: self._excludeDirs = False def _getExcludeDirs(self): """ Property target used to get the exclude directories flag. """ return self._excludeDirs def _setExcludeLinks(self, value): """ Property target used to set the exclude soft links flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._excludeLinks = True else: self._excludeLinks = False def _getExcludeLinks(self): """ Property target used to get the exclude soft links flag. """ return self._excludeLinks def _setExcludePaths(self, value): """ Property target used to set the exclude paths list. 
A C{None} value is converted to an empty list. Elements do not have to exist on disk at the time of assignment. @raise ValueError: If any list element is not an absolute path. """ self._excludePaths = AbsolutePathList() if value is not None: self._excludePaths.extend(value) def _getExcludePaths(self): """ Property target used to get the absolute exclude paths list. """ return self._excludePaths def _setExcludePatterns(self, value): """ Property target used to set the exclude patterns list. A C{None} value is converted to an empty list. """ self._excludePatterns = RegexList() if value is not None: self._excludePatterns.extend(value) def _getExcludePatterns(self): """ Property target used to get the exclude patterns list. """ return self._excludePatterns def _setExcludeBasenamePatterns(self, value): """ Property target used to set the exclude basename patterns list. A C{None} value is converted to an empty list. """ self._excludeBasenamePatterns = RegexList() if value is not None: self._excludeBasenamePatterns.extend(value) def _getExcludeBasenamePatterns(self): """ Property target used to get the exclude basename patterns list. """ return self._excludeBasenamePatterns def _setIgnoreFile(self, value): """ Property target used to set the ignore file. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The ignore file must be a non-empty string.") self._ignoreFile = value def _getIgnoreFile(self): """ Property target used to get the ignore file. 
""" return self._ignoreFile excludeFiles = property(_getExcludeFiles, _setExcludeFiles, None, "Boolean indicating whether files should be excluded.") excludeDirs = property(_getExcludeDirs, _setExcludeDirs, None, "Boolean indicating whether directories should be excluded.") excludeLinks = property(_getExcludeLinks, _setExcludeLinks, None, "Boolean indicating whether soft links should be excluded.") excludePaths = property(_getExcludePaths, _setExcludePaths, None, "List of absolute paths to be excluded.") excludePatterns = property(_getExcludePatterns, _setExcludePatterns, None, "List of regular expression patterns (matching complete path) to be excluded.") excludeBasenamePatterns = property(_getExcludeBasenamePatterns, _setExcludeBasenamePatterns, None, "List of regular expression patterns (matching basename) to be excluded.") ignoreFile = property(_getIgnoreFile, _setIgnoreFile, None, "Name of file which will cause directory contents to be ignored.") ############## # Add methods ############## def addFile(self, path): """ Adds a file to the list. The path must exist and must be a file or a link to an existing file. It will be added to the list subject to any exclusions that are in place. @param path: File path to be added to the list @type path: String representing a path on disk @return: Number of items added to the list. @raise ValueError: If path is not a file or does not exist. @raise ValueError: If the path could not be encoded properly. 
""" path = encodePath(path) if not os.path.exists(path) or not os.path.isfile(path): logger.debug("Path [%s] is not a file or does not exist on disk.", path) raise ValueError("Path is not a file or does not exist on disk.") if self.excludeLinks and os.path.islink(path): logger.debug("Path [%s] is excluded based on excludeLinks.", path) return 0 if self.excludeFiles: logger.debug("Path [%s] is excluded based on excludeFiles.", path) return 0 if path in self.excludePaths: logger.debug("Path [%s] is excluded based on excludePaths.", path) return 0 for pattern in self.excludePatterns: pattern = encodePath(pattern) # use same encoding as filenames if re.compile(r"^%s$" % pattern).match(path): # safe to assume all are valid due to RegexList logger.debug("Path [%s] is excluded based on pattern [%s].", path, pattern) return 0 for pattern in self.excludeBasenamePatterns: # safe to assume all are valid due to RegexList pattern = encodePath(pattern) # use same encoding as filenames if re.compile(r"^%s$" % pattern).match(os.path.basename(path)): logger.debug("Path [%s] is excluded based on basename pattern [%s].", path, pattern) return 0 self.append(path) logger.debug("Added file to list: [%s]", path) return 1 def addDir(self, path): """ Adds a directory to the list. The path must exist and must be a directory or a link to an existing directory. It will be added to the list subject to any exclusions that are in place. The L{ignoreFile} does not apply to this method, only to L{addDirContents}. @param path: Directory path to be added to the list @type path: String representing a path on disk @return: Number of items added to the list. @raise ValueError: If path is not a directory or does not exist. @raise ValueError: If the path could not be encoded properly. 
""" path = encodePath(path) path = normalizeDir(path) if not os.path.exists(path) or not os.path.isdir(path): logger.debug("Path [%s] is not a directory or does not exist on disk.", path) raise ValueError("Path is not a directory or does not exist on disk.") if self.excludeLinks and os.path.islink(path): logger.debug("Path [%s] is excluded based on excludeLinks.", path) return 0 if self.excludeDirs: logger.debug("Path [%s] is excluded based on excludeDirs.", path) return 0 if path in self.excludePaths: logger.debug("Path [%s] is excluded based on excludePaths.", path) return 0 for pattern in self.excludePatterns: # safe to assume all are valid due to RegexList pattern = encodePath(pattern) # use same encoding as filenames if re.compile(r"^%s$" % pattern).match(path): logger.debug("Path [%s] is excluded based on pattern [%s].", path, pattern) return 0 for pattern in self.excludeBasenamePatterns: # safe to assume all are valid due to RegexList pattern = encodePath(pattern) # use same encoding as filenames if re.compile(r"^%s$" % pattern).match(os.path.basename(path)): logger.debug("Path [%s] is excluded based on basename pattern [%s].", path, pattern) return 0 self.append(path) logger.debug("Added directory to list: [%s]", path) return 1 def addDirContents(self, path, recursive=True, addSelf=True, linkDepth=0, dereference=False): """ Adds the contents of a directory to the list. The path must exist and must be a directory or a link to a directory. The contents of the directory (as well as the directory path itself) will be recursively added to the list, subject to any exclusions that are in place. If you only want the directory and its immediate contents to be added, then pass in C{recursive=False}. @note: If a directory's absolute path matches an exclude pattern or path, or if the directory contains the configured ignore file, then the directory and all of its contents will be recursively excluded from the list. 
@note: If the passed-in directory happens to be a soft link, it will be recursed. However, the linkDepth parameter controls whether any soft links I{within} the directory will be recursed. The link depth is maximum depth of the tree at which soft links should be followed. So, a depth of 0 does not follow any soft links, a depth of 1 follows only links within the passed-in directory, a depth of 2 follows the links at the next level down, etc. @note: Any invalid soft links (i.e. soft links that point to non-existent items) will be silently ignored. @note: The L{excludeDirs} flag only controls whether any given directory path itself is added to the list once it has been discovered. It does I{not} modify any behavior related to directory recursion. @note: If you call this method I{on a link to a directory} that link will never be dereferenced (it may, however, be followed). @param path: Directory path whose contents should be added to the list @type path: String representing a path on disk @param recursive: Indicates whether directory contents should be added recursively. @type recursive: Boolean value @param addSelf: Indicates whether the directory itself should be added to the list. @type addSelf: Boolean value @param linkDepth: Maximum depth of the tree at which soft links should be followed @type linkDepth: Integer value, where zero means not to follow any soft links @param dereference: Indicates whether soft links, if followed, should be dereferenced @type dereference: Boolean value @return: Number of items recursively added to the list @raise ValueError: If path is not a directory or does not exist. @raise ValueError: If the path could not be encoded properly. """ path = encodePath(path) path = normalizeDir(path) return self._addDirContentsInternal(path, addSelf, recursive, linkDepth, dereference) def _addDirContentsInternal(self, path, includePath=True, recursive=True, linkDepth=0, dereference=False): """ Internal implementation of C{addDirContents}. 
This internal implementation exists due to some refactoring. Basically, some subclasses have a need to add the contents of a directory, but not the directory itself. This is different than the standard C{FilesystemList} behavior and actually ends up making a special case out of the first call in the recursive chain. Since I don't want to expose the modified interface, C{addDirContents} ends up being wholly implemented in terms of this method. The linkDepth parameter controls whether soft links are followed when we are adding the contents recursively. Any recursive calls reduce the value by one. If the value zero or less, then soft links will just be added as directories, but will not be followed. This means that links are followed to a I{constant depth} starting from the top-most directory. There is one difference between soft links and directories: soft links that are added recursively are not placed into the list explicitly. This is because if we do add the links recursively, the resulting tar file gets a little confused (it has a link and a directory with the same name). @note: If you call this method I{on a link to a directory} that link will never be dereferenced (it may, however, be followed). @param path: Directory path whose contents should be added to the list. @param includePath: Indicates whether to include the path as well as contents. @param recursive: Indicates whether directory contents should be added recursively. @param linkDepth: Depth of soft links that should be followed @param dereference: Indicates whether soft links, if followed, should be dereferenced @return: Number of items recursively added to the list @raise ValueError: If path is not a directory or does not exist. 
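The constant-depth rule described above can be sketched with a toy model (hypothetical names; the real bookkeeping happens inside this method's recursion): each recursive call decrements the remaining depth, and a soft link is followed only while that value is still positive.

```python
def linkFollowDecisions(linkDepth, levels):
    """For each tree level below the starting directory, report whether a
    soft link found at that level would be followed."""
    decisions = []
    remaining = linkDepth
    for _ in range(levels):
        decisions.append(remaining > 0)  # follow only while depth remains
        remaining -= 1                   # each recursive call decrements
    return decisions

# linkDepth=2: links in the top directory and one level down are followed;
# anything deeper is added as a plain directory instead.
assert linkFollowDecisions(2, 4) == [True, True, False, False]
assert linkFollowDecisions(0, 2) == [False, False]
```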
""" added = 0 if not os.path.exists(path) or not os.path.isdir(path): logger.debug("Path [%s] is not a directory or does not exist on disk.", path) raise ValueError("Path is not a directory or does not exist on disk.") if path in self.excludePaths: logger.debug("Path [%s] is excluded based on excludePaths.", path) return added for pattern in self.excludePatterns: # safe to assume all are valid due to RegexList pattern = encodePath(pattern) # use same encoding as filenames if re.compile(r"^%s$" % pattern).match(path): logger.debug("Path [%s] is excluded based on pattern [%s].", path, pattern) return added for pattern in self.excludeBasenamePatterns: # safe to assume all are valid due to RegexList pattern = encodePath(pattern) # use same encoding as filenames if re.compile(r"^%s$" % pattern).match(os.path.basename(path)): logger.debug("Path [%s] is excluded based on basename pattern [%s].", path, pattern) return added if self.ignoreFile is not None and os.path.exists(os.path.join(path, self.ignoreFile)): logger.debug("Path [%s] is excluded based on ignore file.", path) return added if includePath: added += self.addDir(path) # could actually be excluded by addDir, yet for entry in os.listdir(path): entrypath = os.path.join(path, entry) if os.path.isfile(entrypath): if linkDepth > 0 and dereference: derefpath = dereferenceLink(entrypath) if derefpath != entrypath: added += self.addFile(derefpath) added += self.addFile(entrypath) elif os.path.isdir(entrypath): if os.path.islink(entrypath): if recursive: if linkDepth > 0: newDepth = linkDepth - 1 if dereference: derefpath = dereferenceLink(entrypath) if derefpath != entrypath: added += self._addDirContentsInternal(derefpath, True, recursive, newDepth, dereference) added += self.addDir(entrypath) else: added += self._addDirContentsInternal(entrypath, False, recursive, newDepth, dereference) else: added += self.addDir(entrypath) else: added += self.addDir(entrypath) else: if recursive: newDepth = linkDepth - 1 added += 
self._addDirContentsInternal(entrypath, True, recursive, newDepth, dereference) else: added += self.addDir(entrypath) return added ################# # Remove methods ################# def removeFiles(self, pattern=None): """ Removes file entries from the list. If C{pattern} is not passed in or is C{None}, then all file entries will be removed from the list. Otherwise, only those file entries matching the pattern will be removed. Any entry which does not exist on disk will be ignored (use L{removeInvalid} to purge those entries). This method might be fairly slow for large lists, since it must check the type of each item in the list. If you know ahead of time that you want to exclude all files, then you will be better off setting L{excludeFiles} to C{True} before adding items to the list. @param pattern: Regular expression pattern representing entries to remove @return: Number of entries removed @raise ValueError: If the passed-in pattern is not a valid regular expression. """ removed = 0 if pattern is None: for entry in self[:]: if os.path.exists(entry) and os.path.isfile(entry): self.remove(entry) logger.debug("Removed path [%s] from list.", entry) removed += 1 else: try: pattern = encodePath(pattern) # use same encoding as filenames compiled = re.compile(pattern) except re.error: raise ValueError("Pattern is not a valid regular expression.") for entry in self[:]: if os.path.exists(entry) and os.path.isfile(entry): if compiled.match(entry): self.remove(entry) logger.debug("Removed path [%s] from list.", entry) removed += 1 logger.debug("Removed a total of %d entries.", removed) return removed def removeDirs(self, pattern=None): """ Removes directory entries from the list. If C{pattern} is not passed in or is C{None}, then all directory entries will be removed from the list. Otherwise, only those directory entries matching the pattern will be removed. Any entry which does not exist on disk will be ignored (use L{removeInvalid} to purge those entries). 
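All of the C{remove*()} methods above iterate over C{self[:]} rather than C{self}. A tiny standalone sketch (a hypothetical class, not part of Cedar Backup) shows why the copy matters: removing from a list while iterating it directly would skip entries.

```python
# Sketch of the removal idiom used by the remove*() methods: iterate over
# a copy of the list (self[:]) so that remove() calls during iteration do
# not shift positions under the iterator and skip adjacent entries.
class DemoList(list):
    def remove_matching(self, predicate):
        removed = 0
        for entry in self[:]:          # copy, so removal is safe
            if predicate(entry):
                self.remove(entry)
                removed += 1
        return removed

items = DemoList(["a.txt", "b.log", "c.txt"])
count = items.remove_matching(lambda e: e.endswith(".txt"))

assert count == 2
assert items == ["b.log"]
```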
This method might be fairly slow for large lists, since it must check the type of each item in the list. If you know ahead of time that you want to exclude all directories, then you will be better off setting L{excludeDirs} to C{True} before adding items to the list (note that this will not prevent you from recursively adding the I{contents} of directories). @param pattern: Regular expression pattern representing entries to remove @return: Number of entries removed @raise ValueError: If the passed-in pattern is not a valid regular expression. """ removed = 0 if pattern is None: for entry in self[:]: if os.path.exists(entry) and os.path.isdir(entry): self.remove(entry) logger.debug("Removed path [%s] from list.", entry) removed += 1 else: try: pattern = encodePath(pattern) # use same encoding as filenames compiled = re.compile(pattern) except re.error: raise ValueError("Pattern is not a valid regular expression.") for entry in self[:]: if os.path.exists(entry) and os.path.isdir(entry): if compiled.match(entry): self.remove(entry) logger.debug("Removed path [%s] from list based on pattern [%s].", entry, pattern) removed += 1 logger.debug("Removed a total of %d entries.", removed) return removed def removeLinks(self, pattern=None): """ Removes soft link entries from the list. If C{pattern} is not passed in or is C{None}, then all soft link entries will be removed from the list. Otherwise, only those soft link entries matching the pattern will be removed. Any entry which does not exist on disk will be ignored (use L{removeInvalid} to purge those entries). This method might be fairly slow for large lists, since it must check the type of each item in the list. If you know ahead of time that you want to exclude all soft links, then you will be better off setting L{excludeLinks} to C{True} before adding items to the list. 
@param pattern: Regular expression pattern representing entries to remove @return: Number of entries removed @raise ValueError: If the passed-in pattern is not a valid regular expression. """ removed = 0 if pattern is None: for entry in self[:]: if os.path.exists(entry) and os.path.islink(entry): self.remove(entry) logger.debug("Removed path [%s] from list.", entry) removed += 1 else: try: pattern = encodePath(pattern) # use same encoding as filenames compiled = re.compile(pattern) except re.error: raise ValueError("Pattern is not a valid regular expression.") for entry in self[:]: if os.path.exists(entry) and os.path.islink(entry): if compiled.match(entry): self.remove(entry) logger.debug("Removed path [%s] from list based on pattern [%s].", entry, pattern) removed += 1 logger.debug("Removed a total of %d entries.", removed) return removed def removeMatch(self, pattern): """ Removes from the list all entries matching a pattern. This method removes from the list all entries which match the passed in C{pattern}. Since there is no need to check the type of each entry, it is faster to call this method than to call the L{removeFiles}, L{removeDirs} or L{removeLinks} methods individually. If you know which patterns you will want to remove ahead of time, you may be better off setting L{excludePatterns} or L{excludeBasenamePatterns} before adding items to the list. @note: Unlike when using the exclude lists, the pattern here is I{not} bounded at the front and the back of the string. You can use any pattern you want. @param pattern: Regular expression pattern representing entries to remove @return: Number of entries removed. @raise ValueError: If the passed-in pattern is not a valid regular expression. 
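The note above about patterns not being "bounded" can be made concrete. The exclude lists seen earlier wrap each pattern as C{r"^%s$" % pattern} (a whole-string match), while C{removeMatch} compiles the pattern as-is and uses C{re.match()}, which anchors only at the start of the string. The paths and patterns below are hypothetical illustrations.

```python
import re

# Illustration of the anchoring difference described above: exclude
# patterns are wrapped as ^pattern$ (must match the complete path), while
# removeMatch() uses the pattern as-is with re.match() (start-anchored only).
path = "/opt/backup/collect/file.txt"

exclude_style = re.compile(r"^%s$" % r"/opt/backup")   # whole-string match
remove_style = re.compile(r"/opt/backup")              # prefix match only

assert exclude_style.match(path) is None       # not the complete path
assert remove_style.match(path) is not None    # matches as a prefix
```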
""" try: pattern = encodePath(pattern) # use same encoding as filenames compiled = re.compile(pattern) except re.error: raise ValueError("Pattern is not a valid regular expression.") removed = 0 for entry in self[:]: if compiled.match(entry): self.remove(entry) logger.debug("Removed path [%s] from list based on pattern [%s].", entry, pattern) removed += 1 logger.debug("Removed a total of %d entries.", removed) return removed def removeInvalid(self): """ Removes from the list all entries that do not exist on disk. This method removes from the list all entries which do not currently exist on disk in some form. No attention is paid to whether the entries are files or directories. @return: Number of entries removed. """ removed = 0 for entry in self[:]: if not os.path.exists(entry): self.remove(entry) logger.debug("Removed path [%s] from list.", entry) removed += 1 logger.debug("Removed a total of %d entries.", removed) return removed ################## # Utility methods ################## def normalize(self): """Normalizes the list, ensuring that each entry is unique.""" orig = len(self) self.sort() dups = list(filter(lambda x, self=self: self[x] == self[x+1], list(range(0, len(self) - 1)))) items = list(map(lambda x, self=self: self[x], dups)) list(map(self.remove, items)) new = len(self) logger.debug("Completed normalizing list; removed %d items (%d originally, %d now).", new-orig, orig, new) def verify(self): """ Verifies that all entries in the list exist on disk. @return: C{True} if all entries exist, C{False} otherwise. 
""" for entry in self: if not os.path.exists(entry): logger.debug("Path [%s] is invalid; list is not valid.", entry) return False logger.debug("All entries in list are valid.") return True ######################################################################## # SpanItem class definition ######################################################################## class SpanItem(object): # pylint: disable=R0903 """ Item returned by L{BackupFileList.generateSpan}. """ def __init__(self, fileList, size, capacity, utilization): """ Create object. @param fileList: List of files @param size: Size (in bytes) of files @param utilization: Utilization, as a percentage (0-100) """ self.fileList = fileList self.size = size self.capacity = capacity self.utilization = utilization ######################################################################## # BackupFileList class definition ######################################################################## class BackupFileList(FilesystemList): # pylint: disable=R0904 ###################### # Class documentation ###################### """ List of files to be backed up. A BackupFileList is a L{FilesystemList} containing a list of files to be backed up. It only contains files, not directories (soft links are treated like files). On top of the generic functionality provided by L{FilesystemList}, this class adds functionality to keep a hash (checksum) for each file in the list, and it also provides a method to calculate the total size of the files in the list and a way to export the list into tar form. @sort: __init__, addDir, totalSize, generateSizeMap, generateDigestMap, generateFitted, generateTarfile, removeUnchanged """ ############## # Constructor ############## def __init__(self): """Initializes a list with no configured exclusions.""" FilesystemList.__init__(self) ################################ # Overridden superclass methods ################################ def addDir(self, path): """ Adds a directory to the list. 
Note that this class does not allow directories to be added by themselves (a backup list contains only files). However, since links to directories are technically files, we allow them to be added. This method is implemented in terms of the superclass method, with one additional validation: the superclass method is only called if the passed-in path is both a directory and a link. All of the superclass's existing validations and restrictions apply. @param path: Directory path to be added to the list @type path: String representing a path on disk @return: Number of items added to the list. @raise ValueError: If path is not a directory or does not exist. @raise ValueError: If the path could not be encoded properly. """ path = encodePath(path) path = normalizeDir(path) if os.path.isdir(path) and not os.path.islink(path): return 0 else: return FilesystemList.addDir(self, path) ################## # Utility methods ################## def totalSize(self): """ Returns the total size among all files in the list. Only files are counted. Soft links that point at files are ignored. Entries which do not exist on disk are ignored. @return: Total size, in bytes """ total = 0.0 for entry in self: if os.path.isfile(entry) and not os.path.islink(entry): total += float(os.stat(entry).st_size) return total def generateSizeMap(self): """ Generates a mapping from file to file size in bytes. The mapping does include soft links, which are listed with size zero. Entries which do not exist on disk are ignored. @return: Dictionary mapping file to file size """ table = { } for entry in self: if os.path.islink(entry): table[entry] = 0.0 elif os.path.isfile(entry): table[entry] = float(os.stat(entry).st_size) return table def generateDigestMap(self, stripPrefix=None): """ Generates a mapping from file to file digest. Currently, the digest is an SHA hash, which should be pretty secure. 
In the future, this might be a different kind of hash, but we guarantee that the type of the hash will not change unless the library major version number is bumped. Entries which do not exist on disk are ignored. Soft links are ignored, since we would end up generating a digest for the file that the soft link points at, which doesn't make any sense. If C{stripPrefix} is passed in, then that prefix will be stripped from each key when the map is generated. This can be useful in generating two "relative" digest maps to be compared to one another. @param stripPrefix: Common prefix to be stripped from paths @type stripPrefix: String with any contents @return: Dictionary mapping file to digest value @see: L{removeUnchanged} """ table = { } if stripPrefix is not None: for entry in self: if os.path.isfile(entry) and not os.path.islink(entry): table[entry.replace(stripPrefix, "", 1)] = BackupFileList._generateDigest(entry) else: for entry in self: if os.path.isfile(entry) and not os.path.islink(entry): table[entry] = BackupFileList._generateDigest(entry) return table @staticmethod def _generateDigest(path): """ Generates an SHA digest for a given file on disk. The original code for this function used this simplistic implementation, which requires reading the entire file into memory at once in order to generate a digest value:: sha.new(open(path).read()).hexdigest() Not surprisingly, this isn't an optimal solution. The U{Simple file hashing } Python Cookbook recipe describes how to incrementally generate a hash value by reading in chunks of data rather than reading the file all at once. The recipe relies on the C{update()} method of the various Python hashing algorithms. In my tests using a 110 MB file on CD, the original implementation requires 111 seconds. This implementation requires only 40-45 seconds, which is a pretty substantial speed-up. Experience shows that reading in around 4kB (4096 bytes) at a time yields the best performance. 
Smaller reads are quite a bit slower, and larger reads don't make much of a difference. The 4kB number makes me a little suspicious, and I think it might be related to the size of a filesystem read at the hardware level. However, I've decided to just hardcode 4096 until I have evidence that shows it's worthwhile making the read size configurable. @param path: Path to generate digest for. @return: ASCII-safe SHA digest for the file. @raise OSError: If the file cannot be opened. """ # pylint: disable=C0103,E1101 s = hashlib.sha1() with open(path, mode="rb") as f: readBytes = 4096 # see notes above while readBytes > 0: readString = f.read(readBytes) s.update(readString) readBytes = len(readString) digest = s.hexdigest() logger.debug("Generated digest [%s] for file [%s].", digest, path) return digest def generateFitted(self, capacity, algorithm="worst_fit"): """ Generates a list of items that fit in the indicated capacity. Sometimes, callers would like to include every item in a list, but are unable to because not all of the items fit in the space available. This method returns a copy of the list, containing only the items that fit in a given capacity. A copy is returned so that we don't lose any information if for some reason the fitted list is unsatisfactory. The fitting is done using the functions in the knapsack module. By default, the worst fit algorithm is used, but you can also choose from first fit, best fit and alternate fit. @param capacity: Maximum capacity among the files in the new list @type capacity: Integer, in bytes @param algorithm: Knapsack (fit) algorithm to use @type algorithm: One of "first_fit", "best_fit", "worst_fit", "alternate_fit" @return: Copy of list with total size no larger than indicated capacity @raise ValueError: If the algorithm is invalid. 
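The incremental digest technique described for C{_generateDigest} above can be verified against the naive whole-file approach. The helper and file name below are hypothetical; the point is that feeding chunks to C{update()} yields the same SHA-1 value as hashing everything at once, while using constant memory.

```python
import hashlib
import os
import tempfile

# Sketch of the chunked-read digest strategy described above: feed fixed-size
# chunks to update() rather than reading the whole file into memory.
def chunked_sha1(path, chunk_size=4096):
    s = hashlib.sha1()
    with open(path, "rb") as f:
        while True:
            data = f.read(chunk_size)
            if not data:
                break
            s.update(data)
    return s.hexdigest()

# Demonstration against the naive whole-file read.
path = os.path.join(tempfile.mkdtemp(), "sample.dat")
with open(path, "wb") as f:
    f.write(os.urandom(10000))   # larger than one 4096-byte chunk
with open(path, "rb") as f:
    naive = hashlib.sha1(f.read()).hexdigest()

assert chunked_sha1(path) == naive
```

The chunk size only affects speed, not the resulting digest, which is why the hardcoded 4096 above is a pure performance choice.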
""" table = self._getKnapsackTable() function = BackupFileList._getKnapsackFunction(algorithm) return function(table, capacity)[0] def generateSpan(self, capacity, algorithm="worst_fit"): """ Splits the list of items into sub-lists that fit in a given capacity. Sometimes, callers need split to a backup file list into a set of smaller lists. For instance, you could use this to "span" the files across a set of discs. The fitting is done using the functions in the knapsack module. By default, the first fit algorithm is used, but you can also choose from best fit, worst fit and alternate fit. @note: If any of your items are larger than the capacity, then it won't be possible to find a solution. In this case, a value error will be raised. @param capacity: Maximum capacity among the files in the new list @type capacity: Integer, in bytes @param algorithm: Knapsack (fit) algorithm to use @type algorithm: One of "first_fit", "best_fit", "worst_fit", "alternate_fit" @return: List of L{SpanItem} objects. @raise ValueError: If the algorithm is invalid. @raise ValueError: If it's not possible to fit some items """ spanItems = [] function = BackupFileList._getKnapsackFunction(algorithm) table = self._getKnapsackTable(capacity) iteration = 0 while len(table) > 0: iteration += 1 fit = function(table, capacity) if len(fit[0]) == 0: # Should never happen due to validations in _convertToKnapsackForm(), but let's be safe raise ValueError("After iteration %d, unable to add any new items." % iteration) removeKeys(table, fit[0]) utilization = (float(fit[1])/float(capacity))*100.0 item = SpanItem(fit[0], fit[1], capacity, utilization) spanItems.append(item) return spanItems def _getKnapsackTable(self, capacity=None): """ Converts the list into the form needed by the knapsack algorithms. @return: Dictionary mapping file name to tuple of (file path, file size). 
""" table = { } for entry in self: if os.path.islink(entry): table[entry] = (entry, 0.0) elif os.path.isfile(entry): size = float(os.stat(entry).st_size) if capacity is not None: if size > capacity: raise ValueError("File [%s] cannot fit in capacity %s." % (entry, displayBytes(capacity))) table[entry] = (entry, size) return table @staticmethod def _getKnapsackFunction(algorithm): """ Returns a reference to the function associated with an algorithm name. Algorithm name must be one of "first_fit", "best_fit", "worst_fit", "alternate_fit" @param algorithm: Name of the algorithm @return: Reference to knapsack function @raise ValueError: If the algorithm name is unknown. """ if algorithm == "first_fit": return firstFit elif algorithm == "best_fit": return bestFit elif algorithm == "worst_fit": return worstFit elif algorithm == "alternate_fit": return alternateFit else: raise ValueError("Algorithm [%s] is invalid." % algorithm) def generateTarfile(self, path, mode='tar', ignore=False, flat=False): """ Creates a tar file containing the files in the list. By default, this method will create uncompressed tar files. If you pass in mode C{'targz'}, then it will create gzipped tar files, and if you pass in mode C{'tarbz2'}, then it will create bzipped tar files. The tar file will be created as a GNU tar archive, which enables extended file name lengths, etc. Since GNU tar is so prevalent, I've decided that the extra functionality out-weighs the disadvantage of not being "standard". If you pass in C{flat=True}, then a "flat" archive will be created, and all of the files will be added to the root of the archive. So, the file C{/tmp/something/whatever.txt} would be added as just C{whatever.txt}. By default, the whole method call fails if there are problems adding any of the files to the archive, resulting in an exception. 
Under these circumstances, callers are advised that they might want to call L{removeInvalid()} and then attempt to generate the tar file a second time, since the most common cause of failures is a missing file (a file that existed when the list was built, but is gone again by the time the tar file is built). If you want to, you can pass in C{ignore=True}, and the method will ignore errors encountered when adding individual files to the archive (but not errors opening and closing the archive itself). We'll always attempt to remove the tarfile from disk if an exception is thrown. @note: No validation is done as to whether the entries in the list are files, since only files or soft links should be in an object like this. However, to be safe, everything is explicitly added to the tar archive non-recursively so it's safe to include soft links to directories. @note: The Python C{tarfile} module, which is used internally here, is supposed to deal properly with long filenames and links. In my testing, I have found that it appears to be able to add really long filenames to archives, but doesn't do a good job reading them back out, even out of an archive it created. Fortunately, all Cedar Backup does is add files to archives. @param path: Path of tar file to create on disk @type path: String representing a path on disk @param mode: Tar creation mode @type mode: One of either C{'tar'}, C{'targz'} or C{'tarbz2'} @param ignore: Indicates whether to ignore certain errors. @type ignore: Boolean @param flat: Creates "flat" archive by putting all items in root @type flat: Boolean @raise ValueError: If mode is not valid @raise ValueError: If list is empty @raise ValueError: If the path could not be encoded properly. 
@raise TarError: If there is a problem creating the tar file """ # pylint: disable=E1101 path = encodePath(path) if len(self) == 0: raise ValueError("Empty list cannot be used to generate tarfile.") if mode == 'tar': tarmode = "w:" elif mode == 'targz': tarmode = "w:gz" elif mode == 'tarbz2': tarmode = "w:bz2" else: raise ValueError("Mode [%s] is not valid." % mode) try: tar = tarfile.open(path, tarmode) try: tar.format = tarfile.GNU_FORMAT except AttributeError: tar.posix = False for entry in self: try: if flat: tar.add(entry, arcname=os.path.basename(entry), recursive=False) else: tar.add(entry, recursive=False) except tarfile.TarError as e: if not ignore: raise e logger.info("Unable to add file [%s]; going on anyway.", entry) except OSError as e: if not ignore: raise tarfile.TarError(e) logger.info("Unable to add file [%s]; going on anyway.", entry) tar.close() except tarfile.ReadError as e: try: tar.close() except: pass if os.path.exists(path): try: os.remove(path) except: pass raise tarfile.ReadError("Unable to open [%s]; maybe directory doesn't exist?" % path) except tarfile.TarError as e: try: tar.close() except: pass if os.path.exists(path): try: os.remove(path) except: pass raise e def removeUnchanged(self, digestMap, captureDigest=False): """ Removes unchanged entries from the list. This method relies on a digest map as returned from L{generateDigestMap}. For each entry in C{digestMap}, if the entry also exists in the current list I{and} the entry in the current list has the same digest value as in the map, the entry in the current list will be removed. This method offers a convenient way for callers to filter unneeded entries from a list. The idea is that a caller will capture a digest map from C{generateDigestMap} at some point in time (perhaps the beginning of the week), and will save off that map using C{pickle} or some other method. 
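The save/restore workflow described above can be sketched as follows. The file name and digest value below are hypothetical; the binary file modes and pickle arguments follow the conventions noted in this project's changelog (binary open, C{protocol=0}, C{fix_imports=True}).

```python
import os
import pickle
import tempfile

# Sketch of the digest-map save/restore workflow described above: a caller
# captures a digest map, pickles it to disk, and reloads it later to feed
# into removeUnchanged(). The map contents here are hypothetical.
digest_map = {"/etc/hosts": "da39a3ee5e6b4b0d3255bfef95601890afd80709"}

path = os.path.join(tempfile.mkdtemp(), "digest.pickle")
with open(path, "wb") as f:                      # binary mode for pickle
    pickle.dump(digest_map, f, protocol=0, fix_imports=True)
with open(path, "rb") as f:
    restored = pickle.load(f, fix_imports=True)

assert restored == digest_map
```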
Then, the caller could use this method sometime in the future to filter out any unchanged files based on the saved-off map. If C{captureDigest} is passed-in as C{True}, then digest information will be captured for the entire list before the removal step occurs using the same rules as in L{generateDigestMap}. The check will involve a lookup into the complete digest map. If C{captureDigest} is passed in as C{False}, we will only generate a digest value for files we actually need to check, and we'll ignore any entry in the list which isn't a file that currently exists on disk. The return value varies depending on C{captureDigest}, as well. To preserve backwards compatibility, if C{captureDigest} is C{False}, then we'll just return a single value representing the number of entries removed. Otherwise, we'll return a tuple of C{(entries removed, digest map)}. The returned digest map will be in exactly the form returned by L{generateDigestMap}. @note: For performance reasons, this method actually ends up rebuilding the list from scratch. First, we build a temporary dictionary containing all of the items from the original list. Then, we remove items as needed from the dictionary (which is faster than the equivalent operation on a list). Finally, we replace the contents of the current list based on the keys left in the dictionary. This should be transparent to the caller. @param digestMap: Dictionary mapping file name to digest value. @type digestMap: Map as returned from L{generateDigestMap}. @param captureDigest: Indicates that digest information should be captured. 
@type captureDigest: Boolean @return: Results as discussed above (format varies based on arguments) """ if captureDigest: removed = 0 table = {} captured = {} for entry in self: if os.path.isfile(entry) and not os.path.islink(entry): table[entry] = BackupFileList._generateDigest(entry) captured[entry] = table[entry] else: table[entry] = None for entry in list(digestMap.keys()): if entry in table: if table[entry] is not None: # equivalent to file/link check in other case digest = table[entry] if digest == digestMap[entry]: removed += 1 del table[entry] logger.debug("Discarded unchanged file [%s].", entry) self[:] = list(table.keys()) return (removed, captured) else: removed = 0 table = {} for entry in self: table[entry] = None for entry in list(digestMap.keys()): if entry in table: if os.path.isfile(entry) and not os.path.islink(entry): digest = BackupFileList._generateDigest(entry) if digest == digestMap[entry]: removed += 1 del table[entry] logger.debug("Discarded unchanged file [%s].", entry) self[:] = list(table.keys()) return removed ######################################################################## # PurgeItemList class definition ######################################################################## class PurgeItemList(FilesystemList): # pylint: disable=R0904 ###################### # Class documentation ###################### """ List of files and directories to be purged. A PurgeItemList is a L{FilesystemList} containing a list of files and directories to be purged. On top of the generic functionality provided by L{FilesystemList}, this class adds functionality to remove items that are too young to be purged, and to actually remove each item in the list from the filesystem. The other main difference is that when you add a directory's contents to a purge item list, the directory itself is not added to the list. 
This way, if someone asks to purge within C{/opt/backup/collect}, that directory doesn't get removed once all of the files within it are gone. """ ############## # Constructor ############## def __init__(self): """Initializes a list with no configured exclusions.""" FilesystemList.__init__(self) ############## # Add methods ############## def addDirContents(self, path, recursive=True, addSelf=True, linkDepth=0, dereference=False): """ Adds the contents of a directory to the list. The path must exist and must be a directory or a link to a directory. The contents of the directory (but I{not} the directory path itself) will be recursively added to the list, subject to any exclusions that are in place. If you only want the immediate contents of the directory to be added, then pass in C{recursive=False}. @note: If a directory's absolute path matches an exclude pattern or path, or if the directory contains the configured ignore file, then the directory and all of its contents will be recursively excluded from the list. @note: If the passed-in directory happens to be a soft link, it will be recursed. However, the linkDepth parameter controls whether any soft links I{within} the directory will be recursed. The link depth is the maximum depth of the tree at which soft links should be followed. So, a depth of 0 does not follow any soft links, a depth of 1 follows only links within the passed-in directory, a depth of 2 follows the links at the next level down, etc. @note: Any invalid soft links (i.e. soft links that point to non-existent items) will be silently ignored. @note: The L{excludeDirs} flag only controls whether any given directory path itself is added to the list once it has been discovered. It does I{not} modify any behavior related to directory recursion. 
@note: If you call this method I{on a link to a directory} that link will never be dereferenced (it may, however, be followed). @param path: Directory path whose contents should be added to the list @type path: String representing a path on disk @param recursive: Indicates whether directory contents should be added recursively. @type recursive: Boolean value @param addSelf: Ignored in this subclass. @param linkDepth: Depth of soft links that should be followed @type linkDepth: Integer value, where zero means not to follow any soft links @param dereference: Indicates whether soft links, if followed, should be dereferenced @type dereference: Boolean value @return: Number of items recursively added to the list @raise ValueError: If path is not a directory or does not exist. @raise ValueError: If the path could not be encoded properly. """ path = encodePath(path) path = normalizeDir(path) return super(PurgeItemList, self)._addDirContentsInternal(path, False, recursive, linkDepth, dereference) ################## # Utility methods ################## def removeYoungFiles(self, daysOld): """ Removes from the list files younger than a certain age (in days). Any file whose "age" in days is less than (C{<}) the value of the C{daysOld} parameter will be removed from the list so that it will not be purged later when L{purgeItems} is called. Directories and soft links will be ignored. The "age" of a file is the amount of time since the file was last used, per the most recent of the file's C{st_atime} and C{st_mtime} values. @note: Some people find the "sense" of this method confusing or "backwards". Keep in mind that this method is used to remove items I{from the list}, not from the filesystem! It removes from the list those items that you would I{not} want to purge because they are too young. As an example, passing in C{daysOld} of zero (0) would remove from the list no files, which would result in purging all of the files later. 
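The "age" rule described above can be sketched as a small standalone helper. This mirrors, but is not, Cedar Backup's C{calculateFileAge} utility: age is measured from the most recent of C{st_atime} and C{st_mtime}, floored to whole days, and clamped at zero (as the surrounding code does before comparing to C{daysOld}).

```python
import math
import os
import tempfile
import time

# Hedged sketch of the age rule used by removeYoungFiles() above: the age
# is time since the file was last used, per the most recent of st_atime
# and st_mtime, floored to whole days and clamped at zero.
def file_age_in_whole_days(path):
    stat = os.stat(path)
    last_used = max(stat.st_atime, stat.st_mtime)
    age_days = (time.time() - last_used) / (60 * 60 * 24)
    return max(0, math.floor(age_days))

# Demonstration: back-date a file's timestamps by three days.
path = os.path.join(tempfile.mkdtemp(), "old.txt")
open(path, "w").close()
three_days_ago = time.time() - 3 * 86400
os.utime(path, (three_days_ago, three_days_ago))

assert file_age_in_whole_days(path) == 3
```

With this rule, a file touched moments ago has an age of zero whole days, so any positive C{daysOld} keeps it out of the purge.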
I would be happy to make a synonym of this method with an easier-to-understand "sense", if someone can suggest one. @param daysOld: Minimum age of files that are to be kept in the list. @type daysOld: Integer value >= 0. @return: Number of entries removed """ removed = 0 daysOld = int(daysOld) if daysOld < 0: raise ValueError("Days old value must be an integer >= 0.") for entry in self[:]: if os.path.isfile(entry) and not os.path.islink(entry): try: ageInDays = calculateFileAge(entry) ageInWholeDays = math.floor(ageInDays) if ageInWholeDays < 0: ageInWholeDays = 0 if ageInWholeDays < daysOld: removed += 1 self.remove(entry) except OSError: pass return removed def purgeItems(self): """ Purges all items in the list. Every item in the list will be purged. Directories in the list will I{not} be purged recursively, and hence will only be removed if they are empty. Errors will be ignored. To facilitate easy removal of directories that will end up being empty, the delete process happens in two passes: files first (including soft links), then directories. @return: Tuple containing count of (files, dirs) removed """ files = 0 dirs = 0 for entry in self: if os.path.exists(entry) and (os.path.isfile(entry) or os.path.islink(entry)): try: os.remove(entry) files += 1 logger.debug("Purged file [%s].", entry) except OSError: pass for entry in self: if os.path.exists(entry) and os.path.isdir(entry) and not os.path.islink(entry): try: os.rmdir(entry) dirs += 1 logger.debug("Purged empty directory [%s].", entry) except OSError: pass return (files, dirs) ######################################################################## # Public functions ######################################################################## ########################## # normalizeDir() function ########################## def normalizeDir(path): """ Normalizes a directory name. For our purposes, a directory name is normalized by removing the trailing path separator, if any. 
   This is important because we want directories to appear within lists in a
   consistent way, although from the user's perspective passing in
   C{/path/to/dir/} and C{/path/to/dir} are equivalent.

   @param path: Path to be normalized.
   @type path: String representing a path on disk

   @return: Normalized path, which should be equivalent to the original.
   """
   if path != os.sep and path[-1:] == os.sep:
      return path[:-1]
   return path


#############################
# compareContents() function
#############################

def compareContents(path1, path2, verbose=False):
   """
   Compares the contents of two directories to see if they are equivalent.

   The two directories are recursively compared.  First, we check whether
   they contain exactly the same set of files.  Then, we check to see that
   every given file has exactly the same contents in both directories.

   This is all relatively simple to implement through the magic of
   L{BackupFileList.generateDigestMap}, which knows how to strip a path
   prefix off the front of each entry in the mapping it generates.  This
   makes our comparison as simple as creating a list for each path, then
   generating a digest map for each path and comparing the two.

   If no exception is thrown, the two directories are considered identical.

   If the C{verbose} flag is C{True}, then an alternate (but slower) method
   is used so that any thrown exception can indicate exactly which file
   caused the comparison to fail.  The thrown C{ValueError} exception
   distinguishes between the directories containing different files, and
   containing the same files with differing content.

   @note: Symlinks are I{not} followed for the purposes of this comparison.

   @param path1: First path to compare.
   @type path1: String representing a path on disk

   @param path2: Second path to compare.
   @type path2: String representing a path on disk

   @param verbose: Indicates whether a verbose response should be given.
   @type verbose: Boolean

   @raise ValueError: If a directory doesn't exist or can't be read.
@raise ValueError: If the two directories are not equivalent. @raise IOError: If there is an unusual problem reading the directories. """ try: path1List = BackupFileList() path1List.addDirContents(path1) path1Digest = path1List.generateDigestMap(stripPrefix=normalizeDir(path1)) path2List = BackupFileList() path2List.addDirContents(path2) path2Digest = path2List.generateDigestMap(stripPrefix=normalizeDir(path2)) compareDigestMaps(path1Digest, path2Digest, verbose) except IOError as e: logger.error("I/O error encountered during consistency check.") raise e def compareDigestMaps(digest1, digest2, verbose=False): """ Compares two digest maps and throws an exception if they differ. @param digest1: First digest to compare. @type digest1: Digest as returned from BackupFileList.generateDigestMap() @param digest2: Second digest to compare. @type digest2: Digest as returned from BackupFileList.generateDigestMap() @param verbose: Indicates whether a verbose response should be given. @type verbose: Boolean @raise ValueError: If the two directories are not equivalent. """ if not verbose: if digest1 != digest2: raise ValueError("Consistency check failed.") else: list1 = UnorderedList(list(digest1.keys())) list2 = UnorderedList(list(digest2.keys())) if list1 != list2: raise ValueError("Directories contain a different set of files.") for key in list1: if digest1[key] != digest2[key]: raise ValueError("File contents for [%s] vary between directories." % key) CedarBackup3-3.1.6/CedarBackup3/image.py0000664000175000017500000000247512560007327021367 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. 
Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Provides interface backwards compatibility. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides interface backwards compatibility. In Cedar Backup 2.10.0, a refactoring effort took place while adding code to support DVD hardware. All of the writer functionality was moved to the writers/ package. This mostly-empty file remains to preserve the Cedar Backup library interface. @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## from CedarBackup3.writers.util import IsoImage # pylint: disable=W0611 CedarBackup3-3.1.6/CedarBackup3/testutil.py0000664000175000017500000003550112560007327022156 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2006,2008,2010,2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. 
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Provides unit-testing utilities. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides unit-testing utilities. These utilities are kept here, separate from util.py, because they provide common functionality that I do not want exported "publicly" once Cedar Backup is installed on a system. They are only used for unit testing, and are only useful within the source tree. Many of these functions are in here because they are "good enough" for unit test work but are not robust enough to be real public functions. Others (like L{removedir}) do what they are supposed to, but I don't want responsibility for making them available to others. @sort: findResources, commandAvailable, buildPath, removedir, extractTar, changeFileAge, getMaskAsMode, getLogin, failUnlessAssignRaises, runningAsRoot, platformDebian, platformMacOsX @author: Kenneth J. 
Pronovici """ ######################################################################## # Imported modules ######################################################################## import sys import os import tarfile import time import getpass import random import string # pylint: disable=W0402 import platform import logging from io import StringIO from CedarBackup3.util import encodePath, executeCommand from CedarBackup3.config import Config, OptionsConfig from CedarBackup3.customize import customizeOverrides from CedarBackup3.cli import setupPathResolver ######################################################################## # Public functions ######################################################################## ############################## # setupDebugLogger() function ############################## def setupDebugLogger(): """ Sets up a screen logger for debugging purposes. Normally, the CLI functionality configures the logger so that things get written to the right place. However, for debugging it's sometimes nice to just get everything -- debug information and output -- dumped to the screen. This function takes care of that. """ logger = logging.getLogger("CedarBackup3") logger.setLevel(logging.DEBUG) # let the logger see all messages formatter = logging.Formatter(fmt="%(message)s") handler = logging.StreamHandler(stream=sys.stdout) handler.setFormatter(formatter) handler.setLevel(logging.DEBUG) logger.addHandler(handler) ################# # setupOverrides ################# def setupOverrides(): """ Set up any platform-specific overrides that might be required. When packages are built, this is done manually (hardcoded) in customize.py and the overrides are set up in cli.cli(). This way, no runtime checks need to be done. This is safe, because the package maintainer knows exactly which platform (Debian or not) the package is being built for. Unit tests are different, because they might be run anywhere. 
   So, we attempt to make a guess about platform using platformDebian(), and
   use that to set up the custom overrides so that platform-specific unit
   tests continue to work.
   """
   config = Config()
   config.options = OptionsConfig()
   if platformDebian():
      customizeOverrides(config, platform="debian")
   else:
      customizeOverrides(config, platform="standard")
   setupPathResolver(config)


###########################
# findResources() function
###########################

def findResources(resources, dataDirs):
   """
   Returns a dictionary of locations for various resources.

   @param resources: List of required resources.
   @param dataDirs: List of data directories to search within for resources.

   @return: Dictionary mapping resource name to resource path.
   @raise Exception: If some resource cannot be found.
   """
   mapping = { }
   for resource in resources:
      for resourceDir in dataDirs:
         path = os.path.join(resourceDir, resource)
         if os.path.exists(path):
            mapping[resource] = path
            break
      else:
         raise Exception("Unable to find resource [%s]." % resource)
   return mapping


##############################
# commandAvailable() function
##############################

def commandAvailable(command):
   """
   Indicates whether a command is available on $PATH somewhere.

   This should work on both Windows and UNIX platforms.

   @param command: Command to search for
   @return: Boolean true/false depending on whether command is available.
   """
   if "PATH" in os.environ:
      for path in os.environ["PATH"].split(os.pathsep):  # os.pathsep, not os.sep, separates $PATH entries
         if os.path.exists(os.path.join(path, command)):
            return True
   return False


#######################
# buildPath() function
#######################

def buildPath(components):
   """
   Builds a complete path from a list of components.

   For instance, constructs C{"/a/b/c"} from C{["/a", "b", "c",]}.

   @param components: List of components.
   @return: String path constructed from components.
   @raise ValueError: If a path cannot be encoded properly.
""" path = components[0] for component in components[1:]: path = os.path.join(path, component) return encodePath(path) ####################### # removedir() function ####################### def removedir(tree): """ Recursively removes an entire directory. This is basically taken from an example on python.com. @param tree: Directory tree to remove. @raise ValueError: If a path cannot be encoded properly. """ tree = encodePath(tree) for root, dirs, files in os.walk(tree, topdown=False): for name in files: path = os.path.join(root, name) if os.path.islink(path): os.remove(path) elif os.path.isfile(path): os.remove(path) for name in dirs: path = os.path.join(root, name) if os.path.islink(path): os.remove(path) elif os.path.isdir(path): os.rmdir(path) os.rmdir(tree) ######################## # extractTar() function ######################## def extractTar(tmpdir, filepath): """ Extracts the indicated tar file to the indicated tmpdir. @param tmpdir: Temp directory to extract to. @param filepath: Path to tarfile to extract. @raise ValueError: If a path cannot be encoded properly. """ # pylint: disable=E1101 tmpdir = encodePath(tmpdir) filepath = encodePath(filepath) with tarfile.open(filepath) as tar: try: tar.format = tarfile.GNU_FORMAT except AttributeError: tar.posix = False for tarinfo in tar: tar.extract(tarinfo, tmpdir) ########################### # changeFileAge() function ########################### def changeFileAge(filename, subtract=None): """ Changes a file age using the C{os.utime} function. @note: Some platforms don't seem to be able to set an age precisely. As a result, whereas we might have intended to set an age of 86400 seconds, we actually get an age of 86399.375 seconds. When util.calculateFileAge() looks at that the file, it calculates an age of 0.999992766204 days, which then gets truncated down to zero whole days. The tests get very confused. To work around this, I always subtract off one additional second as a fudge factor. 
   That way, the file age will be I{at least} as old as requested later on.

   @param filename: File to operate on.
   @param subtract: Number of seconds to subtract from the current time.
   @raise ValueError: If a path cannot be encoded properly.
   """
   filename = encodePath(filename)
   newTime = time.time() - 1
   if subtract is not None:
      newTime -= subtract
   os.utime(filename, (newTime, newTime))


###########################
# getMaskAsMode() function
###########################

def getMaskAsMode():
   """
   Returns the user's current umask inverted to a mode.

   A mode is mostly a bitwise inversion of a mask, i.e. mask 002 is mode 775.

   @return: Umask converted to a mode, as an integer.
   """
   umask = os.umask(0o777)
   os.umask(umask)
   return int(~umask & 0o777)  # invert, then use only the lower bits


######################
# getLogin() function
######################

def getLogin():
   """
   Returns the name of the currently logged-in user.

   This might fail under some circumstances - but if it does, our tests would
   fail anyway.
   """
   return getpass.getuser()


############################
# randomFilename() function
############################

def randomFilename(length, prefix=None, suffix=None):
   """
   Generates a random filename with the given length.

   @param length: Length of filename.
   @return: Random filename.
   """
   characters = [None] * length
   for i in range(length):
      characters[i] = random.choice(string.ascii_uppercase)
   if prefix is None:
      prefix = ""
   if suffix is None:
      suffix = ""
   return "%s%s%s" % (prefix, "".join(characters), suffix)


####################################
# failUnlessAssignRaises() function
####################################

def failUnlessAssignRaises(testCase, exception, obj, prop, value):
   """
   Equivalent of C{failUnlessRaises}, but used for property assignments instead.

   It's nice to be able to use C{failUnlessRaises} to check that a method
   call raises the exception that you expect.
   Unfortunately, this method can't be used to check Python property
   assignments, even though these property assignments are actually
   implemented underneath as methods.

   This function (which can be easily called by unit test classes) provides
   an easy way to wrap the assignment checks.  It's not pretty, or as
   intuitive as the original check it's modeled on, but it does work.

   Let's assume you make this method call::

      testCase.failUnlessAssignRaises(ValueError, collectDir, "absolutePath", absolutePath)

   If you do this, a test case failure will be raised unless the assignment::

      collectDir.absolutePath = absolutePath

   fails with a C{ValueError} exception.  The failure message differentiates
   between the case where no exception was raised and the case where the
   wrong exception was raised.

   @note: Internally, the C{missed} and C{instead} variables are used rather
   than directly calling C{testCase.fail} upon noticing a problem because the
   act of "failure" itself generates an exception that would be caught by the
   general C{except} clause.

   @param testCase: PyUnit test case object (i.e. self).
   @param exception: Exception that is expected to be raised.
   @param obj: Object whose property is to be assigned to.
   @param prop: Name of the property, as a string.
   @param value: Value that is to be assigned to the property.

   @see: C{unittest.TestCase.failUnlessRaises}
   """
   missed = False
   instead = None
   try:
      exec("obj.%s = value" % prop)  # pylint: disable=W0122
      missed = True
   except exception:
      pass
   except Exception as e:
      instead = e
   if missed:
      testCase.fail("Expected assignment to raise %s, but got no exception." % (exception.__name__))
   if instead is not None:
      testCase.fail("Expected assignment to raise %s, but got %s instead." % (exception.__name__, instead.__class__.__name__))


###########################
# captureOutput() function
###########################

def captureOutput(c):
   """
   Captures the output (stdout, stderr) of a function or a method.
Some of our functions don't do anything other than just print output. We need a way to test these functions (at least nominally) but we don't want any of the output spoiling the test suite output. This function just creates a dummy file descriptor that can be used as a target by the callable function, rather than C{stdout} or C{stderr}. @note: This method assumes that C{callable} doesn't take any arguments besides keyword argument C{fd} to specify the file descriptor. @param c: Callable function or method. @return: Output of function, as one big string. """ fd = StringIO() c(fd=fd) result = fd.getvalue() fd.close() return result ######################### # _isPlatform() function ######################### def _isPlatform(name): """ Returns boolean indicating whether we're running on the indicated platform. @param name: Platform name to check, currently one of "windows" or "macosx" """ if name == "windows": return platform.platform(True, True).startswith("Windows") elif name == "macosx": return sys.platform == "darwin" elif name == "debian": return platform.platform(False, False).find("debian") > 0 elif name == "cygwin": return platform.platform(True, True).startswith("CYGWIN") else: raise ValueError("Unknown platform [%s]." % name) ############################ # platformDebian() function ############################ def platformDebian(): """ Returns boolean indicating whether this is the Debian platform. """ return _isPlatform("debian") ############################ # platformMacOsX() function ############################ def platformMacOsX(): """ Returns boolean indicating whether this is the Mac OS X platform. """ return _isPlatform("macosx") ########################### # runningAsRoot() function ########################### def runningAsRoot(): """ Returns boolean indicating whether the effective user id is root. 
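The captureOutput() contract above — the callable must accept its output target as a single C{fd} keyword argument — can be exercised with a minimal standalone sketch (the C{greet} function is illustrative, not part of Cedar Backup):

```python
import sys
from io import StringIO

def capture_output(c):
   # Hand the callable an in-memory buffer instead of stdout/stderr,
   # then return everything it wrote, as one big string.
   fd = StringIO()
   c(fd=fd)
   result = fd.getvalue()
   fd.close()
   return result

def greet(fd=sys.stdout):
   # Illustrative callable: writes all of its output to the fd keyword.
   fd.write("hello\n")
```

Calling C{capture_output(greet)} returns the string the callable wrote, with nothing reaching the terminal.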
""" return os.geteuid() == 0 ############################## # availableLocales() function ############################## def availableLocales(): """ Returns a list of available locales on the system @return: List of string locale names """ locales = [] output = executeCommand(["locale"], [ "-a", ], returnOutput=True, ignoreStderr=True)[1] for line in output: locales.append(line.rstrip()) return locales CedarBackup3-3.1.6/CedarBackup3/tools/0002775000175000017500000000000012657665551021105 5ustar pronovicpronovic00000000000000CedarBackup3-3.1.6/CedarBackup3/tools/span.py0000775000175000017500000006163112562431536022415 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2007-2008,2010,2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. 
Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Spans staged data among multiple discs # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Notes ######################################################################## """ Spans staged data among multiple discs This is the Cedar Backup span tool. It is intended for use by people who stage more data than can fit on a single disc. It allows a user to split staged data among more than one disc. It can't be an extension because it requires user input when switching media. Most configuration is taken from the Cedar Backup configuration file, specifically the store section. A few pieces of configuration are taken directly from the user. @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules and constants ######################################################################## # System modules import sys import os import logging import tempfile # Cedar Backup modules from CedarBackup3.release import AUTHOR, EMAIL, VERSION, DATE, COPYRIGHT from CedarBackup3.util import displayBytes, convertSize, mount, unmount from CedarBackup3.util import UNIT_SECTORS, UNIT_BYTES from CedarBackup3.config import Config from CedarBackup3.filesystem import BackupFileList, compareDigestMaps, normalizeDir from CedarBackup3.cli import Options, setupLogging, setupPathResolver from CedarBackup3.cli import DEFAULT_CONFIG, DEFAULT_LOGFILE, DEFAULT_OWNERSHIP, DEFAULT_MODE from CedarBackup3.actions.constants import STORE_INDICATOR from CedarBackup3.actions.util import createWriter from CedarBackup3.actions.store import writeIndicatorFile from CedarBackup3.actions.util import findDailyDirs from CedarBackup3.util import Diagnostics ######################################################################## # Module-wide constants and variables 
########################################################################

logger = logging.getLogger("CedarBackup3.log.tools.span")


#######################################################################
# SpanOptions class
#######################################################################

class SpanOptions(Options):
   """
   Tool-specific command-line options.

   Most of the cback3 command-line options are exactly what we need here --
   logfile path, permissions, verbosity, etc.  However, we need to make a few
   tweaks since we don't accept any actions.

   Also, a few extra command line options that we accept are really ignored
   underneath.  I just don't care about that for a tool like this.
   """

   def validate(self):
      """
      Validates command-line options represented by the object.
      There are no validations here, because we don't use any actions.
      @raise ValueError: If one of the validations fails.
      """
      pass


#######################################################################
# Public functions
#######################################################################

#################
# cli() function
#################

def cli():
   """
   Implements the command-line interface for the C{cback3-span} script.

   Essentially, this is the "main routine" for the cback3-span script.  It
   does all of the argument processing for the script, and then also
   implements the tool functionality.

   This function looks pretty similar to C{CedarBackup3.cli.cli()}.  It's
   not easy to refactor this code to make it reusable and also readable, so
   I've decided to just live with the duplication.
A different error code is returned for each type of failure: - C{1}: The Python interpreter version is < 3.4 - C{2}: Error processing command-line arguments - C{3}: Error configuring logging - C{4}: Error parsing indicated configuration file - C{5}: Backup was interrupted with a CTRL-C or similar - C{6}: Error executing other parts of the script @note: This script uses print rather than logging to the INFO level, because it is interactive. Underlying Cedar Backup functionality uses the logging mechanism exclusively. @return: Error code as described above. """ try: if list(map(int, [sys.version_info[0], sys.version_info[1]])) < [3, 4]: sys.stderr.write("Python 3 version 3.4 or greater required.\n") return 1 except: # sys.version_info isn't available before 2.0 sys.stderr.write("Python 3 version 3.4 or greater required.\n") return 1 try: options = SpanOptions(argumentList=sys.argv[1:]) except Exception as e: _usage() sys.stderr.write(" *** Error: %s\n" % e) return 2 if options.help: _usage() return 0 if options.version: _version() return 0 if options.diagnostics: _diagnostics() return 0 if options.stacktrace: logfile = setupLogging(options) else: try: logfile = setupLogging(options) except Exception as e: sys.stderr.write("Error setting up logging: %s\n" % e) return 3 logger.info("Cedar Backup 'span' utility run started.") logger.info("Options were [%s]", options) logger.info("Logfile is [%s]", logfile) if options.config is None: logger.debug("Using default configuration file.") configPath = DEFAULT_CONFIG else: logger.debug("Using user-supplied configuration file.") configPath = options.config try: logger.info("Configuration path is [%s]", configPath) config = Config(xmlPath=configPath) setupPathResolver(config) except Exception as e: logger.error("Error reading or handling configuration: %s", e) logger.info("Cedar Backup 'span' utility run completed with status 4.") return 4 if options.stacktrace: _executeAction(options, config) else: try: _executeAction(options, 
config) except KeyboardInterrupt: logger.error("Backup interrupted.") logger.info("Cedar Backup 'span' utility run completed with status 5.") return 5 except Exception as e: logger.error("Error executing backup: %s", e) logger.info("Cedar Backup 'span' utility run completed with status 6.") return 6 logger.info("Cedar Backup 'span' utility run completed with status 0.") return 0 ####################################################################### # Utility functions ####################################################################### #################### # _usage() function #################### def _usage(fd=sys.stderr): """ Prints usage information for the cback3-span script. @param fd: File descriptor used to print information. @note: The C{fd} is used rather than C{print} to facilitate unit testing. """ fd.write("\n") fd.write(" Usage: cback3-span [switches]\n") fd.write("\n") fd.write(" Cedar Backup 'span' tool.\n") fd.write("\n") fd.write(" This Cedar Backup utility spans staged data between multiple discs.\n") fd.write(" It is a utility, not an extension, and requires user interaction.\n") fd.write("\n") fd.write(" The following switches are accepted, mostly to set up underlying\n") fd.write(" Cedar Backup functionality:\n") fd.write("\n") fd.write(" -h, --help Display this usage/help listing\n") fd.write(" -V, --version Display version information\n") fd.write(" -b, --verbose Print verbose output as well as logging to disk\n") fd.write(" -c, --config Path to config file (default: %s)\n" % DEFAULT_CONFIG) fd.write(" -l, --logfile Path to logfile (default: %s)\n" % DEFAULT_LOGFILE) fd.write(" -o, --owner Logfile ownership, user:group (default: %s:%s)\n" % (DEFAULT_OWNERSHIP[0], DEFAULT_OWNERSHIP[1])) fd.write(" -m, --mode Octal logfile permissions mode (default: %o)\n" % DEFAULT_MODE) fd.write(" -O, --output Record some sub-command (i.e. 
tar) output to the log\n") fd.write(" -d, --debug Write debugging information to the log (implies --output)\n") fd.write(" -s, --stack Dump a Python stack trace instead of swallowing exceptions\n") fd.write("\n") ###################### # _version() function ###################### def _version(fd=sys.stdout): """ Prints version information for the cback3-span script. @param fd: File descriptor used to print information. @note: The C{fd} is used rather than C{print} to facilitate unit testing. """ fd.write("\n") fd.write(" Cedar Backup 'span' tool.\n") fd.write(" Included with Cedar Backup version %s, released %s.\n" % (VERSION, DATE)) fd.write("\n") fd.write(" Copyright (c) %s %s <%s>.\n" % (COPYRIGHT, AUTHOR, EMAIL)) fd.write(" See CREDITS for a list of included code and other contributors.\n") fd.write(" This is free software; there is NO warranty. See the\n") fd.write(" GNU General Public License version 2 for copying conditions.\n") fd.write("\n") fd.write(" Use the --help option for usage information.\n") fd.write("\n") ########################## # _diagnostics() function ########################## def _diagnostics(fd=sys.stdout): """ Prints runtime diagnostics information. @param fd: File descriptor used to print information. @note: The C{fd} is used rather than C{print} to facilitate unit testing. """ fd.write("\n") fd.write("Diagnostics:\n") fd.write("\n") Diagnostics().printDiagnostics(fd=fd, prefix=" ") fd.write("\n") ############################ # _executeAction() function ############################ def _executeAction(options, config): """ Implements the guts of the cback3-span tool. @param options: Program command-line options. @type options: SpanOptions object. @param config: Program configuration. @type config: Config object. 
   @raise Exception: Under many generic error conditions
   """
   print("")
   print("================================================")
   print(" Cedar Backup 'span' tool")
   print("================================================")
   print("")
   print("This is the Cedar Backup span tool.  It is used to split up staging")
   print("data when that staging data does not fit onto a single disc.")
   print("")
   print("This utility operates using Cedar Backup configuration.  Configuration")
   print("specifies which staging directory to look at and which writer device")
   print("and media type to use.")
   print("")
   if not _getYesNoAnswer("Continue?", default="Y"):
      return
   print("===")
   print("")
   print("Cedar Backup store configuration looks like this:")
   print("")
   print(" Source Directory...: %s" % config.store.sourceDir)
   print(" Media Type.........: %s" % config.store.mediaType)
   print(" Device Type........: %s" % config.store.deviceType)
   print(" Device Path........: %s" % config.store.devicePath)
   print(" Device SCSI ID.....: %s" % config.store.deviceScsiId)
   print(" Drive Speed........: %s" % config.store.driveSpeed)
   print(" Check Data Flag....: %s" % config.store.checkData)
   print(" No Eject Flag......: %s" % config.store.noEject)
   print("")
   if not _getYesNoAnswer("Is this OK?", default="Y"):
      return
   print("===")
   (writer, mediaCapacity) = _getWriter(config)
   print("")
   print("Please wait, indexing the source directory (this may take a while)...")
   (dailyDirs, fileList) = _findDailyDirs(config.store.sourceDir)
   print("===")
   print("")
   print("The following daily staging directories have not yet been written to disc:")
   print("")
   for dailyDir in dailyDirs:
      print(" %s" % dailyDir)
   totalSize = fileList.totalSize()
   print("")
   print("The total size of the data in these directories is %s." % displayBytes(totalSize))
   print("")
   if not _getYesNoAnswer("Continue?", default="Y"):
      return
   print("===")
   print("")
   print("Based on configuration, the capacity of your media is %s." % displayBytes(mediaCapacity))
   print("")
   print("Since estimates are not perfect and there is some uncertainty in")
   print("media capacity calculations, it is good to have a \"cushion\",")
   print("a percentage of capacity to set aside.  The cushion reduces the")
   print("capacity of your media, so a 1.5% cushion leaves 98.5% remaining.")
   print("")
   cushion = _getFloat("What cushion percentage?", default=4.5)
   print("===")
   realCapacity = ((100.0 - cushion)/100.0) * mediaCapacity
   minimumDiscs = (totalSize/realCapacity) + 1
   print("")
   print("The real capacity, taking into account the %.2f%% cushion, is %s." % (cushion, displayBytes(realCapacity)))
   print("It will take at least %d disc(s) to store your %s of data." % (minimumDiscs, displayBytes(totalSize)))
   print("")
   if not _getYesNoAnswer("Continue?", default="Y"):
      return
   print("===")
   happy = False
   while not happy:
      print("")
      print("Which algorithm do you want to use to span your data across")
      print("multiple discs?")
      print("")
      print("The following algorithms are available:")
      print("")
      print(" first....: The \"first-fit\" algorithm")
      print(" best.....: The \"best-fit\" algorithm")
      print(" worst....: The \"worst-fit\" algorithm")
      print(" alternate: The \"alternate-fit\" algorithm")
      print("")
      print("If you don't like the results you will have a chance to try a")
      print("different one later.")
      print("")
      algorithm = _getChoiceAnswer("Which algorithm?", "worst", [ "first", "best", "worst", "alternate", ])
      print("===")
      print("")
      print("Please wait, generating file lists (this may take a while)...")
      spanSet = fileList.generateSpan(capacity=realCapacity, algorithm="%s_fit" % algorithm)
      print("===")
      print("")
      print("Using the \"%s-fit\" algorithm, Cedar Backup can split your data" % algorithm)
      print("into %d discs." % len(spanSet))
      print("")
      counter = 0
      for item in spanSet:
         counter += 1
         print("Disc %d: %d files, %s, %.2f%% utilization" % (counter, len(item.fileList), displayBytes(item.size), item.utilization))
      print("")
      if _getYesNoAnswer("Accept this solution?", default="Y"):
         happy = True
      print("===")
   counter = 0
   for spanItem in spanSet:
      counter += 1
      if counter == 1:
         print("")
         _getReturn("Please place the first disc in your backup device.\nPress return when ready.")
         print("===")
      else:
         print("")
         _getReturn("Please replace the disc in your backup device.\nPress return when ready.")
         print("===")
      _writeDisc(config, writer, spanItem)
   _writeStoreIndicator(config, dailyDirs)
   print("")
   print("Completed writing all discs.")


############################
# _findDailyDirs() function
############################

def _findDailyDirs(stagingDir):
   """
   Returns a list of all daily staging directories that have not yet been stored.

   The store indicator file C{cback.store} will be written to a daily staging
   directory once that directory is written to disc.  So, this function looks
   at each daily staging directory within the configured staging directory,
   and returns a list of those which do not contain the indicator file.

   Returned is a tuple containing two items: a list of daily staging
   directories, and a BackupFileList containing all files among those staging
   directories.

   @param stagingDir: Configured staging directory
   @return: Tuple (staging dirs, backup file list)
   """
   results = findDailyDirs(stagingDir, STORE_INDICATOR)
   fileList = BackupFileList()
   for item in results:
      fileList.addDirContents(item)
   return (results, fileList)


##################################
# _writeStoreIndicator() function
##################################

def _writeStoreIndicator(config, dailyDirs):
   """
   Writes a store indicator file into daily directories.

   @param config: Config object.
@param dailyDirs: List of daily directories """ for dailyDir in dailyDirs: writeIndicatorFile(dailyDir, STORE_INDICATOR, config.options.backupUser, config.options.backupGroup) ######################## # _getWriter() function ######################## def _getWriter(config): """ Gets a writer and media capacity from store configuration. Returned is a writer and a media capacity in bytes. @param config: Cedar Backup configuration @return: Tuple of (writer, mediaCapacity) """ writer = createWriter(config) mediaCapacity = convertSize(writer.media.capacity, UNIT_SECTORS, UNIT_BYTES) return (writer, mediaCapacity) ######################## # _writeDisc() function ######################## def _writeDisc(config, writer, spanItem): """ Writes a span item to disc. @param config: Cedar Backup configuration @param writer: Writer to use @param spanItem: Span item to write """ print("") _discInitializeImage(config, writer, spanItem) _discWriteImage(config, writer) _discConsistencyCheck(config, writer, spanItem) print("Write process is complete.") print("===") def _discInitializeImage(config, writer, spanItem): """ Initialize an ISO image for a span item. @param config: Cedar Backup configuration @param writer: Writer to use @param spanItem: Span item to write """ complete = False while not complete: try: print("Initializing image...") writer.initializeImage(newDisc=True, tmpdir=config.options.workingDir) for path in spanItem.fileList: graftPoint = os.path.dirname(path.replace(config.store.sourceDir, "", 1)) writer.addImageEntry(path, graftPoint) complete = True except KeyboardInterrupt as e: raise e except Exception as e: logger.error("Failed to initialize image: %s", e) if not _getYesNoAnswer("Retry initialization step?", default="Y"): raise e print("Ok, attempting retry.") print("===") print("Completed initializing image.") def _discWriteImage(config, writer): """ Writes an ISO image for a span item.
@param config: Cedar Backup configuration @param writer: Writer to use """ complete = False while not complete: try: print("Writing image to disc...") writer.writeImage() complete = True except KeyboardInterrupt as e: raise e except Exception as e: logger.error("Failed to write image: %s", e) if not _getYesNoAnswer("Retry this step?", default="Y"): raise e print("Ok, attempting retry.") _getReturn("Please replace media if needed.\nPress return when ready.") print("===") print("Completed writing image.") def _discConsistencyCheck(config, writer, spanItem): """ Run a consistency check on an ISO image for a span item. @param config: Cedar Backup configuration @param writer: Writer to use @param spanItem: Span item to write """ if config.store.checkData: complete = False while not complete: try: print("Running consistency check...") _consistencyCheck(config, spanItem.fileList) complete = True except KeyboardInterrupt as e: raise e except Exception as e: logger.error("Consistency check failed: %s", e) if not _getYesNoAnswer("Retry the consistency check?", default="Y"): raise e if _getYesNoAnswer("Rewrite the disc first?", default="N"): print("Ok, attempting retry.") _getReturn("Please replace the disc in your backup device.\nPress return when ready.") print("===") _discWriteImage(config, writer) else: print("Ok, attempting retry.") print("===") print("Completed consistency check.") ############################### # _consistencyCheck() function ############################### def _consistencyCheck(config, fileList): """ Runs a consistency check against media in the backup device. The function mounts the device at a temporary mount point in the working directory, and then compares the passed-in file list's digest map with the one generated from the disc. The two lists should be identical. If no exceptions are thrown, there were no problems with the consistency check. @warning: The implementation of this function is very UNIX-specific. @param config: Config object. 
@param fileList: BackupFileList whose contents to check against @raise ValueError: If the check fails @raise IOError: If there is a problem working with the media. """ logger.debug("Running consistency check.") mountPoint = tempfile.mkdtemp(dir=config.options.workingDir) try: mount(config.store.devicePath, mountPoint, "iso9660") discList = BackupFileList() discList.addDirContents(mountPoint) sourceList = BackupFileList() sourceList.extend(fileList) discListDigest = discList.generateDigestMap(stripPrefix=normalizeDir(mountPoint)) sourceListDigest = sourceList.generateDigestMap(stripPrefix=normalizeDir(config.store.sourceDir)) compareDigestMaps(sourceListDigest, discListDigest, verbose=True) logger.info("Consistency check completed. No problems found.") finally: unmount(mountPoint, True, 5, 1) # try 5 times, and remove mount point when done ######################################################################### # User interface utilities ######################################################################## def _getYesNoAnswer(prompt, default): """ Get a yes/no answer from the user. The default will be placed at the end of the prompt. A "Y" or "y" is considered yes, anything else no. A blank (empty) response results in the default. @param prompt: Prompt to show. @param default: Default to set if the result is blank @return: Boolean true/false corresponding to Y/N """ if default == "Y": prompt = "%s [Y/n]: " % prompt else: prompt = "%s [y/N]: " % prompt answer = input(prompt) if answer in [ None, "", ]: answer = default if answer[0] in [ "Y", "y", ]: return True else: return False def _getChoiceAnswer(prompt, default, validChoices): """ Get a particular choice from the user. The default will be placed at the end of the prompt. The function loops until getting a valid choice. A blank (empty) response results in the default. @param prompt: Prompt to show. @param default: Default to set if the result is None or blank. 
@param validChoices: List of valid choices (strings) @return: Valid choice from user. """ prompt = "%s [%s]: " % (prompt, default) answer = input(prompt) if answer in [ None, "", ]: answer = default while answer not in validChoices: print("Choice must be one of %s" % validChoices) answer = input(prompt) return answer def _getFloat(prompt, default): """ Get a floating point number from the user. The default will be placed at the end of the prompt. The function loops until getting a valid floating point number. A blank (empty) response results in the default. @param prompt: Prompt to show. @param default: Default to set if the result is None or blank. @return: Floating point number from user """ prompt = "%s [%.2f]: " % (prompt, default) while True: answer = input(prompt) if answer in [ None, "" ]: return default else: try: return float(answer) except ValueError: print("Enter a floating point number.") def _getReturn(prompt): """ Get a return key from the user. @param prompt: Prompt to show. """ input(prompt) ######################################################################### # Main routine ######################################################################## if __name__ == "__main__": sys.exit(cli()) CedarBackup3-3.1.6/CedarBackup3/tools/__init__.py0000664000175000017500000000334112560007327023175 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. 
Pronovici # Language : Python 3 (>= 3.4) # Project : Official Cedar Backup Tools # Purpose : Provides package initialization # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Official Cedar Backup Tools This package provides official Cedar Backup tools. Tools are things that feel a little like extensions, but don't fit the normal mold of extensions. For instance, they might not be intended to run from cron, or might need to interact dynamically with the user (i.e. accept user input). Tools are usually scripts that are run directly from the command line, just like the main C{cback3} script. Like the C{cback3} script, the majority of a tool is implemented in a .py module, and then the script just invokes the module's C{cli()} function. The actual scripts for tools are distributed in the util/ directory. @author: Kenneth J. Pronovici """ ######################################################################## # Package initialization ######################################################################## # Using 'from CedarBackup3.tools import *' will just import the modules listed # in the __all__ variable. __all__ = [ 'span', 'amazons3', ] CedarBackup3-3.1.6/CedarBackup3/tools/amazons3.py0000775000175000017500000012740012657663251023212 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2014,2015 Kenneth J. Pronovici. # All rights reserved. 
# # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Cedar Backup tool to synchronize an Amazon S3 bucket. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Notes ######################################################################## """ Synchronizes a local directory with an Amazon S3 bucket. No configuration is required; all necessary information is taken from the command-line. The only thing configuration would help with is the path resolver interface, and it doesn't seem worth it to require configuration just to get that. @author: Kenneth J.
Pronovici """ ######################################################################## # Imported modules and constants ######################################################################## # System modules import sys import os import logging import getopt import json import warnings from functools import total_ordering from pathlib import Path import chardet # Cedar Backup modules from CedarBackup3.release import AUTHOR, EMAIL, VERSION, DATE, COPYRIGHT from CedarBackup3.filesystem import FilesystemList from CedarBackup3.cli import setupLogging, DEFAULT_LOGFILE, DEFAULT_OWNERSHIP, DEFAULT_MODE from CedarBackup3.util import Diagnostics, splitCommandLine, encodePath from CedarBackup3.util import executeCommand ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup3.log.tools.amazons3") AWS_COMMAND = [ "aws" ] SHORT_SWITCHES = "hVbql:o:m:OdsDvw" LONG_SWITCHES = [ 'help', 'version', 'verbose', 'quiet', 'logfile=', 'owner=', 'mode=', 'output', 'debug', 'stack', 'diagnostics', 'verifyOnly', 'ignoreWarnings', ] ####################################################################### # Options class ####################################################################### @total_ordering class Options(object): ###################### # Class documentation ###################### """ Class representing command-line options for the cback3-amazons3-sync script. The C{Options} class is a Python object representation of the command-line options of the cback3-amazons3-sync script. The object representation is two-way: a command line string or a list of command line arguments can be used to create an C{Options} object, and then changes to the object can be propogated back to a list of command-line arguments or to a command-line string. 
An C{Options} object can even be created from scratch programmatically (if you have a need for that). There are two main levels of validation in the C{Options} class. The first is field-level validation. Field-level validation comes into play when a given field in an object is assigned to or updated. We use Python's C{property} functionality to enforce specific validations on field values, and in some places we even use customized list classes to enforce validations on list members. You should expect to catch a C{ValueError} exception when making assignments to fields if you are programmatically filling an object. The second level of validation is post-completion validation. Certain validations don't make sense until an object representation of options is fully "complete". We don't want these validations to apply all of the time, because it would make building up a valid object from scratch a real pain. For instance, we might have to do things in the right order to keep from throwing exceptions, etc. All of these post-completion validations are encapsulated in the L{Options.validate} method. This method can be called at any time by a client, and will always be called immediately after creating a C{Options} object from a command line and before exporting a C{Options} object back to a command line. This way, we get acceptable ease-of-use but we also don't accept or emit invalid command lines. @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__ """ ############## # Constructor ############## def __init__(self, argumentList=None, argumentString=None, validate=True): """ Initializes an options object. If you initialize the object without passing either C{argumentList} or C{argumentString}, the object will be empty and will be invalid until it is filled in properly. No reference to the original arguments is saved off by this class. 
Once the data has been parsed (successfully or not) this original information is discarded. The argument list is assumed to be a list of arguments, not including the name of the command, something like C{sys.argv[1:]}. If you pass C{sys.argv} instead, things are not going to work. The argument string will be parsed into an argument list by the L{util.splitCommandLine} function (see the documentation for that function for some important notes about its limitations). There is an assumption that the resulting list will be equivalent to C{sys.argv[1:]}, just like C{argumentList}. Unless the C{validate} argument is C{False}, the L{Options.validate} method will be called (with its default arguments) after successfully parsing any passed-in command line. This validation ensures that appropriate arguments, etc. have been specified. Keep in mind that even if C{validate} is C{False}, it might not be possible to parse the passed-in command line, so an exception might still be raised. @note: The command line format is specified by the L{_usage} function. Call L{_usage} to see a usage statement for the cback3-amazons3-sync script. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to read in invalid command line arguments. @param argumentList: Command line for a program. @type argumentList: List of arguments, i.e. C{sys.argv[1:]} @param argumentString: Command line for a program. @type argumentString: String, i.e. "cback3-amazons3-sync --verbose sourceDir s3BucketUrl" @param validate: Validate the command line after parsing it. @type validate: Boolean true/false. @raise getopt.GetoptError: If the command-line arguments could not be parsed. @raise ValueError: If the command-line arguments are invalid.
""" self._help = False self._version = False self._verbose = False self._quiet = False self._logfile = None self._owner = None self._mode = None self._output = False self._debug = False self._stacktrace = False self._diagnostics = False self._verifyOnly = False self._ignoreWarnings = False self._sourceDir = None self._s3BucketUrl = None if argumentList is not None and argumentString is not None: raise ValueError("Use either argumentList or argumentString, but not both.") if argumentString is not None: argumentList = splitCommandLine(argumentString) if argumentList is not None: self._parseArgumentList(argumentList) if validate: self.validate() ######################### # String representations ######################### def __repr__(self): """ Official string representation for class instance. """ return self.buildArgumentString(validate=False) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() ############################# # Standard comparison method ############################# def __eq__(self, other): """Equals operator, iplemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, iplemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, iplemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
""" if other is None: return 1 if self.help != other.help: if self.help < other.help: return -1 else: return 1 if self.version != other.version: if self.version < other.version: return -1 else: return 1 if self.verbose != other.verbose: if self.verbose < other.verbose: return -1 else: return 1 if self.quiet != other.quiet: if self.quiet < other.quiet: return -1 else: return 1 if self.logfile != other.logfile: if str(self.logfile or "") < str(other.logfile or ""): return -1 else: return 1 if self.owner != other.owner: if str(self.owner or "") < str(other.owner or ""): return -1 else: return 1 if self.mode != other.mode: if int(self.mode or 0) < int(other.mode or 0): return -1 else: return 1 if self.output != other.output: if self.output < other.output: return -1 else: return 1 if self.debug != other.debug: if self.debug < other.debug: return -1 else: return 1 if self.stacktrace != other.stacktrace: if self.stacktrace < other.stacktrace: return -1 else: return 1 if self.diagnostics != other.diagnostics: if self.diagnostics < other.diagnostics: return -1 else: return 1 if self.verifyOnly != other.verifyOnly: if self.verifyOnly < other.verifyOnly: return -1 else: return 1 if self.ignoreWarnings != other.ignoreWarnings: if self.ignoreWarnings < other.ignoreWarnings: return -1 else: return 1 if self.sourceDir != other.sourceDir: if str(self.sourceDir or "") < str(other.sourceDir or ""): return -1 else: return 1 if self.s3BucketUrl != other.s3BucketUrl: if str(self.s3BucketUrl or "") < str(other.s3BucketUrl or ""): return -1 else: return 1 return 0 ############# # Properties ############# def _setHelp(self, value): """ Property target used to set the help flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._help = True else: self._help = False def _getHelp(self): """ Property target used to get the help flag. """ return self._help def _setVersion(self, value): """ Property target used to set the version flag. 
No validations, but we normalize the value to C{True} or C{False}. """ if value: self._version = True else: self._version = False def _getVersion(self): """ Property target used to get the version flag. """ return self._version def _setVerbose(self, value): """ Property target used to set the verbose flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._verbose = True else: self._verbose = False def _getVerbose(self): """ Property target used to get the verbose flag. """ return self._verbose def _setQuiet(self, value): """ Property target used to set the quiet flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._quiet = True else: self._quiet = False def _getQuiet(self): """ Property target used to get the quiet flag. """ return self._quiet def _setLogfile(self, value): """ Property target used to set the logfile parameter. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if len(value) < 1: raise ValueError("The logfile parameter must be a non-empty string.") self._logfile = encodePath(value) def _getLogfile(self): """ Property target used to get the logfile parameter. """ return self._logfile def _setOwner(self, value): """ Property target used to set the owner parameter. If not C{None}, the owner must be a C{(user,group)} tuple or list. Strings (and inherited children of strings) are explicitly disallowed. The value will be normalized to a tuple. @raise ValueError: If the value is not valid. 
""" if value is None: self._owner = None else: if isinstance(value, str): raise ValueError("Must specify user and group tuple for owner parameter.") if len(value) != 2: raise ValueError("Must specify user and group tuple for owner parameter.") if len(value[0]) < 1 or len(value[1]) < 1: raise ValueError("User and group tuple values must be non-empty strings.") self._owner = (value[0], value[1]) def _getOwner(self): """ Property target used to get the owner parameter. The parameter is a tuple of C{(user, group)}. """ return self._owner def _setMode(self, value): """ Property target used to set the mode parameter. """ if value is None: self._mode = None else: try: if isinstance(value, str): value = int(value, 8) else: value = int(value) except TypeError: raise ValueError("Mode must be an octal integer >= 0, i.e. 644.") if value < 0: raise ValueError("Mode must be an octal integer >= 0. i.e. 644.") self._mode = value def _getMode(self): """ Property target used to get the mode parameter. """ return self._mode def _setOutput(self, value): """ Property target used to set the output flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._output = True else: self._output = False def _getOutput(self): """ Property target used to get the output flag. """ return self._output def _setDebug(self, value): """ Property target used to set the debug flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._debug = True else: self._debug = False def _getDebug(self): """ Property target used to get the debug flag. """ return self._debug def _setStacktrace(self, value): """ Property target used to set the stacktrace flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._stacktrace = True else: self._stacktrace = False def _getStacktrace(self): """ Property target used to get the stacktrace flag. 
""" return self._stacktrace def _setDiagnostics(self, value): """ Property target used to set the diagnostics flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._diagnostics = True else: self._diagnostics = False def _getDiagnostics(self): """ Property target used to get the diagnostics flag. """ return self._diagnostics def _setVerifyOnly(self, value): """ Property target used to set the verifyOnly flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._verifyOnly = True else: self._verifyOnly = False def _getVerifyOnly(self): """ Property target used to get the verifyOnly flag. """ return self._verifyOnly def _setIgnoreWarnings(self, value): """ Property target used to set the ignoreWarnings flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._ignoreWarnings = True else: self._ignoreWarnings = False def _getIgnoreWarnings(self): """ Property target used to get the ignoreWarnings flag. """ return self._ignoreWarnings def _setSourceDir(self, value): """ Property target used to set the sourceDir parameter. """ if value is not None: if len(value) < 1: raise ValueError("The sourceDir parameter must be a non-empty string.") self._sourceDir = value def _getSourceDir(self): """ Property target used to get the sourceDir parameter. """ return self._sourceDir def _setS3BucketUrl(self, value): """ Property target used to set the s3BucketUrl parameter. """ if value is not None: if len(value) < 1: raise ValueError("The s3BucketUrl parameter must be a non-empty string.") self._s3BucketUrl = value def _getS3BucketUrl(self): """ Property target used to get the s3BucketUrl parameter. 
""" return self._s3BucketUrl help = property(_getHelp, _setHelp, None, "Command-line help (C{-h,--help}) flag.") version = property(_getVersion, _setVersion, None, "Command-line version (C{-V,--version}) flag.") verbose = property(_getVerbose, _setVerbose, None, "Command-line verbose (C{-b,--verbose}) flag.") quiet = property(_getQuiet, _setQuiet, None, "Command-line quiet (C{-q,--quiet}) flag.") logfile = property(_getLogfile, _setLogfile, None, "Command-line logfile (C{-l,--logfile}) parameter.") owner = property(_getOwner, _setOwner, None, "Command-line owner (C{-o,--owner}) parameter, as tuple C{(user,group)}.") mode = property(_getMode, _setMode, None, "Command-line mode (C{-m,--mode}) parameter.") output = property(_getOutput, _setOutput, None, "Command-line output (C{-O,--output}) flag.") debug = property(_getDebug, _setDebug, None, "Command-line debug (C{-d,--debug}) flag.") stacktrace = property(_getStacktrace, _setStacktrace, None, "Command-line stacktrace (C{-s,--stack}) flag.") diagnostics = property(_getDiagnostics, _setDiagnostics, None, "Command-line diagnostics (C{-D,--diagnostics}) flag.") verifyOnly = property(_getVerifyOnly, _setVerifyOnly, None, "Command-line verifyOnly (C{-v,--verifyOnly}) flag.") ignoreWarnings = property(_getIgnoreWarnings, _setIgnoreWarnings, None, "Command-line ignoreWarnings (C{-w,--ignoreWarnings}) flag.") sourceDir = property(_getSourceDir, _setSourceDir, None, "Command-line sourceDir, source of sync.") s3BucketUrl = property(_getS3BucketUrl, _setS3BucketUrl, None, "Command-line s3BucketUrl, target of sync.") ################## # Utility methods ################## def validate(self): """ Validates command-line options represented by the object. Unless C{--help} or C{--version} are supplied, at least one action must be specified. Other validations (as for allowed values for particular options) will be taken care of at assignment time by the properties functionality. 
@note: The command line format is specified by the L{_usage} function. Call L{_usage} to see a usage statement for the cback3-amazons3-sync script. @raise ValueError: If one of the validations fails. """ if not self.help and not self.version and not self.diagnostics: if self.sourceDir is None or self.s3BucketUrl is None: raise ValueError("Source directory and S3 bucket URL are both required.") def buildArgumentList(self, validate=True): """ Extracts options into a list of command line arguments. The original order of the various arguments (if, indeed, the object was initialized with a command-line) is not preserved in this generated argument list. Besides that, the argument list is normalized to use the long option names (i.e. --version rather than -V). The resulting list will be suitable for passing back to the constructor in the C{argumentList} parameter. Unlike L{buildArgumentString}, string arguments are not quoted here, because there is no need for it. Unless the C{validate} parameter is C{False}, the L{Options.validate} method will be called (with its default arguments) against the options before extracting the command line. If the options are not valid, then an argument list will not be extracted. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to extract an invalid command line. @param validate: Validate the options before extracting the command line. @type validate: Boolean true/false. @return: List representation of command-line arguments. @raise ValueError: If options within the object are invalid. 
""" if validate: self.validate() argumentList = [] if self._help: argumentList.append("--help") if self.version: argumentList.append("--version") if self.verbose: argumentList.append("--verbose") if self.quiet: argumentList.append("--quiet") if self.logfile is not None: argumentList.append("--logfile") argumentList.append(self.logfile) if self.owner is not None: argumentList.append("--owner") argumentList.append("%s:%s" % (self.owner[0], self.owner[1])) if self.mode is not None: argumentList.append("--mode") argumentList.append("%o" % self.mode) if self.output: argumentList.append("--output") if self.debug: argumentList.append("--debug") if self.stacktrace: argumentList.append("--stack") if self.diagnostics: argumentList.append("--diagnostics") if self.verifyOnly: argumentList.append("--verifyOnly") if self.ignoreWarnings: argumentList.append("--ignoreWarnings") if self.sourceDir is not None: argumentList.append(self.sourceDir) if self.s3BucketUrl is not None: argumentList.append(self.s3BucketUrl) return argumentList def buildArgumentString(self, validate=True): """ Extracts options into a string of command-line arguments. The original order of the various arguments (if, indeed, the object was initialized with a command-line) is not preserved in this generated argument string. Besides that, the argument string is normalized to use the long option names (i.e. --version rather than -V) and to quote all string arguments with double quotes (C{"}). The resulting string will be suitable for passing back to the constructor in the C{argumentString} parameter. Unless the C{validate} parameter is C{False}, the L{Options.validate} method will be called (with its default arguments) against the options before extracting the command line. If the options are not valid, then an argument string will not be extracted. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to extract an invalid command line. 
@param validate: Validate the options before extracting the command line. @type validate: Boolean true/false. @return: String representation of command-line arguments. @raise ValueError: If options within the object are invalid. """ if validate: self.validate() argumentString = "" if self._help: argumentString += "--help " if self.version: argumentString += "--version " if self.verbose: argumentString += "--verbose " if self.quiet: argumentString += "--quiet " if self.logfile is not None: argumentString += "--logfile \"%s\" " % self.logfile if self.owner is not None: argumentString += "--owner \"%s:%s\" " % (self.owner[0], self.owner[1]) if self.mode is not None: argumentString += "--mode %o " % self.mode if self.output: argumentString += "--output " if self.debug: argumentString += "--debug " if self.stacktrace: argumentString += "--stack " if self.diagnostics: argumentString += "--diagnostics " if self.verifyOnly: argumentString += "--verifyOnly " if self.ignoreWarnings: argumentString += "--ignoreWarnings " if self.sourceDir is not None: argumentString += "\"%s\" " % self.sourceDir if self.s3BucketUrl is not None: argumentString += "\"%s\" " % self.s3BucketUrl return argumentString def _parseArgumentList(self, argumentList): """ Internal method to parse a list of command-line arguments. Most of the validation we do here has to do with whether the arguments can be parsed and whether any values which exist are valid. We don't do any validation as to whether required elements exist or whether elements exist in the proper combination (instead, that's the job of the L{validate} method). For any of the options which supply parameters, if the option is duplicated with long and short switches (i.e. C{-l} and C{--logfile}) then the long switch is used. If the same option is duplicated with the same switch (long or short), then the last entry on the command line is used. @param argumentList: List of arguments to a command.
@type argumentList: List of arguments to a command, i.e. C{sys.argv[1:]} @raise ValueError: If the argument list cannot be successfully parsed. """ switches = { } opts, remaining = getopt.getopt(argumentList, SHORT_SWITCHES, LONG_SWITCHES) for o, a in opts: # push the switches into a hash switches[o] = a if "-h" in switches or "--help" in switches: self.help = True if "-V" in switches or "--version" in switches: self.version = True if "-b" in switches or "--verbose" in switches: self.verbose = True if "-q" in switches or "--quiet" in switches: self.quiet = True if "-l" in switches: self.logfile = switches["-l"] if "--logfile" in switches: self.logfile = switches["--logfile"] if "-o" in switches: self.owner = switches["-o"].split(":", 1) if "--owner" in switches: self.owner = switches["--owner"].split(":", 1) if "-m" in switches: self.mode = switches["-m"] if "--mode" in switches: self.mode = switches["--mode"] if "-O" in switches or "--output" in switches: self.output = True if "-d" in switches or "--debug" in switches: self.debug = True if "-s" in switches or "--stack" in switches: self.stacktrace = True if "-D" in switches or "--diagnostics" in switches: self.diagnostics = True if "-v" in switches or "--verifyOnly" in switches: self.verifyOnly = True if "-w" in switches or "--ignoreWarnings" in switches: self.ignoreWarnings = True try: (self.sourceDir, self.s3BucketUrl) = remaining except ValueError: pass ####################################################################### # Public functions ####################################################################### ################# # cli() function ################# def cli(): """ Implements the command-line interface for the C{cback3-amazons3-sync} script. Essentially, this is the "main routine" for the cback3-amazons3-sync script. It does all of the argument processing for the script, and then also implements the tool functionality. This function looks pretty similar to C{CedarBackup3.cli.cli()}.
It's not easy to refactor this code to make it reusable and also readable, so I've decided to just live with the duplication. A different error code is returned for each type of failure: - C{1}: The Python interpreter version is < 3.4 - C{2}: Error processing command-line arguments - C{3}: Error configuring logging - C{5}: Backup was interrupted with a CTRL-C or similar - C{6}: Error executing other parts of the script @note: This script uses print rather than logging to the INFO level, because it is interactive. Underlying Cedar Backup functionality uses the logging mechanism exclusively. @return: Error code as described above. """ try: if list(map(int, [sys.version_info[0], sys.version_info[1]])) < [3, 4]: sys.stderr.write("Python 3 version 3.4 or greater required.\n") return 1 except: # sys.version_info isn't available before 2.0 sys.stderr.write("Python 3 version 3.4 or greater required.\n") return 1 try: options = Options(argumentList=sys.argv[1:]) except Exception as e: _usage() sys.stderr.write(" *** Error: %s\n" % e) return 2 if options.help: _usage() return 0 if options.version: _version() return 0 if options.diagnostics: _diagnostics() return 0 if options.stacktrace: logfile = setupLogging(options) else: try: logfile = setupLogging(options) except Exception as e: sys.stderr.write("Error setting up logging: %s\n" % e) return 3 logger.info("Cedar Backup Amazon S3 sync run started.") logger.info("Options were [%s]", options) logger.info("Logfile is [%s]", logfile) Diagnostics().logDiagnostics(method=logger.info) if options.stacktrace: _executeAction(options) else: try: _executeAction(options) except KeyboardInterrupt: logger.error("Backup interrupted.") logger.info("Cedar Backup Amazon S3 sync run completed with status 5.") return 5 except Exception as e: logger.error("Error executing backup: %s", e) logger.info("Cedar Backup Amazon S3 sync run completed with status 6.") return 6 logger.info("Cedar Backup Amazon S3 sync run completed with status 0.") return 
0 ####################################################################### # Utility functions ####################################################################### #################### # _usage() function #################### def _usage(fd=sys.stderr): """ Prints usage information for the cback3-amazons3-sync script. @param fd: File descriptor used to print information. @note: The C{fd} is used rather than C{print} to facilitate unit testing. """ fd.write("\n") fd.write(" Usage: cback3-amazons3-sync [switches] sourceDir s3bucketUrl\n") fd.write("\n") fd.write(" Cedar Backup Amazon S3 sync tool.\n") fd.write("\n") fd.write(" This Cedar Backup utility synchronizes a local directory to an Amazon S3\n") fd.write(" bucket. After the sync is complete, a validation step is taken. An\n") fd.write(" error is reported if the contents of the bucket do not match the\n") fd.write(" source directory, or if the indicated size for any file differs.\n") fd.write(" This tool is a wrapper over the AWS CLI command-line tool.\n") fd.write("\n") fd.write(" The following arguments are required:\n") fd.write("\n") fd.write(" sourceDir The local source directory on disk (must exist)\n") fd.write(" s3BucketUrl The URL to the target Amazon S3 bucket\n") fd.write("\n") fd.write(" The following switches are accepted:\n") fd.write("\n") fd.write(" -h, --help Display this usage/help listing\n") fd.write(" -V, --version Display version information\n") fd.write(" -b, --verbose Print verbose output as well as logging to disk\n") fd.write(" -q, --quiet Run quietly (display no output to the screen)\n") fd.write(" -l, --logfile Path to logfile (default: %s)\n" % DEFAULT_LOGFILE) fd.write(" -o, --owner Logfile ownership, user:group (default: %s:%s)\n" % (DEFAULT_OWNERSHIP[0], DEFAULT_OWNERSHIP[1])) fd.write(" -m, --mode Octal logfile permissions mode (default: %o)\n" % DEFAULT_MODE) fd.write(" -O, --output Record some sub-command (i.e. 
aws) output to the log\n") fd.write(" -d, --debug Write debugging information to the log (implies --output)\n") fd.write(" -s, --stack Dump Python stack trace instead of swallowing exceptions\n") # exactly 80 characters in width! fd.write(" -D, --diagnostics Print runtime diagnostics to the screen and exit\n") fd.write(" -v, --verifyOnly Only verify the S3 bucket contents, do not make changes\n") fd.write(" -w, --ignoreWarnings Ignore warnings about problematic filename encodings\n") fd.write("\n") fd.write(" Typical usage would be something like:\n") fd.write("\n") fd.write(" cback3-amazons3-sync /home/myuser s3://example.com-backup/myuser\n") fd.write("\n") fd.write(" This will sync the contents of /home/myuser into the indicated bucket.\n") fd.write("\n") ###################### # _version() function ###################### def _version(fd=sys.stdout): """ Prints version information for the cback3-amazons3-sync script. @param fd: File descriptor used to print information. @note: The C{fd} is used rather than C{print} to facilitate unit testing. """ fd.write("\n") fd.write(" Cedar Backup Amazon S3 sync tool.\n") fd.write(" Included with Cedar Backup version %s, released %s.\n" % (VERSION, DATE)) fd.write("\n") fd.write(" Copyright (c) %s %s <%s>.\n" % (COPYRIGHT, AUTHOR, EMAIL)) fd.write(" See CREDITS for a list of included code and other contributors.\n") fd.write(" This is free software; there is NO warranty. See the\n") fd.write(" GNU General Public License version 2 for copying conditions.\n") fd.write("\n") fd.write(" Use the --help option for usage information.\n") fd.write("\n") ########################## # _diagnostics() function ########################## def _diagnostics(fd=sys.stdout): """ Prints runtime diagnostics information. @param fd: File descriptor used to print information. @note: The C{fd} is used rather than C{print} to facilitate unit testing. 
""" fd.write("\n") fd.write("Diagnostics:\n") fd.write("\n") Diagnostics().printDiagnostics(fd=fd, prefix=" ") fd.write("\n") ############################ # _executeAction() function ############################ def _executeAction(options): """ Implements the guts of the cback3-amazons3-sync tool. @param options: Program command-line options. @type options: Options object. @raise Exception: Under many generic error conditions """ sourceFiles = _buildSourceFiles(options.sourceDir) if not options.ignoreWarnings: _checkSourceFiles(options.sourceDir, sourceFiles) if not options.verifyOnly: _synchronizeBucket(options.sourceDir, options.s3BucketUrl) _verifyBucketContents(options.sourceDir, sourceFiles, options.s3BucketUrl) ################################ # _buildSourceFiles() function ################################ def _buildSourceFiles(sourceDir): """ Build a list of files in a source directory @param sourceDir: Local source directory @return: FilesystemList with contents of source directory """ if not os.path.isdir(sourceDir): raise ValueError("Source directory does not exist on disk.") sourceFiles = FilesystemList() sourceFiles.addDirContents(sourceDir) return sourceFiles ############################### # _checkSourceFiles() function ############################### def _checkSourceFiles(sourceDir, sourceFiles): """ Check source files, trying to guess which ones will have encoding problems. @param sourceDir: Local source directory @param sourceDir: Local source directory @raises ValueError: If a problem file is found @see U{http://opensourcehacker.com/2011/09/16/fix-linux-filename-encodings-with-python/} @see U{http://serverfault.com/questions/82821/how-to-tell-the-language-encoding-of-a-filename-on-linux} @see U{http://randysofia.com/2014/06/06/aws-cli-and-your-locale/} """ with warnings.catch_warnings(): encoding = Diagnostics().encoding # Note: this was difficult to fully test. 
As of the original Python 2 # implementation, I had a bunch of files on disk that had inconsistent # encodings, so I was able to prove that the check warned about these # files initially, and then didn't warn after I fixed them. I didn't # save off those files for a unit test (ugh) so by the time of the Python # 3 conversion -- which is subtly different because of the different way # Python 3 handles unicode strings -- I had to contrive some tests. I # think the tests I wrote are consistent with the earlier problems, and I # do get the same result for those tests in both Cedar Backup 2 and Cedar # Backup 3. However, I can't be certain the implementation is # equivalent. If someone runs into a situation that this code doesn't # handle, you may need to revisit the implementation. failed = False for entry in sourceFiles: path = bytes(Path(entry)) result = chardet.detect(path) source = path.decode(result["encoding"]) try: target = path.decode(encoding) if source != target: logger.error("Inconsistent encoding for [%s]: got %s, but need %s", source, result["encoding"], encoding) failed = True except Exception: logger.error("Inconsistent encoding for [%s]: got %s, but need %s", source, result["encoding"], encoding) failed = True if not failed: logger.info("Completed checking source filename encoding (no problems found).") else: logger.error("Some filenames have inconsistent encodings and will likely cause sync problems.") logger.error("You may be able to fix this by setting a more sensible locale in your environment.") logger.error("Alternately, you can rename the problem files to be valid in the indicated locale.") logger.error("To ignore this warning and proceed anyway, use --ignoreWarnings") raise ValueError("Some filenames have inconsistent encodings and will likely cause sync problems.") ################################ # _synchronizeBucket() function ################################ def _synchronizeBucket(sourceDir, s3BucketUrl): """ Synchronize a local directory to
an Amazon S3 bucket. @param sourceDir: Local source directory @param s3BucketUrl: Target S3 bucket URL """ logger.info("Synchronizing local source directory up to Amazon S3.") args = [ "s3", "sync", sourceDir, s3BucketUrl, "--delete", "--recursive", ] result = executeCommand(AWS_COMMAND, args, returnOutput=False)[0] if result != 0: raise IOError("Error [%d] calling AWS CLI synchronize bucket." % result) ################################### # _verifyBucketContents() function ################################### def _verifyBucketContents(sourceDir, sourceFiles, s3BucketUrl): """ Verify that a source directory is equivalent to an Amazon S3 bucket. @param sourceDir: Local source directory @param sourceFiles: Filesystem list containing contents of source directory @param s3BucketUrl: Target S3 bucket URL """ # As of this writing, the documentation for the S3 API that we're using # below says that up to 1000 elements at a time are returned, and that we # have to manually handle pagination by looking for the IsTruncated element. # However, in practice, this is not true. I have been testing with # "aws-cli/1.4.4 Python/2.7.3 Linux/3.2.0-4-686-pae", installed through PIP. # No matter how many items exist in my bucket and prefix, I get back a # single JSON result. I've tested with buckets containing nearly 6000 # elements. # # If I turn on debugging, it's clear that underneath, something in the API # is executing multiple list-object requests against AWS, and stitching # results together to give me back the final JSON result. The debug output # clearly includes multiple requests, and each XML response (except for the # final one) contains C{<IsTruncated>true</IsTruncated>}. # # This feature is not mentioned in the official changelog for any of the # releases going back to 1.0.0. It appears to happen in the botocore # library, but I'll admit I can't actually find the code that implements it. # For now, all I can do is rely on this behavior and hope that the # documentation is out-of-date.
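For reference, the manual pagination that the documentation describes would look roughly like the sketch below. This is purely illustrative and untested against real AWS: `list_all_objects`, `fetch_page`, and `fake_pages` are hypothetical names, and the fake dictionaries stand in for successive C{s3api list-objects} responses.

```python
# Hypothetical sketch of the manual IsTruncated pagination described in the
# S3 list-objects documentation.  fetch_page stands in for one s3api call;
# the marker is the last Key of the previous page, per the documented
# behavior when NextMarker is absent.

def list_all_objects(fetch_page):
    """Accumulate Contents entries until IsTruncated is no longer true."""
    contents = []
    marker = None
    while True:
        page = fetch_page(marker)
        contents.extend(page.get("Contents", []))
        if not page.get("IsTruncated"):
            return contents
        marker = page["Contents"][-1]["Key"]

# Fake paged responses, standing in for successive AWS calls
fake_pages = {
    None: {"Contents": [{"Key": "a", "Size": 1}], "IsTruncated": True},
    "a":  {"Contents": [{"Key": "b", "Size": 2}], "IsTruncated": False},
}

print(list_all_objects(lambda marker: fake_pages[marker]))
```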
I'm not going to write code that tries to # parse out IsTruncated if I can't actually test that code. (bucket, prefix) = s3BucketUrl.replace("s3://", "").split("/", 1) query = "Contents[].{Key: Key, Size: Size}" args = [ "s3api", "list-objects", "--bucket", bucket, "--prefix", prefix, "--query", query, ] (result, data) = executeCommand(AWS_COMMAND, args, returnOutput=True) if result != 0: raise IOError("Error [%d] calling AWS CLI verify bucket contents." % result) contents = { } for entry in json.loads("".join(data)): key = entry["Key"].replace(prefix, "") size = int(entry["Size"]) contents[key] = size failed = False for entry in sourceFiles: if os.path.isfile(entry): key = entry.replace(sourceDir, "") size = int(os.stat(entry).st_size) if key not in contents: logger.error("File was apparently not uploaded: [%s]", entry) failed = True else: if size != contents[key]: logger.error("File size differs [%s]: expected %s bytes but got %s bytes", entry, size, contents[key]) failed = True if not failed: logger.info("Completed verifying Amazon S3 bucket contents (no problems found).") else: logger.error("There were differences between source directory and target S3 bucket.") raise ValueError("There were differences between source directory and target S3 bucket.") ######################################################################## # Main routine ######################################################################## if __name__ == "__main__": sys.exit(cli()) CedarBackup3-3.1.6/CedarBackup3/customize.py # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2010,2015 Kenneth J. Pronovici. # All rights reserved.
# # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Implements customized behavior. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Implements customized behavior. Some behaviors need to vary when packaged for certain platforms. For instance, while Cedar Backup generally uses cdrecord and mkisofs, Debian ships compatible utilities called wodim and genisoimage. I want there to be one single place where Cedar Backup is patched for Debian, rather than having to maintain a variety of patches in different places. @author: Kenneth J. 
Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import logging ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup3.log.customize") PLATFORM = "standard" #PLATFORM = "debian" DEBIAN_CDRECORD = "/usr/bin/wodim" DEBIAN_MKISOFS = "/usr/bin/genisoimage" ####################################################################### # Public functions ####################################################################### ################################ # customizeOverrides() function ################################ def customizeOverrides(config, platform=PLATFORM): """ Modify command overrides based on the configured platform. On some platforms, we want to add command overrides to configuration. Each override will only be added if the configuration does not already contain an override with the same name. That way, the user still has a way to choose their own version of the command if they want. @param config: Configuration to modify @param platform: Platform that is in use """ if platform == "debian": logger.info("Overriding cdrecord for Debian platform: %s", DEBIAN_CDRECORD) config.options.addOverride("cdrecord", DEBIAN_CDRECORD) logger.info("Overriding mkisofs for Debian platform: %s", DEBIAN_MKISOFS) config.options.addOverride("mkisofs", DEBIAN_MKISOFS) CedarBackup3-3.1.6/CedarBackup3/config.py0000664000175000017500000072512312642030327021551 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." 
# S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2008,2010,2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Provides configuration-related objects. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides configuration-related objects. Summary ======= Cedar Backup stores all of its configuration in an XML document typically called C{cback3.conf}. The standard location for this document is in C{/etc}, but users can specify a different location if they want to. The C{Config} class is a Python object representation of a Cedar Backup XML configuration file. The representation is two-way: XML data can be used to create a C{Config} object, and then changes to the object can be propagated back to disk. A C{Config} object can even be used to create a configuration file from scratch programmatically. The C{Config} class is intended to be the only Python-language interface to Cedar Backup configuration on disk.
Cedar Backup will use the class as its internal representation of configuration, and applications external to Cedar Backup itself (such as a hypothetical third-party configuration tool written in Python or a third party extension module) should also use the class when they need to read and write configuration files. Backwards Compatibility ======================= The configuration file format has changed between Cedar Backup 1.x and Cedar Backup 2.x. Any Cedar Backup 1.x configuration file is also a valid Cedar Backup 2.x configuration file. However, it doesn't work to go the other direction, as the 2.x configuration file contains additional configuration that is not accepted by older versions of the software. XML Configuration Structure =========================== A C{Config} object can either be created "empty", or can be created based on XML input (either in the form of a string or read in from a file on disk). Generally speaking, the XML input I{must} result in a C{Config} object which passes the validations laid out below in the I{Validation} section. An XML configuration file is composed of eight sections: - I{reference}: specifies reference information about the file (author, revision, etc) - I{extensions}: specifies mappings to Cedar Backup extensions (external code) - I{options}: specifies global configuration options - I{peers}: specifies the set of peers in a master's backup pool - I{collect}: specifies configuration related to the collect action - I{stage}: specifies configuration related to the stage action - I{store}: specifies configuration related to the store action - I{purge}: specifies configuration related to the purge action Each section is represented by a class in this module, and then the overall C{Config} class is a composition of the various other classes. Any configuration section that is missing in the XML document (or has not been filled into an "empty" document) will just be set to C{None} in the object representation.
The same goes for individual fields within each configuration section. Keep in mind that the document might not be completely valid if some sections or fields aren't filled in - but that won't matter until validation takes place (see the I{Validation} section below). Unicode vs. String Data ======================= By default, all string data that comes out of XML documents in Python is unicode data (i.e. C{u"whatever"}). This is fine for many things, but when it comes to filesystem paths, it can cause us some problems. We really want strings to be encoded in the filesystem encoding rather than being unicode. So, most elements in configuration which represent filesystem paths are converted to plain strings using L{util.encodePath}. The main exception is the various C{absoluteExcludePath} and C{relativeExcludePath} lists. These are I{not} converted, because they are generally only used for filtering, not for filesystem operations. Validation ========== There are two main levels of validation in the C{Config} class and its children. The first is field-level validation. Field-level validation comes into play when a given field in an object is assigned to or updated. We use Python's C{property} functionality to enforce specific validations on field values, and in some places we even use customized list classes to enforce validations on list members. You should expect to catch a C{ValueError} exception when making assignments to configuration class fields. The second level of validation is post-completion validation. Certain validations don't make sense until a document is fully "complete". We don't want these validations to apply all of the time, because it would make building up a document from scratch a real pain. For instance, we might have to do things in the right order to keep from throwing exceptions, etc. All of these post-completion validations are encapsulated in the L{Config.validate} method.
This method can be called at any time by a client, and will always be called immediately after creating a C{Config} object from XML data and before exporting a C{Config} object to XML. This way, we get decent ease-of-use but we also don't accept or emit invalid configuration files. The L{Config.validate} implementation actually takes two passes to completely validate a configuration document. The first pass at validation is to ensure that the proper sections are filled into the document. There are default requirements, but the caller has the opportunity to override these defaults. The second pass at validation ensures that any filled-in section contains valid data. Any section which is not set to C{None} is validated according to the rules for that section (see below). I{Reference Validations} No validations. I{Extensions Validations} The list of actions may be either C{None} or an empty list C{[]} if desired. Each extended action must include a name, a module and a function. Then, an extended action must include either an index or dependency information. Which one is required depends on which order mode is configured. I{Options Validations} All fields must be filled in except the rsh command. The rcp and rsh commands are used as default values for all remote peers. Remote peers can also rely on the backup user as the default remote user name if they choose. I{Peers Validations} Local peers must be completely filled in, including both name and collect directory. Remote peers must also fill in the name and collect directory, but can leave the remote user and rcp command unset. In this case, the remote user is assumed to match the backup user from the options section and rcp command is taken directly from the options section. I{Collect Validations} The target directory must be filled in. The collect mode, archive mode and ignore file are all optional. 
The list of absolute paths to exclude and patterns to exclude may be either C{None} or an empty list C{[]} if desired. Each collect directory entry must contain an absolute path to collect, and then must either be able to take collect mode, archive mode and ignore file configuration from the parent C{CollectConfig} object, or must set each value on its own. The list of absolute paths to exclude, relative paths to exclude and patterns to exclude may be either C{None} or an empty list C{[]} if desired. Any list of absolute paths to exclude or patterns to exclude will be combined with the same list in the C{CollectConfig} object to make the complete list for a given directory. I{Stage Validations} The target directory must be filled in. There must be at least one peer (remote or local) between the two lists of peers. A list with no entries can be either C{None} or an empty list C{[]} if desired. If a set of peers is provided, this configuration completely overrides configuration in the peers configuration section, and the same validations apply. I{Store Validations} The device type and drive speed are optional, and all other values are required (missing booleans will be set to defaults, which is OK). The image writer functionality in the C{writer} module is supposed to be able to handle a device speed of C{None}. Any caller which needs a "real" (non-C{None}) value for the device type can use C{DEFAULT_DEVICE_TYPE}, which is guaranteed to be sensible. I{Purge Validations} The list of purge directories may be either C{None} or an empty list C{[]} if desired. All purge directories must contain a path and a retain days value. 
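The field-level validation described above relies on Python properties whose setters raise C{ValueError} on bad assignments. A minimal standalone sketch of the idiom, using a hypothetical C{PurgeDirSketch} class rather than the real configuration classes:

```python
# Minimal sketch of the property-based field validation idiom described
# above.  PurgeDirSketch is a hypothetical stand-in, not the real class:
# assigning an invalid retain days value raises ValueError immediately.

class PurgeDirSketch(object):
    def __init__(self, retainDays=None):
        self._retainDays = None
        self.retainDays = retainDays   # goes through the property setter

    def _setRetainDays(self, value):
        """Property target used to set the retain days value."""
        if value is not None:
            value = int(value)
            if value < 0:
                raise ValueError("Retain days value must be >= 0.")
        self._retainDays = value

    def _getRetainDays(self):
        """Property target used to get the retain days value."""
        return self._retainDays

    retainDays = property(_getRetainDays, _setRetainDays, None, "Number of days to retain files.")

d = PurgeDirSketch(retainDays="7")
print(d.retainDays)   # 7
```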
@sort: ActionDependencies, ActionHook, PreActionHook, PostActionHook, ExtendedAction, CommandOverride, CollectFile, CollectDir, PurgeDir, LocalPeer, RemotePeer, ReferenceConfig, ExtensionsConfig, OptionsConfig, PeersConfig, CollectConfig, StageConfig, StoreConfig, PurgeConfig, Config, DEFAULT_DEVICE_TYPE, DEFAULT_MEDIA_TYPE, VALID_DEVICE_TYPES, VALID_MEDIA_TYPES, VALID_COLLECT_MODES, VALID_ARCHIVE_MODES, VALID_ORDER_MODES @var DEFAULT_DEVICE_TYPE: The default device type. @var DEFAULT_MEDIA_TYPE: The default media type. @var VALID_DEVICE_TYPES: List of valid device types. @var VALID_MEDIA_TYPES: List of valid media types. @var VALID_COLLECT_MODES: List of valid collect modes. @var VALID_COMPRESS_MODES: List of valid compress modes. @var VALID_ARCHIVE_MODES: List of valid archive modes. @var VALID_ORDER_MODES: List of valid extension order modes. @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import re import logging from functools import total_ordering # Cedar Backup modules from CedarBackup3.writers.util import validateScsiId, validateDriveSpeed from CedarBackup3.util import UnorderedList, AbsolutePathList, ObjectTypeList, parseCommaSeparatedString from CedarBackup3.util import RegexMatchList, RegexList, encodePath, checkUnique from CedarBackup3.util import convertSize, displayBytes, UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES from CedarBackup3.xmlutil import isElement, readChildren, readFirstChild from CedarBackup3.xmlutil import readStringList, readString, readInteger, readBoolean from CedarBackup3.xmlutil import addContainerNode, addStringNode, addIntegerNode, addBooleanNode from CedarBackup3.xmlutil import createInputDom, createOutputDom, serializeDom ######################################################################## # Module-wide constants and variables 
######################################################################## logger = logging.getLogger("CedarBackup3.log.config") DEFAULT_DEVICE_TYPE = "cdwriter" DEFAULT_MEDIA_TYPE = "cdrw-74" VALID_DEVICE_TYPES = [ "cdwriter", "dvdwriter", ] VALID_CD_MEDIA_TYPES = [ "cdr-74", "cdrw-74", "cdr-80", "cdrw-80", ] VALID_DVD_MEDIA_TYPES = [ "dvd+r", "dvd+rw", ] VALID_MEDIA_TYPES = VALID_CD_MEDIA_TYPES + VALID_DVD_MEDIA_TYPES VALID_COLLECT_MODES = [ "daily", "weekly", "incr", ] VALID_ARCHIVE_MODES = [ "tar", "targz", "tarbz2", ] VALID_COMPRESS_MODES = [ "none", "gzip", "bzip2", ] VALID_ORDER_MODES = [ "index", "dependency", ] VALID_BLANK_MODES = [ "daily", "weekly", ] VALID_BYTE_UNITS = [ UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES, ] VALID_FAILURE_MODES = [ "none", "all", "daily", "weekly", ] REWRITABLE_MEDIA_TYPES = [ "cdrw-74", "cdrw-80", "dvd+rw", ] ACTION_NAME_REGEX = r"^[a-z0-9]*$" ######################################################################## # ByteQuantity class definition ######################################################################## @total_ordering class ByteQuantity(object): """ Class representing a byte quantity. A byte quantity has both a quantity and a byte-related unit. Units are maintained using the constants from util.py. If no units are provided, C{UNIT_BYTES} is assumed. The quantity is maintained internally as a string so that issues of precision can be avoided. It really isn't possible to store a floating point number here while being able to losslessly translate back and forth between XML and object representations. (Perhaps the Python 2.4 Decimal class would have been an option, but I originally wanted to stay compatible with Python 2.3.) Even though the quantity is maintained as a string, the string must represent a valid positive floating point number. Technically, any floating point string format supported by Python is allowable. However, it does not make sense to have a negative quantity of bytes in this context. 
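The quantity/units behavior described above can be sketched standalone. This hypothetical helper mirrors what the C{bytes} property does via C{convertSize}; the real constants live in CedarBackup3.util, and treating them as successive powers of 1024 is an assumption of this sketch:

```python
# Hypothetical standalone sketch of the quantity/units idea described above;
# the real class stores the quantity as a string and converts via convertSize().
UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES = 0, 1, 2, 3  # assumed ordinals

def to_bytes(quantity, units):
    """Interpret a string quantity with a unit as a float byte count."""
    value = float(quantity)           # quantity is kept as a string for precision
    if value < 0.0:
        raise ValueError("Quantity cannot be negative.")
    return value * (1024.0 ** units)  # assumes each unit is 1024x the previous
```

Under those assumptions, C{to_bytes("2.5", UNIT_GBYTES)} yields 2684354560.0, matching a "2.5 GB" configuration value.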
@sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, quantity, units, bytes """ def __init__(self, quantity=None, units=None): """ Constructor for the C{ByteQuantity} class. @param quantity: Quantity of bytes, something interpretable as a float @param units: Unit of bytes, one of VALID_BYTE_UNITS @raise ValueError: If one of the values is invalid. """ self._quantity = None self._units = None self.quantity = quantity self.units = units def __repr__(self): """ Official string representation for class instance. """ return "ByteQuantity(%s, %s)" % (self.quantity, self.units) def __str__(self): """ Informal string representation for class instance. """ return "%s" % displayBytes(self.bytes) def __eq__(self, other): """Equals operator, implemented in terms of Python 2-style compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, implemented in terms of Python 2-style compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, implemented in terms of Python 2-style compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Python 2-style comparison operator. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 elif isinstance(other, ByteQuantity): if self.bytes != other.bytes: if self.bytes < other.bytes: return -1 else: return 1 return 0 else: return self.__cmp__(ByteQuantity(other, UNIT_BYTES)) # will fail if other can't be converted to float def _setQuantity(self, value): """ Property target used to set the quantity. The value must be interpretable as a float if it is not C{None}. @raise ValueError: If the value is an empty string. 
@raise ValueError: If the value is not a valid floating point number @raise ValueError: If the value is less than zero """ if value is None: self._quantity = None else: try: floatValue = float(value) # allow integer, float, string, etc. except (TypeError, ValueError): raise ValueError("Quantity must be interpretable as a float.") if floatValue < 0.0: raise ValueError("Quantity cannot be negative.") self._quantity = str(value) # keep around string def _getQuantity(self): """ Property target used to get the quantity. """ return self._quantity def _setUnits(self, value): """ Property target used to set the units value. If not C{None}, the units value must be one of the values in L{VALID_BYTE_UNITS}. @raise ValueError: If the value is not valid. """ if value is None: self._units = UNIT_BYTES else: if value not in VALID_BYTE_UNITS: raise ValueError("Units value must be one of %s." % VALID_BYTE_UNITS) self._units = value def _getUnits(self): """ Property target used to get the units value. """ return self._units def _getBytes(self): """ Property target used to return the byte quantity as a floating point number. If there is no quantity set, then a value of 0.0 is returned. """ if self.quantity is not None and self.units is not None: return convertSize(self.quantity, self.units, UNIT_BYTES) return 0.0 quantity = property(_getQuantity, _setQuantity, None, doc="Byte quantity, as a string") units = property(_getUnits, _setUnits, None, doc="Units for byte quantity, for instance UNIT_BYTES") bytes = property(_getBytes, None, None, doc="Byte quantity, as a floating point number.") ######################################################################## # ActionDependencies class definition ######################################################################## @total_ordering class ActionDependencies(object): """ Class representing dependencies associated with an extended action. 
Execution ordering for extended actions is done in one of two ways: either by using index values (lower index gets run first) or by having the extended action specify dependencies in terms of other named actions. This class encapsulates the dependency information for an extended action. The following restrictions exist on data in this class: - Any action name must be a non-empty string matching C{ACTION_NAME_REGEX} @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, beforeList, afterList """ def __init__(self, beforeList=None, afterList=None): """ Constructor for the C{ActionDependencies} class. @param beforeList: List of named actions that this action must be run before @param afterList: List of named actions that this action must be run after @raise ValueError: If one of the values is invalid. """ self._beforeList = None self._afterList = None self.beforeList = beforeList self.afterList = afterList def __repr__(self): """ Official string representation for class instance. """ return "ActionDependencies(%s, %s)" % (self.beforeList, self.afterList) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __eq__(self, other): """Equals operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
""" if other is None: return 1 if self.beforeList != other.beforeList: if self.beforeList < other.beforeList: return -1 else: return 1 if self.afterList != other.afterList: if self.afterList < other.afterList: return -1 else: return 1 return 0 def _setBeforeList(self, value): """ Property target used to set the "run before" list. Either the value must be C{None} or each element must be a string matching ACTION_NAME_REGEX. @raise ValueError: If the value does not match the regular expression. """ if value is None: self._beforeList = None else: try: saved = self._beforeList self._beforeList = RegexMatchList(ACTION_NAME_REGEX, emptyAllowed=False, prefix="Action name") self._beforeList.extend(value) except Exception as e: self._beforeList = saved raise e def _getBeforeList(self): """ Property target used to get the "run before" list. """ return self._beforeList def _setAfterList(self, value): """ Property target used to set the "run after" list. Either the value must be C{None} or each element must be a string matching ACTION_NAME_REGEX. @raise ValueError: If the value does not match the regular expression. """ if value is None: self._afterList = None else: try: saved = self._afterList self._afterList = RegexMatchList(ACTION_NAME_REGEX, emptyAllowed=False, prefix="Action name") self._afterList.extend(value) except Exception as e: self._afterList = saved raise e def _getAfterList(self): """ Property target used to get the "run after" list. 
""" return self._afterList beforeList = property(_getBeforeList, _setBeforeList, None, "List of named actions that this action must be run before.") afterList = property(_getAfterList, _setAfterList, None, "List of named actions that this action must be run after.") ######################################################################## # ActionHook class definition ######################################################################## @total_ordering class ActionHook(object): """ Class representing a hook associated with an action. A hook associated with an action is a shell command to be executed either before or after a named action is executed. The following restrictions exist on data in this class: - The action name must be a non-empty string matching C{ACTION_NAME_REGEX} - The shell command must be a non-empty string. The internal C{before} and C{after} instance variables are always set to False in this parent class. @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, action, command, before, after """ def __init__(self, action=None, command=None): """ Constructor for the C{ActionHook} class. @param action: Action this hook is associated with @param command: Shell command to execute @raise ValueError: If one of the values is invalid. """ self._action = None self._command = None self._before = False self._after = False self.action = action self.command = command def __repr__(self): """ Official string representation for class instance. """ return "ActionHook(%s, %s, %s, %s)" % (self.action, self.command, self.before, self.after) def __str__(self): """ Informal string representation for class instance. 
""" return self.__repr__() def __eq__(self, other): """Equals operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.action != other.action: if str(self.action or "") < str(other.action or ""): return -1 else: return 1 if self.command != other.command: if str(self.command or "") < str(other.command or ""): return -1 else: return 1 if self.before != other.before: if self.before < other.before: return -1 else: return 1 if self.after != other.after: if self.after < other.after: return -1 else: return 1 return 0 def _setAction(self, value): """ Property target used to set the action name. The value must be a non-empty string if it is not C{None}. It must also consist only of lower-case letters and digits. @raise ValueError: If the value is an empty string. """ pattern = re.compile(ACTION_NAME_REGEX) if value is not None: if len(value) < 1: raise ValueError("The action name must be a non-empty string.") if not pattern.search(value): raise ValueError("The action name must consist of only lower-case letters and digits.") self._action = value def _getAction(self): """ Property target used to get the action name. """ return self._action def _setCommand(self, value): """ Property target used to set the command. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. 
""" if value is not None: if len(value) < 1: raise ValueError("The command must be a non-empty string.") self._command = value def _getCommand(self): """ Property target used to get the command. """ return self._command def _getBefore(self): """ Property target used to get the before flag. """ return self._before def _getAfter(self): """ Property target used to get the after flag. """ return self._after action = property(_getAction, _setAction, None, "Action this hook is associated with.") command = property(_getCommand, _setCommand, None, "Shell command to execute.") before = property(_getBefore, None, None, "Indicates whether command should be executed before action.") after = property(_getAfter, None, None, "Indicates whether command should be executed after action.") @total_ordering class PreActionHook(ActionHook): """ Class representing a pre-action hook associated with an action. A hook associated with an action is a shell command to be executed either before or after a named action is executed. In this case, a pre-action hook is executed before the named action. The following restrictions exist on data in this class: - The action name must be a non-empty string consisting of lower-case letters and digits. - The shell command must be a non-empty string. The internal C{before} instance variable is always set to True in this class. @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, action, command, before, after """ def __init__(self, action=None, command=None): """ Constructor for the C{PreActionHook} class. @param action: Action this hook is associated with @param command: Shell command to execute @raise ValueError: If one of the values is invalid. """ ActionHook.__init__(self, action, command) self._before = True def __repr__(self): """ Official string representation for class instance. 
""" return "PreActionHook(%s, %s, %s, %s)" % (self.action, self.command, self.before, self.after) @total_ordering class PostActionHook(ActionHook): """ Class representing a pre-action hook associated with an action. A hook associated with an action is a shell command to be executed either before or after a named action is executed. In this case, a post-action hook is executed after the named action. The following restrictions exist on data in this class: - The action name must be a non-empty string consisting of lower-case letters and digits. - The shell command must be a non-empty string. The internal C{before} instance variable is always set to True in this class. @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, action, command, before, after """ def __init__(self, action=None, command=None): """ Constructor for the C{PostActionHook} class. @param action: Action this hook is associated with @param command: Shell command to execute @raise ValueError: If one of the values is invalid. """ ActionHook.__init__(self, action, command) self._after = True def __repr__(self): """ Official string representation for class instance. """ return "PostActionHook(%s, %s, %s, %s)" % (self.action, self.command, self.before, self.after) ######################################################################## # BlankBehavior class definition ######################################################################## @total_ordering class BlankBehavior(object): """ Class representing optimized store-action media blanking behavior. The following restrictions exist on data in this class: - The blanking mode must be a one of the values in L{VALID_BLANK_MODES} - The blanking factor must be a positive floating point number @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, blankMode, blankFactor """ def __init__(self, blankMode=None, blankFactor=None): """ Constructor for the C{BlankBehavior} class. 
@param blankMode: Blanking mode @param blankFactor: Blanking factor @raise ValueError: If one of the values is invalid. """ self._blankMode = None self._blankFactor = None self.blankMode = blankMode self.blankFactor = blankFactor def __repr__(self): """ Official string representation for class instance. """ return "BlankBehavior(%s, %s)" % (self.blankMode, self.blankFactor) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __eq__(self, other): """Equals operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.blankMode != other.blankMode: if str(self.blankMode or "") < str(other.blankMode or ""): return -1 else: return 1 if self.blankFactor != other.blankFactor: if float(self.blankFactor or 0.0) < float(other.blankFactor or 0.0): return -1 else: return 1 return 0 def _setBlankMode(self, value): """ Property target used to set the blanking mode. The value must be one of L{VALID_BLANK_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_BLANK_MODES: raise ValueError("Blanking mode must be one of %s." % VALID_BLANK_MODES) self._blankMode = value def _getBlankMode(self): """ Property target used to get the blanking mode. """ return self._blankMode def _setBlankFactor(self, value): """ Property target used to set the blanking factor. The value must be a non-empty string if it is not C{None}. 
@raise ValueError: If the value is an empty string. @raise ValueError: If the value is not a valid floating point number @raise ValueError: If the value is less than zero """ if value is not None: if len(value) < 1: raise ValueError("Blanking factor must be a non-empty string.") floatValue = float(value) if floatValue < 0.0: raise ValueError("Blanking factor cannot be negative.") self._blankFactor = value # keep around string def _getBlankFactor(self): """ Property target used to get the blanking factor. """ return self._blankFactor blankMode = property(_getBlankMode, _setBlankMode, None, "Blanking mode") blankFactor = property(_getBlankFactor, _setBlankFactor, None, "Blanking factor") ######################################################################## # ExtendedAction class definition ######################################################################## @total_ordering class ExtendedAction(object): """ Class representing an extended action. Essentially, an extended action needs to allow the following to happen:: exec("from %s import %s" % (module, function)) exec("%s(action, configPath)" % function) The following restrictions exist on data in this class: - The action name must be a non-empty string consisting of lower-case letters and digits. - The module must be a non-empty string and a valid Python identifier. - The function must be a non-empty string and a valid Python identifier. - If set, the index must be a non-negative integer. - If set, the dependencies attribute must be an C{ActionDependencies} object. @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, name, module, function, index, dependencies """ def __init__(self, name=None, module=None, function=None, index=None, dependencies=None): """ Constructor for the C{ExtendedAction} class. 
@param name: Name of the extended action @param module: Name of the module containing the extended action function @param function: Name of the extended action function @param index: Index of action, used for execution ordering @param dependencies: Dependencies for action, used for execution ordering @raise ValueError: If one of the values is invalid. """ self._name = None self._module = None self._function = None self._index = None self._dependencies = None self.name = name self.module = module self.function = function self.index = index self.dependencies = dependencies def __repr__(self): """ Official string representation for class instance. """ return "ExtendedAction(%s, %s, %s, %s, %s)" % (self.name, self.module, self.function, self.index, self.dependencies) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __eq__(self, other): """Equals operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
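The module/function loading sketched with C{exec} in the class docstring above can be expressed without C{exec}. This is a hypothetical equivalent using importlib; the real invocation lives in Cedar Backup's CLI layer, and the function name here is an assumption:

```python
# Hypothetical sketch: the exec()-based loading described in the
# ExtendedAction class docstring is equivalent to an importlib lookup
# of the configured module and function names.
import importlib

def load_action_function(module, function):
    """Return the named callable from the named module."""
    return getattr(importlib.import_module(module), function)
```

For example, C{load_action_function("math", "sqrt")} returns a callable that can then be invoked as C{function(action, configPath)} would be.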
""" if other is None: return 1 if self.name != other.name: if str(self.name or "") < str(other.name or ""): return -1 else: return 1 if self.module != other.module: if str(self.module or "") < str(other.module or ""): return -1 else: return 1 if self.function != other.function: if str(self.function or "") < str(other.function or ""): return -1 else: return 1 if self.index != other.index: if int(self.index or 0) < int(other.index or 0): return -1 else: return 1 if self.dependencies != other.dependencies: if self.dependencies < other.dependencies: return -1 else: return 1 return 0 def _setName(self, value): """ Property target used to set the action name. The value must be a non-empty string if it is not C{None}. It must also consist only of lower-case letters and digits. @raise ValueError: If the value is an empty string. """ pattern = re.compile(ACTION_NAME_REGEX) if value is not None: if len(value) < 1: raise ValueError("The action name must be a non-empty string.") if not pattern.search(value): raise ValueError("The action name must consist of only lower-case letters and digits.") self._name = value def _getName(self): """ Property target used to get the action name. """ return self._name def _setModule(self, value): """ Property target used to set the module name. The value must be a non-empty string if it is not C{None}. It must also be a valid Python identifier. @raise ValueError: If the value is an empty string. """ pattern = re.compile(r"^([A-Za-z_][A-Za-z0-9_]*)(\.[A-Za-z_][A-Za-z0-9_]*)*$") if value is not None: if len(value) < 1: raise ValueError("The module name must be a non-empty string.") if not pattern.search(value): raise ValueError("The module name must be a valid Python identifier.") self._module = value def _getModule(self): """ Property target used to get the module name. """ return self._module def _setFunction(self, value): """ Property target used to set the function name. The value must be a non-empty string if it is not C{None}. 
It must also be a valid Python identifier. @raise ValueError: If the value is an empty string. """ pattern = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$") if value is not None: if len(value) < 1: raise ValueError("The function name must be a non-empty string.") if not pattern.search(value): raise ValueError("The function name must be a valid Python identifier.") self._function = value def _getFunction(self): """ Property target used to get the function name. """ return self._function def _setIndex(self, value): """ Property target used to set the action index. The value must be an integer >= 0. @raise ValueError: If the value is not valid. """ if value is None: self._index = None else: try: value = int(value) except (TypeError, ValueError): raise ValueError("Action index value must be an integer >= 0.") if value < 0: raise ValueError("Action index value must be an integer >= 0.") self._index = value def _getIndex(self): """ Property target used to get the action index. """ return self._index def _setDependencies(self, value): """ Property target used to set the action dependencies information. If not C{None}, the value must be an C{ActionDependencies} object. @raise ValueError: If the value is not an C{ActionDependencies} object. """ if value is None: self._dependencies = None else: if not isinstance(value, ActionDependencies): raise ValueError("Value must be an C{ActionDependencies} object.") self._dependencies = value def _getDependencies(self): """ Property target used to get action dependencies information. 
""" return self._dependencies name = property(_getName, _setName, None, "Name of the extended action.") module = property(_getModule, _setModule, None, "Name of the module containing the extended action function.") function = property(_getFunction, _setFunction, None, "Name of the extended action function.") index = property(_getIndex, _setIndex, None, "Index of action, used for execution ordering.") dependencies = property(_getDependencies, _setDependencies, None, "Dependencies for action, used for execution ordering.") ######################################################################## # CommandOverride class definition ######################################################################## @total_ordering class CommandOverride(object): """ Class representing a piece of Cedar Backup command override configuration. The following restrictions exist on data in this class: - The absolute path must be absolute @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, command, absolutePath """ def __init__(self, command=None, absolutePath=None): """ Constructor for the C{CommandOverride} class. @param command: Name of command to be overridden. @param absolutePath: Absolute path of the overrridden command. @raise ValueError: If one of the values is invalid. """ self._command = None self._absolutePath = None self.command = command self.absolutePath = absolutePath def __repr__(self): """ Official string representation for class instance. """ return "CommandOverride(%s, %s)" % (self.command, self.absolutePath) def __str__(self): """ Informal string representation for class instance. 
""" return self.__repr__() def __eq__(self, other): """Equals operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.command != other.command: if str(self.command or "") < str(other.command or ""): return -1 else: return 1 if self.absolutePath != other.absolutePath: if str(self.absolutePath or "") < str(other.absolutePath or ""): return -1 else: return 1 return 0 def _setCommand(self, value): """ Property target used to set the command. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The command must be a non-empty string.") self._command = value def _getCommand(self): """ Property target used to get the command. """ return self._command def _setAbsolutePath(self, value): """ Property target used to set the absolute path. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Not an absolute path: [%s]" % value) self._absolutePath = encodePath(value) def _getAbsolutePath(self): """ Property target used to get the absolute path. 
""" return self._absolutePath command = property(_getCommand, _setCommand, None, doc="Name of command to be overridden.") absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, doc="Absolute path of the overrridden command.") ######################################################################## # CollectFile class definition ######################################################################## @total_ordering class CollectFile(object): """ Class representing a Cedar Backup collect file. The following restrictions exist on data in this class: - Absolute paths must be absolute - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. - The archive mode must be one of the values in L{VALID_ARCHIVE_MODES}. @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, absolutePath, collectMode, archiveMode """ def __init__(self, absolutePath=None, collectMode=None, archiveMode=None): """ Constructor for the C{CollectFile} class. @param absolutePath: Absolute path of the file to collect. @param collectMode: Overridden collect mode for this file. @param archiveMode: Overridden archive mode for this file. @raise ValueError: If one of the values is invalid. """ self._absolutePath = None self._collectMode = None self._archiveMode = None self.absolutePath = absolutePath self.collectMode = collectMode self.archiveMode = archiveMode def __repr__(self): """ Official string representation for class instance. """ return "CollectFile(%s, %s, %s)" % (self.absolutePath, self.collectMode, self.archiveMode) def __str__(self): """ Informal string representation for class instance. 
""" return self.__repr__() def __eq__(self, other): """Equals operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.absolutePath != other.absolutePath: if str(self.absolutePath or "") < str(other.absolutePath or ""): return -1 else: return 1 if self.collectMode != other.collectMode: if str(self.collectMode or "") < str(other.collectMode or ""): return -1 else: return 1 if self.archiveMode != other.archiveMode: if str(self.archiveMode or "") < str(other.archiveMode or ""): return -1 else: return 1 return 0 def _setAbsolutePath(self, value): """ Property target used to set the absolute path. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Not an absolute path: [%s]" % value) self._absolutePath = encodePath(value) def _getAbsolutePath(self): """ Property target used to get the absolute path. """ return self._absolutePath def _setCollectMode(self, value): """ Property target used to set the collect mode. If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COLLECT_MODES: raise ValueError("Collect mode must be one of %s." 
% VALID_COLLECT_MODES) self._collectMode = value def _getCollectMode(self): """ Property target used to get the collect mode. """ return self._collectMode def _setArchiveMode(self, value): """ Property target used to set the archive mode. If not C{None}, the mode must be one of the values in L{VALID_ARCHIVE_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_ARCHIVE_MODES: raise ValueError("Archive mode must be one of %s." % VALID_ARCHIVE_MODES) self._archiveMode = value def _getArchiveMode(self): """ Property target used to get the archive mode. """ return self._archiveMode absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, doc="Absolute path of the file to collect.") collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this file.") archiveMode = property(_getArchiveMode, _setArchiveMode, None, doc="Overridden archive mode for this file.") ######################################################################## # CollectDir class definition ######################################################################## @total_ordering class CollectDir(object): """ Class representing a Cedar Backup collect directory. The following restrictions exist on data in this class: - Absolute paths must be absolute - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. - The archive mode must be one of the values in L{VALID_ARCHIVE_MODES}. - The ignore file must be a non-empty string. For the C{absoluteExcludePaths} list, validation is accomplished through the L{util.AbsolutePathList} list implementation that overrides common list methods and transparently does the absolute path validation for us. @note: Lists within this class are "unordered" for equality comparisons. 
@sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, absolutePath, collectMode, archiveMode, ignoreFile, linkDepth, dereference, absoluteExcludePaths, relativeExcludePaths, excludePatterns """ def __init__(self, absolutePath=None, collectMode=None, archiveMode=None, ignoreFile=None, absoluteExcludePaths=None, relativeExcludePaths=None, excludePatterns=None, linkDepth=None, dereference=False, recursionLevel=None): """ Constructor for the C{CollectDir} class. @param absolutePath: Absolute path of the directory to collect. @param collectMode: Overridden collect mode for this directory. @param archiveMode: Overridden archive mode for this directory. @param ignoreFile: Overridden ignore file name for this directory. @param linkDepth: Maximum depth at which soft links should be followed. @param dereference: Whether to dereference links that are followed. @param recursionLevel: Recursion level to use for recursive directory collection. @param absoluteExcludePaths: List of absolute paths to exclude. @param relativeExcludePaths: List of relative paths to exclude. @param excludePatterns: List of regular expression patterns to exclude. @raise ValueError: If one of the values is invalid. """ self._absolutePath = None self._collectMode = None self._archiveMode = None self._ignoreFile = None self._linkDepth = None self._dereference = None self._recursionLevel = None self._absoluteExcludePaths = None self._relativeExcludePaths = None self._excludePatterns = None self.absolutePath = absolutePath self.collectMode = collectMode self.archiveMode = archiveMode self.ignoreFile = ignoreFile self.linkDepth = linkDepth self.dereference = dereference self.recursionLevel = recursionLevel self.absoluteExcludePaths = absoluteExcludePaths self.relativeExcludePaths = relativeExcludePaths self.excludePatterns = excludePatterns def __repr__(self): """ Official string representation for class instance. 
""" return "CollectDir(%s, %s, %s, %s, %s, %s, %s, %s, %s, %s)" % (self.absolutePath, self.collectMode, self.archiveMode, self.ignoreFile, self.absoluteExcludePaths, self.relativeExcludePaths, self.excludePatterns, self.linkDepth, self.dereference, self.recursionLevel) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __eq__(self, other): """Equals operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
""" if other is None: return 1 if self.absolutePath != other.absolutePath: if str(self.absolutePath or "") < str(other.absolutePath or ""): return -1 else: return 1 if self.collectMode != other.collectMode: if str(self.collectMode or "") < str(other.collectMode or ""): return -1 else: return 1 if self.archiveMode != other.archiveMode: if str(self.archiveMode or "") < str(other.archiveMode or ""): return -1 else: return 1 if self.ignoreFile != other.ignoreFile: if str(self.ignoreFile or "") < str(other.ignoreFile or ""): return -1 else: return 1 if self.linkDepth != other.linkDepth: if int(self.linkDepth or 0) < int(other.linkDepth or 0): return -1 else: return 1 if self.dereference != other.dereference: if self.dereference < other.dereference: return -1 else: return 1 if self.recursionLevel != other.recursionLevel: if int(self.recursionLevel or 0) < int(other.recursionLevel or 0): return -1 else: return 1 if self.absoluteExcludePaths != other.absoluteExcludePaths: if self.absoluteExcludePaths < other.absoluteExcludePaths: return -1 else: return 1 if self.relativeExcludePaths != other.relativeExcludePaths: if self.relativeExcludePaths < other.relativeExcludePaths: return -1 else: return 1 if self.excludePatterns != other.excludePatterns: if self.excludePatterns < other.excludePatterns: return -1 else: return 1 return 0 def _setAbsolutePath(self, value): """ Property target used to set the absolute path. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Not an absolute path: [%s]" % value) self._absolutePath = encodePath(value) def _getAbsolutePath(self): """ Property target used to get the absolute path. 
""" return self._absolutePath def _setCollectMode(self, value): """ Property target used to set the collect mode. If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COLLECT_MODES: raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES) self._collectMode = value def _getCollectMode(self): """ Property target used to get the collect mode. """ return self._collectMode def _setArchiveMode(self, value): """ Property target used to set the archive mode. If not C{None}, the mode must be one of the values in L{VALID_ARCHIVE_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_ARCHIVE_MODES: raise ValueError("Archive mode must be one of %s." % VALID_ARCHIVE_MODES) self._archiveMode = value def _getArchiveMode(self): """ Property target used to get the archive mode. """ return self._archiveMode def _setIgnoreFile(self, value): """ Property target used to set the ignore file. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The ignore file must be a non-empty string.") self._ignoreFile = value def _getIgnoreFile(self): """ Property target used to get the ignore file. """ return self._ignoreFile def _setLinkDepth(self, value): """ Property target used to set the link depth. The value must be an integer >= 0. @raise ValueError: If the value is not valid. """ if value is None: self._linkDepth = None else: try: value = int(value) except TypeError: raise ValueError("Link depth value must be an integer >= 0.") if value < 0: raise ValueError("Link depth value must be an integer >= 0.") self._linkDepth = value def _getLinkDepth(self): """ Property target used to get the action linkDepth. 
""" return self._linkDepth def _setDereference(self, value): """ Property target used to set the dereference flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._dereference = True else: self._dereference = False def _getDereference(self): """ Property target used to get the dereference flag. """ return self._dereference def _setRecursionLevel(self, value): """ Property target used to set the recursionLevel. The value must be an integer. @raise ValueError: If the value is not valid. """ if value is None: self._recursionLevel = None else: try: value = int(value) except TypeError: raise ValueError("Recusion level value must be an integer.") self._recursionLevel = value def _getRecursionLevel(self): """ Property target used to get the action recursionLevel. """ return self._recursionLevel def _setAbsoluteExcludePaths(self, value): """ Property target used to set the absolute exclude paths list. Either the value must be C{None} or each element must be an absolute path. Elements do not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. """ if value is None: self._absoluteExcludePaths = None else: try: saved = self._absoluteExcludePaths self._absoluteExcludePaths = AbsolutePathList() self._absoluteExcludePaths.extend(value) except Exception as e: self._absoluteExcludePaths = saved raise e def _getAbsoluteExcludePaths(self): """ Property target used to get the absolute exclude paths list. """ return self._absoluteExcludePaths def _setRelativeExcludePaths(self, value): """ Property target used to set the relative exclude paths list. Elements do not have to exist on disk at the time of assignment. 
""" if value is None: self._relativeExcludePaths = None else: try: saved = self._relativeExcludePaths self._relativeExcludePaths = UnorderedList() self._relativeExcludePaths.extend(value) except Exception as e: self._relativeExcludePaths = saved raise e def _getRelativeExcludePaths(self): """ Property target used to get the relative exclude paths list. """ return self._relativeExcludePaths def _setExcludePatterns(self, value): """ Property target used to set the exclude patterns list. """ if value is None: self._excludePatterns = None else: try: saved = self._excludePatterns self._excludePatterns = RegexList() self._excludePatterns.extend(value) except Exception as e: self._excludePatterns = saved raise e def _getExcludePatterns(self): """ Property target used to get the exclude patterns list. """ return self._excludePatterns absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, doc="Absolute path of the directory to collect.") collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this directory.") archiveMode = property(_getArchiveMode, _setArchiveMode, None, doc="Overridden archive mode for this directory.") ignoreFile = property(_getIgnoreFile, _setIgnoreFile, None, doc="Overridden ignore file name for this directory.") linkDepth = property(_getLinkDepth, _setLinkDepth, None, doc="Maximum at which soft links should be followed.") dereference = property(_getDereference, _setDereference, None, doc="Whether to dereference links that are followed.") recursionLevel = property(_getRecursionLevel, _setRecursionLevel, None, "Recursion level to use for recursive directory collection") absoluteExcludePaths = property(_getAbsoluteExcludePaths, _setAbsoluteExcludePaths, None, "List of absolute paths to exclude.") relativeExcludePaths = property(_getRelativeExcludePaths, _setRelativeExcludePaths, None, "List of relative paths to exclude.") excludePatterns = property(_getExcludePatterns, _setExcludePatterns, None, "List of 
regular expression patterns to exclude.") ######################################################################## # PurgeDir class definition ######################################################################## @total_ordering class PurgeDir(object): """ Class representing a Cedar Backup purge directory. The following restrictions exist on data in this class: - The absolute path must be an absolute path - The retain days value must be an integer >= 0. @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, absolutePath, retainDays """ def __init__(self, absolutePath=None, retainDays=None): """ Constructor for the C{PurgeDir} class. @param absolutePath: Absolute path of the directory to be purged. @param retainDays: Number of days content within directory should be retained. @raise ValueError: If one of the values is invalid. """ self._absolutePath = None self._retainDays = None self.absolutePath = absolutePath self.retainDays = retainDays def __repr__(self): """ Official string representation for class instance. """ return "PurgeDir(%s, %s)" % (self.absolutePath, self.retainDays) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __eq__(self, other): """Equals operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
""" if other is None: return 1 if self.absolutePath != other.absolutePath: if str(self.absolutePath or "") < str(other.absolutePath or ""): return -1 else: return 1 if self.retainDays != other.retainDays: if int(self.retainDays or 0) < int(other.retainDays or 0): return -1 else: return 1 return 0 def _setAbsolutePath(self, value): """ Property target used to set the absolute path. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Absolute path must, er, be an absolute path.") self._absolutePath = encodePath(value) def _getAbsolutePath(self): """ Property target used to get the absolute path. """ return self._absolutePath def _setRetainDays(self, value): """ Property target used to set the retain days value. The value must be an integer >= 0. @raise ValueError: If the value is not valid. """ if value is None: self._retainDays = None else: try: value = int(value) except TypeError: raise ValueError("Retain days value must be an integer >= 0.") if value < 0: raise ValueError("Retain days value must be an integer >= 0.") self._retainDays = value def _getRetainDays(self): """ Property target used to get the absolute path. """ return self._retainDays absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, "Absolute path of directory to purge.") retainDays = property(_getRetainDays, _setRetainDays, None, "Number of days content within directory should be retained.") ######################################################################## # LocalPeer class definition ######################################################################## @total_ordering class LocalPeer(object): """ Class representing a Cedar Backup peer. 
The following restrictions exist on data in this class: - The peer name must be a non-empty string. - The collect directory must be an absolute path. - The ignore failure mode must be one of the values in L{VALID_FAILURE_MODES}. @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, name, collectDir """ def __init__(self, name=None, collectDir=None, ignoreFailureMode=None): """ Constructor for the C{LocalPeer} class. @param name: Name of the peer, typically a valid hostname. @param collectDir: Collect directory to stage files from on peer. @param ignoreFailureMode: Ignore failure mode for peer. @raise ValueError: If one of the values is invalid. """ self._name = None self._collectDir = None self._ignoreFailureMode = None self.name = name self.collectDir = collectDir self.ignoreFailureMode = ignoreFailureMode def __repr__(self): """ Official string representation for class instance. """ return "LocalPeer(%s, %s, %s)" % (self.name, self.collectDir, self.ignoreFailureMode) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __eq__(self, other): """Equals operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
""" if other is None: return 1 if self.name != other.name: if str(self.name or "") < str(other.name or ""): return -1 else: return 1 if self.collectDir != other.collectDir: if str(self.collectDir or "") < str(other.collectDir or ""): return -1 else: return 1 if self.ignoreFailureMode != other.ignoreFailureMode: if str(self.ignoreFailureMode or "") < str(other.ignoreFailureMode or ""): return -1 else: return 1 return 0 def _setName(self, value): """ Property target used to set the peer name. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The peer name must be a non-empty string.") self._name = value def _getName(self): """ Property target used to get the peer name. """ return self._name def _setCollectDir(self, value): """ Property target used to set the collect directory. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Collect directory must be an absolute path.") self._collectDir = encodePath(value) def _getCollectDir(self): """ Property target used to get the collect directory. """ return self._collectDir def _setIgnoreFailureMode(self, value): """ Property target used to set the ignoreFailure mode. If not C{None}, the mode must be one of the values in L{VALID_FAILURE_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_FAILURE_MODES: raise ValueError("Ignore failure mode must be one of %s." % VALID_FAILURE_MODES) self._ignoreFailureMode = value def _getIgnoreFailureMode(self): """ Property target used to get the ignoreFailure mode. 
""" return self._ignoreFailureMode name = property(_getName, _setName, None, "Name of the peer, typically a valid hostname.") collectDir = property(_getCollectDir, _setCollectDir, None, "Collect directory to stage files from on peer.") ignoreFailureMode = property(_getIgnoreFailureMode, _setIgnoreFailureMode, None, "Ignore failure mode for peer.") ######################################################################## # RemotePeer class definition ######################################################################## @total_ordering class RemotePeer(object): """ Class representing a Cedar Backup peer. The following restrictions exist on data in this class: - The peer name must be a non-empty string. - The collect directory must be an absolute path. - The remote user must be a non-empty string. - The rcp command must be a non-empty string. - The rsh command must be a non-empty string. - The cback command must be a non-empty string. - Any managed action name must be a non-empty string matching C{ACTION_NAME_REGEX} - The ignore failure mode must be one of the values in L{VALID_FAILURE_MODES}. @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, name, collectDir, remoteUser, rcpCommand """ def __init__(self, name=None, collectDir=None, remoteUser=None, rcpCommand=None, rshCommand=None, cbackCommand=None, managed=False, managedActions=None, ignoreFailureMode=None): """ Constructor for the C{RemotePeer} class. @param name: Name of the peer, must be a valid hostname. @param collectDir: Collect directory to stage files from on peer. @param remoteUser: Name of backup user on remote peer. @param rcpCommand: Overridden rcp-compatible copy command for peer. @param rshCommand: Overridden rsh-compatible remote shell command for peer. @param cbackCommand: Overridden cback-compatible command to use on remote peer. @param managed: Indicates whether this is a managed peer. @param managedActions: Overridden set of actions that are managed on the peer. 
@param ignoreFailureMode: Ignore failure mode for peer. @raise ValueError: If one of the values is invalid. """ self._name = None self._collectDir = None self._remoteUser = None self._rcpCommand = None self._rshCommand = None self._cbackCommand = None self._managed = None self._managedActions = None self._ignoreFailureMode = None self.name = name self.collectDir = collectDir self.remoteUser = remoteUser self.rcpCommand = rcpCommand self.rshCommand = rshCommand self.cbackCommand = cbackCommand self.managed = managed self.managedActions = managedActions self.ignoreFailureMode = ignoreFailureMode def __repr__(self): """ Official string representation for class instance. """ return "RemotePeer(%s, %s, %s, %s, %s, %s, %s, %s, %s)" % (self.name, self.collectDir, self.remoteUser, self.rcpCommand, self.rshCommand, self.cbackCommand, self.managed, self.managedActions, self.ignoreFailureMode) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __eq__(self, other): """Equals operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
""" if other is None: return 1 if self.name != other.name: if str(self.name or "") < str(other.name or ""): return -1 else: return 1 if self.collectDir != other.collectDir: if str(self.collectDir or "") < str(other.collectDir or ""): return -1 else: return 1 if self.remoteUser != other.remoteUser: if str(self.remoteUser or "") < str(other.remoteUser or ""): return -1 else: return 1 if self.rcpCommand != other.rcpCommand: if str(self.rcpCommand or "") < str(other.rcpCommand or ""): return -1 else: return 1 if self.rshCommand != other.rshCommand: if str(self.rshCommand or "") < str(other.rshCommand or ""): return -1 else: return 1 if self.cbackCommand != other.cbackCommand: if str(self.cbackCommand or "") < str(other.cbackCommand or ""): return -1 else: return 1 if self.managed != other.managed: if str(self.managed or "") < str(other.managed or ""): return -1 else: return 1 if self.managedActions != other.managedActions: if self.managedActions < other.managedActions: return -1 else: return 1 if self.ignoreFailureMode != other.ignoreFailureMode: if str(self.ignoreFailureMode or "") < str(other.ignoreFailureMode or ""): return -1 else: return 1 return 0 def _setName(self, value): """ Property target used to set the peer name. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The peer name must be a non-empty string.") self._name = value def _getName(self): """ Property target used to get the peer name. """ return self._name def _setCollectDir(self, value): """ Property target used to set the collect directory. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. 
""" if value is not None: if not os.path.isabs(value): raise ValueError("Collect directory must be an absolute path.") self._collectDir = encodePath(value) def _getCollectDir(self): """ Property target used to get the collect directory. """ return self._collectDir def _setRemoteUser(self, value): """ Property target used to set the remote user. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The remote user must be a non-empty string.") self._remoteUser = value def _getRemoteUser(self): """ Property target used to get the remote user. """ return self._remoteUser def _setRcpCommand(self, value): """ Property target used to set the rcp command. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The rcp command must be a non-empty string.") self._rcpCommand = value def _getRcpCommand(self): """ Property target used to get the rcp command. """ return self._rcpCommand def _setRshCommand(self, value): """ Property target used to set the rsh command. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The rsh command must be a non-empty string.") self._rshCommand = value def _getRshCommand(self): """ Property target used to get the rsh command. """ return self._rshCommand def _setCbackCommand(self, value): """ Property target used to set the cback command. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The cback command must be a non-empty string.") self._cbackCommand = value def _getCbackCommand(self): """ Property target used to get the cback command. 
""" return self._cbackCommand def _setManaged(self, value): """ Property target used to set the managed flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._managed = True else: self._managed = False def _getManaged(self): """ Property target used to get the managed flag. """ return self._managed def _setManagedActions(self, value): """ Property target used to set the managed actions list. Elements do not have to exist on disk at the time of assignment. """ if value is None: self._managedActions = None else: try: saved = self._managedActions self._managedActions = RegexMatchList(ACTION_NAME_REGEX, emptyAllowed=False, prefix="Action name") self._managedActions.extend(value) except Exception as e: self._managedActions = saved raise e def _getManagedActions(self): """ Property target used to get the managed actions list. """ return self._managedActions def _setIgnoreFailureMode(self, value): """ Property target used to set the ignoreFailure mode. If not C{None}, the mode must be one of the values in L{VALID_FAILURE_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_FAILURE_MODES: raise ValueError("Ignore failure mode must be one of %s." % VALID_FAILURE_MODES) self._ignoreFailureMode = value def _getIgnoreFailureMode(self): """ Property target used to get the ignoreFailure mode. 
""" return self._ignoreFailureMode name = property(_getName, _setName, None, "Name of the peer, must be a valid hostname.") collectDir = property(_getCollectDir, _setCollectDir, None, "Collect directory to stage files from on peer.") remoteUser = property(_getRemoteUser, _setRemoteUser, None, "Name of backup user on remote peer.") rcpCommand = property(_getRcpCommand, _setRcpCommand, None, "Overridden rcp-compatible copy command for peer.") rshCommand = property(_getRshCommand, _setRshCommand, None, "Overridden rsh-compatible remote shell command for peer.") cbackCommand = property(_getCbackCommand, _setCbackCommand, None, "Overridden cback-compatible command to use on remote peer.") managed = property(_getManaged, _setManaged, None, "Indicates whether this is a managed peer.") managedActions = property(_getManagedActions, _setManagedActions, None, "Overridden set of actions that are managed on the peer.") ignoreFailureMode = property(_getIgnoreFailureMode, _setIgnoreFailureMode, None, "Ignore failure mode for peer.") ######################################################################## # ReferenceConfig class definition ######################################################################## @total_ordering class ReferenceConfig(object): """ Class representing a Cedar Backup reference configuration. The reference information is just used for saving off metadata about configuration and exists mostly for backwards-compatibility with Cedar Backup 1.x. @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, author, revision, description, generator """ def __init__(self, author=None, revision=None, description=None, generator=None): """ Constructor for the C{ReferenceConfig} class. @param author: Author of the configuration file. @param revision: Revision of the configuration file. @param description: Description of the configuration file. @param generator: Tool that generated the configuration file. 
""" self._author = None self._revision = None self._description = None self._generator = None self.author = author self.revision = revision self.description = description self.generator = generator def __repr__(self): """ Official string representation for class instance. """ return "ReferenceConfig(%s, %s, %s, %s)" % (self.author, self.revision, self.description, self.generator) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __eq__(self, other): """Equals operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.author != other.author: if str(self.author or "") < str(other.author or ""): return -1 else: return 1 if self.revision != other.revision: if str(self.revision or "") < str(other.revision or ""): return -1 else: return 1 if self.description != other.description: if str(self.description or "") < str(other.description or ""): return -1 else: return 1 if self.generator != other.generator: if str(self.generator or "") < str(other.generator or ""): return -1 else: return 1 return 0 def _setAuthor(self, value): """ Property target used to set the author value. No validations. """ self._author = value def _getAuthor(self): """ Property target used to get the author value. """ return self._author def _setRevision(self, value): """ Property target used to set the revision value. No validations. 
""" self._revision = value def _getRevision(self): """ Property target used to get the revision value. """ return self._revision def _setDescription(self, value): """ Property target used to set the description value. No validations. """ self._description = value def _getDescription(self): """ Property target used to get the description value. """ return self._description def _setGenerator(self, value): """ Property target used to set the generator value. No validations. """ self._generator = value def _getGenerator(self): """ Property target used to get the generator value. """ return self._generator author = property(_getAuthor, _setAuthor, None, "Author of the configuration file.") revision = property(_getRevision, _setRevision, None, "Revision of the configuration file.") description = property(_getDescription, _setDescription, None, "Description of the configuration file.") generator = property(_getGenerator, _setGenerator, None, "Tool that generated the configuration file.") ######################################################################## # ExtensionsConfig class definition ######################################################################## @total_ordering class ExtensionsConfig(object): """ Class representing Cedar Backup extensions configuration. Extensions configuration is used to specify "extended actions" implemented by code external to Cedar Backup. For instance, a hypothetical third party might write extension code to collect database repository data. If they write a properly-formatted extension function, they can use the extension configuration to map a command-line Cedar Backup action (i.e. "database") to their function. The following restrictions exist on data in this class: - If set, the order mode must be one of the values in C{VALID_ORDER_MODES} - The actions list must be a list of C{ExtendedAction} objects. 
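Every configuration class in this module repeats the same ordering recipe: a Python-2-style `__cmp__` plus `@total_ordering`, with `__eq__`/`__lt__`/`__gt__` defined in terms of `__cmp__`, and `None` sorting before any instance. A minimal standalone sketch of that recipe (the `Comparable` class and its single `value` field are illustrative, not part of Cedar Backup):

```python
from functools import total_ordering

@total_ordering
class Comparable:
    """Illustrative sketch of the __cmp__-based ordering pattern."""

    def __init__(self, value=None):
        self.value = value

    def __cmp__(self, other):
        # None sorts before any instance, mirroring the
        # 'if other is None: return 1' checks in the config classes.
        if other is None:
            return 1
        # Compare None-safe string forms, as the config classes do.
        left, right = str(self.value or ""), str(other.value or "")
        if left < right:
            return -1
        elif left > right:
            return 1
        return 0

    def __eq__(self, other):
        return self.__cmp__(other) == 0

    def __lt__(self, other):
        return self.__cmp__(other) < 0
```

`total_ordering` derives `__le__`, `__ge__`, and `__gt__` from `__eq__` and `__lt__`, which is why the classes here only need to spell out a handful of the rich comparison methods.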
@sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, orderMode, actions """ def __init__(self, actions=None, orderMode=None): """ Constructor for the C{ExtensionsConfig} class. @param actions: List of extended actions. @param orderMode: Order mode, one of C{VALID_ORDER_MODES}. """ self._orderMode = None self._actions = None self.orderMode = orderMode self.actions = actions def __repr__(self): """ Official string representation for class instance. """ return "ExtensionsConfig(%s, %s)" % (self.orderMode, self.actions) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __eq__(self, other): """Equals operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.orderMode != other.orderMode: if str(self.orderMode or "") < str(other.orderMode or ""): return -1 else: return 1 if self.actions != other.actions: if self.actions < other.actions: return -1 else: return 1 return 0 def _setOrderMode(self, value): """ Property target used to set the order mode. The value must be one of L{VALID_ORDER_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_ORDER_MODES: raise ValueError("Order mode must be one of %s." % VALID_ORDER_MODES) self._orderMode = value def _getOrderMode(self): """ Property target used to get the order mode. """ return self._orderMode def _setActions(self, value): """ Property target used to set the actions list.
Either the value must be C{None} or each element must be an C{ExtendedAction}. @raise ValueError: If the value is not an C{ExtendedAction} """ if value is None: self._actions = None else: try: saved = self._actions self._actions = ObjectTypeList(ExtendedAction, "ExtendedAction") self._actions.extend(value) except Exception as e: self._actions = saved raise e def _getActions(self): """ Property target used to get the actions list. """ return self._actions orderMode = property(_getOrderMode, _setOrderMode, None, "Order mode for extensions, to control execution ordering.") actions = property(_getActions, _setActions, None, "List of extended actions.") ######################################################################## # OptionsConfig class definition ######################################################################## @total_ordering class OptionsConfig(object): """ Class representing a Cedar Backup global options configuration. The options section is used to store global configuration options and defaults that can be applied to other sections. The following restrictions exist on data in this class: - The working directory must be an absolute path. - The starting day must be a day of the week in English, i.e. C{"monday"}, C{"tuesday"}, etc. - All of the other values must be non-empty strings if they are set to something other than C{None}. - The overrides list must be a list of C{CommandOverride} objects. - The hooks list must be a list of C{ActionHook} objects. - The cback command must be a non-empty string.
- Any managed action name must be a non-empty string matching C{ACTION_NAME_REGEX} @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, startingDay, workingDir, backupUser, backupGroup, rcpCommand, rshCommand, overrides """ def __init__(self, startingDay=None, workingDir=None, backupUser=None, backupGroup=None, rcpCommand=None, overrides=None, hooks=None, rshCommand=None, cbackCommand=None, managedActions=None): """ Constructor for the C{OptionsConfig} class. @param startingDay: Day that starts the week. @param workingDir: Working (temporary) directory to use for backups. @param backupUser: Effective user that backups should run as. @param backupGroup: Effective group that backups should run as. @param rcpCommand: Default rcp-compatible copy command for staging. @param rshCommand: Default rsh-compatible command to use for remote shells. @param cbackCommand: Default cback-compatible command to use on managed remote peers. @param overrides: List of configured command path overrides, if any. @param hooks: List of configured pre- and post-action hooks. @param managedActions: Default set of actions that are managed on remote peers. @raise ValueError: If one of the values is invalid. """ self._startingDay = None self._workingDir = None self._backupUser = None self._backupGroup = None self._rcpCommand = None self._rshCommand = None self._cbackCommand = None self._overrides = None self._hooks = None self._managedActions = None self.startingDay = startingDay self.workingDir = workingDir self.backupUser = backupUser self.backupGroup = backupGroup self.rcpCommand = rcpCommand self.rshCommand = rshCommand self.cbackCommand = cbackCommand self.overrides = overrides self.hooks = hooks self.managedActions = managedActions def __repr__(self): """ Official string representation for class instance. 
""" return "OptionsConfig(%s, %s, %s, %s, %s, %s, %s, %s, %s, %s)" % (self.startingDay, self.workingDir, self.backupUser, self.backupGroup, self.rcpCommand, self.overrides, self.hooks, self.rshCommand, self.cbackCommand, self.managedActions) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __eq__(self, other): """Equals operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.startingDay != other.startingDay: if str(self.startingDay or "") < str(other.startingDay or ""): return -1 else: return 1 if self.workingDir != other.workingDir: if str(self.workingDir or "") < str(other.workingDir or ""): return -1 else: return 1 if self.backupUser != other.backupUser: if str(self.backupUser or "") < str(other.backupUser or ""): return -1 else: return 1 if self.backupGroup != other.backupGroup: if str(self.backupGroup or "") < str(other.backupGroup or ""): return -1 else: return 1 if self.rcpCommand != other.rcpCommand: if str(self.rcpCommand or "") < str(other.rcpCommand or ""): return -1 else: return 1 if self.rshCommand != other.rshCommand: if str(self.rshCommand or "") < str(other.rshCommand or ""): return -1 else: return 1 if self.cbackCommand != other.cbackCommand: if str(self.cbackCommand or "") < str(other.cbackCommand or ""): return -1 else: return 1 if self.overrides != other.overrides: if self.overrides < other.overrides: return -1 else: return 1 if self.hooks != 
other.hooks: if self.hooks < other.hooks: return -1 else: return 1 if self.managedActions != other.managedActions: if self.managedActions < other.managedActions: return -1 else: return 1 return 0 def addOverride(self, command, absolutePath): """ If no override currently exists for the command, add one. @param command: Name of command to be overridden. @param absolutePath: Absolute path of the overridden command. """ override = CommandOverride(command, absolutePath) if self.overrides is None: self.overrides = [ override, ] else: exists = False for obj in self.overrides: if obj.command == override.command: exists = True break if not exists: self.overrides.append(override) def replaceOverride(self, command, absolutePath): """ If an override currently exists for the command, replace it; otherwise add it. @param command: Name of command to be overridden. @param absolutePath: Absolute path of the overridden command. """ override = CommandOverride(command, absolutePath) if self.overrides is None: self.overrides = [ override, ] else: exists = False for obj in self.overrides: if obj.command == override.command: exists = True obj.absolutePath = override.absolutePath break if not exists: self.overrides.append(override) def _setStartingDay(self, value): """ Property target used to set the starting day. If it is not C{None}, the value must be a valid English day of the week, one of C{"monday"}, C{"tuesday"}, C{"wednesday"}, etc. @raise ValueError: If the value is not a valid day of the week. """ if value is not None: if value not in ["monday", "tuesday", "wednesday", "thursday", "friday", "saturday", "sunday", ]: raise ValueError("Starting day must be an English day of the week, i.e. \"monday\".") self._startingDay = value def _getStartingDay(self): """ Property target used to get the starting day. """ return self._startingDay def _setWorkingDir(self, value): """ Property target used to set the working directory. The value must be an absolute path if it is not C{None}.
It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Working directory must be an absolute path.") self._workingDir = encodePath(value) def _getWorkingDir(self): """ Property target used to get the working directory. """ return self._workingDir def _setBackupUser(self, value): """ Property target used to set the backup user. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("Backup user must be a non-empty string.") self._backupUser = value def _getBackupUser(self): """ Property target used to get the backup user. """ return self._backupUser def _setBackupGroup(self, value): """ Property target used to set the backup group. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("Backup group must be a non-empty string.") self._backupGroup = value def _getBackupGroup(self): """ Property target used to get the backup group. """ return self._backupGroup def _setRcpCommand(self, value): """ Property target used to set the rcp command. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The rcp command must be a non-empty string.") self._rcpCommand = value def _getRcpCommand(self): """ Property target used to get the rcp command. """ return self._rcpCommand def _setRshCommand(self, value): """ Property target used to set the rsh command. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. 
""" if value is not None: if len(value) < 1: raise ValueError("The rsh command must be a non-empty string.") self._rshCommand = value def _getRshCommand(self): """ Property target used to get the rsh command. """ return self._rshCommand def _setCbackCommand(self, value): """ Property target used to set the cback command. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The cback command must be a non-empty string.") self._cbackCommand = value def _getCbackCommand(self): """ Property target used to get the cback command. """ return self._cbackCommand def _setOverrides(self, value): """ Property target used to set the command path overrides list. Either the value must be C{None} or each element must be a C{CommandOverride}. @raise ValueError: If the value is not a C{CommandOverride} """ if value is None: self._overrides = None else: try: saved = self._overrides self._overrides = ObjectTypeList(CommandOverride, "CommandOverride") self._overrides.extend(value) except Exception as e: self._overrides = saved raise e def _getOverrides(self): """ Property target used to get the command path overrides list. """ return self._overrides def _setHooks(self, value): """ Property target used to set the pre- and post-action hooks list. Either the value must be C{None} or each element must be an C{ActionHook}. @raise ValueError: If the value is not an C{ActionHook} """ if value is None: self._hooks = None else: try: saved = self._hooks self._hooks = ObjectTypeList(ActionHook, "ActionHook") self._hooks.extend(value) except Exception as e: self._hooks = saved raise e def _getHooks(self): """ Property target used to get the pre- and post-action hooks list. """ return self._hooks def _setManagedActions(self, value): """ Property target used to set the managed actions list. Either the value must be C{None} or each element must be a non-empty string matching C{ACTION_NAME_REGEX}.
""" if value is None: self._managedActions = None else: try: saved = self._managedActions self._managedActions = RegexMatchList(ACTION_NAME_REGEX, emptyAllowed=False, prefix="Action name") self._managedActions.extend(value) except Exception as e: self._managedActions = saved raise e def _getManagedActions(self): """ Property target used to get the managed actions list. """ return self._managedActions startingDay = property(_getStartingDay, _setStartingDay, None, "Day that starts the week.") workingDir = property(_getWorkingDir, _setWorkingDir, None, "Working (temporary) directory to use for backups.") backupUser = property(_getBackupUser, _setBackupUser, None, "Effective user that backups should run as.") backupGroup = property(_getBackupGroup, _setBackupGroup, None, "Effective group that backups should run as.") rcpCommand = property(_getRcpCommand, _setRcpCommand, None, "Default rcp-compatible copy command for staging.") rshCommand = property(_getRshCommand, _setRshCommand, None, "Default rsh-compatible command to use for remote shells.") cbackCommand = property(_getCbackCommand, _setCbackCommand, None, "Default cback-compatible command to use on managed remote peers.") overrides = property(_getOverrides, _setOverrides, None, "List of configured command path overrides, if any.") hooks = property(_getHooks, _setHooks, None, "List of configured pre- and post-action hooks.") managedActions = property(_getManagedActions, _setManagedActions, None, "Default set of actions that are managed on remote peers.") ######################################################################## # PeersConfig class definition ######################################################################## @total_ordering class PeersConfig(object): """ Class representing Cedar Backup global peer configuration. This section contains a list of local and remote peers in a master's backup pool. The section is optional. 
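The list-valued setters above (overrides, hooks, managedActions, and the peer lists that follow) all share a rollback idiom: stash the old list, build a fresh validating list, and restore the old value if populating the new one fails, so a bad assignment leaves the object unchanged. A condensed standalone sketch of that idiom, with `ValidatingList` and `Holder` standing in for `ObjectTypeList`/`RegexMatchList` and the config classes (both names are illustrative):

```python
class ValidatingList(list):
    """Stand-in for ObjectTypeList: accepts only int elements."""

    def append(self, item):
        if not isinstance(item, int):
            raise ValueError("element must be an int")
        super().append(item)

    def extend(self, items):
        for item in items:
            self.append(item)


class Holder:
    def __init__(self):
        self._items = None

    def setItems(self, value):
        # Same rollback idiom as the Cedar Backup property setters:
        # keep the old list so a failed assignment leaves state unchanged.
        if value is None:
            self._items = None
        else:
            saved = self._items
            try:
                self._items = ValidatingList()
                self._items.extend(value)
            except Exception:
                self._items = saved
                raise
```

The validation itself lives in the list subclass, so every mutation path (`append`, `extend`) enforces the element type, not just the initial assignment.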
If a master does not define this section, then all peers are unmanaged, and the stage configuration section must explicitly list any peer that is to be staged. If this section is configured, then peers may be managed or unmanaged, and the stage section peer configuration (if any) completely overrides this configuration. The following restrictions exist on data in this class: - The list of local peers must contain only C{LocalPeer} objects - The list of remote peers must contain only C{RemotePeer} objects @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, localPeers, remotePeers """ def __init__(self, localPeers=None, remotePeers=None): """ Constructor for the C{PeersConfig} class. @param localPeers: List of local peers. @param remotePeers: List of remote peers. @raise ValueError: If one of the values is invalid. """ self._localPeers = None self._remotePeers = None self.localPeers = localPeers self.remotePeers = remotePeers def __repr__(self): """ Official string representation for class instance. """ return "PeersConfig(%s, %s)" % (self.localPeers, self.remotePeers) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __eq__(self, other): """Equals operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
""" if other is None: return 1 if self.localPeers != other.localPeers: if self.localPeers < other.localPeers: return -1 else: return 1 if self.remotePeers != other.remotePeers: if self.remotePeers < other.remotePeers: return -1 else: return 1 return 0 def hasPeers(self): """ Indicates whether any peers are filled into this object. @return: Boolean true if any local or remote peers are filled in, false otherwise. """ return ((self.localPeers is not None and len(self.localPeers) > 0) or (self.remotePeers is not None and len(self.remotePeers) > 0)) def _setLocalPeers(self, value): """ Property target used to set the local peers list. Either the value must be C{None} or each element must be a C{LocalPeer}. @raise ValueError: If the value is not a C{LocalPeer} """ if value is None: self._localPeers = None else: try: saved = self._localPeers self._localPeers = ObjectTypeList(LocalPeer, "LocalPeer") self._localPeers.extend(value) except Exception as e: self._localPeers = saved raise e def _getLocalPeers(self): """ Property target used to get the local peers list. """ return self._localPeers def _setRemotePeers(self, value): """ Property target used to set the remote peers list. Either the value must be C{None} or each element must be a C{RemotePeer}. @raise ValueError: If the value is not a C{RemotePeer} """ if value is None: self._remotePeers = None else: try: saved = self._remotePeers self._remotePeers = ObjectTypeList(RemotePeer, "RemotePeer") self._remotePeers.extend(value) except Exception as e: self._remotePeers = saved raise e def _getRemotePeers(self): """ Property target used to get the remote peers list.
""" return self._remotePeers localPeers = property(_getLocalPeers, _setLocalPeers, None, "List of local peers.") remotePeers = property(_getRemotePeers, _setRemotePeers, None, "List of remote peers.") ######################################################################## # CollectConfig class definition ######################################################################## @total_ordering class CollectConfig(object): """ Class representing a Cedar Backup collect configuration. The following restrictions exist on data in this class: - The target directory must be an absolute path. - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. - The archive mode must be one of the values in L{VALID_ARCHIVE_MODES}. - The ignore file must be a non-empty string. - Each of the paths in C{absoluteExcludePaths} must be an absolute path - The collect file list must be a list of C{CollectFile} objects. - The collect directory list must be a list of C{CollectDir} objects. For the C{absoluteExcludePaths} list, validation is accomplished through the L{util.AbsolutePathList} list implementation that overrides common list methods and transparently does the absolute path validation for us. For the C{collectFiles} and C{collectDirs} list, validation is accomplished through the L{util.ObjectTypeList} list implementation that overrides common list methods and transparently ensures that each element has an appropriate type. @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, targetDir, collectMode, archiveMode, ignoreFile, absoluteExcludePaths, excludePatterns, collectFiles, collectDirs """ def __init__(self, targetDir=None, collectMode=None, archiveMode=None, ignoreFile=None, absoluteExcludePaths=None, excludePatterns=None, collectFiles=None, collectDirs=None): """ Constructor for the C{CollectConfig} class. @param targetDir: Directory to collect files into. 
@param collectMode: Default collect mode. @param archiveMode: Default archive mode for collect files. @param ignoreFile: Default ignore file name. @param absoluteExcludePaths: List of absolute paths to exclude. @param excludePatterns: List of regular expression patterns to exclude. @param collectFiles: List of collect files. @param collectDirs: List of collect directories. @raise ValueError: If one of the values is invalid. """ self._targetDir = None self._collectMode = None self._archiveMode = None self._ignoreFile = None self._absoluteExcludePaths = None self._excludePatterns = None self._collectFiles = None self._collectDirs = None self.targetDir = targetDir self.collectMode = collectMode self.archiveMode = archiveMode self.ignoreFile = ignoreFile self.absoluteExcludePaths = absoluteExcludePaths self.excludePatterns = excludePatterns self.collectFiles = collectFiles self.collectDirs = collectDirs def __repr__(self): """ Official string representation for class instance. """ return "CollectConfig(%s, %s, %s, %s, %s, %s, %s, %s)" % (self.targetDir, self.collectMode, self.archiveMode, self.ignoreFile, self.absoluteExcludePaths, self.excludePatterns, self.collectFiles, self.collectDirs) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __eq__(self, other): """Equals operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
""" if other is None: return 1 if self.targetDir != other.targetDir: if str(self.targetDir or "") < str(other.targetDir or ""): return -1 else: return 1 if self.collectMode != other.collectMode: if str(self.collectMode or "") < str(other.collectMode or ""): return -1 else: return 1 if self.archiveMode != other.archiveMode: if str(self.archiveMode or "") < str(other.archiveMode or ""): return -1 else: return 1 if self.ignoreFile != other.ignoreFile: if str(self.ignoreFile or "") < str(other.ignoreFile or ""): return -1 else: return 1 if self.absoluteExcludePaths != other.absoluteExcludePaths: if self.absoluteExcludePaths < other.absoluteExcludePaths: return -1 else: return 1 if self.excludePatterns != other.excludePatterns: if self.excludePatterns < other.excludePatterns: return -1 else: return 1 if self.collectFiles != other.collectFiles: if self.collectFiles < other.collectFiles: return -1 else: return 1 if self.collectDirs != other.collectDirs: if self.collectDirs < other.collectDirs: return -1 else: return 1 return 0 def _setTargetDir(self, value): """ Property target used to set the target directory. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Target directory must be an absolute path.") self._targetDir = encodePath(value) def _getTargetDir(self): """ Property target used to get the target directory. """ return self._targetDir def _setCollectMode(self, value): """ Property target used to set the collect mode. If not C{None}, the mode must be one of L{VALID_COLLECT_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COLLECT_MODES: raise ValueError("Collect mode must be one of %s." 
% VALID_COLLECT_MODES) self._collectMode = value def _getCollectMode(self): """ Property target used to get the collect mode. """ return self._collectMode def _setArchiveMode(self, value): """ Property target used to set the archive mode. If not C{None}, the mode must be one of L{VALID_ARCHIVE_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_ARCHIVE_MODES: raise ValueError("Archive mode must be one of %s." % VALID_ARCHIVE_MODES) self._archiveMode = value def _getArchiveMode(self): """ Property target used to get the archive mode. """ return self._archiveMode def _setIgnoreFile(self, value): """ Property target used to set the ignore file. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if len(value) < 1: raise ValueError("The ignore file must be a non-empty string.") self._ignoreFile = encodePath(value) def _getIgnoreFile(self): """ Property target used to get the ignore file. """ return self._ignoreFile def _setAbsoluteExcludePaths(self, value): """ Property target used to set the absolute exclude paths list. Either the value must be C{None} or each element must be an absolute path. Elements do not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. """ if value is None: self._absoluteExcludePaths = None else: try: saved = self._absoluteExcludePaths self._absoluteExcludePaths = AbsolutePathList() self._absoluteExcludePaths.extend(value) except Exception as e: self._absoluteExcludePaths = saved raise e def _getAbsoluteExcludePaths(self): """ Property target used to get the absolute exclude paths list. """ return self._absoluteExcludePaths def _setExcludePatterns(self, value): """ Property target used to set the exclude patterns list. 
""" if value is None: self._excludePatterns = None else: try: saved = self._excludePatterns self._excludePatterns = RegexList() self._excludePatterns.extend(value) except Exception as e: self._excludePatterns = saved raise e def _getExcludePatterns(self): """ Property target used to get the exclude patterns list. """ return self._excludePatterns def _setCollectFiles(self, value): """ Property target used to set the collect files list. Either the value must be C{None} or each element must be a C{CollectFile}. @raise ValueError: If the value is not a C{CollectFile} """ if value is None: self._collectFiles = None else: try: saved = self._collectFiles self._collectFiles = ObjectTypeList(CollectFile, "CollectFile") self._collectFiles.extend(value) except Exception as e: self._collectFiles = saved raise e def _getCollectFiles(self): """ Property target used to get the collect files list. """ return self._collectFiles def _setCollectDirs(self, value): """ Property target used to set the collect dirs list. Either the value must be C{None} or each element must be a C{CollectDir}. @raise ValueError: If the value is not a C{CollectDir} """ if value is None: self._collectDirs = None else: try: saved = self._collectDirs self._collectDirs = ObjectTypeList(CollectDir, "CollectDir") self._collectDirs.extend(value) except Exception as e: self._collectDirs = saved raise e def _getCollectDirs(self): """ Property target used to get the collect dirs list. 
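The directory-valued settings in these classes (targetDir here, workingDir in OptionsConfig) apply the same validation: C{None} passes through, anything else must be an absolute path, and the path need not exist yet. A minimal sketch of that check (`checkTargetDir` is an illustrative helper; the real setters additionally run the value through Cedar Backup's `encodePath`):

```python
import os.path

def checkTargetDir(value):
    # Mirrors the property setters: None is allowed; anything else
    # must be an absolute path (it need not exist on disk yet).
    if value is not None and not os.path.isabs(value):
        raise ValueError("Target directory must be an absolute path.")
    return value
```

Deferring the existence check keeps configuration parsing independent of the state of the filesystem at parse time.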
""" return self._collectDirs targetDir = property(_getTargetDir, _setTargetDir, None, "Directory to collect files into.") collectMode = property(_getCollectMode, _setCollectMode, None, "Default collect mode.") archiveMode = property(_getArchiveMode, _setArchiveMode, None, "Default archive mode for collect files.") ignoreFile = property(_getIgnoreFile, _setIgnoreFile, None, "Default ignore file name.") absoluteExcludePaths = property(_getAbsoluteExcludePaths, _setAbsoluteExcludePaths, None, "List of absolute paths to exclude.") excludePatterns = property(_getExcludePatterns, _setExcludePatterns, None, "List of regular expression patterns to exclude.") collectFiles = property(_getCollectFiles, _setCollectFiles, None, "List of collect files.") collectDirs = property(_getCollectDirs, _setCollectDirs, None, "List of collect directories.") ######################################################################## # StageConfig class definition ######################################################################## @total_ordering class StageConfig(object): """ Class representing a Cedar Backup stage configuration. The following restrictions exist on data in this class: - The target directory must be an absolute path - The list of local peers must contain only C{LocalPeer} objects - The list of remote peers must contain only C{RemotePeer} objects @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, targetDir, localPeers, remotePeers """ def __init__(self, targetDir=None, localPeers=None, remotePeers=None): """ Constructor for the C{StageConfig} class. @param targetDir: Directory to stage files into, by peer name. @param localPeers: List of local peers. @param remotePeers: List of remote peers. @raise ValueError: If one of the values is invalid.
""" self._targetDir = None self._localPeers = None self._remotePeers = None self.targetDir = targetDir self.localPeers = localPeers self.remotePeers = remotePeers def __repr__(self): """ Official string representation for class instance. """ return "StageConfig(%s, %s, %s)" % (self.targetDir, self.localPeers, self.remotePeers) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __eq__(self, other): """Equals operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.targetDir != other.targetDir: if str(self.targetDir or "") < str(other.targetDir or ""): return -1 else: return 1 if self.localPeers != other.localPeers: if self.localPeers < other.localPeers: return -1 else: return 1 if self.remotePeers != other.remotePeers: if self.remotePeers < other.remotePeers: return -1 else: return 1 return 0 def hasPeers(self): """ Indicates whether any peers are filled into this object. @return: Boolean true if any local or remote peers are filled in, false otherwise. """ return ((self.localPeers is not None and len(self.localPeers) > 0) or (self.remotePeers is not None and len(self.remotePeers) > 0)) def _setTargetDir(self, value): """ Property target used to set the target directory. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. 
@raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Target directory must be an absolute path.") self._targetDir = encodePath(value) def _getTargetDir(self): """ Property target used to get the target directory. """ return self._targetDir def _setLocalPeers(self, value): """ Property target used to set the local peers list. Either the value must be C{None} or each element must be a C{LocalPeer}. @raise ValueError: If the value is not a C{LocalPeer} """ if value is None: self._localPeers = None else: try: saved = self._localPeers self._localPeers = ObjectTypeList(LocalPeer, "LocalPeer") self._localPeers.extend(value) except Exception as e: self._localPeers = saved raise e def _getLocalPeers(self): """ Property target used to get the local peers list. """ return self._localPeers def _setRemotePeers(self, value): """ Property target used to set the remote peers list. Either the value must be C{None} or each element must be a C{RemotePeer}. @raise ValueError: If the value is not a C{RemotePeer} """ if value is None: self._remotePeers = None else: try: saved = self._remotePeers self._remotePeers = ObjectTypeList(RemotePeer, "RemotePeer") self._remotePeers.extend(value) except Exception as e: self._remotePeers = saved raise e def _getRemotePeers(self): """ Property target used to get the remote peers list. 
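The peer-list setters above all follow the same idiom: swap in a fresh `ObjectTypeList`, extend it with the new value, and restore the previous list if any element fails type-checking, so a bad assignment never leaves the object half-updated. A minimal self-contained sketch of that idiom (the `TypedList`, `Peer`, and `Stage` names here are illustrative stand-ins, not the module's real `ObjectTypeList`, `LocalPeer`, or `StageConfig`):

```python
class TypedList(list):
    """List that only accepts instances of a given type (sketch of ObjectTypeList)."""

    def __init__(self, objectType, objectName):
        super().__init__()
        self._objectType = objectType
        self._objectName = objectName

    def append(self, item):
        if not isinstance(item, self._objectType):
            raise ValueError("Item must be a %s." % self._objectName)
        super().append(item)

    def extend(self, items):
        for item in items:    # route through append() so every element is validated
            self.append(item)


class Peer:
    """Illustrative stand-in for LocalPeer/RemotePeer."""


class Stage:
    """Illustrative holder showing the save/restore property idiom."""

    def __init__(self):
        self._localPeers = None

    def _setLocalPeers(self, value):
        if value is None:
            self._localPeers = None
        else:
            saved = self._localPeers          # remember the old list
            try:
                self._localPeers = TypedList(Peer, "Peer")
                self._localPeers.extend(value)
            except Exception:
                self._localPeers = saved      # roll back, then re-raise
                raise

    localPeers = property(lambda self: self._localPeers, _setLocalPeers)
```

Assigning a list containing a non-`Peer` raises `ValueError` and leaves the previously assigned list untouched, which is exactly the behavior the real setters aim for.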
""" return self._remotePeers targetDir = property(_getTargetDir, _setTargetDir, None, "Directory to stage files into, by peer name.") localPeers = property(_getLocalPeers, _setLocalPeers, None, "List of local peers.") remotePeers = property(_getRemotePeers, _setRemotePeers, None, "List of remote peers.") ######################################################################## # StoreConfig class definition ######################################################################## @total_ordering class StoreConfig(object): """ Class representing a Cedar Backup store configuration. The following restrictions exist on data in this class: - The source directory must be an absolute path. - The media type must be one of the values in L{VALID_MEDIA_TYPES}. - The device type must be one of the values in L{VALID_DEVICE_TYPES}. - The device path must be an absolute path. - The SCSI id, if provided, must be in the form specified by L{validateScsiId}. - The drive speed must be an integer >= 1 - The blanking behavior must be a C{BlankBehavior} object - The refresh media delay must be an integer >= 0 - The eject delay must be an integer >= 0 Note that although the blanking factor must be a positive floating point number, it is stored as a string. This is done so that we can losslessly go back and forth between XML and object representations of configuration. @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, sourceDir, mediaType, deviceType, devicePath, deviceScsiId, driveSpeed, checkData, checkMedia, warnMidnite, noEject, blankBehavior, refreshMediaDelay, ejectDelay """ def __init__(self, sourceDir=None, mediaType=None, deviceType=None, devicePath=None, deviceScsiId=None, driveSpeed=None, checkData=False, warnMidnite=False, noEject=False, checkMedia=False, blankBehavior=None, refreshMediaDelay=None, ejectDelay=None): """ Constructor for the C{StoreConfig} class. @param sourceDir: Directory whose contents should be written to media. 
@param mediaType: Type of the media (see notes above). @param deviceType: Type of the device (optional, see notes above). @param devicePath: Filesystem device name for writer device, i.e. C{/dev/cdrw}. @param deviceScsiId: SCSI id for writer device, i.e. C{[:]scsibus,target,lun}. @param driveSpeed: Speed of the drive, i.e. C{2} for 2x drive, etc. @param checkData: Whether resulting image should be validated. @param checkMedia: Whether media should be checked before being written to. @param warnMidnite: Whether to generate warnings for crossing midnite. @param noEject: Indicates that the writer device should not be ejected. @param blankBehavior: Controls optimized blanking behavior. @param refreshMediaDelay: Delay, in seconds, to add after refreshing media @param ejectDelay: Delay, in seconds, to add after ejecting media before closing the tray @raise ValueError: If one of the values is invalid. """ self._sourceDir = None self._mediaType = None self._deviceType = None self._devicePath = None self._deviceScsiId = None self._driveSpeed = None self._checkData = None self._checkMedia = None self._warnMidnite = None self._noEject = None self._blankBehavior = None self._refreshMediaDelay = None self._ejectDelay = None self.sourceDir = sourceDir self.mediaType = mediaType self.deviceType = deviceType self.devicePath = devicePath self.deviceScsiId = deviceScsiId self.driveSpeed = driveSpeed self.checkData = checkData self.checkMedia = checkMedia self.warnMidnite = warnMidnite self.noEject = noEject self.blankBehavior = blankBehavior self.refreshMediaDelay = refreshMediaDelay self.ejectDelay = ejectDelay def __repr__(self): """ Official string representation for class instance. 
""" return "StoreConfig(%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)" % ( self.sourceDir, self.mediaType, self.deviceType, self.devicePath, self.deviceScsiId, self.driveSpeed, self.checkData, self.warnMidnite, self.noEject, self.checkMedia, self.blankBehavior, self.refreshMediaDelay, self.ejectDelay) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __eq__(self, other): """Equals operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
""" if other is None: return 1 if self.sourceDir != other.sourceDir: if str(self.sourceDir or "") < str(other.sourceDir or ""): return -1 else: return 1 if self.mediaType != other.mediaType: if str(self.mediaType or "") < str(other.mediaType or ""): return -1 else: return 1 if self.deviceType != other.deviceType: if str(self.deviceType or "") < str(other.deviceType or ""): return -1 else: return 1 if self.devicePath != other.devicePath: if str(self.devicePath or "") < str(other.devicePath or ""): return -1 else: return 1 if self.deviceScsiId != other.deviceScsiId: if str(self.deviceScsiId or "") < str(other.deviceScsiId or ""): return -1 else: return 1 if self.driveSpeed != other.driveSpeed: if str(self.driveSpeed or "") < str(other.driveSpeed or ""): return -1 else: return 1 if self.checkData != other.checkData: if self.checkData < other.checkData: return -1 else: return 1 if self.checkMedia != other.checkMedia: if self.checkMedia < other.checkMedia: return -1 else: return 1 if self.warnMidnite != other.warnMidnite: if self.warnMidnite < other.warnMidnite: return -1 else: return 1 if self.noEject != other.noEject: if self.noEject < other.noEject: return -1 else: return 1 if self.blankBehavior != other.blankBehavior: if str(self.blankBehavior or "") < str(other.blankBehavior or ""): return -1 else: return 1 if self.refreshMediaDelay != other.refreshMediaDelay: if int(self.refreshMediaDelay or 0) < int(other.refreshMediaDelay or 0): return -1 else: return 1 if self.ejectDelay != other.ejectDelay: if int(self.ejectDelay or 0) < int(other.ejectDelay or 0): return -1 else: return 1 return 0 def _setSourceDir(self, value): """ Property target used to set the source directory. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. 
""" if value is not None: if not os.path.isabs(value): raise ValueError("Source directory must be an absolute path.") self._sourceDir = encodePath(value) def _getSourceDir(self): """ Property target used to get the source directory. """ return self._sourceDir def _setMediaType(self, value): """ Property target used to set the media type. The value must be one of L{VALID_MEDIA_TYPES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_MEDIA_TYPES: raise ValueError("Media type must be one of %s." % VALID_MEDIA_TYPES) self._mediaType = value def _getMediaType(self): """ Property target used to get the media type. """ return self._mediaType def _setDeviceType(self, value): """ Property target used to set the device type. The value must be one of L{VALID_DEVICE_TYPES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_DEVICE_TYPES: raise ValueError("Device type must be one of %s." % VALID_DEVICE_TYPES) self._deviceType = value def _getDeviceType(self): """ Property target used to get the device type. """ return self._deviceType def _setDevicePath(self, value): """ Property target used to set the device path. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Device path must be an absolute path.") self._devicePath = encodePath(value) def _getDevicePath(self): """ Property target used to get the device path. """ return self._devicePath def _setDeviceScsiId(self, value): """ Property target used to set the SCSI id The SCSI id must be valid per L{validateScsiId}. @raise ValueError: If the value is not valid. 
""" if value is None: self._deviceScsiId = None else: self._deviceScsiId = validateScsiId(value) def _getDeviceScsiId(self): """ Property target used to get the SCSI id. """ return self._deviceScsiId def _setDriveSpeed(self, value): """ Property target used to set the drive speed. The drive speed must be valid per L{validateDriveSpeed}. @raise ValueError: If the value is not valid. """ self._driveSpeed = validateDriveSpeed(value) def _getDriveSpeed(self): """ Property target used to get the drive speed. """ return self._driveSpeed def _setCheckData(self, value): """ Property target used to set the check data flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._checkData = True else: self._checkData = False def _getCheckData(self): """ Property target used to get the check data flag. """ return self._checkData def _setCheckMedia(self, value): """ Property target used to set the check media flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._checkMedia = True else: self._checkMedia = False def _getCheckMedia(self): """ Property target used to get the check media flag. """ return self._checkMedia def _setWarnMidnite(self, value): """ Property target used to set the midnite warning flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._warnMidnite = True else: self._warnMidnite = False def _getWarnMidnite(self): """ Property target used to get the midnite warning flag. """ return self._warnMidnite def _setNoEject(self, value): """ Property target used to set the no-eject flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._noEject = True else: self._noEject = False def _getNoEject(self): """ Property target used to get the no-eject flag. """ return self._noEject def _setBlankBehavior(self, value): """ Property target used to set blanking behavior configuration. 
If not C{None}, the value must be a C{BlankBehavior} object. @raise ValueError: If the value is not a C{BlankBehavior} """ if value is None: self._blankBehavior = None else: if not isinstance(value, BlankBehavior): raise ValueError("Value must be a C{BlankBehavior} object.") self._blankBehavior = value def _getBlankBehavior(self): """ Property target used to get the blanking behavior configuration. """ return self._blankBehavior def _setRefreshMediaDelay(self, value): """ Property target used to set the refreshMediaDelay. The value must be an integer >= 0. @raise ValueError: If the value is not valid. """ if value is None: self._refreshMediaDelay = None else: try: value = int(value) except (TypeError, ValueError): raise ValueError("Action refreshMediaDelay value must be an integer >= 0.") if value < 0: raise ValueError("Action refreshMediaDelay value must be an integer >= 0.") if value == 0: value = None # normalize this out, since it's the default self._refreshMediaDelay = value def _getRefreshMediaDelay(self): """ Property target used to get the action refreshMediaDelay. """ return self._refreshMediaDelay def _setEjectDelay(self, value): """ Property target used to set the ejectDelay. The value must be an integer >= 0. @raise ValueError: If the value is not valid. """ if value is None: self._ejectDelay = None else: try: value = int(value) except (TypeError, ValueError): raise ValueError("Action ejectDelay value must be an integer >= 0.") if value < 0: raise ValueError("Action ejectDelay value must be an integer >= 0.") if value == 0: value = None # normalize this out, since it's the default self._ejectDelay = value def _getEjectDelay(self): """ Property target used to get the action ejectDelay. 
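Both delay setters share one validate-and-normalize shape: coerce to `int`, reject negatives, and collapse `0` back to `None` because zero is the default. A standalone sketch of that shape (`validateDelay` is a hypothetical helper written for illustration, not a function in this module):

```python
def validateDelay(value, name="delay"):
    """Return a positive integer delay, or None for 0/None.

    Sketch of the refreshMediaDelay/ejectDelay setter logic: coercion
    failures and negative values both surface as ValueError.
    """
    if value is None:
        return None
    try:
        value = int(value)   # accepts ints and numeric strings like "5"
    except (TypeError, ValueError):
        raise ValueError("%s value must be an integer >= 0." % name)
    if value < 0:
        raise ValueError("%s value must be an integer >= 0." % name)
    return value if value != 0 else None   # 0 is the default, so normalize it out
```

Note that catching both `TypeError` and `ValueError` matters: `int(None)` raises the former while `int("abc")` raises the latter, and both should produce the setter's own error message.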
""" return self._ejectDelay sourceDir = property(_getSourceDir, _setSourceDir, None, "Directory whose contents should be written to media.") mediaType = property(_getMediaType, _setMediaType, None, "Type of the media (see notes above).") deviceType = property(_getDeviceType, _setDeviceType, None, "Type of the device (optional, see notes above).") devicePath = property(_getDevicePath, _setDevicePath, None, "Filesystem device name for writer device.") deviceScsiId = property(_getDeviceScsiId, _setDeviceScsiId, None, "SCSI id for writer device (optional, see notes above).") driveSpeed = property(_getDriveSpeed, _setDriveSpeed, None, "Speed of the drive.") checkData = property(_getCheckData, _setCheckData, None, "Whether resulting image should be validated.") checkMedia = property(_getCheckMedia, _setCheckMedia, None, "Whether media should be checked before being written to.") warnMidnite = property(_getWarnMidnite, _setWarnMidnite, None, "Whether to generate warnings for crossing midnite.") noEject = property(_getNoEject, _setNoEject, None, "Indicates that the writer device should not be ejected.") blankBehavior = property(_getBlankBehavior, _setBlankBehavior, None, "Controls optimized blanking behavior.") refreshMediaDelay = property(_getRefreshMediaDelay, _setRefreshMediaDelay, None, "Delay, in seconds, to add after refreshing media.") ejectDelay = property(_getEjectDelay, _setEjectDelay, None, "Delay, in seconds, to add after ejecting media before closing the tray") ######################################################################## # PurgeConfig class definition ######################################################################## @total_ordering class PurgeConfig(object): """ Class representing a Cedar Backup purge configuration. The following restrictions exist on data in this class: - The purge directory list must be a list of C{PurgeDir} objects. 
For the C{purgeDirs} list, validation is accomplished through the L{util.ObjectTypeList} list implementation that overrides common list methods and transparently ensures that each element is a C{PurgeDir}. @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, purgeDirs """ def __init__(self, purgeDirs=None): """ Constructor for the C{PurgeConfig} class. @param purgeDirs: List of purge directories. @raise ValueError: If one of the values is invalid. """ self._purgeDirs = None self.purgeDirs = purgeDirs def __repr__(self): """ Official string representation for class instance. """ return "PurgeConfig(%s)" % self.purgeDirs def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __eq__(self, other): """Equals operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.purgeDirs != other.purgeDirs: if self.purgeDirs < other.purgeDirs: return -1 else: return 1 return 0 def _setPurgeDirs(self, value): """ Property target used to set the purge dirs list. Either the value must be C{None} or each element must be a C{PurgeDir}. 
@raise ValueError: If the value is not a C{PurgeDir} """ if value is None: self._purgeDirs = None else: try: saved = self._purgeDirs self._purgeDirs = ObjectTypeList(PurgeDir, "PurgeDir") self._purgeDirs.extend(value) except Exception as e: self._purgeDirs = saved raise e def _getPurgeDirs(self): """ Property target used to get the purge dirs list. """ return self._purgeDirs purgeDirs = property(_getPurgeDirs, _setPurgeDirs, None, "List of directories to purge.") ######################################################################## # Config class definition ######################################################################## @total_ordering class Config(object): ###################### # Class documentation ###################### """ Class representing a Cedar Backup XML configuration document. The C{Config} class is a Python object representation of a Cedar Backup XML configuration file. It is intended to be the only Python-language interface to Cedar Backup configuration on disk for both Cedar Backup itself and for external applications. The object representation is two-way: XML data can be used to create a C{Config} object, and then changes to the object can be propagated back to disk. A C{Config} object can even be used to create a configuration file from scratch programmatically. This class and the classes it is composed from often use Python's C{property} construct to validate input and limit access to values. Some validations can only be done once a document is considered "complete" (see module notes for more details). Assignments to the various instance variables must match the expected type, i.e. C{reference} must be a C{ReferenceConfig}. The internal check uses the built-in C{isinstance} function, so it should be OK to use subclasses if you want to. If an instance variable is not set, its value will be C{None}. When an object is initialized without using an XML document, all of the values will be C{None}. 
Even when an object is initialized using XML, some of the values might be C{None} because not every section is required. @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, extractXml, validate, reference, extensions, options, collect, stage, store, purge, _getReference, _setReference, _getExtensions, _setExtensions, _getOptions, _setOptions, _getPeers, _setPeers, _getCollect, _setCollect, _getStage, _setStage, _getStore, _setStore, _getPurge, _setPurge """ ############## # Constructor ############## def __init__(self, xmlData=None, xmlPath=None, validate=True): """ Initializes a configuration object. If you initialize the object without passing either C{xmlData} or C{xmlPath}, then configuration will be empty and will be invalid until it is filled in properly. No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded. Unless the C{validate} argument is C{False}, the L{Config.validate} method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if C{validate} is C{False}, it might not be possible to parse the passed-in XML document if lower-level validations fail. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to read in invalid configuration from disk. @param xmlData: XML data representing configuration. @type xmlData: String data. @param xmlPath: Path to an XML file on disk. @type xmlPath: Absolute path to a file on disk. @param validate: Validate the document after parsing it. @type validate: Boolean true/false. @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. 
@raise ValueError: If the parsed configuration document is not valid. """ self._reference = None self._extensions = None self._options = None self._peers = None self._collect = None self._stage = None self._store = None self._purge = None self.reference = None self.extensions = None self.options = None self.peers = None self.collect = None self.stage = None self.store = None self.purge = None if xmlData is not None and xmlPath is not None: raise ValueError("Use either xmlData or xmlPath, but not both.") if xmlData is not None: self._parseXmlData(xmlData) if validate: self.validate() elif xmlPath is not None: with open(xmlPath) as f: xmlData = f.read() self._parseXmlData(xmlData) if validate: self.validate() ######################### # String representations ######################### def __repr__(self): """ Official string representation for class instance. """ return "Config(%s, %s, %s, %s, %s, %s, %s, %s)" % (self.reference, self.extensions, self.options, self.peers, self.collect, self.stage, self.store, self.purge) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() ############################# # Standard comparison method ############################# def __eq__(self, other): """Equals operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) == 0 def __lt__(self, other): """Less-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) < 0 def __gt__(self, other): """Greater-than operator, implemented in terms of original Python 2 compare operator.""" return self.__cmp__(other) > 0 def __cmp__(self, other): """ Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
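The constructor's either-or handling of `xmlData` and `xmlPath` is a small but reusable shape: reject both sources at once, read the file into a string when a path is given, then parse and optionally validate. A standalone sketch of that logic (`loadConfig`, `parse`, and `check` are hypothetical stand-ins for `Config.__init__`, `_parseXmlData`, and `Config.validate`):

```python
def loadConfig(xmlData=None, xmlPath=None, validate=True,
               parse=lambda data: {"xml": data},
               check=lambda cfg: None):
    """Sketch of Config.__init__'s source-selection logic.

    parse/check are injectable stand-ins so the shape is testable in
    isolation; the real class parses into itself and calls self.validate().
    """
    if xmlData is not None and xmlPath is not None:
        raise ValueError("Use either xmlData or xmlPath, but not both.")
    if xmlPath is not None:
        with open(xmlPath) as f:   # read the file, then treat it like string input
            xmlData = f.read()
    config = None
    if xmlData is not None:
        config = parse(xmlData)
        if validate:
            check(config)
    return config
```

With neither source supplied, the result is an empty (here `None`) configuration that is invalid until filled in, mirroring the docstring above.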
""" if other is None: return 1 if self.reference != other.reference: if self.reference < other.reference: return -1 else: return 1 if self.extensions != other.extensions: if self.extensions < other.extensions: return -1 else: return 1 if self.options != other.options: if self.options < other.options: return -1 else: return 1 if self.peers != other.peers: if self.peers < other.peers: return -1 else: return 1 if self.collect != other.collect: if self.collect < other.collect: return -1 else: return 1 if self.stage != other.stage: if self.stage < other.stage: return -1 else: return 1 if self.store != other.store: if self.store < other.store: return -1 else: return 1 if self.purge != other.purge: if self.purge < other.purge: return -1 else: return 1 return 0 ############# # Properties ############# def _setReference(self, value): """ Property target used to set the reference configuration value. If not C{None}, the value must be a C{ReferenceConfig} object. @raise ValueError: If the value is not a C{ReferenceConfig} """ if value is None: self._reference = None else: if not isinstance(value, ReferenceConfig): raise ValueError("Value must be a C{ReferenceConfig} object.") self._reference = value def _getReference(self): """ Property target used to get the reference configuration value. """ return self._reference def _setExtensions(self, value): """ Property target used to set the extensions configuration value. If not C{None}, the value must be a C{ExtensionsConfig} object. @raise ValueError: If the value is not a C{ExtensionsConfig} """ if value is None: self._extensions = None else: if not isinstance(value, ExtensionsConfig): raise ValueError("Value must be a C{ExtensionsConfig} object.") self._extensions = value def _getExtensions(self): """ Property target used to get the extensions configuration value. """ return self._extensions def _setOptions(self, value): """ Property target used to set the options configuration value. 
If not C{None}, the value must be an C{OptionsConfig} object. @raise ValueError: If the value is not an C{OptionsConfig} """ if value is None: self._options = None else: if not isinstance(value, OptionsConfig): raise ValueError("Value must be an C{OptionsConfig} object.") self._options = value def _getOptions(self): """ Property target used to get the options configuration value. """ return self._options def _setPeers(self, value): """ Property target used to set the peers configuration value. If not C{None}, the value must be a C{PeersConfig} object. @raise ValueError: If the value is not a C{PeersConfig} """ if value is None: self._peers = None else: if not isinstance(value, PeersConfig): raise ValueError("Value must be a C{PeersConfig} object.") self._peers = value def _getPeers(self): """ Property target used to get the peers configuration value. """ return self._peers def _setCollect(self, value): """ Property target used to set the collect configuration value. If not C{None}, the value must be a C{CollectConfig} object. @raise ValueError: If the value is not a C{CollectConfig} """ if value is None: self._collect = None else: if not isinstance(value, CollectConfig): raise ValueError("Value must be a C{CollectConfig} object.") self._collect = value def _getCollect(self): """ Property target used to get the collect configuration value. """ return self._collect def _setStage(self, value): """ Property target used to set the stage configuration value. If not C{None}, the value must be a C{StageConfig} object. @raise ValueError: If the value is not a C{StageConfig} """ if value is None: self._stage = None else: if not isinstance(value, StageConfig): raise ValueError("Value must be a C{StageConfig} object.") self._stage = value def _getStage(self): """ Property target used to get the stage configuration value. """ return self._stage def _setStore(self, value): """ Property target used to set the store configuration value. 
If not C{None}, the value must be a C{StoreConfig} object. @raise ValueError: If the value is not a C{StoreConfig} """ if value is None: self._store = None else: if not isinstance(value, StoreConfig): raise ValueError("Value must be a C{StoreConfig} object.") self._store = value def _getStore(self): """ Property target used to get the store configuration value. """ return self._store def _setPurge(self, value): """ Property target used to set the purge configuration value. If not C{None}, the value must be a C{PurgeConfig} object. @raise ValueError: If the value is not a C{PurgeConfig} """ if value is None: self._purge = None else: if not isinstance(value, PurgeConfig): raise ValueError("Value must be a C{PurgeConfig} object.") self._purge = value def _getPurge(self): """ Property target used to get the purge configuration value. """ return self._purge reference = property(_getReference, _setReference, None, "Reference configuration in terms of a C{ReferenceConfig} object.") extensions = property(_getExtensions, _setExtensions, None, "Extensions configuration in terms of a C{ExtensionsConfig} object.") options = property(_getOptions, _setOptions, None, "Options configuration in terms of a C{OptionsConfig} object.") peers = property(_getPeers, _setPeers, None, "Peers configuration in terms of a C{PeersConfig} object.") collect = property(_getCollect, _setCollect, None, "Collect configuration in terms of a C{CollectConfig} object.") stage = property(_getStage, _setStage, None, "Stage configuration in terms of a C{StageConfig} object.") store = property(_getStore, _setStore, None, "Store configuration in terms of a C{StoreConfig} object.") purge = property(_getPurge, _setPurge, None, "Purge configuration in terms of a C{PurgeConfig} object.") ################# # Public methods ################# def extractXml(self, xmlPath=None, validate=True): """ Extracts configuration into an XML document. 
If C{xmlPath} is not provided, then the XML document will be returned as a string. If C{xmlPath} is provided, then the XML document will be written to the file and C{None} will be returned. Unless the C{validate} parameter is C{False}, the L{Config.validate} method will be called (with its default arguments) against the configuration before extracting the XML. If configuration is not valid, then an XML document will not be extracted. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to write an invalid configuration file to disk. @param xmlPath: Path to an XML file to create on disk. @type xmlPath: Absolute path to a file. @param validate: Validate the document before extracting it. @type validate: Boolean true/false. @return: XML string data or C{None} as described above. @raise ValueError: If configuration within the object is not valid. @raise IOError: If there is an error writing to the file. @raise OSError: If there is an error writing to the file. """ if validate: self.validate() xmlData = self._extractXml() if xmlPath is not None: with open(xmlPath, "w") as f: f.write(xmlData) return None else: return xmlData def validate(self, requireOneAction=True, requireReference=False, requireExtensions=False, requireOptions=True, requireCollect=False, requireStage=False, requireStore=False, requirePurge=False, requirePeers=False): """ Validates configuration represented by the object. This method encapsulates all of the validations that should apply to a fully "complete" document but are not already taken care of by earlier validations. It also provides some extra convenience functionality which might be useful to some people. The process of validation is laid out in the I{Validation} section in the class notes (above). @param requireOneAction: Require at least one of the collect, stage, store or purge sections. @param requireReference: Require the reference section. 
@param requireExtensions: Require the extensions section. @param requireOptions: Require the options section. @param requirePeers: Require the peers section. @param requireCollect: Require the collect section. @param requireStage: Require the stage section. @param requireStore: Require the store section. @param requirePurge: Require the purge section. @raise ValueError: If one of the validations fails. """ if requireOneAction and (self.collect, self.stage, self.store, self.purge) == (None, None, None, None): raise ValueError("At least one of the collect, stage, store and purge sections is required.") if requireReference and self.reference is None: raise ValueError("The reference section is required.") if requireExtensions and self.extensions is None: raise ValueError("The extensions section is required.") if requireOptions and self.options is None: raise ValueError("The options section is required.") if requirePeers and self.peers is None: raise ValueError("The peers section is required.") if requireCollect and self.collect is None: raise ValueError("The collect section is required.") if requireStage and self.stage is None: raise ValueError("The stage section is required.") if requireStore and self.store is None: raise ValueError("The store section is required.") if requirePurge and self.purge is None: raise ValueError("The purge section is required.") self._validateContents() ##################################### # High-level methods for parsing XML ##################################### def _parseXmlData(self, xmlData): """ Internal method to parse an XML string into the object. This method parses the XML document into a DOM tree (C{xmlDom}) and then calls individual static methods to parse each of the individual configuration sections. Most of the validation we do here has to do with whether the document can be parsed and whether any values that exist are valid. 
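The validate() checks above all reduce to one shape: if a section is required and missing, raise `ValueError` with a section-specific message. A self-contained sketch of that shape (`requireSections` and the dict-based config are illustrative, not the real `Config` API):

```python
def requireSections(config, required,
                    actionSections=("collect", "stage", "store", "purge"),
                    requireOneAction=True):
    """Sketch of the validate() pattern.

    config is a dict mapping section name -> parsed object or None;
    required is the list of section names that must be present.
    """
    if requireOneAction and all(config.get(name) is None for name in actionSections):
        raise ValueError("At least one of the collect, stage, store and purge sections is required.")
    for name in required:
        if config.get(name) is None:
            raise ValueError("The %s section is required." % name)
```

The real method spells each check out explicitly (one flag and one `if` per section), which keeps the epydoc `@param` list and the code in one-to-one correspondence.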
We don't do much validation as to whether required elements actually exist unless we have to to make sense of the document (instead, that's the job of the L{validate} method). @param xmlData: XML data to be parsed @type xmlData: String data @raise ValueError: If the XML cannot be successfully parsed. """ (xmlDom, parentNode) = createInputDom(xmlData) self._reference = Config._parseReference(parentNode) self._extensions = Config._parseExtensions(parentNode) self._options = Config._parseOptions(parentNode) self._peers = Config._parsePeers(parentNode) self._collect = Config._parseCollect(parentNode) self._stage = Config._parseStage(parentNode) self._store = Config._parseStore(parentNode) self._purge = Config._parsePurge(parentNode) @staticmethod def _parseReference(parentNode): """ Parses a reference configuration section. We read the following fields:: author //cb_config/reference/author revision //cb_config/reference/revision description //cb_config/reference/description generator //cb_config/reference/generator @param parentNode: Parent node to search beneath. @return: C{ReferenceConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. """ reference = None sectionNode = readFirstChild(parentNode, "reference") if sectionNode is not None: reference = ReferenceConfig() reference.author = readString(sectionNode, "author") reference.revision = readString(sectionNode, "revision") reference.description = readString(sectionNode, "description") reference.generator = readString(sectionNode, "generator") return reference @staticmethod def _parseExtensions(parentNode): """ Parses an extensions configuration section. 
We read the following fields:: orderMode //cb_config/extensions/order_mode We also read groups of the following items, one list element per item:: name //cb_config/extensions/action/name module //cb_config/extensions/action/module function //cb_config/extensions/action/function index //cb_config/extensions/action/index dependencies //cb_config/extensions/action/depends The extended actions are parsed by L{_parseExtendedActions}. @param parentNode: Parent node to search beneath. @return: C{ExtensionsConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. """ extensions = None sectionNode = readFirstChild(parentNode, "extensions") if sectionNode is not None: extensions = ExtensionsConfig() extensions.orderMode = readString(sectionNode, "order_mode") extensions.actions = Config._parseExtendedActions(sectionNode) return extensions @staticmethod def _parseOptions(parentNode): """ Parses an options configuration section. We read the following fields:: startingDay //cb_config/options/starting_day workingDir //cb_config/options/working_dir backupUser //cb_config/options/backup_user backupGroup //cb_config/options/backup_group rcpCommand //cb_config/options/rcp_command rshCommand //cb_config/options/rsh_command cbackCommand //cb_config/options/cback_command managedActions //cb_config/options/managed_actions The list of managed actions is a comma-separated list of action names. We also read groups of the following items, one list element per item:: overrides //cb_config/options/override hooks //cb_config/options/hook The overrides are parsed by L{_parseOverrides} and the hooks are parsed by L{_parseHooks}. @param parentNode: Parent node to search beneath. @return: C{OptionsConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. 
""" options = None sectionNode = readFirstChild(parentNode, "options") if sectionNode is not None: options = OptionsConfig() options.startingDay = readString(sectionNode, "starting_day") options.workingDir = readString(sectionNode, "working_dir") options.backupUser = readString(sectionNode, "backup_user") options.backupGroup = readString(sectionNode, "backup_group") options.rcpCommand = readString(sectionNode, "rcp_command") options.rshCommand = readString(sectionNode, "rsh_command") options.cbackCommand = readString(sectionNode, "cback_command") options.overrides = Config._parseOverrides(sectionNode) options.hooks = Config._parseHooks(sectionNode) managedActions = readString(sectionNode, "managed_actions") options.managedActions = parseCommaSeparatedString(managedActions) return options @staticmethod def _parsePeers(parentNode): """ Parses a peers configuration section. We read groups of the following items, one list element per item:: localPeers //cb_config/stage/peer remotePeers //cb_config/stage/peer The individual peer entries are parsed by L{_parsePeerList}. @param parentNode: Parent node to search beneath. @return: C{StageConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. """ peers = None sectionNode = readFirstChild(parentNode, "peers") if sectionNode is not None: peers = PeersConfig() (peers.localPeers, peers.remotePeers) = Config._parsePeerList(sectionNode) return peers @staticmethod def _parseCollect(parentNode): """ Parses a collect configuration section. 
We read the following individual fields:: targetDir //cb_config/collect/collect_dir collectMode //cb_config/collect/collect_mode archiveMode //cb_config/collect/archive_mode ignoreFile //cb_config/collect/ignore_file We also read groups of the following items, one list element per item:: absoluteExcludePaths //cb_config/collect/exclude/abs_path excludePatterns //cb_config/collect/exclude/pattern collectFiles //cb_config/collect/file collectDirs //cb_config/collect/dir The exclusions are parsed by L{_parseExclusions}, the collect files are parsed by L{_parseCollectFiles}, and the directories are parsed by L{_parseCollectDirs}. @param parentNode: Parent node to search beneath. @return: C{CollectConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. """ collect = None sectionNode = readFirstChild(parentNode, "collect") if sectionNode is not None: collect = CollectConfig() collect.targetDir = readString(sectionNode, "collect_dir") collect.collectMode = readString(sectionNode, "collect_mode") collect.archiveMode = readString(sectionNode, "archive_mode") collect.ignoreFile = readString(sectionNode, "ignore_file") (collect.absoluteExcludePaths, unused, collect.excludePatterns) = Config._parseExclusions(sectionNode) collect.collectFiles = Config._parseCollectFiles(sectionNode) collect.collectDirs = Config._parseCollectDirs(sectionNode) return collect @staticmethod def _parseStage(parentNode): """ Parses a stage configuration section. We read the following individual fields:: targetDir //cb_config/stage/staging_dir We also read groups of the following items, one list element per item:: localPeers //cb_config/stage/peer remotePeers //cb_config/stage/peer The individual peer entries are parsed by L{_parsePeerList}. @param parentNode: Parent node to search beneath. @return: C{StageConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. 
""" stage = None sectionNode = readFirstChild(parentNode, "stage") if sectionNode is not None: stage = StageConfig() stage.targetDir = readString(sectionNode, "staging_dir") (stage.localPeers, stage.remotePeers) = Config._parsePeerList(sectionNode) return stage @staticmethod def _parseStore(parentNode): """ Parses a store configuration section. We read the following fields:: sourceDir //cb_config/store/source_dir mediaType //cb_config/store/media_type deviceType //cb_config/store/device_type devicePath //cb_config/store/target_device deviceScsiId //cb_config/store/target_scsi_id driveSpeed //cb_config/store/drive_speed checkData //cb_config/store/check_data checkMedia //cb_config/store/check_media warnMidnite //cb_config/store/warn_midnite noEject //cb_config/store/no_eject Blanking behavior configuration is parsed by the C{_parseBlankBehavior} method. @param parentNode: Parent node to search beneath. @return: C{StoreConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. 
""" store = None sectionNode = readFirstChild(parentNode, "store") if sectionNode is not None: store = StoreConfig() store.sourceDir = readString(sectionNode, "source_dir") store.mediaType = readString(sectionNode, "media_type") store.deviceType = readString(sectionNode, "device_type") store.devicePath = readString(sectionNode, "target_device") store.deviceScsiId = readString(sectionNode, "target_scsi_id") store.driveSpeed = readInteger(sectionNode, "drive_speed") store.checkData = readBoolean(sectionNode, "check_data") store.checkMedia = readBoolean(sectionNode, "check_media") store.warnMidnite = readBoolean(sectionNode, "warn_midnite") store.noEject = readBoolean(sectionNode, "no_eject") store.blankBehavior = Config._parseBlankBehavior(sectionNode) store.refreshMediaDelay = readInteger(sectionNode, "refresh_media_delay") store.ejectDelay = readInteger(sectionNode, "eject_delay") return store @staticmethod def _parsePurge(parentNode): """ Parses a purge configuration section. We read groups of the following items, one list element per item:: purgeDirs //cb_config/purge/dir The individual directory entries are parsed by L{_parsePurgeDirs}. @param parentNode: Parent node to search beneath. @return: C{PurgeConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. """ purge = None sectionNode = readFirstChild(parentNode, "purge") if sectionNode is not None: purge = PurgeConfig() purge.purgeDirs = Config._parsePurgeDirs(sectionNode) return purge @staticmethod def _parseExtendedActions(parentNode): """ Reads extended actions data from immediately beneath the parent. We read the following individual fields from each extended action:: name name module module function function index index dependencies depends Dependency information is parsed by the C{_parseDependencies} method. @param parentNode: Parent node to search beneath. @return: List of extended actions. 
@raise ValueError: If the data at the location can't be read """ lst = [] for entry in readChildren(parentNode, "action"): if isElement(entry): action = ExtendedAction() action.name = readString(entry, "name") action.module = readString(entry, "module") action.function = readString(entry, "function") action.index = readInteger(entry, "index") action.dependencies = Config._parseDependencies(entry) lst.append(action) if lst == []: lst = None return lst @staticmethod def _parseExclusions(parentNode): """ Reads exclusions data from immediately beneath the parent. We read groups of the following items, one list element per item:: absolute exclude/abs_path relative exclude/rel_path patterns exclude/pattern If there are none of some pattern (i.e. no relative path items) then C{None} will be returned for that item in the tuple. This method can be used to parse exclusions on both the collect configuration level and on the collect directory level within collect configuration. @param parentNode: Parent node to search beneath. @return: Tuple of (absolute, relative, patterns) exclusions. """ sectionNode = readFirstChild(parentNode, "exclude") if sectionNode is None: return (None, None, None) else: absolute = readStringList(sectionNode, "abs_path") relative = readStringList(sectionNode, "rel_path") patterns = readStringList(sectionNode, "pattern") return (absolute, relative, patterns) @staticmethod def _parseOverrides(parentNode): """ Reads a list of C{CommandOverride} objects from immediately beneath the parent. We read the following individual fields:: command command absolutePath abs_path @param parentNode: Parent node to search beneath. @return: List of C{CommandOverride} objects or C{None} if none are found. @raise ValueError: If some filled-in value is invalid. 
""" lst = [] for entry in readChildren(parentNode, "override"): if isElement(entry): override = CommandOverride() override.command = readString(entry, "command") override.absolutePath = readString(entry, "abs_path") lst.append(override) if lst == []: lst = None return lst @staticmethod #pylint: disable=R0204 def _parseHooks(parentNode): """ Reads a list of C{ActionHook} objects from immediately beneath the parent. We read the following individual fields:: action action command command @param parentNode: Parent node to search beneath. @return: List of C{ActionHook} objects or C{None} if none are found. @raise ValueError: If some filled-in value is invalid. """ lst = [] for entry in readChildren(parentNode, "pre_action_hook"): if isElement(entry): hook = PreActionHook() hook.action = readString(entry, "action") hook.command = readString(entry, "command") lst.append(hook) for entry in readChildren(parentNode, "post_action_hook"): if isElement(entry): hook = PostActionHook() hook.action = readString(entry, "action") hook.command = readString(entry, "command") lst.append(hook) if lst == []: lst = None return lst @staticmethod def _parseCollectFiles(parentNode): """ Reads a list of C{CollectFile} objects from immediately beneath the parent. We read the following individual fields:: absolutePath abs_path collectMode mode I{or} collect_mode archiveMode archive_mode The collect mode is a special case. Just a C{mode} tag is accepted, but we prefer C{collect_mode} for consistency with the rest of the config file and to avoid confusion with the archive mode. If both are provided, only C{mode} will be used. @param parentNode: Parent node to search beneath. @return: List of C{CollectFile} objects or C{None} if none are found. @raise ValueError: If some filled-in value is invalid. 
""" lst = [] for entry in readChildren(parentNode, "file"): if isElement(entry): cfile = CollectFile() cfile.absolutePath = readString(entry, "abs_path") cfile.collectMode = readString(entry, "mode") if cfile.collectMode is None: cfile.collectMode = readString(entry, "collect_mode") cfile.archiveMode = readString(entry, "archive_mode") lst.append(cfile) if lst == []: lst = None return lst @staticmethod def _parseCollectDirs(parentNode): """ Reads a list of C{CollectDir} objects from immediately beneath the parent. We read the following individual fields:: absolutePath abs_path collectMode mode I{or} collect_mode archiveMode archive_mode ignoreFile ignore_file linkDepth link_depth dereference dereference recursionLevel recursion_level The collect mode is a special case. Just a C{mode} tag is accepted for backwards compatibility, but we prefer C{collect_mode} for consistency with the rest of the config file and to avoid confusion with the archive mode. If both are provided, only C{mode} will be used. We also read groups of the following items, one list element per item:: absoluteExcludePaths exclude/abs_path relativeExcludePaths exclude/rel_path excludePatterns exclude/pattern The exclusions are parsed by L{_parseExclusions}. @param parentNode: Parent node to search beneath. @return: List of C{CollectDir} objects or C{None} if none are found. @raise ValueError: If some filled-in value is invalid. 
""" lst = [] for entry in readChildren(parentNode, "dir"): if isElement(entry): cdir = CollectDir() cdir.absolutePath = readString(entry, "abs_path") cdir.collectMode = readString(entry, "mode") if cdir.collectMode is None: cdir.collectMode = readString(entry, "collect_mode") cdir.archiveMode = readString(entry, "archive_mode") cdir.ignoreFile = readString(entry, "ignore_file") cdir.linkDepth = readInteger(entry, "link_depth") cdir.dereference = readBoolean(entry, "dereference") cdir.recursionLevel = readInteger(entry, "recursion_level") (cdir.absoluteExcludePaths, cdir.relativeExcludePaths, cdir.excludePatterns) = Config._parseExclusions(entry) lst.append(cdir) if lst == []: lst = None return lst @staticmethod def _parsePurgeDirs(parentNode): """ Reads a list of C{PurgeDir} objects from immediately beneath the parent. We read the following individual fields:: absolutePath /abs_path retainDays /retain_days @param parentNode: Parent node to search beneath. @return: List of C{PurgeDir} objects or C{None} if none are found. @raise ValueError: If the data at the location can't be read """ lst = [] for entry in readChildren(parentNode, "dir"): if isElement(entry): cdir = PurgeDir() cdir.absolutePath = readString(entry, "abs_path") cdir.retainDays = readInteger(entry, "retain_days") lst.append(cdir) if lst == []: lst = None return lst @staticmethod def _parsePeerList(parentNode): """ Reads remote and local peer data from immediately beneath the parent. We read the following individual fields for both remote and local peers:: name name collectDir collect_dir We also read the following individual fields for remote peers only:: remoteUser backup_user rcpCommand rcp_command rshCommand rsh_command cbackCommand cback_command managed managed managedActions managed_actions Additionally, the value in the C{type} field is used to determine whether this entry is a remote peer. If the type is C{"remote"}, it's a remote peer, and if the type is C{"local"}, it's a remote peer. 
If there are none of one type of peer (i.e. no local peers) then C{None} will be returned for that item in the tuple. @param parentNode: Parent node to search beneath. @return: Tuple of (local, remote) peer lists. @raise ValueError: If the data at the location can't be read """ localPeers = [] remotePeers = [] for entry in readChildren(parentNode, "peer"): if isElement(entry): peerType = readString(entry, "type") if peerType == "local": localPeer = LocalPeer() localPeer.name = readString(entry, "name") localPeer.collectDir = readString(entry, "collect_dir") localPeer.ignoreFailureMode = readString(entry, "ignore_failures") localPeers.append(localPeer) elif peerType == "remote": remotePeer = RemotePeer() remotePeer.name = readString(entry, "name") remotePeer.collectDir = readString(entry, "collect_dir") remotePeer.remoteUser = readString(entry, "backup_user") remotePeer.rcpCommand = readString(entry, "rcp_command") remotePeer.rshCommand = readString(entry, "rsh_command") remotePeer.cbackCommand = readString(entry, "cback_command") remotePeer.ignoreFailureMode = readString(entry, "ignore_failures") remotePeer.managed = readBoolean(entry, "managed") managedActions = readString(entry, "managed_actions") remotePeer.managedActions = parseCommaSeparatedString(managedActions) remotePeers.append(remotePeer) if localPeers == []: localPeers = None if remotePeers == []: remotePeers = None return (localPeers, remotePeers) @staticmethod def _parseDependencies(parentNode): """ Reads extended action dependency information from a parent node. We read the following individual fields:: runBefore depends/run_before runAfter depends/run_after Each of these fields is a comma-separated list of action names. The result is placed into an C{ActionDependencies} object. If the dependencies parent node does not exist, C{None} will be returned. Otherwise, an C{ActionDependencies} object will always be created, even if it does not contain any actual dependencies in it. 
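The comma-separated handling used for C{run_before} and C{run_after} can be sketched as follows (a hypothetical stand-in for C{parseCommaSeparatedString}, assuming it splits on commas, strips whitespace, and maps a missing field to C{None}; the real C{ActionDependencies} class is not reproduced here):

```python
# Hypothetical sketch of parsing depends/run_before and depends/run_after:
# each field is a comma-separated list of action names, and a missing or
# empty field yields None rather than an empty list.
def parse_comma_separated(value):
    if value is None:
        return None
    items = [item.strip() for item in value.split(",") if item.strip()]
    return items or None

before_list = parse_comma_separated("collect, stage")
after_list = parse_comma_separated(None)
```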
@param parentNode: Parent node to search beneath. @return: C{ActionDependencies} object or C{None}. @raise ValueError: If the data at the location can't be read """ sectionNode = readFirstChild(parentNode, "depends") if sectionNode is None: return None else: runBefore = readString(sectionNode, "run_before") runAfter = readString(sectionNode, "run_after") beforeList = parseCommaSeparatedString(runBefore) afterList = parseCommaSeparatedString(runAfter) return ActionDependencies(beforeList, afterList) @staticmethod def _parseBlankBehavior(parentNode): """ Reads a single C{BlankBehavior} object from immediately beneath the parent. We read the following individual fields:: blankMode blank_behavior/mode blankFactor blank_behavior/factor @param parentNode: Parent node to search beneath. @return: C{BlankBehavior} object or C{None} if the section is not found. @raise ValueError: If some filled-in value is invalid. """ blankBehavior = None sectionNode = readFirstChild(parentNode, "blank_behavior") if sectionNode is not None: blankBehavior = BlankBehavior() blankBehavior.blankMode = readString(sectionNode, "mode") blankBehavior.blankFactor = readString(sectionNode, "factor") return blankBehavior ######################################## # High-level methods for generating XML ######################################## def _extractXml(self): """ Internal method to extract configuration into an XML string. This method assumes that the internal L{validate} method has been called prior to extracting the XML, if the caller cares. No validation will be done internally. As a general rule, fields that are set to C{None} will be extracted into the document as empty tags. The same goes for container tags that are filled based on lists - if the list is empty or C{None}, the container tag will be empty. 
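The generation pattern used by the C{_addXxx} methods below can be illustrated with the standard library (a hypothetical sketch of the C{addContainerNode}/C{addStringNode} helpers, which live in Cedar Backup's xmlutil module; the empty-tag behavior for C{None} values is assumed from the docstring above):

```python
# Hypothetical sketch of the DOM-building helpers the _addXxx methods use:
# append a named container element beneath a parent, and append a string
# element whose None value produces an empty tag.
from xml.dom.minidom import getDOMImplementation

def add_container_node(dom, parent, name):
    """Append an empty container element beneath parent and return it."""
    node = dom.createElement(name)
    parent.appendChild(node)
    return node

def add_string_node(dom, parent, name, value):
    """Append a string element; a None value produces an empty tag."""
    node = add_container_node(dom, parent, name)
    if value is not None:
        node.appendChild(dom.createTextNode(value))
    return node

dom = getDOMImplementation().createDocument(None, "cb_config", None)
section = add_container_node(dom, dom.documentElement, "reference")
add_string_node(dom, section, "author", "pronovic")
add_string_node(dom, section, "revision", None)
xml_data = dom.documentElement.toxml()
```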
""" (xmlDom, parentNode) = createOutputDom() Config._addReference(xmlDom, parentNode, self.reference) Config._addExtensions(xmlDom, parentNode, self.extensions) Config._addOptions(xmlDom, parentNode, self.options) Config._addPeers(xmlDom, parentNode, self.peers) Config._addCollect(xmlDom, parentNode, self.collect) Config._addStage(xmlDom, parentNode, self.stage) Config._addStore(xmlDom, parentNode, self.store) Config._addPurge(xmlDom, parentNode, self.purge) xmlData = serializeDom(xmlDom) xmlDom.unlink() return xmlData @staticmethod def _addReference(xmlDom, parentNode, referenceConfig): """ Adds a configuration section as the next child of a parent. We add the following fields to the document:: author //cb_config/reference/author revision //cb_config/reference/revision description //cb_config/reference/description generator //cb_config/reference/generator If C{referenceConfig} is C{None}, then no container will be added. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param referenceConfig: Reference configuration section to be added to the document. """ if referenceConfig is not None: sectionNode = addContainerNode(xmlDom, parentNode, "reference") addStringNode(xmlDom, sectionNode, "author", referenceConfig.author) addStringNode(xmlDom, sectionNode, "revision", referenceConfig.revision) addStringNode(xmlDom, sectionNode, "description", referenceConfig.description) addStringNode(xmlDom, sectionNode, "generator", referenceConfig.generator) @staticmethod def _addExtensions(xmlDom, parentNode, extensionsConfig): """ Adds an configuration section as the next child of a parent. We add the following fields to the document:: order_mode //cb_config/extensions/order_mode We also add groups of the following items, one list element per item:: actions //cb_config/extensions/action The extended action entries are added by L{_addExtendedAction}. 
If C{extensionsConfig} is C{None}, then no container will be added. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param extensionsConfig: Extensions configuration section to be added to the document. """ if extensionsConfig is not None: sectionNode = addContainerNode(xmlDom, parentNode, "extensions") addStringNode(xmlDom, sectionNode, "order_mode", extensionsConfig.orderMode) if extensionsConfig.actions is not None: for action in extensionsConfig.actions: Config._addExtendedAction(xmlDom, sectionNode, action) @staticmethod def _addOptions(xmlDom, parentNode, optionsConfig): """ Adds a configuration section as the next child of a parent. We add the following fields to the document:: startingDay //cb_config/options/starting_day workingDir //cb_config/options/working_dir backupUser //cb_config/options/backup_user backupGroup //cb_config/options/backup_group rcpCommand //cb_config/options/rcp_command rshCommand //cb_config/options/rsh_command cbackCommand //cb_config/options/cback_command managedActions //cb_config/options/managed_actions We also add groups of the following items, one list element per item:: overrides //cb_config/options/override hooks //cb_config/options/pre_action_hook hooks //cb_config/options/post_action_hook The individual override items are added by L{_addOverride}. The individual hook items are added by L{_addHook}. If C{optionsConfig} is C{None}, then no container will be added. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param optionsConfig: Options configuration section to be added to the document. 
""" if optionsConfig is not None: sectionNode = addContainerNode(xmlDom, parentNode, "options") addStringNode(xmlDom, sectionNode, "starting_day", optionsConfig.startingDay) addStringNode(xmlDom, sectionNode, "working_dir", optionsConfig.workingDir) addStringNode(xmlDom, sectionNode, "backup_user", optionsConfig.backupUser) addStringNode(xmlDom, sectionNode, "backup_group", optionsConfig.backupGroup) addStringNode(xmlDom, sectionNode, "rcp_command", optionsConfig.rcpCommand) addStringNode(xmlDom, sectionNode, "rsh_command", optionsConfig.rshCommand) addStringNode(xmlDom, sectionNode, "cback_command", optionsConfig.cbackCommand) managedActions = Config._buildCommaSeparatedString(optionsConfig.managedActions) addStringNode(xmlDom, sectionNode, "managed_actions", managedActions) if optionsConfig.overrides is not None: for override in optionsConfig.overrides: Config._addOverride(xmlDom, sectionNode, override) if optionsConfig.hooks is not None: for hook in optionsConfig.hooks: Config._addHook(xmlDom, sectionNode, hook) @staticmethod def _addPeers(xmlDom, parentNode, peersConfig): """ Adds a configuration section as the next child of a parent. We add groups of the following items, one list element per item:: localPeers //cb_config/peers/peer remotePeers //cb_config/peers/peer The individual local and remote peer entries are added by L{_addLocalPeer} and L{_addRemotePeer}, respectively. If C{peersConfig} is C{None}, then no container will be added. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param peersConfig: Peers configuration section to be added to the document. 
""" if peersConfig is not None: sectionNode = addContainerNode(xmlDom, parentNode, "peers") if peersConfig.localPeers is not None: for localPeer in peersConfig.localPeers: Config._addLocalPeer(xmlDom, sectionNode, localPeer) if peersConfig.remotePeers is not None: for remotePeer in peersConfig.remotePeers: Config._addRemotePeer(xmlDom, sectionNode, remotePeer) @staticmethod def _addCollect(xmlDom, parentNode, collectConfig): """ Adds a configuration section as the next child of a parent. We add the following fields to the document:: targetDir //cb_config/collect/collect_dir collectMode //cb_config/collect/collect_mode archiveMode //cb_config/collect/archive_mode ignoreFile //cb_config/collect/ignore_file We also add groups of the following items, one list element per item:: absoluteExcludePaths //cb_config/collect/exclude/abs_path excludePatterns //cb_config/collect/exclude/pattern collectFiles //cb_config/collect/file collectDirs //cb_config/collect/dir The individual collect files are added by L{_addCollectFile} and individual collect directories are added by L{_addCollectDir}. If C{collectConfig} is C{None}, then no container will be added. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param collectConfig: Collect configuration section to be added to the document. 
""" if collectConfig is not None: sectionNode = addContainerNode(xmlDom, parentNode, "collect") addStringNode(xmlDom, sectionNode, "collect_dir", collectConfig.targetDir) addStringNode(xmlDom, sectionNode, "collect_mode", collectConfig.collectMode) addStringNode(xmlDom, sectionNode, "archive_mode", collectConfig.archiveMode) addStringNode(xmlDom, sectionNode, "ignore_file", collectConfig.ignoreFile) if ((collectConfig.absoluteExcludePaths is not None and collectConfig.absoluteExcludePaths != []) or (collectConfig.excludePatterns is not None and collectConfig.excludePatterns != [])): excludeNode = addContainerNode(xmlDom, sectionNode, "exclude") if collectConfig.absoluteExcludePaths is not None: for absolutePath in collectConfig.absoluteExcludePaths: addStringNode(xmlDom, excludeNode, "abs_path", absolutePath) if collectConfig.excludePatterns is not None: for pattern in collectConfig.excludePatterns: addStringNode(xmlDom, excludeNode, "pattern", pattern) if collectConfig.collectFiles is not None: for collectFile in collectConfig.collectFiles: Config._addCollectFile(xmlDom, sectionNode, collectFile) if collectConfig.collectDirs is not None: for collectDir in collectConfig.collectDirs: Config._addCollectDir(xmlDom, sectionNode, collectDir) @staticmethod def _addStage(xmlDom, parentNode, stageConfig): """ Adds a configuration section as the next child of a parent. We add the following fields to the document:: targetDir //cb_config/stage/staging_dir We also add groups of the following items, one list element per item:: localPeers //cb_config/stage/peer remotePeers //cb_config/stage/peer The individual local and remote peer entries are added by L{_addLocalPeer} and L{_addRemotePeer}, respectively. If C{stageConfig} is C{None}, then no container will be added. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param stageConfig: Stage configuration section to be added to the document. 
""" if stageConfig is not None: sectionNode = addContainerNode(xmlDom, parentNode, "stage") addStringNode(xmlDom, sectionNode, "staging_dir", stageConfig.targetDir) if stageConfig.localPeers is not None: for localPeer in stageConfig.localPeers: Config._addLocalPeer(xmlDom, sectionNode, localPeer) if stageConfig.remotePeers is not None: for remotePeer in stageConfig.remotePeers: Config._addRemotePeer(xmlDom, sectionNode, remotePeer) @staticmethod def _addStore(xmlDom, parentNode, storeConfig): """ Adds a configuration section as the next child of a parent. We add the following fields to the document:: sourceDir //cb_config/store/source_dir mediaType //cb_config/store/media_type deviceType //cb_config/store/device_type devicePath //cb_config/store/target_device deviceScsiId //cb_config/store/target_scsi_id driveSpeed //cb_config/store/drive_speed checkData //cb_config/store/check_data checkMedia //cb_config/store/check_media warnMidnite //cb_config/store/warn_midnite noEject //cb_config/store/no_eject refreshMediaDelay //cb_config/store/refresh_media_delay ejectDelay //cb_config/store/eject_delay Blanking behavior configuration is added by the L{_addBlankBehavior} method. If C{storeConfig} is C{None}, then no container will be added. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param storeConfig: Store configuration section to be added to the document. 
""" if storeConfig is not None: sectionNode = addContainerNode(xmlDom, parentNode, "store") addStringNode(xmlDom, sectionNode, "source_dir", storeConfig.sourceDir) addStringNode(xmlDom, sectionNode, "media_type", storeConfig.mediaType) addStringNode(xmlDom, sectionNode, "device_type", storeConfig.deviceType) addStringNode(xmlDom, sectionNode, "target_device", storeConfig.devicePath) addStringNode(xmlDom, sectionNode, "target_scsi_id", storeConfig.deviceScsiId) addIntegerNode(xmlDom, sectionNode, "drive_speed", storeConfig.driveSpeed) addBooleanNode(xmlDom, sectionNode, "check_data", storeConfig.checkData) addBooleanNode(xmlDom, sectionNode, "check_media", storeConfig.checkMedia) addBooleanNode(xmlDom, sectionNode, "warn_midnite", storeConfig.warnMidnite) addBooleanNode(xmlDom, sectionNode, "no_eject", storeConfig.noEject) addIntegerNode(xmlDom, sectionNode, "refresh_media_delay", storeConfig.refreshMediaDelay) addIntegerNode(xmlDom, sectionNode, "eject_delay", storeConfig.ejectDelay) Config._addBlankBehavior(xmlDom, sectionNode, storeConfig.blankBehavior) @staticmethod def _addPurge(xmlDom, parentNode, purgeConfig): """ Adds a configuration section as the next child of a parent. We add the following fields to the document:: purgeDirs //cb_config/purge/dir The individual directory entries are added by L{_addPurgeDir}. If C{purgeConfig} is C{None}, then no container will be added. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param purgeConfig: Purge configuration section to be added to the document. """ if purgeConfig is not None: sectionNode = addContainerNode(xmlDom, parentNode, "purge") if purgeConfig.purgeDirs is not None: for purgeDir in purgeConfig.purgeDirs: Config._addPurgeDir(xmlDom, sectionNode, purgeDir) @staticmethod def _addExtendedAction(xmlDom, parentNode, action): """ Adds an extended action container as the next child of a parent. 
We add the following fields to the document:: name action/name module action/module function action/function index action/index dependencies action/depends Dependencies are added by the L{_addDependencies} method. The node itself is created as the next child of the parent node. This method only adds one action node. The parent must loop for each action in the C{ExtensionsConfig} object. If C{action} is C{None}, this method call will be a no-op. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param action: Extended action to be added to the document. """ if action is not None: sectionNode = addContainerNode(xmlDom, parentNode, "action") addStringNode(xmlDom, sectionNode, "name", action.name) addStringNode(xmlDom, sectionNode, "module", action.module) addStringNode(xmlDom, sectionNode, "function", action.function) addIntegerNode(xmlDom, sectionNode, "index", action.index) Config._addDependencies(xmlDom, sectionNode, action.dependencies) @staticmethod def _addOverride(xmlDom, parentNode, override): """ Adds a command override container as the next child of a parent. We add the following fields to the document:: command override/command absolutePath override/abs_path The node itself is created as the next child of the parent node. This method only adds one override node. The parent must loop for each override in the C{OptionsConfig} object. If C{override} is C{None}, this method call will be a no-op. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param override: Command override to be added to the document.
""" if override is not None: sectionNode = addContainerNode(xmlDom, parentNode, "override") addStringNode(xmlDom, sectionNode, "command", override.command) addStringNode(xmlDom, sectionNode, "abs_path", override.absolutePath) @staticmethod def _addHook(xmlDom, parentNode, hook): """ Adds an action hook container as the next child of a parent. The behavior varies depending on the value of the C{before} and C{after} flags on the hook. If the C{before} flag is set, it's a pre-action hook, and we'll add the following fields:: action pre_action_hook/action command pre_action_hook/command If the C{after} flag is set, it's a post-action hook, and we'll add the following fields:: action post_action_hook/action command post_action_hook/command The node itself is created as the next child of the parent node. This method only adds one hook node. The parent must loop for each hook in the C{OptionsConfig} object. If C{hook} is C{None}, this method call will be a no-op. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param hook: Command hook to be added to the document. """ if hook is not None: if hook.before: sectionNode = addContainerNode(xmlDom, parentNode, "pre_action_hook") else: sectionNode = addContainerNode(xmlDom, parentNode, "post_action_hook") addStringNode(xmlDom, sectionNode, "action", hook.action) addStringNode(xmlDom, sectionNode, "command", hook.command) @staticmethod def _addCollectFile(xmlDom, parentNode, collectFile): """ Adds a collect file container as the next child of a parent. We add the following fields to the document:: absolutePath file/abs_path collectMode file/collect_mode archiveMode file/archive_mode Note that for consistency with collect directory handling we'll only emit the preferred C{collect_mode} tag. The node itself is created as the next child of the parent node. This method only adds one collect file node.
The parent must loop for each collect file in the C{CollectConfig} object. If C{collectFile} is C{None}, this method call will be a no-op. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param collectFile: Collect file to be added to the document. """ if collectFile is not None: sectionNode = addContainerNode(xmlDom, parentNode, "file") addStringNode(xmlDom, sectionNode, "abs_path", collectFile.absolutePath) addStringNode(xmlDom, sectionNode, "collect_mode", collectFile.collectMode) addStringNode(xmlDom, sectionNode, "archive_mode", collectFile.archiveMode) @staticmethod def _addCollectDir(xmlDom, parentNode, collectDir): """ Adds a collect directory container as the next child of a parent. We add the following fields to the document:: absolutePath dir/abs_path collectMode dir/collect_mode archiveMode dir/archive_mode ignoreFile dir/ignore_file linkDepth dir/link_depth dereference dir/dereference recursionLevel dir/recursion_level Note that an original XML document might have listed the collect mode using the C{mode} tag, since we accept both C{collect_mode} and C{mode}. However, here we'll only emit the preferred C{collect_mode} tag. We also add groups of the following items, one list element per item:: absoluteExcludePaths dir/exclude/abs_path relativeExcludePaths dir/exclude/rel_path excludePatterns dir/exclude/pattern The node itself is created as the next child of the parent node. This method only adds one collect directory node. The parent must loop for each collect directory in the C{CollectConfig} object. If C{collectDir} is C{None}, this method call will be a no-op. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param collectDir: Collect directory to be added to the document. 
""" if collectDir is not None: sectionNode = addContainerNode(xmlDom, parentNode, "dir") addStringNode(xmlDom, sectionNode, "abs_path", collectDir.absolutePath) addStringNode(xmlDom, sectionNode, "collect_mode", collectDir.collectMode) addStringNode(xmlDom, sectionNode, "archive_mode", collectDir.archiveMode) addStringNode(xmlDom, sectionNode, "ignore_file", collectDir.ignoreFile) addIntegerNode(xmlDom, sectionNode, "link_depth", collectDir.linkDepth) addBooleanNode(xmlDom, sectionNode, "dereference", collectDir.dereference) addIntegerNode(xmlDom, sectionNode, "recursion_level", collectDir.recursionLevel) if ((collectDir.absoluteExcludePaths is not None and collectDir.absoluteExcludePaths != []) or (collectDir.relativeExcludePaths is not None and collectDir.relativeExcludePaths != []) or (collectDir.excludePatterns is not None and collectDir.excludePatterns != [])): excludeNode = addContainerNode(xmlDom, sectionNode, "exclude") if collectDir.absoluteExcludePaths is not None: for absolutePath in collectDir.absoluteExcludePaths: addStringNode(xmlDom, excludeNode, "abs_path", absolutePath) if collectDir.relativeExcludePaths is not None: for relativePath in collectDir.relativeExcludePaths: addStringNode(xmlDom, excludeNode, "rel_path", relativePath) if collectDir.excludePatterns is not None: for pattern in collectDir.excludePatterns: addStringNode(xmlDom, excludeNode, "pattern", pattern) @staticmethod def _addLocalPeer(xmlDom, parentNode, localPeer): """ Adds a local peer container as the next child of a parent. We add the following fields to the document:: name peer/name collectDir peer/collect_dir ignoreFailureMode peer/ignore_failures Additionally, C{peer/type} is filled in with C{"local"}, since this is a local peer. The node itself is created as the next child of the parent node. This method only adds one peer node. The parent must loop for each peer in the C{StageConfig} object. If C{localPeer} is C{None}, this method call will be a no-op. 
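For reference, a collect directory serialized by the logic above produces XML shaped like the following; the element names come from the field mapping in the docstring, while the values are purely illustrative:

```xml
<dir>
   <abs_path>/home/user</abs_path>
   <collect_mode>incr</collect_mode>
   <archive_mode>tar</archive_mode>
   <ignore_file>.cbignore</ignore_file>
   <recursion_level>0</recursion_level>
   <exclude>
      <abs_path>/home/user/tmp</abs_path>
      <rel_path>cache</rel_path>
      <pattern>.*\.log</pattern>
   </exclude>
</dir>
```

Note that the C{exclude} container is emitted only when at least one of the three exclusion lists is non-empty.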
@param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param localPeer: Local peer to be added to the document. """ if localPeer is not None: sectionNode = addContainerNode(xmlDom, parentNode, "peer") addStringNode(xmlDom, sectionNode, "name", localPeer.name) addStringNode(xmlDom, sectionNode, "type", "local") addStringNode(xmlDom, sectionNode, "collect_dir", localPeer.collectDir) addStringNode(xmlDom, sectionNode, "ignore_failures", localPeer.ignoreFailureMode) @staticmethod def _addRemotePeer(xmlDom, parentNode, remotePeer): """ Adds a remote peer container as the next child of a parent. We add the following fields to the document:: name peer/name collectDir peer/collect_dir remoteUser peer/backup_user rcpCommand peer/rcp_command rshCommand peer/rsh_command cbackCommand peer/cback_command ignoreFailureMode peer/ignore_failures managed peer/managed managedActions peer/managed_actions Additionally, C{peer/type} is filled in with C{"remote"}, since this is a remote peer. The node itself is created as the next child of the parent node. This method only adds one peer node. The parent must loop for each peer in the C{StageConfig} object. If C{remotePeer} is C{None}, this method call will be a no-op. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param remotePeer: Remote peer to be added to the document.
""" if remotePeer is not None: sectionNode = addContainerNode(xmlDom, parentNode, "peer") addStringNode(xmlDom, sectionNode, "name", remotePeer.name) addStringNode(xmlDom, sectionNode, "type", "remote") addStringNode(xmlDom, sectionNode, "collect_dir", remotePeer.collectDir) addStringNode(xmlDom, sectionNode, "backup_user", remotePeer.remoteUser) addStringNode(xmlDom, sectionNode, "rcp_command", remotePeer.rcpCommand) addStringNode(xmlDom, sectionNode, "rsh_command", remotePeer.rshCommand) addStringNode(xmlDom, sectionNode, "cback_command", remotePeer.cbackCommand) addStringNode(xmlDom, sectionNode, "ignore_failures", remotePeer.ignoreFailureMode) addBooleanNode(xmlDom, sectionNode, "managed", remotePeer.managed) managedActions = Config._buildCommaSeparatedString(remotePeer.managedActions) addStringNode(xmlDom, sectionNode, "managed_actions", managedActions) @staticmethod def _addPurgeDir(xmlDom, parentNode, purgeDir): """ Adds a purge directory container as the next child of a parent. We add the following fields to the document:: absolutePath dir/abs_path retainDays dir/retain_days The node itself is created as the next child of the parent node. This method only adds one purge directory node. The parent must loop for each purge directory in the C{PurgeConfig} object. If C{purgeDir} is C{None}, this method call will be a no-op. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param purgeDir: Purge directory to be added to the document. """ if purgeDir is not None: sectionNode = addContainerNode(xmlDom, parentNode, "dir") addStringNode(xmlDom, sectionNode, "abs_path", purgeDir.absolutePath) addIntegerNode(xmlDom, sectionNode, "retain_days", purgeDir.retainDays) @staticmethod def _addDependencies(xmlDom, parentNode, dependencies): """ Adds extended action dependencies to a parent node.
We add the following fields to the document:: runBefore depends/run_before runAfter depends/run_after If C{dependencies} is C{None}, this method call will be a no-op. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param dependencies: C{ActionDependencies} object to be added to the document """ if dependencies is not None: sectionNode = addContainerNode(xmlDom, parentNode, "depends") runBefore = Config._buildCommaSeparatedString(dependencies.beforeList) runAfter = Config._buildCommaSeparatedString(dependencies.afterList) addStringNode(xmlDom, sectionNode, "run_before", runBefore) addStringNode(xmlDom, sectionNode, "run_after", runAfter) @staticmethod def _buildCommaSeparatedString(valueList): """ Creates a comma-separated string from a list of values. As a special case, if C{valueList} is C{None}, then C{None} will be returned. @param valueList: List of values to be placed into a string @return: Values from valueList as a comma-separated string. """ if valueList is None: return None return ",".join(valueList) @staticmethod def _addBlankBehavior(xmlDom, parentNode, blankBehavior): """ Adds a blanking behavior container as the next child of a parent. We add the following fields to the document:: blankMode blank_behavior/mode blankFactor blank_behavior/factor The node itself is created as the next child of the parent node. If C{blankBehavior} is C{None}, this method call will be a no-op. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param blankBehavior: Blanking behavior to be added to the document. 
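The comma-separated handling above has two edge cases worth noting: C{None} passes straight through (so the corresponding XML node is created but left empty), and an empty list yields an empty string. A standalone sketch of the same logic, outside the C{Config} class:

```python
def buildCommaSeparatedString(valueList):
    """Join a list of values into a comma-separated string; None passes through."""
    if valueList is None:
        return None
    return ",".join(valueList)
```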
""" if blankBehavior is not None: sectionNode = addContainerNode(xmlDom, parentNode, "blank_behavior") addStringNode(xmlDom, sectionNode, "mode", blankBehavior.blankMode) addStringNode(xmlDom, sectionNode, "factor", blankBehavior.blankFactor) ################################################# # High-level methods used for validating content ################################################# def _validateContents(self): """ Validates configuration contents per rules discussed in module documentation. This is the second pass at validation. It ensures that any filled-in section contains valid data. Any section which is not set to C{None} is validated per the rules for that section, laid out in the module documentation (above). @raise ValueError: If configuration is invalid. """ self._validateReference() self._validateExtensions() self._validateOptions() self._validatePeers() self._validateCollect() self._validateStage() self._validateStore() self._validatePurge() def _validateReference(self): """ Validates reference configuration. There are currently no reference-related validations. @raise ValueError: If reference configuration is invalid. """ pass def _validateExtensions(self): """ Validates extensions configuration. The list of actions may be either C{None} or an empty list C{[]} if desired. Each extended action must include a name, a module, and a function. Then, if the order mode is None or "index", an index is required; and if the order mode is "dependency", dependency information is required. @raise ValueError: If extensions configuration is invalid.
""" if self.extensions is not None: if self.extensions.actions is not None: names = [] for action in self.extensions.actions: if action.name is None: raise ValueError("Each extended action must set a name.") names.append(action.name) if action.module is None: raise ValueError("Each extended action must set a module.") if action.function is None: raise ValueError("Each extended action must set a function.") if self.extensions.orderMode is None or self.extensions.orderMode == "index": if action.index is None: raise ValueError("Each extended action must set an index, based on order mode.") elif self.extensions.orderMode == "dependency": if action.dependencies is None: raise ValueError("Each extended action must set dependency information, based on order mode.") checkUnique("Duplicate extension names exist:", names) def _validateOptions(self): """ Validates options configuration. All fields must be filled in except the rsh command. The rcp and rsh commands are used as default values for all remote peers. Remote peers can also rely on the backup user as the default remote user name if they choose. @raise ValueError: If options configuration is invalid. """ if self.options is not None: if self.options.startingDay is None: raise ValueError("Options section starting day must be filled in.") if self.options.workingDir is None: raise ValueError("Options section working directory must be filled in.") if self.options.backupUser is None: raise ValueError("Options section backup user must be filled in.") if self.options.backupGroup is None: raise ValueError("Options section backup group must be filled in.") if self.options.rcpCommand is None: raise ValueError("Options section remote copy command must be filled in.") def _validatePeers(self): """ Validates peers configuration per rules in L{_validatePeerList}. @raise ValueError: If peers configuration is invalid.
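The C{checkUnique} utility called above (and again in C{_validatePeerList}) is defined elsewhere in the package. From its usage, a minimal sketch of the expected behavior might look like this (an assumption for illustration, not the project's actual implementation):

```python
def checkUnique(prefix, valueList):
    """Sketch: raise ValueError, prefixed with the given message, if the list has duplicates."""
    if valueList is not None:
        seen = set()
        duplicates = set()
        for value in valueList:
            if value in seen:
                duplicates.add(value)
            seen.add(value)
        if duplicates:
            raise ValueError("%s %s" % (prefix, sorted(duplicates)))
```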
""" if self.peers is not None: self._validatePeerList(self.peers.localPeers, self.peers.remotePeers) def _validateCollect(self): """ Validates collect configuration. The target directory must be filled in. The collect mode, archive mode, ignore file, and recursion level are all optional. The list of absolute paths to exclude and patterns to exclude may be either C{None} or an empty list C{[]} if desired. Each collect directory entry must contain an absolute path to collect, and then must either be able to take collect mode, archive mode and ignore file configuration from the parent C{CollectConfig} object, or must set each value on its own. The list of absolute paths to exclude, relative paths to exclude and patterns to exclude may be either C{None} or an empty list C{[]} if desired. Any list of absolute paths to exclude or patterns to exclude will be combined with the same list in the C{CollectConfig} object to make the complete list for a given directory. @raise ValueError: If collect configuration is invalid. 
""" if self.collect is not None: if self.collect.targetDir is None: raise ValueError("Collect section target directory must be filled in.") if self.collect.collectFiles is not None: for collectFile in self.collect.collectFiles: if collectFile.absolutePath is None: raise ValueError("Each collect file must set an absolute path.") if self.collect.collectMode is None and collectFile.collectMode is None: raise ValueError("Collect mode must either be set in parent collect section or individual collect file.") if self.collect.archiveMode is None and collectFile.archiveMode is None: raise ValueError("Archive mode must either be set in parent collect section or individual collect file.") if self.collect.collectDirs is not None: for collectDir in self.collect.collectDirs: if collectDir.absolutePath is None: raise ValueError("Each collect directory must set an absolute path.") if self.collect.collectMode is None and collectDir.collectMode is None: raise ValueError("Collect mode must either be set in parent collect section or individual collect directory.") if self.collect.archiveMode is None and collectDir.archiveMode is None: raise ValueError("Archive mode must either be set in parent collect section or individual collect directory.") if self.collect.ignoreFile is None and collectDir.ignoreFile is None: raise ValueError("Ignore file must either be set in parent collect section or individual collect directory.") if (collectDir.linkDepth is None or collectDir.linkDepth < 1) and collectDir.dereference: raise ValueError("Dereference flag is only valid when a non-zero link depth is in use.") def _validateStage(self): """ Validates stage configuration. The target directory must be filled in, and the peers are also validated. Peers are only required in this section if the peers configuration section is not filled in. However, if any peers are filled in here, they override the peers configuration and must meet the validation criteria in L{_validatePeerList}. 
@raise ValueError: If stage configuration is invalid. """ if self.stage is not None: if self.stage.targetDir is None: raise ValueError("Stage section target directory must be filled in.") if self.peers is None: # In this case, stage configuration is our only configuration and must be valid. self._validatePeerList(self.stage.localPeers, self.stage.remotePeers) else: # In this case, peers configuration is the default and stage configuration overrides. # Validation is only needed if the stage configuration is actually filled in. if self.stage.hasPeers(): self._validatePeerList(self.stage.localPeers, self.stage.remotePeers) def _validateStore(self): """ Validates store configuration. The device type, drive speed, and blanking behavior are optional. All other values are required. Missing booleans will be set to defaults. If blanking behavior is provided, then both a blanking mode and a blanking factor are required. The image writer functionality in the C{writer} module is supposed to be able to handle a device speed of C{None}. Any caller which needs a "real" (non-C{None}) value for the device type can use C{DEFAULT_DEVICE_TYPE}, which is guaranteed to be sensible. This is also where we make sure that the media type -- which is already a valid type -- matches up properly with the device type. @raise ValueError: If store configuration is invalid.
""" if self.store is not None: if self.store.sourceDir is None: raise ValueError("Store section source directory must be filled in.") if self.store.mediaType is None: raise ValueError("Store section media type must be filled in.") if self.store.devicePath is None: raise ValueError("Store section device path must be filled in.") if self.store.deviceType is None or self.store.deviceType == "cdwriter": if self.store.mediaType not in VALID_CD_MEDIA_TYPES: raise ValueError("Media type must match device type.") elif self.store.deviceType == "dvdwriter": if self.store.mediaType not in VALID_DVD_MEDIA_TYPES: raise ValueError("Media type must match device type.") if self.store.blankBehavior is not None: if self.store.blankBehavior.blankMode is None or self.store.blankBehavior.blankFactor is None: raise ValueError("If blanking behavior is provided, all values must be filled in.") def _validatePurge(self): """ Validates purge configuration. The list of purge directories may be either C{None} or an empty list C{[]} if desired. All purge directories must contain a path and a retain days value. @raise ValueError: If purge configuration is invalid. """ if self.purge is not None: if self.purge.purgeDirs is not None: for purgeDir in self.purge.purgeDirs: if purgeDir.absolutePath is None: raise ValueError("Each purge directory must set an absolute path.") if purgeDir.retainDays is None: raise ValueError("Each purge directory must set a retain days value.") def _validatePeerList(self, localPeers, remotePeers): """ Validates the set of local and remote peers. Local peers must be completely filled in, including both name and collect directory. Remote peers must also fill in the name and collect directory, but can leave the remote user and rcp command unset. In this case, the remote user is assumed to match the backup user from the options section and rcp command is taken directly from the options section.
@param localPeers: List of local peers @param remotePeers: List of remote peers @raise ValueError: If stage configuration is invalid. """ if localPeers is None and remotePeers is None: raise ValueError("Peer list must contain at least one backup peer.") if localPeers is None and remotePeers is not None: if len(remotePeers) < 1: raise ValueError("Peer list must contain at least one backup peer.") elif localPeers is not None and remotePeers is None: if len(localPeers) < 1: raise ValueError("Peer list must contain at least one backup peer.") elif localPeers is not None and remotePeers is not None: if len(localPeers) + len(remotePeers) < 1: raise ValueError("Peer list must contain at least one backup peer.") names = [] if localPeers is not None: for localPeer in localPeers: if localPeer.name is None: raise ValueError("Local peers must set a name.") names.append(localPeer.name) if localPeer.collectDir is None: raise ValueError("Local peers must set a collect directory.") if remotePeers is not None: for remotePeer in remotePeers: if remotePeer.name is None: raise ValueError("Remote peers must set a name.") names.append(remotePeer.name) if remotePeer.collectDir is None: raise ValueError("Remote peers must set a collect directory.") if (self.options is None or self.options.backupUser is None) and remotePeer.remoteUser is None: raise ValueError("Remote user must either be set in options section or individual remote peer.") if (self.options is None or self.options.rcpCommand is None) and remotePeer.rcpCommand is None: raise ValueError("Remote copy command must either be set in options section or individual remote peer.") if remotePeer.managed: if (self.options is None or self.options.rshCommand is None) and remotePeer.rshCommand is None: raise ValueError("Remote shell command must either be set in options section or individual remote peer.") if (self.options is None or self.options.cbackCommand is None) and remotePeer.cbackCommand is None: raise ValueError("Remote cback 
command must either be set in options section or individual remote peer.") if ((self.options is None or self.options.managedActions is None or len(self.options.managedActions) < 1) and (remotePeer.managedActions is None or len(remotePeer.managedActions) < 1)): raise ValueError("Managed actions list must be set in options section or individual remote peer.") checkUnique("Duplicate peer names exist:", names) ######################################################################## # General utility functions ######################################################################## def readByteQuantity(parent, name): """ Read a byte size value from an XML document. A byte size value is an interpreted string value. If the string value ends with "KB", "MB" or "GB", then the string before that is interpreted as kilobytes, megabytes or gigabytes. Otherwise, it is interpreted as bytes. @param parent: Parent node to search beneath. @param name: Name of node to search for. @return: ByteQuantity parsed from XML document """ data = readString(parent, name) if data is None: return None data = data.strip() if data.endswith("KB"): quantity = data[0:data.rfind("KB")].strip() units = UNIT_KBYTES elif data.endswith("MB"): quantity = data[0:data.rfind("MB")].strip() units = UNIT_MBYTES elif data.endswith("GB"): quantity = data[0:data.rfind("GB")].strip() units = UNIT_GBYTES else: quantity = data.strip() units = UNIT_BYTES return ByteQuantity(quantity, units) def addByteQuantityNode(xmlDom, parentNode, nodeName, byteQuantity): """ Adds a text node as the next child of a parent, to contain a byte size. If the C{byteQuantity} is None, then the node will be created, but will be empty (i.e. will contain no text node child). The size in bytes will be normalized. If it is larger than 1.0 GB, it will be shown in GB ("1.0 GB"). If it is larger than 1.0 MB, it will be shown in MB ("1.0 MB"). Otherwise, it will be shown in bytes ("423413"). @param xmlDom: DOM tree as from C{impl.createDocument()}.
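The suffix parsing in C{readByteQuantity} can be exercised standalone. The sketch below substitutes plain C{(quantity, units)} tuples for the real C{ByteQuantity} class and local constants for the real C{UNIT_*} values (both are assumptions made for illustration only):

```python
# Local stand-ins for the UNIT_* constants used by readByteQuantity().
UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES = 0, 1, 2, 3

def parseByteQuantity(data):
    """Split a string like "2.5 GB" into a (quantity, units) tuple, mirroring
    the suffix handling in readByteQuantity()."""
    if data is None:
        return None
    data = data.strip()
    for suffix, units in (("KB", UNIT_KBYTES), ("MB", UNIT_MBYTES), ("GB", UNIT_GBYTES)):
        if data.endswith(suffix):
            return (data[0:data.rfind(suffix)].strip(), units)
    return (data, UNIT_BYTES)
```

So "2.5 GB" splits into the quantity "2.5" with gigabyte units, while a bare "423413" is taken as a byte count.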
@param parentNode: Parent node to create child for. @param nodeName: Name of the new container node. @param byteQuantity: ByteQuantity object to put into the XML document @return: Reference to the newly-created node. """ if byteQuantity is None: byteString = None elif byteQuantity.units == UNIT_KBYTES: byteString = "%s KB" % byteQuantity.quantity elif byteQuantity.units == UNIT_MBYTES: byteString = "%s MB" % byteQuantity.quantity elif byteQuantity.units == UNIT_GBYTES: byteString = "%s GB" % byteQuantity.quantity else: byteString = byteQuantity.quantity return addStringNode(xmlDom, parentNode, nodeName, byteString) CedarBackup3-3.1.6/cback30000775000175000017500000000164112555753772016572 0ustar pronovicpronovic00000000000000#!/usr/bin/python3 # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Implements Cedar Backup cback3 script. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # """ Implements Cedar Backup cback3 script. @author: Kenneth J. Pronovici """ try: import sys from CedarBackup3.cli import cli except ImportError as e: print("Failed to import Python modules: %s" % e) print("Are you running a proper version of Python?") sys.exit(1) result = cli() sys.exit(result) CedarBackup3-3.1.6/CREDITS0000664000175000017500000002124112642035600016510 0ustar pronovicpronovic00000000000000# vim: set ft=text80: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. 
Pronovici # Project : Cedar Backup, release 3 # Purpose : Credits for package # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ########## # Credits ########## Most of the source code in this project was written by Kenneth J. Pronovici. Some portions have been based on other pieces of open-source software, as indicated in the source code itself. Unless otherwise indicated, all Cedar Backup source code is Copyright (c) 2004-2011,2013-2016 Kenneth J. Pronovici and is released under the GNU General Public License, version 2. The contents of the GNU General Public License can be found in the LICENSE file, or can be downloaded from http://www.gnu.org/. Various patches have been contributed to the Cedar Backup codebase by Dmitry Rutsky. Major contributions include the initial implementation for the optimized media blanking strategy as well as improvements to the DVD writer implementation. The PostgreSQL extension was contributed by Antoine Beaupre ("The Anarcat"), based on the existing MySQL extension. Lukasz K. Nowak helped debug the split functionality and also provided patches for parts of the documentation. Zoran Bosnjak contributed changes to collect.py to implement recursive collect behavior based on recursion level. Jan Medlock contributed patches to improve the manpage and to support recent versions of the /usr/bin/split command. Minor code snippets derived from newsgroup and mailing list postings are not generally attributed unless I used someone else's source code verbatim. Source code annotated as "(c) 2001, 2002 Python Software Foundation" was originally taken from or derived from code within the Python 2.3 codebase. This code was released under the Python 2.3 license, which is an MIT-style academic license. Items under this license include the function util.getFunctionReference(). Source code annotated as "(c) 2000-2004 CollabNet" was originally released under the CollabNet License, which is an Apache/BSD-style license. 
Items under this license include basic markup and stylesheets used in creating the user manual. The dblite.dtd and readme-dblite.html files are also assumed to be under the CollabNet License, since they were found as part of the Subversion source tree and did not specify an explicit copyright notice. Source code annotated as "(c) 2000 Fourthought Inc, USA" was taken from or derived from code within the PyXML distribution and was originally part of the 4DOM suite developed by Fourthought, Inc. Fourthought released the code under a BSD-like license. Items under this license include the XML pretty-printing functionality implemented in xmlutil.py. #################### # CollabNet License #################### /* ================================================================ * Copyright (c) 2000-2004 CollabNet. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * 3. The end-user documentation included with the redistribution, if * any, must include the following acknowledgment: "This product includes * software developed by CollabNet (http://www.Collab.Net/)." * Alternately, this acknowledgment may appear in the software itself, if * and wherever such third-party acknowledgments normally appear. * * 4. The hosted project names must not be used to endorse or promote * products derived from this software without prior written * permission. For written permission, please contact info@collab.net. * * 5. 
Products derived from this software may not use the "Tigris" name * nor may "Tigris" appear in their names without prior written * permission of CollabNet. * * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESSED OR IMPLIED * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. * IN NO EVENT SHALL COLLABNET OR ITS CONTRIBUTORS BE LIABLE FOR ANY * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE * GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * * ==================================================================== * * This software consists of voluntary contributions made by many * individuals on behalf of CollabNet. */ ##################### # Python 2.3 License ##################### PSF LICENSE AGREEMENT FOR PYTHON 2.3 ------------------------------------ 1. This LICENSE AGREEMENT is between the Python Software Foundation ("PSF"), and the Individual or Organization ("Licensee") accessing and otherwise using Python 2.3 software in source or binary form and its associated documentation. 2. Subject to the terms and conditions of this License Agreement, PSF hereby grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, analyze, test, perform and/or display publicly, prepare derivative works, distribute, and otherwise use Python 2.3 alone or in any derivative version, provided, however, that PSF's License Agreement and PSF's notice of copyright, i.e., "Copyright (c) 2001, 2002 Python Software Foundation; All Rights Reserved" are retained in Python 2.3 alone or in any derivative version prepared by Licensee. 3. 
In the event Licensee prepares a derivative work that is based on or incorporates Python 2.3 or any part thereof, and wants to make the derivative work available to others as provided herein, then Licensee hereby agrees to include in any such work a brief summary of the changes made to Python 2.3. 4. PSF is making Python 2.3 available to Licensee on an "AS IS" basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 2.3 WILL NOT INFRINGE ANY THIRD PARTY RIGHTS. 5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON 2.3 FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON 2.3, OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. 6. This License Agreement will automatically terminate upon a material breach of its terms and conditions. 7. Nothing in this License Agreement shall be deemed to create any relationship of agency, partnership, or joint venture between PSF and Licensee. This License Agreement does not grant permission to use PSF trademarks or trade name in a trademark sense to endorse or promote products or services of Licensee, or any third party. 8. By copying, installing or otherwise using Python 2.3, Licensee agrees to be bound by the terms and conditions of this License Agreement. 
###################### # Fourthought License ###################### Copyright (c) 2000 Fourthought Inc, USA All Rights Reserved Permission to use, copy, modify, and distribute this software and its documentation for any purpose and without fee is hereby granted, provided that the above copyright notice appear in all copies and that both that copyright notice and this permission notice appear in supporting documentation, and that the name of FourThought LLC not be used in advertising or publicity pertaining to distribution of the software without specific, written prior permission. FOURTHOUGHT LLC DISCLAIM ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN NO EVENT SHALL FOURTHOUGHT BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. CedarBackup3-3.1.6/LICENSE0000664000175000017500000004310512555004756016513 0ustar pronovicpronovic00000000000000 GNU GENERAL PUBLIC LICENSE Version 2, June 1991 Copyright (C) 1989, 1991 Free Software Foundation, Inc. 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Library General Public License instead.) You can apply it to your programs, too. 
When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. 
GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. 
You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. 
Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. 
However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 7. 
If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. 
If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. 
EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) 19yy This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. 
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA Also add information on how to contact you by electronic and paper mail. If the program is interactive, make it output a short notice like this when it starts in an interactive mode: Gnomovision version 69, Copyright (C) 19yy name of author Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program. You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names: Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (which makes passes at compilers) written by James Hacker. , 1 April 1989 Ty Coon, President of Vice This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Library General Public License instead of this License. 
CedarBackup3-3.1.6/manual/0002775000175000017500000000000012657665551016773 5ustar pronovicpronovic00000000000000CedarBackup3-3.1.6/manual/Makefile0000664000175000017500000000765112555004756020431 0ustar pronovicpronovic00000000000000# vim: set ft=make: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Make # Project : Cedar Backup, release 3 # Purpose : Makefile used for building the Cedar Backup manual. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######## # Notes ######## # This Makefile was originally taken from the Subversion project's book # (http://svnbook.red-bean.com/) and has been substantially modified (almost # completely rewritten) for use with Cedar Backup. # # The original Makefile was (c) 2000-2004 CollabNet (see CREDITS). ######################## # Programs and commands ######################## CP = cp INSTALL = install MKDIR = mkdir RM = rm XSLTPROC = xsltproc W3M = w3m ############ # Locations ############ INSTALL_DIR = ../doc/manual XSL_DIR = ../util/docbook STYLES_CSS = $(XSL_DIR)/styles.css XSL_FO = $(XSL_DIR)/fo-stylesheet.xsl XSL_HTML = $(XSL_DIR)/html-stylesheet.xsl XSL_CHUNK = $(XSL_DIR)/chunk-stylesheet.xsl MANUAL_TOP = . 
MANUAL_DIR = $(MANUAL_TOP)/src MANUAL_CHUNK_DIR = $(MANUAL_DIR)/chunk MANUAL_HTML_TARGET = $(MANUAL_DIR)/manual.html MANUAL_CHUNK_TARGET = $(MANUAL_CHUNK_DIR)/index.html # index.html is created last MANUAL_TEXT_TARGET = $(MANUAL_DIR)/manual.txt MANUAL_XML_SOURCE = $(MANUAL_DIR)/book.xml MANUAL_ALL_SOURCE = $(MANUAL_DIR)/*.xml MANUAL_HTML_IMAGES = $(MANUAL_DIR)/images/html/*.png ############################################# # High-level targets and simple dependencies ############################################# all: manual-html manual-chunk install: install-manual-html install-manual-chunk install-manual-text clean: -@$(RM) -f $(MANUAL_HTML_TARGET) $(MANUAL_FO_TARGET) $(MANUAL_TEXT_TARGET) -@$(RM) -rf $(MANUAL_CHUNK_DIR) $(INSTALL_DIR): $(INSTALL) --mode=775 -d $(INSTALL_DIR) ################### # HTML build rules ################### manual-html: $(MANUAL_HTML_TARGET) $(MANUAL_HTML_TARGET): $(MANUAL_ALL_SOURCE) $(XSLTPROC) --output $(MANUAL_HTML_TARGET) $(XSL_HTML) $(MANUAL_XML_SOURCE) install-manual-html: $(MANUAL_HTML_TARGET) $(INSTALL_DIR) $(INSTALL) --mode=775 -d $(INSTALL_DIR)/images $(INSTALL) --mode=664 $(MANUAL_HTML_TARGET) $(INSTALL_DIR) $(INSTALL) --mode=664 $(STYLES_CSS) $(INSTALL_DIR) $(INSTALL) --mode=664 $(MANUAL_HTML_IMAGES) $(INSTALL_DIR)/images ########################### # Chunked HTML build rules ########################### manual-chunk: $(MANUAL_CHUNK_TARGET) # The trailing slash in the $(XSLTPROC) command is essential, so that xsltproc will output pages to the dir $(MANUAL_CHUNK_TARGET): $(MANUAL_ALL_SOURCE) $(STYLES_CSS) $(MANUAL_HTML_IMAGES) $(MKDIR) -p $(MANUAL_CHUNK_DIR) $(MKDIR) -p $(MANUAL_CHUNK_DIR)/images $(XSLTPROC) --output $(MANUAL_CHUNK_DIR)/ $(XSL_CHUNK) $(MANUAL_XML_SOURCE) $(CP) $(STYLES_CSS) $(MANUAL_CHUNK_DIR) $(CP) $(MANUAL_HTML_IMAGES) $(MANUAL_CHUNK_DIR)/images install-manual-chunk: $(MANUAL_CHUNK_TARGET) $(INSTALL_DIR) $(INSTALL) --mode=775 -d $(INSTALL_DIR)/images $(INSTALL) --mode=664 $(MANUAL_CHUNK_DIR)/*.html 
$(INSTALL_DIR) $(INSTALL) --mode=664 $(STYLES_CSS) $(INSTALL_DIR) $(INSTALL) --mode=664 $(MANUAL_HTML_IMAGES) $(INSTALL_DIR)/images ################### # Text build rules ################### manual-text: manual-html $(MANUAL_TEXT_TARGET) $(MANUAL_TEXT_TARGET): $(W3M) -dump -cols 80 $(MANUAL_HTML_TARGET) > $(MANUAL_TEXT_TARGET) install-manual-text: $(MANUAL_TEXT_TARGET) $(INSTALL_DIR) $(INSTALL) --mode=664 $(MANUAL_TEXT_TARGET) $(INSTALL_DIR) CedarBackup3-3.1.6/manual/src/0002775000175000017500000000000012657665551017562 5ustar pronovicpronovic00000000000000CedarBackup3-3.1.6/manual/src/book.xml0000664000175000017500000000772112555752224021232 0ustar pronovicpronovic00000000000000 ]> Cedar Backup 3 Software Manual First Kenneth J. Pronovici Juliana E. Pronovici 2005-2008,2013-2015 Kenneth J. Pronovici This work is free; you can redistribute it and/or modify it under the terms of the GNU General Public License (the "GPL"), Version 2, as published by the Free Software Foundation. For the purposes of the GPL, the "preferred form of modification" for this work is the original Docbook XML text files. If you choose to distribute this work in a compiled form (i.e. if you distribute HTML, PDF or Postscript documents based on the original Docbook XML text files), you must also consider image files to be "source code" if those images are required in order to construct a complete and readable compiled version of the work. This work is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. Copies of the GNU General Public License are available from the Free Software Foundation website, http://www.gnu.org/. 
You may also write the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA &preface; &intro; &basic; &install; &commandline; &config; &extensions; &extenspec; &depends; &recovering; &securingssh; ©right; CedarBackup3-3.1.6/manual/src/config.xml0000664000175000017500000061123212555752404021543 0ustar pronovicpronovic00000000000000 Configuration Overview Configuring Cedar Backup is unfortunately somewhat complicated. The good news is that once you get through the initial configuration process, you'll hardly ever have to change anything. Even better, the most typical changes (i.e. adding and removing directories from a backup) are easy. First, familiarize yourself with the concepts in . In particular, be sure that you understand the differences between a master and a client. (If you only have one machine, then your machine will act as both a master and a client, and we'll refer to your setup as a pool of one.) Then, install Cedar Backup per the instructions in . Once everything has been installed, you are ready to begin configuring Cedar Backup. Look over (in ) to become familiar with the command line interface. Then, look over (below) and create a configuration file for each peer in your backup pool. To start with, create a very simple configuration file, then expand it later. Decide now whether you will store the configuration file in the standard place (/etc/cback3.conf) or in some other location. After you have all of the configuration files in place, configure each of your machines, following the instructions in the appropriate section below (for master, client or pool of one). Since the master and client(s) must communicate over the network, you won't be able to fully configure the master without configuring each client and vice-versa. The instructions are clear on what needs to be done. Which Platform? Cedar Backup has been designed for use on all UNIX-like systems. 
However, since it was developed on a Debian GNU/Linux system, and because I am a Debian developer, the packaging is prettier and the setup is somewhat simpler on a Debian system than on a system where you install from source. The configuration instructions below have been generalized so they should work well regardless of what platform you are running (i.e. RedHat, Gentoo, FreeBSD, etc.). If instructions vary for a particular platform, you will find a note related to that platform. I am always open to adding more platform-specific hints and notes, so write me if you find problems with these instructions. Configuration File Format Cedar Backup is configured through an XML See for a basic introduction to XML. configuration file, usually called /etc/cback3.conf. The configuration file contains the following sections: reference, options, collect, stage, store, purge and extensions. All configuration files must contain the two general configuration sections, the reference section and the options section. Besides that, administrators need only configure actions they intend to use. For instance, on a client machine, administrators will generally only configure the collect and purge sections, while on a master machine they will have to configure all four action-related sections. See , in . The extensions section is always optional and can be omitted unless extensions are in use. Even though the Mac OS X (darwin) filesystem is not case-sensitive, Cedar Backup configuration is generally case-sensitive on that platform, just like on all other platforms. For instance, even though the files Ken and ken might be the same on the Mac OS X filesystem, an exclusion in Cedar Backup configuration for ken will only match the file if it is actually on the filesystem with a lower-case k as its first letter. This won't surprise the typical UNIX user, but might surprise someone who's gotten into the Mac Mindset. 
Sample Configuration File Both the Python source distribution and the Debian package come with a sample configuration file. The Debian package includes its sample in /usr/share/doc/cedar-backup3/examples/cback3.conf.sample. This is a sample configuration file similar to the one provided in the source package. Documentation below provides more information about each of the individual configuration sections. <?xml version="1.0"?> <cb_config> <reference> <author>Kenneth J. Pronovici</author> <revision>1.3</revision> <description>Sample</description> </reference> <options> <starting_day>tuesday</starting_day> <working_dir>/opt/backup/tmp</working_dir> <backup_user>backup</backup_user> <backup_group>group</backup_group> <rcp_command>/usr/bin/scp -B</rcp_command> </options> <peers> <peer> <name>debian</name> <type>local</type> <collect_dir>/opt/backup/collect</collect_dir> </peer> </peers> <collect> <collect_dir>/opt/backup/collect</collect_dir> <collect_mode>daily</collect_mode> <archive_mode>targz</archive_mode> <ignore_file>.cbignore</ignore_file> <dir> <abs_path>/etc</abs_path> <collect_mode>incr</collect_mode> </dir> <file> <abs_path>/home/root/.profile</abs_path> <collect_mode>weekly</collect_mode> </file> </collect> <stage> <staging_dir>/opt/backup/staging</staging_dir> </stage> <store> <source_dir>/opt/backup/staging</source_dir> <media_type>cdrw-74</media_type> <device_type>cdwriter</device_type> <target_device>/dev/cdrw</target_device> <target_scsi_id>0,0,0</target_scsi_id> <drive_speed>4</drive_speed> <check_data>Y</check_data> <check_media>Y</check_media> <warn_midnite>Y</warn_midnite> </store> <purge> <dir> <abs_path>/opt/backup/staging</abs_path> <retain_days>7</retain_days> </dir> <dir> <abs_path>/opt/backup/collect</abs_path> <retain_days>0</retain_days> </dir> </purge> </cb_config> Reference Configuration The reference configuration section contains free-text elements that exist only for reference. 
The section itself is required, but the individual elements may be left blank if desired. This is an example reference configuration section: <reference> <author>Kenneth J. Pronovici</author> <revision>Revision 1.3</revision> <description>Sample</description> <generator>Yet to be Written Config Tool (tm)</generator> </reference> The following elements are part of the reference configuration section: author Author of the configuration file. Restrictions: None revision Revision of the configuration file. Restrictions: None description Description of the configuration file. Restrictions: None generator Tool that generated the configuration file, if any. Restrictions: None Options Configuration The options configuration section contains configuration options that are not specific to any one action. This is an example options configuration section: <options> <starting_day>tuesday</starting_day> <working_dir>/opt/backup/tmp</working_dir> <backup_user>backup</backup_user> <backup_group>backup</backup_group> <rcp_command>/usr/bin/scp -B</rcp_command> <rsh_command>/usr/bin/ssh</rsh_command> <cback_command>/usr/bin/cback</cback_command> <managed_actions>collect, purge</managed_actions> <override> <command>cdrecord</command> <abs_path>/opt/local/bin/cdrecord</abs_path> </override> <override> <command>mkisofs</command> <abs_path>/opt/local/bin/mkisofs</abs_path> </override> <pre_action_hook> <action>collect</action> <command>echo "I AM A PRE-ACTION HOOK RELATED TO COLLECT"</command> </pre_action_hook> <post_action_hook> <action>collect</action> <command>echo "I AM A POST-ACTION HOOK RELATED TO COLLECT"</command> </post_action_hook> </options> The following elements are part of the options configuration section: starting_day Day that starts the week. Cedar Backup is built around the idea of weekly backups. The starting day of the week is the day that media will be rebuilt from scratch and that incremental backup information will be cleared.
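The weekly cycle driven by starting_day can be sketched with a few lines of Python. This is a simplified illustration, not Cedar Backup's actual logic; the function name and day list below are assumptions for the example.

```python
from datetime import date

# Hypothetical helper (not part of Cedar Backup's API): decide whether a
# given date falls on the configured starting_day of the backup week, i.e.
# the day when media is rebuilt and incremental information is cleared.
DAYS = ["monday", "tuesday", "wednesday", "thursday",
        "friday", "saturday", "sunday"]

def is_start_of_week(today: date, starting_day: str) -> bool:
    """Return True if 'today' is the configured starting day of the week."""
    return DAYS[today.weekday()] == starting_day

# 2 Aug 2015 was a Sunday.
print(is_start_of_week(date(2015, 8, 2), "sunday"))   # True
print(is_start_of_week(date(2015, 8, 2), "tuesday"))  # False
```

Note that, unlike this sketch, the real configuration validation is case-sensitive: the configured value must already be lower-case.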
Restrictions: Must be a day of the week in English (monday, tuesday, etc.). The validation is case-sensitive. working_dir Working (temporary) directory to use for backups. This directory is used for writing temporary files, such as tar files or ISO filesystem images as they are being built. It is also used to store day-to-day information about incremental backups. The working directory should contain enough free space to hold temporary tar files (on a client) or to build an ISO filesystem image (on a master). Restrictions: Must be an absolute path backup_user Effective user that backups should run as. This user must exist on the machine which is being configured and should not be root (although that restriction is not enforced). This value is also used as the default remote backup user for remote peers. Restrictions: Must be non-empty backup_group Effective group that backups should run as. This group must exist on the machine which is being configured, and should not be root or some other powerful group (although that restriction is not enforced). Restrictions: Must be non-empty rcp_command Default rcp-compatible copy command for staging. The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it the -B option, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B. This value is used as the default value for all remote peers. Technically, this value is not needed by clients, but we require it for all config files anyway. Restrictions: Must be non-empty rsh_command Default rsh-compatible command to use for remote shells. The rsh command should be the exact command used for remote shells, including any required options. This value is used as the default value for all managed clients. It is optional, because it is only used when executing actions on managed clients.
However, each managed client must either be able to read the value from options configuration or must set the value explicitly. Restrictions: Must be non-empty cback_command Default cback-compatible command to use on managed remote clients. The cback command should be the exact command used for executing cback on a remote managed client, including any required command-line options. Do not list any actions in the command line, and do not include the --full command-line option. This value is used as the default value for all managed clients. It is optional, because it is only used when executing actions on managed clients. However, each managed client must either be able to read the value from options configuration or must set the value explicitly. Note: if this command-line is complicated, it is often better to create a simple shell script on the remote host to encapsulate all of the options. Then, just reference the shell script in configuration. Restrictions: Must be non-empty managed_actions Default set of actions that are managed on remote clients. This is a comma-separated list of actions that the master will manage on behalf of remote clients. Typically, it would include only collect-like actions and purge. This value is used as the default value for all managed clients. It is optional, because it is only used when executing actions on managed clients. However, each managed client must either be able to read the value from options configuration or must set the value explicitly. Restrictions: Must be non-empty. override Command to override with a customized path. This is a subsection which contains a command to override with a customized path. This functionality would be used if root's $PATH does not include a particular required command, or if there is a need to use a version of a command that is different from the one listed on the $PATH. Most users will only use this section when directed to, in order to fix a problem.
This section is optional, and can be repeated as many times as necessary. This subsection must contain the following two fields: command Name of the command to be overridden, i.e. cdrecord. Restrictions: Must be a non-empty string. abs_path The absolute path where the overridden command can be found. Restrictions: Must be an absolute path. pre_action_hook Hook configuring a command to be executed before an action. This is a subsection which configures a command to be executed immediately before a named action. It provides a way for administrators to associate their own custom functionality with standard Cedar Backup actions or with arbitrary extensions. This section is optional, and can be repeated as many times as necessary. This subsection must contain the following two fields: action Name of the Cedar Backup action that the hook is associated with. The action can be a standard backup action (collect, stage, etc.) or can be an extension action. No validation is done to ensure that the configured action actually exists. Restrictions: Must be a non-empty string. command Name of the command to be executed. This item can either specify the path to a shell script of some sort (the recommended approach) or can include a complete shell command. Note: if you choose to provide a complete shell command rather than the path to a script, you need to be aware of some limitations of Cedar Backup's command-line parser. You cannot use a subshell (via the `command` or $(command) syntaxes) or any shell variable in your command line. Additionally, the command-line parser only recognizes the double-quote character (") to delimit groupings or strings on the command-line. The bottom line is, you are probably best off writing a shell script of some sort for anything more sophisticated than very simple shell commands. Restrictions: Must be a non-empty string. post_action_hook Hook configuring a command to be executed after an action. 
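The hook-command parser limitations described above (no subshells, no shell variables, only double quotes for grouping) can be checked up front. The following sketch is illustrative only: the function name and rules are mine, derived from the restrictions stated in this section, and the check is not part of Cedar Backup itself.

```python
import re

# Hypothetical validator (not part of Cedar Backup): flag hook commands
# that rely on shell features the command-line parser cannot handle.
def hook_command_warnings(command: str) -> list:
    warnings = []
    if "`" in command or "$(" in command:
        warnings.append("subshell syntax is not supported")
    if re.search(r"\$\w+|\$\{", command):
        warnings.append("shell variables are not expanded")
    if "'" in command:
        warnings.append("only double quotes delimit groupings")
    return warnings

print(hook_command_warnings('echo "hello"'))  # [] -- simple commands are fine
print(hook_command_warnings("echo `date`"))   # flags the subshell
print(hook_command_warnings("echo $HOME"))    # flags the shell variable
```

For anything that would trip these checks, the safer approach is the one the manual recommends: put the logic in a shell script and reference the script in configuration.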
This is a subsection which configures a command to be executed immediately after a named action. It provides a way for administrators to associate their own custom functionality with standard Cedar Backup actions or with arbitrary extensions. This section is optional, and can be repeated as many times as necessary. This subsection must contain the following two fields: action Name of the Cedar Backup action that the hook is associated with. The action can be a standard backup action (collect, stage, etc.) or can be an extension action. No validation is done to ensure that the configured action actually exists. Restrictions: Must be a non-empty string. command Name of the command to be executed. This item can either specify the path to a shell script of some sort (the recommended approach) or can include a complete shell command. Note: if you choose to provide a complete shell command rather than the path to a script, you need to be aware of some limitations of Cedar Backup's command-line parser. You cannot use a subshell (via the `command` or $(command) syntaxes) or any shell variable in your command line. Additionally, the command-line parser only recognizes the double-quote character (") to delimit groupings or strings on the command-line. The bottom line is, you are probably best off writing a shell script of some sort for anything more sophisticated than very simple shell commands. Restrictions: Must be a non-empty string. Peers Configuration The peers configuration section contains a list of the peers managed by a master. This section is only required on a master. 
This is an example peers configuration section: <peers> <peer> <name>machine1</name> <type>local</type> <collect_dir>/opt/backup/collect</collect_dir> </peer> <peer> <name>machine2</name> <type>remote</type> <backup_user>backup</backup_user> <collect_dir>/opt/backup/collect</collect_dir> <ignore_failures>all</ignore_failures> </peer> <peer> <name>machine3</name> <type>remote</type> <managed>Y</managed> <backup_user>backup</backup_user> <collect_dir>/opt/backup/collect</collect_dir> <rcp_command>/usr/bin/scp</rcp_command> <rsh_command>/usr/bin/ssh</rsh_command> <cback_command>/usr/bin/cback</cback_command> <managed_actions>collect, purge</managed_actions> </peer> </peers> The following elements are part of the peers configuration section: peer (local version) Local client peer in a backup pool. This is a subsection which contains information about a specific local client peer managed by a master. This section can be repeated as many times as is necessary. At least one remote or local peer must be configured. The local peer subsection must contain the following fields: name Name of the peer, typically a valid hostname. For local peers, this value is only used for reference. However, it is good practice to list the peer's hostname here, for consistency with remote peers. Restrictions: Must be non-empty, and unique among all peers. type Type of this peer. This value identifies the type of the peer. For a local peer, it must always be local. Restrictions: Must be local. collect_dir Collect directory to stage from for this peer. The master will copy all files in this directory into the appropriate staging directory. Since this is a local peer, the directory is assumed to be reachable via normal filesystem operations (e.g. cp). Restrictions: Must be an absolute path. ignore_failures Ignore failure mode for this peer The ignore failure mode indicates whether "not ready to be staged" errors should be ignored for this peer.
This option is intended to be used for peers that are up only intermittently, to cut down on the number of error emails received by the Cedar Backup administrator. The "none" mode means that all errors will be reported. This is the default behavior. The "all" mode means to ignore all failures. The "weekly" mode means to ignore failures for a start-of-week or full backup. The "daily" mode means to ignore failures for any backup that is not either a full backup or a start-of-week backup. Restrictions: If set, must be one of "none", "all", "daily", or "weekly". peer (remote version) Remote client peer in a backup pool. This is a subsection which contains information about a specific remote client peer managed by a master. A remote peer is one which can be reached via an rsh-based network call. This section can be repeated as many times as is necessary. At least one remote or local peer must be configured. The remote peer subsection must contain the following fields: name Hostname of the peer. For remote peers, this must be a valid DNS hostname or IP address which can be resolved during an rsh-based network call. Restrictions: Must be non-empty, and unique among all peers. type Type of this peer. This value identifies the type of the peer. For a remote peer, it must always be remote. Restrictions: Must be remote. managed Indicates whether this peer is managed. A managed peer (or managed client) is a peer for which the master manages all of the backup activities via a remote shell. This field is optional. If it doesn't exist, then N will be assumed. Restrictions: Must be a boolean (Y or N). collect_dir Collect directory to stage from for this peer. The master will copy all files in this directory into the appropriate staging directory. Since this is a remote peer, the directory is assumed to be reachable via rsh-based network operations (e.g. scp or the configured rcp command). Restrictions: Must be an absolute path.
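The four ignore-failure modes reduce to a simple decision based on what kind of backup is running. The sketch below is illustrative (the function name and parameters are mine, not Cedar Backup's API), but it follows the mode definitions given above exactly.

```python
# Hypothetical sketch of the ignore_failures decision described above.
# mode:           one of "none", "all", "daily", "weekly"
# start_of_week:  True if this backup falls on the configured starting_day
# full:           True if this is a full backup
def should_ignore_failure(mode: str, start_of_week: bool, full: bool) -> bool:
    if mode == "all":
        return True                        # ignore every failure
    if mode == "weekly":
        return start_of_week or full       # ignore only start-of-week/full
    if mode == "daily":
        return not (start_of_week or full) # ignore only ordinary daily runs
    return False                           # "none" (default): report all errors
```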
ignore_failures Ignore failure mode for this peer The ignore failure mode indicates whether "not ready to be staged" errors should be ignored for this peer. This option is intended to be used for peers that are up only intermittently, to cut down on the number of error emails received by the Cedar Backup administrator. The "none" mode means that all errors will be reported. This is the default behavior. The "all" mode means to ignore all failures. The "weekly" mode means to ignore failures for a start-of-week or full backup. The "daily" mode means to ignore failures for any backup that is not either a full backup or a start-of-week backup. Restrictions: If set, must be one of "none", "all", "daily", or "weekly". backup_user Name of backup user on the remote peer. This username will be used when copying files from the remote peer via an rsh-based network connection. This field is optional. If it doesn't exist, the backup will use the default backup user from the options section. Restrictions: Must be non-empty. rcp_command The rcp-compatible copy command for this peer. The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it the -B option, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B. This field is optional. If it doesn't exist, the backup will use the default rcp command from the options section. Restrictions: Must be non-empty. rsh_command The rsh-compatible command for this peer. The rsh command should be the exact command used for remote shells, including any required options. This value only applies if the peer is managed. This field is optional. If it doesn't exist, the backup will use the default rsh command from the options section. Restrictions: Must be non-empty cback_command The cback-compatible command for this peer.
The cback command should be the exact command used for executing cback on the peer as part of a managed backup. This value must include any required command-line options. Do not list any actions in the command line, and do not include the --full command-line option. This value only applies if the peer is managed. This field is optional. If it doesn't exist, the backup will use the default cback command from the options section. Note: if this command-line is complicated, it is often better to create a simple shell script on the remote host to encapsulate all of the options. Then, just reference the shell script in configuration. Restrictions: Must be non-empty managed_actions Set of actions that are managed for this peer. This is a comma-separated list of actions that the master will manage on behalf of this peer. Typically, it would include only collect-like actions and purge. This value only applies if the peer is managed. This field is optional. If it doesn't exist, the backup will use the default list of managed actions from the options section. Restrictions: Must be non-empty. Collect Configuration The collect configuration section contains configuration options related to the collect action. This section contains a variable number of elements, including an optional exclusion section and a repeating subsection used to specify which directories and/or files to collect. You can also configure an ignore indicator file, which lets users mark their own directories as not backed up. Using a Link Farm Sometimes, it's not very convenient to list directories one by one in the Cedar Backup configuration file. For instance, when backing up your home directory, you often exclude as many directories as you include. The ignore file mechanism can be of some help, but it still isn't very convenient if there are a lot of directories to ignore (or if new directories pop up all of the time).
In this situation, one option is to use a link farm rather than listing all of the directories in configuration. A link farm is a directory that contains nothing but a set of soft links to other files and directories. Normally, Cedar Backup does not follow soft links, but you can override this behavior for individual directories using the link_depth and dereference options (see below). When using a link farm, you still have to deal with each backed-up directory individually, but you don't have to modify configuration. Some users find that this works better for them. In order to actually execute the collect action, you must have configured at least one collect directory or one collect file. However, if you are only including collect configuration for use by an extension, then it's OK to leave out these sections. The validation will take place only when the collect action is executed. This is an example collect configuration section: <collect> <collect_dir>/opt/backup/collect</collect_dir> <collect_mode>daily</collect_mode> <archive_mode>targz</archive_mode> <ignore_file>.cbignore</ignore_file> <exclude> <abs_path>/etc</abs_path> <pattern>.*\.conf</pattern> </exclude> <file> <abs_path>/home/root/.profile</abs_path> </file> <dir> <abs_path>/etc</abs_path> </dir> <dir> <abs_path>/var/log</abs_path> <collect_mode>incr</collect_mode> </dir> <dir> <abs_path>/opt</abs_path> <collect_mode>weekly</collect_mode> <exclude> <abs_path>/opt/large</abs_path> <rel_path>backup</rel_path> <pattern>.*tmp</pattern> </exclude> </dir> </collect> The following elements are part of the collect configuration section: collect_dir Directory to collect files into. On a client, this is the directory which tarfiles for individual collect directories are written into. The master then stages files from this directory into its own staging directory. This field is always required. It must contain enough free space to collect all of the backed-up files on the machine in a compressed form. 
Restrictions: Must be an absolute path collect_mode Default collect mode. The collect mode describes how frequently a directory is backed up. This value is the collect mode that will be used by default during the collect process. Individual collect directories (below) may override this value. If all individual directories provide their own value, then this default value may be omitted from configuration. Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data. Restrictions: Must be one of daily, weekly or incr. archive_mode Default archive mode for collect files. The archive mode maps to the way that a backup file is stored. A value tar means just a tarfile (file.tar); a value targz means a gzipped tarfile (file.tar.gz); and a value tarbz2 means a bzipped tarfile (file.tar.bz2). This value is the archive mode that will be used by default during the collect process. Individual collect directories (below) may override this value. If all individual directories provide their own value, then this default value may be omitted from configuration. Restrictions: Must be one of tar, targz or tarbz2. ignore_file Default ignore file name. The ignore file is an indicator file. If it exists in a given directory, then that directory will be recursively excluded from the backup as if it were explicitly excluded in configuration. The ignore file provides a way for individual users (who might not have access to Cedar Backup configuration) to control which of their own directories get backed up. For instance, users with a ~/tmp directory might not want it backed up. If they create an ignore file in their directory (e.g. ~/tmp/.cbignore), then Cedar Backup will ignore it. This value is the ignore file name that will be used by default during the collect process. Individual collect directories (below) may override this value.
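The three archive modes described above correspond directly to modes of Python's standard tarfile module. The mapping below is a hedged sketch for illustration, not Cedar Backup's internal table; it uses an in-memory buffer so the example is self-contained.

```python
import io
import tarfile

# Illustrative mapping (assumed, not Cedar Backup's actual code): each
# archive mode implies a tarfile open mode and a file extension.
ARCHIVE_MODES = {
    "tar":    ("w",     ".tar"),
    "targz":  ("w:gz",  ".tar.gz"),
    "tarbz2": ("w:bz2", ".tar.bz2"),
}

def open_archive(archive_mode: str):
    """Open an in-memory archive in the requested mode and return it along
    with the file extension that mode implies."""
    mode, extension = ARCHIVE_MODES[archive_mode]
    return tarfile.open(fileobj=io.BytesIO(), mode=mode), extension

archive, extension = open_archive("targz")
archive.close()
print(extension)  # .tar.gz
```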
If all individual directories provide their own value, then this default value may be omitted from configuration. Restrictions: Must be non-empty recursion_level Recursion level to use when collecting directories. This is an integer value that Cedar Backup will consider when generating archive files for a configured collect directory. Normally, Cedar Backup generates one archive file per collect directory. So, if you collect /etc you get etc.tar.gz. Most of the time, this is what you want. However, you may sometimes wish to generate multiple archive files for a single collect directory. The most obvious example is for /home. By default, Cedar Backup will generate home.tar.gz. If, instead, you want one archive file per home directory, you can set a recursion level of 1. Cedar Backup will generate home-user1.tar.gz, home-user2.tar.gz, etc. Higher recursion levels (2, 3, etc.) are legal, and it doesn't matter if the configured recursion level is deeper than the directory tree that is being collected. You can use a negative recursion level (like -1) to specify an infinite level of recursion. This will exhaust the tree in the same way as if the recursion level is set too high. This field is optional. If it doesn't exist, the backup will use the default recursion level of zero. Restrictions: Must be an integer. exclude List of paths or patterns to exclude from the backup. This is a subsection which contains a set of absolute paths and patterns to be excluded across all configured directories. For a given directory, the set of absolute paths and patterns to exclude is built from this list and any list that exists on the directory itself. Directories cannot override or remove entries that are in this list, however. This section is optional, and if it exists can also be empty. The exclude subsection can contain one or more of each of the following fields: abs_path An absolute path to be recursively excluded from the backup.
If a directory is excluded, then all of its children are also recursively excluded. For instance, a value /var/log/apache would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache. This field can be repeated as many times as is necessary. Restrictions: Must be an absolute path. pattern A pattern to be recursively excluded from the backup. The pattern must be a Python regular expression. It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $). If the pattern causes a directory to be excluded, then all of the children of that directory are also recursively excluded. For instance, a value .*apache.* might match the /var/log/apache directory. This would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache. This field can be repeated as many times as is necessary. Restrictions: Must be non-empty file A file to be collected. This is a subsection which contains information about a specific file to be collected (backed up). This section can be repeated as many times as is necessary. At least one collect directory or collect file must be configured when the collect action is executed. The collect file subsection contains the following fields: abs_path Absolute path of the file to collect. Restrictions: Must be an absolute path. collect_mode Collect mode for this file The collect mode describes how frequently a file is backed up. This field is optional. If it doesn't exist, the backup will use the default collect mode. Note: if your backup device does not support multisession discs, then you should probably confine yourself to the daily collect mode, to avoid losing data. Restrictions: Must be one of daily, weekly or incr. archive_mode Archive mode for this file. The archive mode maps to the way that a backup file is stored.
A value tar means just a tarfile (file.tar); a value targz means a gzipped tarfile (file.tar.gz); and a value tarbz2 means a bzipped tarfile (file.tar.bz2). This field is optional. If it doesn't exist, the backup will use the default archive mode. Restrictions: Must be one of tar, targz or tarbz2. dir A directory to be collected. This is a subsection which contains information about a specific directory to be collected (backed up). This section can be repeated as many times as is necessary. At least one collect directory or collect file must be configured when the collect action is executed. The collect directory subsection contains the following fields: abs_path Absolute path of the directory to collect. The path may be either a directory, a soft link to a directory, or a hard link to a directory. All three are treated the same at this level. The contents of the directory will be recursively collected. The backup will contain all of the files in the directory, as well as the contents of all of the subdirectories within the directory, etc. Soft links within the directory are treated as files, i.e. they are copied verbatim (as a link) and their contents are not backed up. Restrictions: Must be an absolute path. collect_mode Collect mode for this directory The collect mode describes how frequently a directory is backed up. This field is optional. If it doesn't exist, the backup will use the default collect mode. Note: if your backup device does not support multisession discs, then you should probably confine yourself to the daily collect mode, to avoid losing data. Restrictions: Must be one of daily, weekly or incr. archive_mode Archive mode for this directory. The archive mode maps to the way that a backup file is stored. A value tar means just a tarfile (file.tar); a value targz means a gzipped tarfile (file.tar.gz); and a value tarbz2 means a bzipped tarfile (file.tar.bz2). This field is optional.
If it doesn't exist, the backup will use the default archive mode. Restrictions: Must be one of tar, targz or tarbz2. ignore_file Ignore file name for this directory. The ignore file is an indicator file. If it exists in a given directory, then that directory will be recursively excluded from the backup as if it were explicitly excluded in configuration. The ignore file provides a way for individual users (who might not have access to Cedar Backup configuration) to control which of their own directories get backed up. For instance, users with a ~/tmp directory might not want it backed up. If they create an ignore file in their directory (e.g. ~/tmp/.cbignore), then Cedar Backup will ignore it. This field is optional. If it doesn't exist, the backup will use the default ignore file name. Restrictions: Must be non-empty link_depth Link depth value to use for this directory. The link depth is the maximum depth of the tree at which soft links should be followed. So, a depth of 0 does not follow any soft links within the collect directory, a depth of 1 follows only links immediately within the collect directory, a depth of 2 follows the links at the next level down, etc. This field is optional. If it doesn't exist, the backup will assume a value of zero, meaning that soft links within the collect directory will never be followed. Restrictions: If set, must be an integer ≥ 0. dereference Whether to dereference soft links. If this flag is set, links that are being followed will be dereferenced before being added to the backup. The link will be added (as a link), and then the directory or file that the link points at will be added as well. This value only applies to a directory where soft links are being followed (per the link_depth configuration option). It never applies to a configured collect directory itself, only to other directories within the collect directory. This field is optional. If it doesn't exist, the backup will assume that links should never be dereferenced.
Restrictions: Must be a boolean (Y or N). exclude List of paths or patterns to exclude from the backup. This is a subsection which contains a set of paths and patterns to be excluded within this collect directory. This list is combined with the program-wide list to build a complete list for the directory. This section is entirely optional, and if it exists can also be empty. The exclude subsection can contain one or more of each of the following fields: abs_path An absolute path to be recursively excluded from the backup. If a directory is excluded, then all of its children are also recursively excluded. For instance, a value /var/log/apache would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache. This field can be repeated as many times as is necessary. Restrictions: Must be an absolute path. rel_path A relative path to be recursively excluded from the backup. The path is assumed to be relative to the collect directory itself. For instance, if the configured directory is /opt/web a configured relative path of something/else would exclude the path /opt/web/something/else. If a directory is excluded, then all of its children are also recursively excluded. For instance, a value something/else would exclude any files within something/else as well as files within other directories under something/else. This field can be repeated as many times as is necessary. Restrictions: Must be non-empty. pattern A pattern to be excluded from the backup. The pattern must be a Python regular expression. It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $). If the pattern causes a directory to be excluded, then all of the children of that directory are also recursively excluded. For instance, a value .*apache.* might match the /var/log/apache directory. 
This would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache. This field can be repeated as many times as is necessary. Restrictions: Must be non-empty Stage Configuration The stage configuration section contains configuration options related to the stage action. The section indicates where data from peers can be staged to. This section can also (optionally) override the list of peers so that not all peers are staged. If you provide any peers in this section, then the list of peers here completely replaces the list of peers in the peers configuration section for the purposes of staging. This is an example stage configuration section for the simple case where the list of peers is taken from peers configuration: <stage> <staging_dir>/opt/backup/stage</staging_dir> </stage> This is an example stage configuration section that overrides the default list of peers: <stage> <staging_dir>/opt/backup/stage</staging_dir> <peer> <name>machine1</name> <type>local</type> <collect_dir>/opt/backup/collect</collect_dir> </peer> <peer> <name>machine2</name> <type>remote</type> <backup_user>backup</backup_user> <collect_dir>/opt/backup/collect</collect_dir> </peer> </stage> The following elements are part of the stage configuration section: staging_dir Directory to stage files into. This is the directory into which the master stages collected data from each of the clients. Within the staging directory, data is staged into date-based directories by peer name. For instance, peer daystrom backed up on 19 Feb 2005 would be staged into something like 2005/02/19/daystrom relative to the staging directory itself. This field is always required. The directory must contain enough free space to stage all of the files collected from all of the various machines in a backup pool. Many administrators set up purging to keep staging directories around for a week or more, which requires even more space.
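The date-based staging layout just described can be sketched in a couple of lines. The helper name below is hypothetical (this is not Cedar Backup's actual code), but it reproduces the layout from the example: year/month/day under the staging directory, then the peer name.

```python
from datetime import date

# Hypothetical helper (not part of Cedar Backup): build the staging path
# for a peer, using the <staging_dir>/YYYY/MM/DD/<peer> layout shown above.
def staging_path(staging_dir: str, peer: str, when: date) -> str:
    return f"{staging_dir}/{when:%Y/%m/%d}/{peer}"

print(staging_path("/opt/backup/stage", "daystrom", date(2005, 2, 19)))
# /opt/backup/stage/2005/02/19/daystrom
```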
Restrictions: Must be an absolute path.

peer (local version)

Local client peer in a backup pool. This is a subsection which contains information about a specific local client peer to be staged (backed up). A local peer is one whose collect directory can be reached without requiring any rsh-based network calls. It is possible that a remote peer might be staged as a local peer if its collect directory is mounted to the master via NFS, AFS or some other method. This section can be repeated as many times as is necessary. At least one remote or local peer must be configured.

Remember, if you provide any local or remote peer in staging configuration, the global peer configuration is completely replaced by the staging peer configuration.

The local peer subsection must contain the following fields:

name

Name of the peer, typically a valid hostname. For local peers, this value is only used for reference. However, it is good practice to list the peer's hostname here, for consistency with remote peers.

Restrictions: Must be non-empty, and unique among all peers.

type

Type of this peer. This value identifies the type of the peer. For a local peer, it must always be local.

Restrictions: Must be local.

collect_dir

Collect directory to stage from for this peer. The master will copy all files in this directory into the appropriate staging directory. Since this is a local peer, the directory is assumed to be reachable via normal filesystem operations (i.e. cp).

Restrictions: Must be an absolute path.

peer (remote version)

Remote client peer in a backup pool. This is a subsection which contains information about a specific remote client peer to be staged (backed up). A remote peer is one whose collect directory can only be reached via an rsh-based network call. This section can be repeated as many times as is necessary. At least one remote or local peer must be configured.
Remember, if you provide any local or remote peer in staging configuration, the global peer configuration is completely replaced by the staging peer configuration.

The remote peer subsection must contain the following fields:

name

Hostname of the peer. For remote peers, this must be a valid DNS hostname or IP address which can be resolved during an rsh-based network call.

Restrictions: Must be non-empty, and unique among all peers.

type

Type of this peer. This value identifies the type of the peer. For a remote peer, it must always be remote.

Restrictions: Must be remote.

collect_dir

Collect directory to stage from for this peer. The master will copy all files in this directory into the appropriate staging directory. Since this is a remote peer, the directory is assumed to be reachable via rsh-based network operations (i.e. scp or the configured rcp command).

Restrictions: Must be an absolute path.

backup_user

Name of the backup user on the remote peer. This username will be used when copying files from the remote peer via an rsh-based network connection. This field is optional. If it doesn't exist, the backup will use the default backup user from the options section.

Restrictions: Must be non-empty.

rcp_command

The rcp-compatible copy command for this peer. The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it an option that disables prompting, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B. This field is optional. If it doesn't exist, the backup will use the default rcp command from the options section.

Restrictions: Must be non-empty.

Store Configuration

The store configuration section contains configuration options related to the store action. This section contains several optional fields. Most fields control the way media is written using the writer device.
This is an example store configuration section:

<store>
   <source_dir>/opt/backup/stage</source_dir>
   <media_type>cdrw-74</media_type>
   <device_type>cdwriter</device_type>
   <target_device>/dev/cdrw</target_device>
   <target_scsi_id>0,0,0</target_scsi_id>
   <drive_speed>4</drive_speed>
   <check_data>Y</check_data>
   <check_media>Y</check_media>
   <warn_midnite>Y</warn_midnite>
   <no_eject>N</no_eject>
   <refresh_media_delay>15</refresh_media_delay>
   <eject_delay>2</eject_delay>
   <blank_behavior>
      <mode>weekly</mode>
      <factor>1.3</factor>
   </blank_behavior>
</store>

The following elements are part of the store configuration section:

source_dir

Directory whose contents should be written to media. This directory must be a Cedar Backup staging directory, as configured in the staging configuration section. Only certain data from that directory (typically, data from the current day) will be written to disc.

Restrictions: Must be an absolute path.

device_type

Type of the device used to write the media. This field controls which type of writer device will be used by Cedar Backup. Currently, Cedar Backup supports CD writers (cdwriter) and DVD writers (dvdwriter). This field is optional. If it doesn't exist, the cdwriter device type is assumed.

Restrictions: If set, must be either cdwriter or dvdwriter.

media_type

Type of the media in the device. Unless you want to throw away a backup disc every week, you are probably best off using rewritable media. You must choose a media type that is appropriate for the device type you chose above. For more information on media types, see the discussion of media elsewhere in this manual.

Restrictions: Must be one of cdr-74, cdrw-74, cdr-80 or cdrw-80 if device type is cdwriter; or one of dvd+r or dvd+rw if device type is dvdwriter.

target_device

Filesystem device name for the writer device. This value is required for both CD writers and DVD writers. This is the UNIX device name for the writer drive, for instance /dev/scd0 or a symlink like /dev/cdrw.
In some cases, this device name is used to directly write to media. This is true all of the time for DVD writers, and is true for CD writers when a SCSI id (see below) has not been specified. Besides this, the device name is also needed in order to do several pre-write checks (such as whether the device might already be mounted) as well as the post-write consistency check, if enabled.

Note: some users have reported intermittent problems when using a symlink as the target device on Linux, especially with DVD media. If you experience problems, try using the real device name rather than the symlink.

Restrictions: Must be an absolute path.

target_scsi_id

SCSI id for the writer device. This value is optional for CD writers and is ignored for DVD writers. If you have configured your CD writer hardware to work through the normal filesystem device path, then you can leave this parameter unset. Cedar Backup will just use the target device (above) when talking to cdrecord. Otherwise, if you have SCSI CD writer hardware or you have configured your non-SCSI hardware to operate like a SCSI device, then you need to provide Cedar Backup with a SCSI id it can use when talking with cdrecord.

For the purposes of Cedar Backup, a valid SCSI identifier must either be in the standard SCSI identifier form scsibus,target,lun or in the specialized-method form <method>:scsibus,target,lun. An example of a standard SCSI identifier is 1,6,2. Today, the two most common examples of the specialized-method form are ATA:scsibus,target,lun and ATAPI:scsibus,target,lun, but you may occasionally see other values (like OLDATAPI in some forks of cdrecord). See the discussion of writer devices elsewhere in this manual for more information on how they are configured.

Restrictions: If set, must be a valid SCSI identifier.

drive_speed

Speed of the drive, i.e. 2 for a 2x device. This field is optional. If it doesn't exist, the underlying device-related functionality will use the default drive speed.
For DVD writers, it is best to leave this value unset, so growisofs can pick an appropriate speed. For CD writers, since media can be speed-sensitive, it is probably best to set a sensible value based on your specific writer and media.

Restrictions: If set, must be an integer ≥ 1.

check_data

Whether the media should be validated. This field indicates whether a resulting image on the media should be validated after the write completes, by running a consistency check against it. If this check is enabled, the contents of the staging directory are directly compared to the media, and an error is reported if there is a mismatch. Practice shows that some drives can encounter an error when writing a multisession disc, but not report any problems. This consistency check allows us to catch the problem. By default, the consistency check is disabled, but most users should choose to enable it unless they have a good reason not to. This field is optional. If it doesn't exist, then N will be assumed.

Restrictions: Must be a boolean (Y or N).

check_media

Whether the media should be checked before writing to it. By default, Cedar Backup does not check its media before writing to it. It will write to any media in the backup device. If you set this flag to Y, Cedar Backup will make sure that the media has been initialized before writing to it. (Rewritable media is initialized using the initialize action.) If the configured media is not rewritable (like CD-R), then this behavior is modified slightly. For this kind of media, the check passes either if the media has been initialized or if the media appears unused. This field is optional. If it doesn't exist, then N will be assumed.

Restrictions: Must be a boolean (Y or N).

warn_midnite

Whether to generate warnings for crossing midnite. This field indicates whether warnings should be generated if the store operation has to cross a midnite boundary in order to find data to write to disc.
For instance, a warning would be generated if valid store data was only found in the day before or day after the current day. Configuration for some users is such that the store operation will always cross a midnite boundary, so they will not care about this warning. Other users will expect to never cross a boundary, and want to be notified that something strange might have happened. This field is optional. If it doesn't exist, then N will be assumed.

Restrictions: Must be a boolean (Y or N).

no_eject

Indicates that the writer device should not be ejected. Under some circumstances, Cedar Backup ejects (opens and closes) the writer device. This is done because some writer devices need to re-load the media before noticing a media state change (like a new session). For most writer devices this is safe, because they have a tray that can be opened and closed. If your writer device does not have a tray and Cedar Backup does not properly detect this, then set this flag. Cedar Backup will never issue an eject command to your writer.

Note: this could cause problems with your backup. For instance, with many writers, the check data step may fail if the media is not reloaded first. If this happens to you, you may need to get a different writer device.

This field is optional. If it doesn't exist, then N will be assumed.

Restrictions: Must be a boolean (Y or N).

refresh_media_delay

Number of seconds to delay after refreshing media. This field is optional. If it doesn't exist, no delay will occur. Some devices seem to take a little while to stabilize after refreshing the media (i.e. closing and opening the tray). During this period, operations on the media may fail. If your device behaves like this, you can try setting a delay of 10-15 seconds.

Restrictions: If set, must be an integer ≥ 1.

eject_delay

Number of seconds to delay after ejecting the tray. This field is optional. If it doesn't exist, no delay will occur.
If your system seems to have problems opening and closing the tray, one possibility is that the open/close sequence is happening too quickly: either the tray isn't fully open when Cedar Backup tries to close it, or it doesn't report being open. To work around that problem, set an eject delay of a few seconds.

Restrictions: If set, must be an integer ≥ 1.

blank_behavior

Optimized blanking strategy. For more information about Cedar Backup's optimized blanking strategy, see the discussion of optimized blanking elsewhere in this manual. This entire configuration section is optional. However, if you choose to provide it, you must configure both a blanking mode and a blanking factor.

blank_mode

Blanking mode.

Restrictions: Must be one of "daily" or "weekly".

blank_factor

Blanking factor.

Restrictions: Must be a floating point number ≥ 0.

Purge Configuration

The purge configuration section contains configuration options related to the purge action. This section contains a set of directories to be purged, along with information about the schedule at which they should be purged.

Typically, Cedar Backup should be configured to purge collect directories daily (retain days of 0). If you are tight on space, staging directories can also be purged daily. However, if you have space to spare, you should consider purging them about once per week. That way, if your backup media is damaged, you will be able to recreate the week's backup using the rebuild action.

You should also purge the working directory periodically, once every few weeks or once per month. This way, if any unneeded files are left around, perhaps because a backup was interrupted or because configuration changed, they will eventually be removed. The working directory should not be purged any more frequently than once per week, otherwise you will risk destroying data used for incremental backups.
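To make the age-based retention concrete, here is a minimal shell sketch of what a retain-days policy means. This is illustrative only, not how Cedar Backup implements purging (its purge also removes empty directories and never removes the purge directory itself); a small demo directory stands in for a real staging directory:

```shell
# Illustrative sketch only: with a retention of 7 days, files last
# modified more than 7 days ago are candidates for removal.
STAGE_DIR="$(mktemp -d)"        # stands in for /opt/backup/stage
RETAIN_DAYS=7
touch "$STAGE_DIR/fresh.dat"                   # modified today: kept
touch -t 202001010000 "$STAGE_DIR/stale.dat"   # modified long ago: candidate
# List the removal candidates (prints only the stale file's path):
find "$STAGE_DIR" -type f -mtime +"$RETAIN_DAYS" -print
```

The same idea, expressed through configuration rather than find(1), is what the retain_days element below controls.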
This is an example purge configuration section:

<purge>
   <dir>
      <abs_path>/opt/backup/stage</abs_path>
      <retain_days>7</retain_days>
   </dir>
   <dir>
      <abs_path>/opt/backup/collect</abs_path>
      <retain_days>0</retain_days>
   </dir>
</purge>

The following elements are part of the purge configuration section:

dir

A directory to purge within. This is a subsection which contains information about a specific directory to purge within. This section can be repeated as many times as is necessary. At least one purge directory must be configured.

The purge directory subsection contains the following fields:

abs_path

Absolute path of the directory to purge within. The contents of the directory will be purged based on age. The purge will remove any files that were last modified more than retain_days days ago. Empty directories will also eventually be removed. The purge directory itself will never be removed. The path may be either a directory, a soft link to a directory, or a hard link to a directory. Soft links within the directory (if any) are treated as files.

Restrictions: Must be an absolute path.

retain_days

Number of days to retain old files. Once it has been more than this many days since a file was last modified, it is a candidate for removal.

Restrictions: Must be an integer ≥ 0.

Extensions Configuration

The extensions configuration section is used to configure third-party extensions to Cedar Backup. If you don't intend to use any extensions, or don't know what extensions are, then you can safely leave this section out of your configuration file. It is optional.

Extensions configuration is used to specify extended actions implemented by code external to Cedar Backup. An administrator can use this section to map command-line Cedar Backup actions to third-party extension functions. Each extended action has a name, which is mapped to a Python function within a particular module. Each action also has an index associated with it.
This index is used to properly order execution when more than one action is specified on the command line. The standard actions have predefined indexes, and extended actions are interleaved into the normal order of execution using those indexes. The collect action has index 100, the stage action has index 200, the store action has index 300 and the purge action has index 400.

Extended actions should always be configured to run before the standard action they are associated with. This is because of the way indicator files are used in Cedar Backup. For instance, the staging process considers the collect action to be complete for a peer if the file cback.collect can be found in that peer's collect directory. If you were to run the standard collect action before your other collect-like actions, the indicator file would be written after the collect action completes but before all of the other actions even run. Because of this, there's a chance the stage process might back up the collect directory before the entire set of collect-like actions has completed, and you would get no warning about this in your email!

So, imagine that a third-party developer provided a Cedar Backup extension to back up a certain kind of database repository, and you wanted to map that extension to the database command-line action. You have been told that this function is called foo.bar(). You think of this backup as a collect kind of action, so you want it to be performed immediately before the collect action. To configure this extension, you would list an action with a name of database, a module of foo, a function name of bar and an index of 99.

This is how the hypothetical action would be configured:

<extensions>
   <action>
      <name>database</name>
      <module>foo</module>
      <function>bar</function>
      <index>99</index>
   </action>
</extensions>

The following elements are part of the extensions configuration section:

action

This is a subsection that contains configuration related to a single extended action.
This section can be repeated as many times as is necessary.

The action subsection contains the following fields:

name

Name of the extended action.

Restrictions: Must be a non-empty string consisting of only lower-case letters and digits.

module

Name of the Python module associated with the extension function.

Restrictions: Must be a non-empty string and a valid Python identifier.

function

Name of the Python extension function within the module.

Restrictions: Must be a non-empty string and a valid Python identifier.

index

Index of the action, for execution ordering.

Restrictions: Must be an integer ≥ 0.

Setting up a Pool of One

Cedar Backup has been designed primarily for situations where there is a single master and a set of other clients that the master interacts with. However, it will just as easily work for a single machine (a backup pool of one).

Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or the user that receives root's email). If you don't receive any emails, then you know your backup worked.

Note: all of these configuration steps should be run as the root user, unless otherwise indicated.

This setup procedure discusses how to set up Cedar Backup in the normal case for a pool of one. If you would like to modify the way Cedar Backup works (for instance, by ignoring the store stage and just letting your backup sit in a staging directory), you can do that. You'll just have to modify the procedure below based on information in the remainder of the manual.

Step 1: Decide when you will run your backup.

There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of setting off these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later.
Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly. Choose a backup time that will not interfere with normal use of your computer. Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc.

Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week. This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week. If you skip running Cedar Backup on the first day of the week, your backups will likely be confused until the next week begins, or until you re-run the backup using the appropriate command-line flag.

Step 2: Make sure email works.

Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job. Since by default Cedar Backup only writes output to the terminal if errors occur, this ensures that notification emails will only be sent out if errors occur.

In order to receive problem notifications, you must make sure that email works for the user which is running the Cedar Backup cron jobs (typically root). Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors.

Step 3: Configure your writer device.

Before using Cedar Backup, your writer device must be properly configured. If you have configured your CD/DVD writer hardware to work through the normal filesystem device path, then you just need to know the path to the device on disk (something like /dev/cdrw).
Cedar Backup will use this device path both when talking to a command like cdrecord and when doing filesystem operations like running media validation.

Your other option is to configure your CD writer hardware like a SCSI device (either because it is a SCSI device or because you are using some sort of interface that makes it look like one). In this case, Cedar Backup will use the SCSI id when talking to cdrecord and the device path when running filesystem operations. See the discussion of writer devices elsewhere in this manual for more information on how they are configured.

There is no need to set up your CD/DVD device if you have decided not to execute the store action. Due to the underlying utilities that Cedar Backup uses, the SCSI id may only be used for CD writers, not DVD writers.

Step 4: Configure your backup user.

Choose a user to be used for backups. Some platforms may come with a ready-made backup user. For other platforms, you may have to create a user yourself. You may choose any id you like, but a descriptive name such as backup or cback is a good choice. See your distribution's documentation for information on how to add a user. Standard Debian systems come with a user named backup. You may choose to stay with this user or create another one.

Step 5: Create your backup tree.

Cedar Backup requires a backup directory tree on disk. This directory tree must be roughly three times as big as the amount of data that will be backed up on a nightly basis, to allow for the data to be collected, staged, and then placed into an ISO filesystem image on disk. (This is one disadvantage to using Cedar Backup in single-machine pools, but in this day of really large hard drives, it might not be an issue.) Note that if you elect not to purge the staging directory every night, you will need even more space.

You should create a collect directory, a staging directory and a working (temporary) directory.
One recommended layout is this:

/opt/
   backup/
      collect/
      stage/
      tmp/

If you will be backing up sensitive information (i.e. password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700.

You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my dumping ground for filesystems that Debian does not manage. Some users have requested that the Debian packages set up a more standard location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package. If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp.

Step 6: Create the Cedar Backup configuration file.

Following the instructions in the configuration section (above), create a configuration file for your machine. Since you are working with a pool of one, you must configure all four action-specific sections: collect, stage, store and purge.

The usual location for the Cedar Backup config file is /etc/cback3.conf. If you change the location, make sure you edit your cronjobs (below) to point the cback3 script at the correct config file (using the appropriate command-line option).

Configuration files should always be writable only by root (or by the file owner, if the owner is not root). If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately. For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root).

Step 7: Validate the Cedar Backup configuration file.

Use the command cback3 validate to validate your configuration file.
This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems, such as invalid CD/DVD device entries.

Note: the most common cause of configuration problems is not closing XML tags properly. Any XML tag that is opened must be closed appropriately.

Step 8: Test your backup.

Place a valid CD/DVD disc in your drive, and then use the command cback3 --full all. You should execute this command as root. If the command completes with no output, then the backup was run successfully. Just to be sure that everything worked properly, check the logfile (/var/log/cback3.log) for errors and also mount the CD/DVD disc to be sure it can be read.

If Cedar Backup ever completes normally but the disc that is created is not usable, please report this as a bug. To be safe, always enable the consistency check option in the store configuration section.

Step 9: Modify the backup cron jobs.

Since Cedar Backup should be run as root, one way to configure the cron job is to add a line like this to your /etc/crontab file:

30 00 * * * root cback3 all

Or, you can create an executable script containing just these lines and place that file in the /etc/cron.daily directory:

#!/bin/sh
cback3 all

You should consider adding a verbose or debug switch to your cback3 command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously.

For general information about using cron, see the manpage for crontab(5). On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup3. As installed, this file contains several different settings, all commented out. Uncomment the Single machine (pool of one) entry in the file, and change the line so that the backup goes off when you want it to.

Setting up a Client Peer Node

Cedar Backup has been designed to back up entire pools of machines. In any given pool, there is one master and some number of clients.
Most of the work takes place on the master, so configuring a client is a little simpler than configuring a master.

Backups are designed to take place over an RSH or SSH connection. Because RSH is generally considered insecure, you are encouraged to use SSH rather than RSH. This document will only describe how to configure Cedar Backup to use SSH; if you want to use RSH, you're on your own.

Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or the user that receives root's email). If you don't receive any emails, then you know your backup worked.

Note: all of these configuration steps should be run as the root user, unless otherwise indicated. Elsewhere in this manual you will find some important notes on how to optionally further secure password-less SSH connections to your clients.

Step 1: Decide when you will run your backup.

There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of setting off these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later.

Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly. Choose a backup time that will not interfere with normal use of your computer. Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc.

Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week. This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week.
If you skip running Cedar Backup on the first day of the week, your backups will likely be confused until the next week begins, or until you re-run the backup using the appropriate command-line flag.

Step 2: Make sure email works.

Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job. Since by default Cedar Backup only writes output to the terminal if errors occur, this neatly ensures that notification emails will only be sent out if errors occur.

In order to receive problem notifications, you must make sure that email works for the user which is running the Cedar Backup cron jobs (typically root). Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors.

Step 3: Configure the master in your backup pool.

You will not be able to complete the client configuration until at least step 3 of the master's configuration has been completed. In particular, you will need to know the master's public SSH identity to fully configure a client.

To find the master's public SSH identity, log in as the backup user on the master and cat the public identity file ~/.ssh/id_rsa.pub:

user@machine> cat ~/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEA0vOKjlfwohPg1oPRdrmwHk75l3mI9Tb/WRZfVnu2Pw69
uyphM9wBLRo6QfOC2T8vZCB8o/ZIgtAM3tkM0UgQHxKBXAZ+H36TOgg7BcI20I93iGtzpsMA/uXQy8kH
HgZooYqQ9pw+ZduXgmPcAAv2b5eTm07wRqFt/U84k6bhTzs= user@machine

Step 4: Configure your backup user.

Choose a user to be used for backups. Some platforms may come with a "ready made" backup user. For other platforms, you may have to create a user yourself. You may choose any id you like, but a descriptive name such as backup or cback is a good choice.
See your distribution's documentation for information on how to add a user. Standard Debian systems come with a user named backup. You may choose to stay with this user or create another one.

Once you have created your backup user, you must create an SSH keypair for it. Log in as your backup user, and then run the command ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa:

user@machine> ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Created directory '/home/user/.ssh'.
Your identification has been saved in /home/user/.ssh/id_rsa.
Your public key has been saved in /home/user/.ssh/id_rsa.pub.
The key fingerprint is:
11:3e:ad:72:95:fe:96:dc:1e:3b:f4:cc:2c:ff:15:9e user@machine

The default permissions for this directory should be fine. However, if the directory existed before you ran ssh-keygen, then you may need to modify the permissions. Make sure that the ~/.ssh directory is readable only by the backup user (i.e. mode 700), that the ~/.ssh/id_rsa file is readable and writable only by the backup user (i.e. mode 600) and that the ~/.ssh/id_rsa.pub file is writable only by the backup user (i.e. mode 600 or mode 644).

Finally, take the master's public SSH identity (which you found in step 3) and cut-and-paste it into the file ~/.ssh/authorized_keys. Make sure the identity value is pasted into the file all on one line, and that the authorized_keys file is owned by your backup user and has permissions 600.

If you have other preferences or standard ways of setting up your users' SSH configuration (i.e. different key type, etc.), feel free to do things your way. The important part is that the master must be able to SSH into a client with no password entry required.

Step 5: Create your backup tree.

Cedar Backup requires a backup directory tree on disk. This directory tree must be roughly as big as the amount of data that will be backed up on a nightly basis (more if you elect not to purge it all every night).
You should create a collect directory and a working (temporary) directory. One recommended layout is this:

/opt/
   backup/
      collect/
      tmp/

If you will be backing up sensitive information (i.e. password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700. You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my dumping ground for filesystems that Debian does not manage. Some users have requested that the Debian packages set up a more "standard" location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package. If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp.

Step 6: Create the Cedar Backup configuration file.

Following the instructions above, create a configuration file for your machine. Since you are working with a client, you must configure all action-specific sections for the collect and purge actions. The usual location for the Cedar Backup config file is /etc/cback3.conf. If you change the location, make sure you edit your cronjobs (below) to point the cback3 script at the correct config file (using the appropriate command-line option). Configuration files should always be writable only by root (or by the file owner, if the owner is not root). If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately. For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root).

Step 7: Validate the Cedar Backup configuration file.

Use the command cback3 validate to validate your configuration file.
This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems. This command only validates configuration on the one client, not the master or any other clients in a pool. Note: the most common cause of configuration problems is in not closing XML tags properly. Any XML tag that is opened must be closed appropriately.

Step 8: Test your backup.

Use the command cback3 --full collect purge. If the command completes with no output, then the backup was run successfully. Just to be sure that everything worked properly, check the logfile (/var/log/cback3.log) for errors.

Step 9: Modify the backup cron jobs.

Since Cedar Backup should be run as root, you should add a set of lines like this to your /etc/crontab file:

30 00 * * * root cback3 collect
30 06 * * * root cback3 purge

You should consider adding a verbosity or debugging switch to your cback3 command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously. You will need to coordinate the collect and purge actions on the client so that the collect action completes before the master attempts to stage, and so that the purge action does not begin until after the master has completed staging. Usually, allowing an hour or two between steps should be sufficient. For general information about using cron, see the manpage for crontab(5). On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup3. As installed, this file contains several different settings, all commented out. Uncomment the Client machine entries in the file, and change the lines so that the backup goes off when you want it to.

Setting up a Master Peer Node

Cedar Backup has been designed to back up entire pools of machines. In any given pool, there is one master and some number of clients.
Most of the work takes place on the master, so configuring a master is somewhat more complicated than configuring a client. Backups are designed to take place over an RSH or SSH connection. Because RSH is generally considered insecure, you are encouraged to use SSH rather than RSH. This document will only describe how to configure Cedar Backup to use SSH; if you want to use RSH, you're on your own. Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or whichever other user receives root's email). If you don't receive any emails, then you know your backup worked.

Note: all of these configuration steps should be run as the root user, unless otherwise indicated.

This setup procedure discusses how to set up Cedar Backup in the normal case for a master. If you would like to modify the way Cedar Backup works (for instance, by ignoring the store stage and just letting your backup sit in a staging directory), you can do that. You'll just have to modify the procedure below based on information in the remainder of the manual.

Step 1: Decide when you will run your backup.

There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of setting off these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later. Keep in mind that you do not necessarily have to run the collect action on the master. See notes further below for more information. Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly. Choose a backup time that will not interfere with normal use of your computer. Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc.
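In cron syntax, running less often than daily is just a matter of the day-of-week field. The crontab fragment below is hypothetical (the time and the choice of action are placeholders); it would run a full backup at 00:30 on Monday, Wednesday and Friday only:

```
30 00 * * 1,3,5 root cback3 all
```

Whatever schedule you choose, keep the first-day-of-week requirement described next in mind.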
Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week. This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week. If you skip running Cedar Backup on the first day of the week, your backups will likely be confused until the next week begins, or until you re-run the backup using the appropriate option.

Step 2: Make sure email works.

Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job. Since by default Cedar Backup only writes output to the terminal if errors occur, this neatly ensures that notification emails will only be sent out if errors occur. In order to receive problem notifications, you must make sure that email works for the user that is running the Cedar Backup cron jobs (typically root). Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors.

Step 3: Configure your writer device.

Before using Cedar Backup, your writer device must be properly configured. If you have configured your CD/DVD writer hardware to work through the normal filesystem device path, then you just need to know the path to the device on disk (something like /dev/cdrw). Cedar Backup will use this device path both when talking to a command like cdrecord and when doing filesystem operations like running media validation. Your other option is to configure your CD writer hardware like a SCSI device (either because it is a SCSI device or because you are using some sort of interface that makes it look like one).
In this case, Cedar Backup will use the SCSI id when talking to cdrecord and the device path when running filesystem operations. See the Configuring your Writer Device section below for more information on writer devices and how they are configured. There is no need to set up your CD/DVD device if you have decided not to execute the store action. Due to the underlying utilities that Cedar Backup uses, the SCSI id may only be used for CD writers, not DVD writers.

Step 4: Configure your backup user.

Choose a user to be used for backups. Some platforms may come with a "ready made" backup user. For other platforms, you may have to create a user yourself. You may choose any id you like, but a descriptive name such as backup or cback is a good choice. See your distribution's documentation for information on how to add a user. Standard Debian systems come with a user named backup. You may choose to stay with this user or create another one.

Once you have created your backup user, you must create an SSH keypair for it. Log in as your backup user, and then run the command ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa:

user@machine> ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Created directory '/home/user/.ssh'.
Your identification has been saved in /home/user/.ssh/id_rsa.
Your public key has been saved in /home/user/.ssh/id_rsa.pub.
The key fingerprint is:
11:3e:ad:72:95:fe:96:dc:1e:3b:f4:cc:2c:ff:15:9e user@machine

The default permissions for this directory should be fine. However, if the directory existed before you ran ssh-keygen, then you may need to modify the permissions. Make sure that the ~/.ssh directory is readable only by the backup user (i.e. mode 700), that the ~/.ssh/id_rsa file is only readable and writable by the backup user (i.e. mode 600) and that the ~/.ssh/id_rsa.pub file is writable only by the backup user (i.e. mode 600 or mode 644). If you have other preferences or standard ways of setting up your users' SSH configuration (i.e.
different key type, etc.), feel free to do things your way. The important part is that the master must be able to SSH into a client with no password entry required.

Step 5: Create your backup tree.

Cedar Backup requires a backup directory tree on disk. This directory tree must be roughly large enough to hold twice as much data as will be backed up from the entire pool on a given night, plus space for whatever is collected on the master itself. This will allow for all three operations - collect, stage and store - to have enough space to complete. Note that if you elect not to purge the staging directory every night, you will need even more space. You should create a collect directory, a staging directory and a working (temporary) directory. One recommended layout is this:

/opt/
   backup/
      collect/
      stage/
      tmp/

If you will be backing up sensitive information (i.e. password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700. You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my dumping ground for filesystems that Debian does not manage. Some users have requested that the Debian packages set up a more standard location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package. If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp.

Step 6: Create the Cedar Backup configuration file.

Following the instructions above, create a configuration file for your machine. Since you are working with a master machine, you would typically configure all four action-specific sections: collect, stage, store and purge. Note that the master can treat itself as a client peer for certain actions.
As an example, if you run the collect action on the master, then you will stage that data by configuring a local peer representing the master. Something else to keep in mind is that you do not really have to run the collect action on the master. For instance, you may prefer to just use your master machine as a consolidation point machine that just collects data from the other client machines in a backup pool. In that case, there is no need to collect data on the master itself. The usual location for the Cedar Backup config file is /etc/cback3.conf. If you change the location, make sure you edit your cronjobs (below) to point the cback3 script at the correct config file (using the appropriate command-line option). Configuration files should always be writable only by root (or by the file owner, if the owner is not root). If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately. For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root).

Step 7: Validate the Cedar Backup configuration file.

Use the command cback3 validate to validate your configuration file. This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems, such as invalid CD/DVD device entries. This command only validates configuration on the master, not any clients that the master might be configured to connect to. Note: the most common cause of configuration problems is in not closing XML tags properly. Any XML tag that is opened must be closed appropriately.

Step 8: Test connectivity to client machines.

This step must wait until after your client machines have been at least partially configured. Once the backup user(s) have been configured on the client machine(s) in a pool, attempt an SSH connection to each client.
Log in as the backup user on the master, and then use the command ssh user@machine where user is the name of the backup user on the client machine, and machine is the name of the client machine. If you are able to log in successfully to each client without entering a password, then things have been configured properly. Otherwise, double-check that you followed the user setup instructions for the master and the clients.

Step 9: Test your backup.

Make sure that you have configured all of the clients in your backup pool. On all of the clients, execute cback3 --full collect. (You will probably have already tested this command on each of the clients, so it should succeed.) When all of the client backups have completed, place a valid CD/DVD disc in your drive, and then use the command cback3 --full all. You should execute this command as root. If the command completes with no output, then the backup was run successfully. Just to be sure that everything worked properly, check the logfile (/var/log/cback3.log) on the master and each of the clients, and also mount the CD/DVD disc on the master to be sure it can be read. You may also want to run cback3 purge on the master and each client once you have finished validating that everything worked. If Cedar Backup ever completes normally but the disc that is created is not usable, please report this as a bug. To be safe, always enable the consistency check option in the store configuration section.

Step 10: Modify the backup cron jobs.

Since Cedar Backup should be run as root, you should add a set of lines like this to your /etc/crontab file:

30 00 * * * root cback3 collect
30 02 * * * root cback3 stage
30 04 * * * root cback3 store
30 06 * * * root cback3 purge

You should consider adding a verbosity or debugging switch to your cback3 command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously.
You will need to coordinate the collect and purge actions on clients so that their collect actions complete before the master attempts to stage, and so that their purge actions do not begin until after the master has completed staging. Usually, allowing an hour or two between steps should be sufficient. For general information about using cron, see the manpage for crontab(5). On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup3. As installed, this file contains several different settings, all commented out. Uncomment the Master machine entries in the file, and change the lines so that the backup goes off when you want it to.

Configuring your Writer Device

Device Types

In order to execute the store action, you need to know how to identify your writer device. Cedar Backup supports two kinds of device types: CD writers and DVD writers. DVD writers are always referenced through a filesystem device name (i.e. /dev/dvd). CD writers can be referenced either through a SCSI id, or through a filesystem device name. Which you use depends on your operating system and hardware.

Devices identified by device name

For all DVD writers, and for CD writers on certain platforms, you will configure your writer device using only a device name. If your writer device works this way, you should just specify <target_device> in configuration. You can either leave <target_scsi_id> blank or remove it completely. The writer device will be used both to write to the device and for filesystem operations — for instance, when the media needs to be mounted to run the consistency check.

Devices identified by SCSI id

Cedar Backup can use devices identified by SCSI id only when configured to use the cdwriter device type. In order to use a SCSI device with Cedar Backup, you must know both the SCSI id <target_scsi_id> and the device name <target_device>.
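In configuration, these two values appear as sibling elements inside the store section. A fragment might look like the following (the values are examples drawn from the Linux discussion below; the surrounding store-section structure is omitted):

```xml
<target_device>/dev/cdrom</target_device>
<target_scsi_id>ATA:1,0,0</target_scsi_id>
```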
The SCSI id will be used to write to media using cdrecord; and the device name will be used for other filesystem operations. A true SCSI device will always have an address scsibus,target,lun (i.e. 1,6,2). This should hold true on most UNIX-like systems including Linux and the various BSDs (although I do not have a BSD system to test with currently). The SCSI address represents the location of your writer device on the one or more SCSI buses that you have available on your system. On some platforms, it is possible to reference non-SCSI writer devices (i.e. an IDE CD writer) using an emulated SCSI id. If you have configured your non-SCSI writer device to have an emulated SCSI id, provide the filesystem device path in <target_device> and the SCSI id in <target_scsi_id>, just like for a real SCSI device. You should note that in some cases, an emulated SCSI id takes the same form as a normal SCSI id, while in other cases you might see a method name prepended to the normal SCSI id (i.e. ATA:1,1,1).

Linux Notes

On a Linux system, IDE writer devices often have an emulated SCSI address, which allows SCSI-based software to access the device through an IDE-to-SCSI interface. Under these circumstances, the first IDE writer device typically has an address 0,0,0. However, support for the IDE-to-SCSI interface has been deprecated and is not well-supported in newer kernels (kernel 2.6.x and later). Newer Linux kernels can address ATA or ATAPI drives without SCSI emulation by prepending a method indicator to the emulated device address. For instance, ATA:0,0,0 or ATAPI:0,0,0 are typical values. However, even this interface is deprecated as of late 2006, so with relatively new kernels you may be better off using the filesystem device path directly rather than relying on any SCSI emulation.

Finding your Linux CD Writer

Here are some hints about how to find your Linux CD writer hardware.
First, try to reference your device using the filesystem device path:

cdrecord -prcap dev=/dev/cdrom

Running this command on my hardware gives output that looks like this (just the top few lines):

Device type    : Removable CD-ROM
Version        : 0
Response Format: 2
Capabilities   :
Vendor_info    : 'LITE-ON '
Identification : 'DVDRW SOHW-1673S'
Revision       : 'JS02'
Device seems to be: Generic mmc2 DVD-R/DVD-RW.
Drive capabilities, per MMC-3 page 2A:

If this works, and the identifying information at the top of the output looks like your CD writer device, you've probably found a working configuration. Place the device path into <target_device> and leave <target_scsi_id> blank. If this doesn't work, you should try to find an ATA or ATAPI device:

cdrecord -scanbus dev=ATA
cdrecord -scanbus dev=ATAPI

On my development system, I get a result that looks something like this for ATA:

scsibus1:
        1,0,0   100) 'LITE-ON ' 'DVDRW SOHW-1673S' 'JS02' Removable CD-ROM
        1,1,0   101) *
        1,2,0   102) *
        1,3,0   103) *
        1,4,0   104) *
        1,5,0   105) *
        1,6,0   106) *
        1,7,0   107) *

Again, if you get a result that you recognize, you have again probably found a working configuration. Place the associated device path (in my case, /dev/cdrom) into <target_device> and put the emulated SCSI id (in this case, ATA:1,0,0) into <target_scsi_id>. Any further discussion of how to configure your CD writer hardware is outside the scope of this document. If you have tried the hints above and still can't get things working, you may want to reference the Linux CDROM HOWTO or the ATA RAID HOWTO for more information.

Mac OS X Notes

On a Mac OS X (darwin) system, things get strange. Apple has abandoned traditional SCSI device identifiers in favor of a system-wide resource id. So, on a Mac, your writer device will have a name something like IOCompactDiscServices (for a CD writer) or IODVDServices (for a DVD writer). If you have multiple drives, the second drive probably has a number appended, i.e. IODVDServices/2 for the second DVD writer.
You can try to figure out what the name of your device is by grepping through the output of the command ioreg -l. (Thanks to the file README.macosX in the cdrtools-2.01+01a01 source tree for this information.) Unfortunately, even if you can figure out what device to use, I can't really support the store action on this platform. In OS X, the automount function of the Finder interferes significantly with Cedar Backup's ability to mount and unmount media and write to the CD or DVD hardware. The Cedar Backup writer and image functionality does work on this platform, but the effort required to fight the operating system about who owns the media and the device makes it nearly impossible to execute the store action successfully.

Optimized Blanking Strategy

When the optimized blanking strategy has not been configured, Cedar Backup uses a simplistic approach: rewritable media is blanked at the beginning of every week, period. Since rewritable media can be blanked only a finite number of times before becoming unusable, some users — especially users of rewritable DVD media with its large capacity — may prefer to blank the media less often. If the optimized blanking strategy is configured, Cedar Backup will use a blanking factor and attempt to determine whether future backups will fit on the current media. If it looks like backups will fit, then the media will not be blanked. This feature will only be useful (assuming a single disc is used for the whole week's backups) if the estimated total size of the weekly backup is considerably smaller than the capacity of the media (no more than 50% of the total media capacity), and only if the size of the backup can be expected to remain fairly constant over time (no frequent rapid growth expected). There are two blanking modes: daily and weekly. If the weekly blanking mode is set, Cedar Backup will only estimate future capacity (and potentially blank the disc) once per week, on the starting day of the week.
If the daily blanking mode is set, Cedar Backup will estimate future capacity (and potentially blank the disc) every time it is run. You should only use the daily blanking mode in conjunction with daily collect configuration, otherwise you will risk losing data. If you are using the daily blanking mode, you can typically set the blanking value to 1.0. This will cause Cedar Backup to blank the media whenever there is not enough space to store the current day's backup. If you are using the weekly blanking mode, then finding the correct blanking factor will require some experimentation. Cedar Backup estimates future capacity based on the configured blanking factor. The disc will be blanked if the following relationship is true:

bytes available / (1 + bytes required) ≤ blanking factor

Another way to look at this is to consider the blanking factor as a sort of (upper) backup growth estimate:

Total size of weekly backup / Full backup size at the start of the week

This ratio can be estimated using a week or two of previous backups. For instance, take this example, where March 10 is the start of the week and March 4 through March 9 represent the incremental backups from the previous week:

/opt/backup/staging# du -s 2007/03/*
3040    2007/03/01
3044    2007/03/02
6812    2007/03/03
3044    2007/03/04
3152    2007/03/05
3056    2007/03/06
3060    2007/03/07
3056    2007/03/08
4776    2007/03/09
6812    2007/03/10
11824   2007/03/11

In this case, the ratio is approximately 4:

(6812 + 3044 + 3152 + 3056 + 3060 + 3056 + 4776) / 6812 = 3.9571

To be safe, you might choose to configure a factor of 5.0. Setting a higher value reduces the risk of exceeding media capacity mid-week but might result in blanking the media more often than is necessary. If you run out of space mid-week, then the solution is to run the rebuild action. If this happens frequently, a higher blanking factor value should be used.
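The arithmetic above can be scripted when estimating a factor from historical staging data. This sketch hard-codes the du sizes from the example; in practice you would substitute the numbers from your own staging directories:

```shell
# Sizes (KiB) from the example staging directories: the full backup at the
# start of the previous week, plus the incrementals that followed it.
full=6812
incrementals="3044 3152 3056 3060 3056 4776"

# Total size of the weekly backup is the full backup plus all incrementals.
total=$full
for size in $incrementals; do
    total=$((total + size))
done

# Blanking factor estimate: total weekly backup / full backup size.
awk -v total="$total" -v full="$full" \
    'BEGIN { printf "factor = %.4f\n", total / full }'
```

This prints factor = 3.9571 for the example data, matching the hand calculation; rounding up (here, to 5.0) leaves a safety margin.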
CedarBackup3-3.1.6/manual/src/install.xml

Installation

Background

There are two different ways to install Cedar Backup. The easiest way is to install the pre-built Debian packages. This method is painless and ensures that all of the correct dependencies are available, etc. If you are running a Linux distribution other than Debian or you are running some other platform like FreeBSD or Mac OS X, then you must use the Python source distribution to install Cedar Backup. When using this method, you need to manage all of the dependencies yourself.

Non-Linux Platforms

Cedar Backup has been developed on a Debian GNU/Linux system and is primarily supported on Debian and other Linux systems. However, since it is written in portable Python 3, it should run without problems on just about any UNIX-like operating system. In particular, full Cedar Backup functionality is known to work on Debian and SuSE Linux systems, and client functionality is also known to work on FreeBSD and Mac OS X systems. To run a Cedar Backup client, you really just need a working Python 3 installation. To run a Cedar Backup master, you will also need a set of other executables, most of which are related to building and writing CD/DVD images. A full list of dependencies is provided further on in this chapter.

Installing on a Debian System

The easiest way to install Cedar Backup onto a Debian system is by using a tool such as apt-get or aptitude. If you are running a Debian release which contains Cedar Backup, you can use your normal Debian mirror as an APT data source. (The Debian jessie release is the first release to contain Cedar Backup 3.) Otherwise, you need to install from the Cedar Solutions APT data source. To do this, add the Cedar Solutions APT data source to your /etc/apt/sources.list file.
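A data source entry is a single line in /etc/apt/sources.list. The line below is a hypothetical illustration of the format; check the Cedar Solutions website for the actual URL, distribution, and component names to use:

```
deb http://cedar-solutions.com/debian official contrib
```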
After you have configured the proper APT data source, install Cedar Backup using this set of commands:

$ apt-get update
$ apt-get install cedar-backup3 cedar-backup3-doc

Several of the Cedar Backup dependencies are listed as recommended rather than required. If you are installing Cedar Backup on a master machine, you must install some or all of the recommended dependencies, depending on which actions you intend to execute. The stage action normally requires ssh, and the store action requires eject and either cdrecord/mkisofs or dvd+rw-tools. Clients must also install some sort of ssh server if a remote master will collect backups from them. If you would prefer, you can also download the .deb files and install them by hand with a tool such as dpkg. You can find these files in the Cedar Solutions APT source. In either case, once the package has been installed, you can proceed to configuration. The Debian package-management tools must generally be run as root. It is safe to install Cedar Backup to a non-standard location and run it as a non-root user. However, to do this, you must install the source distribution instead of the Debian package.

Installing from Source

On platforms other than Debian, Cedar Backup is installed from a Python source distribution. You will have to manage dependencies on your own. Many UNIX-like distributions provide an automatic or semi-automatic way to install packages like the ones Cedar Backup requires (think RPMs for Mandrake or RedHat, Gentoo's Portage system, the Fink project for Mac OS X, or the BSD ports system). If you are not sure how to install these packages on your system, you might want to check out the dependencies appendix. This appendix provides links to upstream source packages, plus as much information as I have been able to gather about packages for non-Debian platforms.

Installing Dependencies

Cedar Backup requires a number of external packages in order to function properly.
Before installing Cedar Backup, you must make sure that these dependencies are met. Cedar Backup is written in Python 3 and requires version 3.4 or greater of the language. Additionally, remote client peer nodes must be running an RSH-compatible server, such as the ssh server, and master nodes must have an RSH-compatible client installed if they need to connect to remote peer machines. Master machines also require several other system utilities, most having to do with writing and validating CD/DVD media. On master machines, you must make sure that these utilities are available if you want to run the store action:

mkisofs
eject
mount
umount
volname

Then, you need this utility if you are writing CD media:

cdrecord

or this utility if you are writing DVD media:

growisofs

All of these utilities are common and are easy to find for almost any UNIX-like operating system.

Installing the Source Package

Python source packages are fairly easy to install. They are distributed as .tar.gz files which contain Python source code, a manifest and an installation script called setup.py. Once you have downloaded the source package from the Cedar Solutions website, untar it:

$ zcat CedarBackup3-3.0.0.tar.gz | tar xvf -

This will create a directory called (in this case) CedarBackup3-3.0.0. The version number in the directory will always match the version number in the filename. If you have root access and want to install the package to the standard Python location on your system, then you can install the package in two simple steps:

$ cd CedarBackup3-3.0.0
$ python3 setup.py install

Make sure that you are using Python 3.4 or better to execute setup.py. You may also wish to run the unit tests before actually installing anything. Run them like so:

python3 util/test.py

If any unit test reports a failure on your system, please email me the output from the unit test, so I can fix the problem.
You can reach me at support@cedar-solutions.com. This is particularly important for non-Linux platforms where I do not have a test system available to me. Some users might want to choose a different install location or change other install parameters. To get more information about how setup.py works, use the help options:

$ python3 setup.py --help
$ python3 setup.py install --help

In any case, once the package has been installed, you can proceed to configuration.

CedarBackup3-3.1.6/manual/src/recovering.xml

Data Recovery

Finding your Data

The first step in data recovery is finding the data that you want to recover. You need to decide whether you are going to restore off backup media, or out of some existing staging data that has not yet been purged. The only difference is, if you purge staging data less frequently than once per week, you might have some data available in the staging directories which would not be found on your backup media, depending on how you rotate your media. (And of course, if your system is trashed or stolen, you probably will not have access to your old staging data in any case.) Regardless of the data source you choose, you will find the data organized in the same way. The remainder of these examples will work off an example backup disc, but the contents of the staging directory will look pretty much like the contents of the disc, with data organized first by date and then by backup peer name. This is the root directory of my example disc:

root:/mnt/cdrw# ls -l
total 4
drwxr-x--- 3 backup backup 4096 Sep 01 06:30 2005/

In this root directory is one subdirectory for each year represented in the backup. In this example, the backup represents data entirely from the year 2005. If your configured backup week happens to span a year boundary, there would be two subdirectories here (for example, one for 2005 and one for 2006).
Within each year directory is one subdirectory for each month represented in the backup. root:/mnt/cdrw/2005# ls -l total 2 dr-xr-xr-x 6 root root 2048 Sep 11 05:30 09/ In this example, the backup represents data entirely from the month of September, 2005. If your configured backup week happens to span a month boundary, there would be two subdirectories here (for example, one for August 2005 and one for September 2005). Within each month directory is one subdirectory for each day represented in the backup. root:/mnt/cdrw/2005/09# ls -l total 8 dr-xr-xr-x 5 root root 2048 Sep 7 05:30 07/ dr-xr-xr-x 5 root root 2048 Sep 8 05:30 08/ dr-xr-xr-x 5 root root 2048 Sep 9 05:30 09/ dr-xr-xr-x 5 root root 2048 Sep 11 05:30 11/ Depending on how far into the week your backup media is, you might have as few as one daily directory in here, or as many as seven. Within each daily directory is a stage indicator (indicating when the directory was staged) and one directory for each peer configured in the backup: root:/mnt/cdrw/2005/09/07# ls -l total 10 dr-xr-xr-x 2 root root 2048 Sep 7 02:31 host1/ -r--r--r-- 1 root root 0 Sep 7 03:27 cback.stage dr-xr-xr-x 2 root root 4096 Sep 7 02:30 host2/ dr-xr-xr-x 2 root root 4096 Sep 7 03:23 host3/ In this case, you can see that my backup includes three machines, and that the backup data was staged on September 7, 2005 at 03:27. Within the directory for a given host are all of the files collected on that host. This might just include tarfiles from a normal Cedar Backup collect run, and might also include files collected from Cedar Backup extensions or by other third-party processes on your system. 
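The year/month/day/peer layout described above is regular enough to navigate programmatically. The following is a minimal sketch, not part of Cedar Backup itself; the function name and arguments are hypothetical:

```python
import datetime
import os


def peer_directory(root, peer, when):
    """Build the path to a peer's data for a given date.

    Follows the year/month/day/peer layout described above, e.g.
    2005/09/07/host1 under the media mount point or staging root.
    """
    return os.path.join(root, "%04d" % when.year, "%02d" % when.month,
                        "%02d" % when.day, peer)


print(peer_directory("/mnt/cdrw", "host1", datetime.date(2005, 9, 7)))
# prints /mnt/cdrw/2005/09/07/host1
```

The same function works against either the mounted backup disc or a staging directory, since both use the same layout.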
root:/mnt/cdrw/2005/09/07/host1# ls -l total 157976 -r--r--r-- 1 root root 11206159 Sep 7 02:30 boot.tar.bz2 -r--r--r-- 1 root root 0 Sep 7 02:30 cback.collect -r--r--r-- 1 root root 3199 Sep 7 02:30 dpkg-selections.txt.bz2 -r--r--r-- 1 root root 908325 Sep 7 02:30 etc.tar.bz2 -r--r--r-- 1 root root 389 Sep 7 02:30 fdisk-l.txt.bz2 -r--r--r-- 1 root root 1003100 Sep 7 02:30 ls-laR.txt.bz2 -r--r--r-- 1 root root 19800 Sep 7 02:30 mysqldump.txt.bz2 -r--r--r-- 1 root root 4133372 Sep 7 02:30 opt-local.tar.bz2 -r--r--r-- 1 root root 44794124 Sep 8 23:34 opt-public.tar.bz2 -r--r--r-- 1 root root 30028057 Sep 7 02:30 root.tar.bz2 -r--r--r-- 1 root root 4747070 Sep 7 02:30 svndump-0:782-opt-svn-repo1.txt.bz2 -r--r--r-- 1 root root 603863 Sep 7 02:30 svndump-0:136-opt-svn-repo2.txt.bz2 -r--r--r-- 1 root root 113484 Sep 7 02:30 var-lib-jspwiki.tar.bz2 -r--r--r-- 1 root root 19556660 Sep 7 02:30 var-log.tar.bz2 -r--r--r-- 1 root root 14753855 Sep 7 02:30 var-mail.tar.bz2 As you can see, I back up a variety of different things on host1. I run the normal collect action, as well as the sysinfo, mysql and subversion extensions. The resulting backup files are named in a way that makes it easy to determine what they represent. Files of the form *.tar.bz2 represent directories backed up by the collect action. The first part of the name (before .tar.bz2) represents the path to the directory. For example, boot.tar.bz2 contains data from /boot, and var-lib-jspwiki.tar.bz2 contains data from /var/lib/jspwiki. The fdisk-l.txt.bz2, ls-laR.txt.bz2 and dpkg-selections.txt.bz2 files are produced by the sysinfo extension. The mysqldump.txt.bz2 file is produced by the mysql extension. It represents a system-wide database dump, because I use the all flag in configuration. If I were to configure Cedar Backup to dump individual databases, then the filename would contain the database name (something like mysqldump-bugs.txt.bz2). 
Finally, the files of the form svndump-*.txt.bz2 are produced by the subversion extension. There is one dump file for each configured repository, and the dump file name represents the name of the repository and the revisions in that dump. So, the file svndump-0:782-opt-svn-repo1.txt.bz2 represents revisions 0-782 of the repository at /opt/svn/repo1. You can tell that this file contains a full backup of the repository to this point, because the starting revision is zero. Later incremental backups would have a non-zero starting revision, i.e. perhaps 783-785, followed by 786-800, etc. Recovering Filesystem Data Filesystem data is gathered by the standard Cedar Backup collect action. This data is placed into files of the form *.tar. The first part of the name (before .tar), represents the path to the directory. For example, boot.tar would contain data from /boot, and var-lib-jspwiki.tar would contain data from /var/lib/jspwiki. (As a special case, data from the root directory would be placed in -.tar). Remember that your tarfile might have a bzip2 (.bz2) or gzip (.gz) extension, depending on what compression you specified in configuration. If you are using full backups every day, the latest backup data is always within the latest daily directory stored on your backup media or within your staging directory. If you have some or all of your directories configured to do incremental backups, then the first day of the week holds the full backups and the other days represent incremental differences relative to that first day of the week. Where to extract your backup If you are restoring a home directory or some other non-system directory as part of a full restore, it is probably fine to extract the backup directly into the filesystem. 
If you are restoring a system directory like /etc as part of a full restore, extracting directly into the filesystem is likely to break things, especially if you re-installed a newer version of your operating system than the one you originally backed up. It's better to extract directories like this to a temporary location and pick out only the files you find you need. When doing a partial restore, I suggest always extracting to a temporary location. Doing it this way gives you more control over what you restore, and helps you avoid compounding your original problem with another one (like overwriting the wrong file, oops). Full Restore To do a full system restore, find the newest applicable full backup and extract it. If you have some incremental backups, extract them into the same place as the full backup, one by one starting from oldest to newest. (This way, if a file changed every day you will always get the latest one.) All of the backed-up files are stored in the tar file in a relative fashion, so you can extract from the tar file either directly into the filesystem, or into a temporary location. For example, to restore boot.tar.bz2 directly into /boot, execute tar from your root directory (/): root:/# bzcat boot.tar.bz2 | tar xvf - Of course, use zcat or just cat, depending on what kind of compression is in use. If you want to extract boot.tar.bz2 into a temporary location like /tmp/boot instead, just change directories first. In this case, you'd execute the tar command from within /tmp instead of /. root:/tmp# bzcat boot.tar.bz2 | tar xvf - Again, use zcat or just cat as appropriate. For more information, you might want to check out the manpage or GNU info documentation for the tar command. Partial Restore Most users will need to do a partial restore much more frequently than a full restore. Perhaps you accidentally removed your home directory, or forgot to check in some version of a file before deleting it. 
Or, perhaps the person who packaged Apache for your system blew away your web server configuration on upgrade (it happens). The solution to these and other kinds of problems is a partial restore (assuming you've backed up the proper things). The procedure is similar to a full restore. The specific steps depend on how much information you have about the file you are looking for. Where with a full restore, you can confidently extract the full backup followed by each of the incremental backups, this might not be what you want when doing a partial restore. You may need to take more care in finding the right version of a file — since the same file, if changed frequently, would appear in more than one backup. Start by finding the backup media that contains the file you are looking for. If you rotate your backup media, and your last known contact with the file was a while ago, you may need to look on older media to find it. This may take some effort if you are not sure when the change you are trying to correct took place. Once you have decided to look at a particular piece of backup media, find the correct peer (host), and look for the file in the full backup: root:/tmp# bzcat boot.tar.bz2 | tar tvf - path/to/file Of course, use zcat or just cat, depending on what kind of compression is in use. The t option tells tar to search for the file in question and just list the results rather than extracting the file. Note that the filename is relative (with no starting /). Alternately, you can omit the path/to/file and search through the output using more or less. If you haven't found what you are looking for, work your way through the incremental files for the directory in question. One of them may also have the file if it changed during the course of the backup. Or, move to older or newer media and see if you can find the file there. Once you have found your file, extract it: root:/tmp# bzcat boot.tar.bz2 | tar xvf - path/to/file Again, use zcat or just cat as appropriate. 
Inspect the file and make sure it's what you're looking for. Again, you may need to move to older or newer media to find the exact version of your file. For more information, you might want to check out the manpage or GNU info documentation for the tar command. Recovering MySQL Data MySQL data is gathered by the Cedar Backup mysql extension. This extension always creates a full backup each time it runs. This wastes some space, but makes it easy to restore database data. The following procedure describes how to restore your MySQL database from the backup. I am not a MySQL expert. I am providing this information for reference. I have tested these procedures on my own MySQL installation; however, I only have a single database for use by Bugzilla, and I may have misunderstood something with regard to restoring individual databases as a user other than root. If you have any doubts, test the procedure below before relying on it! MySQL experts and/or knowledgeable Cedar Backup users: feel free to write me and correct any part of this procedure. First, find the backup you are interested in. If you have specified all databases in configuration, you will have a single backup file, called mysqldump.txt. If you have specified individual databases in configuration, then you will have files with names like mysqldump-database.txt instead. In either case, your file might have a .gz or .bz2 extension depending on what kind of compression you specified in configuration. If you are restoring an all databases backup, make sure that you have correctly created the root user and know its password. Then, execute: daystrom:/# bzcat mysqldump.txt.bz2 | mysql -p -u root Of course, use zcat or just cat, depending on what kind of compression is in use. Because the database backup includes CREATE DATABASE SQL statements, this command should take care of creating all of the databases within the backup, as well as populating them. 
If you are restoring a backup for a specific database, you have two choices. If you have a root login, you can use the same command as above: daystrom:/# bzcat mysqldump-database.txt.bz2 | mysql -p -u root Otherwise, you can create the database and its login first (or have someone create it) and then use a database-specific login to execute the restore: daystrom:/# bzcat mysqldump-database.txt.bz2 | mysql -p -u user database Again, use zcat or just cat as appropriate. For more information on using MySQL, see the documentation on the MySQL web site, , or the manpages for the mysql and mysqldump commands. Recovering Subversion Data Subversion data is gathered by the Cedar Backup subversion extension. Cedar Backup will create either full or incremental backups, but the procedure for restoring is the same for both. Subversion backups are always taken on a per-repository basis. If you need to restore more than one repository, follow the procedures below for each repository you are interested in. First, find the backup or backups you are interested in. Typically, you will need the full backup from the first day of the week and each incremental backup from the other days of the week. The subversion extension creates files of the form svndump-*.txt. These files might have a .gz or .bz2 extension depending on what kind of compression you specified in configuration. There is one dump file for each configured repository, and the dump file name represents the name of the repository and the revisions in that dump. So, the file svndump-0:782-opt-svn-repo1.txt.bz2 represents revisions 0-782 of the repository at /opt/svn/repo1. You can tell that this file contains a full backup of the repository to this point, because the starting revision is zero. Later incremental backups would have a non-zero starting revision, i.e. perhaps 783-785, followed by 786-800, etc. 
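Given the svndump-START:END-path naming convention described above, you can sort a repository's dump files into the order that svnadmin load expects (full backup first, then incrementals by starting revision). A minimal sketch; the helper function is hypothetical, not part of Cedar Backup:

```python
import re


def dump_order(filenames):
    """Sort svndump files for one repository into svnadmin load order.

    Parses the revision range out of names like
    svndump-783:785-opt-svn-repo1.txt.bz2 and orders by starting revision,
    so the full backup (starting revision 0) comes first.
    """
    pattern = re.compile(r"^svndump-(\d+):(\d+)-")

    def start_revision(name):
        match = pattern.match(name)
        if match is None:
            raise ValueError("not a svndump file: %s" % name)
        return int(match.group(1))

    return sorted(filenames, key=start_revision)


files = ["svndump-786:800-opt-svn-repo1.txt.bz2",
         "svndump-0:782-opt-svn-repo1.txt.bz2",
         "svndump-783:785-opt-svn-repo1.txt.bz2"]
print(dump_order(files)[0])  # the full backup sorts to the front
```

You would still pipe each file through bzcat (or zcat, or cat) into svnadmin load by hand, as shown below; the sketch only establishes the order.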
Next, if you still have the old Subversion repository around, you might want to just move it off (rename the top-level directory) before executing the restore. Or, you can restore into a temporary directory and rename it later to its real name once you've checked it out. That is what my example below will show. Next, you need to create a new Subversion repository to hold the restored data. This example shows an FSFS repository, but that is an arbitrary choice. You can restore from an FSFS backup into a FSFS repository or a BDB repository. The Subversion dump format is backend-agnostic. root:/tmp# svnadmin create --fs-type=fsfs testrepo Next, load the full backup into the repository: root:/tmp# bzcat svndump-0:782-opt-svn-repo1.txt.bz2 | svnadmin load testrepo Of course, use zcat or just cat, depending on what kind of compression is in use. Follow that with loads for each of the incremental backups: root:/tmp# bzcat svndump-783:785-opt-svn-repo1.txt.bz2 | svnadmin load testrepo root:/tmp# bzcat svndump-786:800-opt-svn-repo1.txt.bz2 | svnadmin load testrepo Again, use zcat or just cat as appropriate. When this is done, your repository will be restored to the point of the last commit indicated in the svndump file (in this case, to revision 800). Note: don't be surprised if, when you test this, the restored directory doesn't have exactly the same contents as the original directory. I can't explain why this happens, but if you execute svnadmin dump on both old and new repositories, the results are identical. This means that the repositories do contain the same content. For more information on using Subversion, see the book Version Control with Subversion () or the Subversion FAQ (). Recovering Mailbox Data Mailbox data is gathered by the Cedar Backup mbox extension. Cedar Backup will create either full or incremental backups, but both kinds of backups are treated identically when restoring. 
Individual mbox files and mbox directories are treated a little differently, since individual files are just compressed, but directories are collected into a tar archive. First, find the backup or backups you are interested in. Typically, you will need the full backup from the first day of the week and each incremental backup from the other days of the week. The mbox extension creates files of the form mbox-*. Backup files for individual mbox files might have a .gz or .bz2 extension depending on what kind of compression you specified in configuration. Backup files for mbox directories will have a .tar, .tar.gz or .tar.bz2 extension, again depending on what kind of compression you specified in configuration. There is one backup file for each configured mbox file or directory. The backup file name represents the name of the file or directory and the date it was backed up. So, the file mbox-20060624-home-user-mail-greylist represents the backup for /home/user/mail/greylist run on 24 Jun 2006. Likewise, mbox-20060624-home-user-mail.tar represents the backup for the /home/user/mail directory run on that same date. Once you have found the files you are looking for, the restoration procedure is fairly simple. First, concatenate all of the backup files together. Then, use grepmail to eliminate duplicate messages (if any). Here is an example for a single backed-up file: root:/tmp# rm restore.mbox # make sure it's not left over root:/tmp# cat mbox-20060624-home-user-mail-greylist >> restore.mbox root:/tmp# cat mbox-20060625-home-user-mail-greylist >> restore.mbox root:/tmp# cat mbox-20060626-home-user-mail-greylist >> restore.mbox root:/tmp# grepmail -a -u restore.mbox > nodups.mbox At this point, nodups.mbox contains all of the backed-up messages from /home/user/mail/greylist. Of course, if your backups are compressed, you'll have to use zcat or bzcat rather than just cat. 
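If grepmail is not available, the same duplicate elimination can be approximated with Python's standard mailbox module, keying on Message-ID headers. This is a rough sketch under that assumption (grepmail's duplicate matching is more sophisticated, and the function name here is hypothetical):

```python
import mailbox


def merge_unique(source_paths, target_path):
    """Append messages from each source mbox into the target mbox,
    skipping any message whose Message-ID has already been seen.
    Messages without a Message-ID are always kept, to be safe."""
    seen = set()
    target = mailbox.mbox(target_path)
    try:
        for path in source_paths:
            for message in mailbox.mbox(path):
                message_id = message["Message-ID"]
                if message_id is not None and message_id in seen:
                    continue  # duplicate of a copy from an earlier backup
                if message_id is not None:
                    seen.add(message_id)
                target.add(message)
        target.flush()
    finally:
        target.close()
```

As with the cat-based procedure, decompress any .gz or .bz2 backup files first, and list the source files oldest to newest.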
If you are backing up mbox directories rather than individual files, see the filesystem instructions for notes on how to extract the individual files from inside tar archives. Extract the files you are interested in, and then concatenate them together just like shown above for the individual case. Recovering Data split by the Split Extension The Split extension takes large files and splits them up into smaller files. Typically, it would be used in conjunction with the cback3-span command. The split up files are not difficult to work with. Simply find all of the files — which could be split between multiple discs — and concatenate them together. root:/tmp# rm usr-src-software.tar.gz # make sure it's not there root:/tmp# cat usr-src-software.tar.gz_00001 >> usr-src-software.tar.gz root:/tmp# cat usr-src-software.tar.gz_00002 >> usr-src-software.tar.gz root:/tmp# cat usr-src-software.tar.gz_00003 >> usr-src-software.tar.gz Then, use the resulting file like usual. Remember, you need to have all of the files that the original large file was split into before this will work. If you are missing a file, the result of the concatenation step will be either a corrupt file or a truncated file (depending on which chunks you did not include). Extension Architecture Interface The Cedar Backup Extension Architecture Interface is the application programming interface used by third-party developers to write Cedar Backup extensions. This appendix briefly specifies the interface in enough detail for someone to successfully implement an extension. You will recall that Cedar Backup extensions are third-party pieces of code which extend Cedar Backup's functionality. Extensions can be invoked from the Cedar Backup command line and are allowed to place their configuration in Cedar Backup's configuration file. 
There is a one-to-one mapping between a command-line extended action and an extension function. The mapping is configured in the Cedar Backup configuration file using a section something like this: <extensions> <action> <name>database</name> <module>foo</module> <function>bar</function> <index>101</index> </action> </extensions> In this case, the action database has been mapped to the extension function foo.bar(). Extension functions can take any actions they would like to once they have been invoked, but must abide by these rules: Extensions may not write to stdout or stderr using functions such as print or sys.write. All logging must take place using the Python logging facility. Flow-of-control logging should happen on the CedarBackup3.log topic. Authors can assume that ERROR will always go to the terminal, that INFO and WARN will always be logged, and that DEBUG will be ignored unless debugging is enabled. Any time an extension invokes a command-line utility, it must be done through the CedarBackup3.util.executeCommand function. This will help keep Cedar Backup safer from format-string attacks, and will make it easier to consistently log command-line process output. Extensions may not return any value. Extensions must throw a Python exception containing a descriptive message if processing fails. Extension authors can use their judgement as to what constitutes failure; however, any problems during execution should result in either a thrown exception or a logged message. Extensions may rely only on Cedar Backup functionality that is advertised as being part of the public interface. This means that extensions cannot directly make use of methods, functions or values starting with the _ character. Furthermore, extensions should only rely on parts of the public interface that are documented in the online Epydoc documentation. Extension authors are encouraged to extend the Cedar Backup public interface through normal methods of inheritance. 
However, no extension is allowed to directly change Cedar Backup code in a way that would affect how Cedar Backup itself executes when the extension has not been invoked. For instance, extensions would not be allowed to add new command-line options or new writer types. Extensions must be written to assume an empty locale set (no $LC_* settings) and $LANG=C. For the typical open-source software project, this would imply writing output-parsing code against the English localization (if any). The executeCommand function does sanitize the environment to enforce this configuration. Extension functions take three arguments: the path to configuration on disk, a CedarBackup3.cli.Options object representing the command-line options in effect, and a CedarBackup3.config.Config object representing parsed standard configuration. def function(configPath, options, config): """Sample extension function.""" pass This interface is structured so that simple extensions can use standard configuration without having to parse it for themselves, but more complicated extensions can get at the configuration file on disk and parse it again as needed. The interface to the CedarBackup3.cli.Options and CedarBackup3.config.Config classes has been thoroughly documented using Epydoc, and the documentation is available on the Cedar Backup website. The interface is guaranteed to change only in backwards-compatible ways unless the Cedar Backup major version number is bumped (i.e. from 2 to 3). If an extension needs to add its own configuration information to the Cedar Backup configuration file, this extra configuration must be added in a new configuration section using a name that does not conflict with standard configuration or other known extensions. For instance, our hypothetical database extension might require configuration indicating the path to some repositories to back up. 
This information might go into a section something like this: <database> <repository>/path/to/repo1</repository> <repository>/path/to/repo2</repository> </database> In order to read this new configuration, the extension code can either inherit from the Config object and create a subclass that knows how to parse the new database config section, or can write its own code to parse whatever it needs out of the file. Either way, the resulting code is completely independent of the standard Cedar Backup functionality. Basic Concepts General Architecture Cedar Backup is architected as a Python package (library) and a single executable (a Python script). The Python package provides both application-specific code and general utilities that can be used by programs other than Cedar Backup. It also includes modules that can be used by third parties to extend Cedar Backup or provide related functionality. The cback3 script is designed to run as root, since otherwise it's difficult to back up system directories or write to the CD/DVD device. However, pains are taken to use the backup user's effective user id (specified in configuration) when appropriate. Note: this does not mean that cback3 runs setuid or setgid. However, all files on disk will be owned by the backup user, and all rsh-based network connections will take place as the backup user. The cback3 script is configured via command-line options and an XML configuration file on disk. The configuration file is normally stored in /etc/cback3.conf, but this path can be overridden at runtime. See for more information on how Cedar Backup is configured. You should be aware that backups to CD/DVD media can probably be read by any user which has permissions to mount the CD/DVD writer. 
If you intend to leave the backup disc in the drive at all times, you may want to consider this when setting up device permissions on your machine. See also . Data Recovery Cedar Backup does not include any facility to restore backups. Instead, it assumes that the administrator (using the procedures and references in ) can handle the task of restoring their own system, using the standard system tools at hand. If I were to maintain recovery code in Cedar Backup, I would almost certainly end up in one of two situations. Either Cedar Backup would only support simple recovery tasks, and those via an interface a lot like that of the underlying system tools; or Cedar Backup would have to include a hugely complicated interface to support more specialized (and hence useful) recovery tasks like restoring individual files as of a certain point in time. In either case, I would end up trying to maintain critical functionality that would be rarely used, and hence would also be rarely tested by end-users. I am uncomfortable asking anyone to rely on functionality that falls into this category. My primary goal is to keep the Cedar Backup codebase as simple and focused as possible. I hope you can understand how the choice of providing documentation, but not code, seems to strike the best balance between managing code complexity and providing the functionality that end-users need. Cedar Backup Pools There are two kinds of machines in a Cedar Backup pool. One machine (the master) has a CD or DVD writer on it and writes the backup to disc. The others (clients) collect data to be written to disc by the master. Collectively, the master and client machines in a pool are called peer machines. Cedar Backup has been designed primarily for situations where there is a single master and a set of other clients that the master interacts with. However, it will just as easily work for a single machine (a backup pool of one) and in fact more users seem to use it like this than any other way. 
The Backup Process The Cedar Backup backup process is structured in terms of a set of decoupled actions which execute independently (based on a schedule in cron) rather than through some highly coordinated flow of control. This design decision has both positive and negative consequences. On the one hand, the code is much simpler and can choose to simply abort or log an error if its expectations are not met. On the other hand, the administrator must coordinate the various actions during initial set-up. See (later in this chapter) for more information on this subject. A standard backup run consists of four steps (actions), some of which execute on the master machine, and some of which execute on one or more client machines. These actions are: collect, stage, store and purge. In general, more than one action may be specified on the command-line. If more than one action is specified, then actions will be taken in a sensible order (generally collect, stage, store, purge). A special all action is also allowed, which implies all of the standard actions in the same sensible order. The cback3 command also supports several actions that are not part of the standard backup run and cannot be executed along with any other actions. These actions are validate, initialize and rebuild. All of the various actions are discussed further below. See for more information on how a backup run is configured. Flexibility Cedar Backup was designed to be flexible. It allows you to decide for yourself which backup steps you care about executing (and when you execute them), based on your own situation and your own priorities. As an example, I always back up every machine I own. I typically keep 7-10 days of staging directories around, but switch CD/DVD media mostly every week. That way, I can periodically take a disc off-site in case the machine gets stolen or damaged. If you're not worried about these risks, then there's no need to write to disc. 
In fact, some users prefer to use their master machine as a simple consolidation point. They don't back up any data on the master, and don't write to disc at all. They just use Cedar Backup to handle the mechanics of moving backed-up data to a central location. This isn't quite what Cedar Backup was written to do, but it is flexible enough to meet their needs. The Collect Action The collect action is the first action in a standard backup run. It executes on both master and client nodes. Based on configuration, this action traverses the peer's filesystem and gathers files to be backed up. Each configured high-level directory is collected up into its own tar file in the collect directory. The tarfiles can either be uncompressed (.tar) or compressed with either gzip (.tar.gz) or bzip2 (.tar.bz2). There are three supported collect modes: daily, weekly and incremental. Directories configured for daily backups are backed up every day. Directories configured for weekly backups are backed up on the first day of the week. Directories configured for incremental backups are traversed every day, but only the files which have changed (based on a saved-off SHA hash) are actually backed up. Collect configuration also allows for a variety of ways to filter files and directories out of the backup. For instance, administrators can configure an ignore indicator file (analogous to .cvsignore in CVS) or specify absolute paths or filename patterns (in terms of Python regular expressions) to be excluded. You can even configure a backup link farm rather than explicitly listing files and directories in configuration. This action is optional on the master. You only need to configure and execute the collect action on the master if you have data to back up on that machine. If you plan to use the master only as a consolidation point to collect data from other machines, then there is no need to execute the collect action there. 
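The incremental collect mode described above amounts to a digest comparison: hash each file, compare against the digests saved by the previous run, and collect only the differences. The following is a simplified sketch of that idea, not Cedar Backup's actual implementation (whose internals differ); the function names are hypothetical:

```python
import hashlib


def digest(path):
    """Hash a file's contents; any strong hash works for change detection."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(65536), b""):
            h.update(block)
    return h.hexdigest()


def changed_files(current_digests, saved_digests):
    """Return files whose digest differs from the saved one, plus files
    never seen before; these are what an incremental pass would back up."""
    return sorted(path for path, d in current_digests.items()
                  if saved_digests.get(path) != d)
```

On the first day of the week there are no saved digests, so every file differs and the pass degenerates to a full backup, matching the behavior described above.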
If you run the collect action on the master, it behaves the same there as anywhere else, and you have to stage the master's collected data just like any other client (typically by configuring a local peer in the stage action). The Stage Action The stage action is the second action in a standard backup run. It executes on the master peer node. The master works down the list of peers in its backup pool and stages (copies) the collected backup files from each of them into a daily staging directory by peer name. For the purposes of this action, the master node can be configured to treat itself as a client node. If you intend to back up data on the master, configure the master as a local peer. Otherwise, just configure each of the clients as a remote peer. Local and remote client peers are treated differently. Local peer collect directories are assumed to be accessible via normal copy commands (i.e. on a mounted filesystem) while remote peer collect directories are accessed via an RSH-compatible command such as ssh. If a given peer is not ready to be staged, the stage process will log an error, abort the backup for that peer, and then move on to its other peers. This way, one broken peer cannot break a backup for other peers which are up and running. Keep in mind that Cedar Backup is flexible about what actions must be executed as part of a backup. If you would prefer, you can stop the backup process at this step, and skip the store step. In this case, the staged directories will represent your backup rather than a disc. Directories collected by another process can be staged by Cedar Backup. If the file cback.collect exists in a collect directory when the stage action is taken, then that directory will be staged. The Store Action The store action is the third action in a standard backup run. It executes on the master peer node. The master machine determines the location of the current staging directory, and then writes the contents of that staging directory to disc. 
After the contents of the directory have been written to disc, an optional validation step ensures that the write was successful. If the backup is running on the first day of the week, if the drive does not support multisession discs, or if the --full option is passed to the cback3 command, the disc will be rebuilt from scratch. Otherwise, a new ISO session will be added to the disc each day the backup runs. This action is entirely optional. If you would prefer to just stage backup data from a set of peers to a master machine, and have the staged directories represent your backup rather than a disc, this is fine. The store action is not supported on the Mac OS X (darwin) platform. On that platform, the automount function of the Finder interferes significantly with Cedar Backup's ability to mount and unmount media and write to the CD or DVD hardware. The Cedar Backup writer and image functionality works on this platform, but the effort required to fight the operating system about who owns the media and the device makes it nearly impossible to execute the store action successfully. Current Staging Directory The store action tries to be smart about finding the current staging directory. It first checks the current day's staging directory. If that directory exists, and it has not yet been written to disc (i.e. there is no store indicator), then it will be used. Otherwise, the store action will look for an unused staging directory for either the previous day or the next day, in that order. A warning will be written to the log under these circumstances (controlled by the <warn_midnite> configuration value). This behavior varies slightly when the --full option is in effect. Under these circumstances, any existing store indicator will be ignored. Also, the store action will always attempt to use the current day's staging directory, ignoring any staging directories for the previous day or the next day.
This way, running a full store action more than once concurrently will always produce the same results. (You might imagine a use case where a person wants to make several copies of the same full backup.) The Purge Action The purge action is the fourth and final action in a standard backup run. It executes both on the master and client peer nodes. Configuration specifies how long to retain files in certain directories, and older files and empty directories are purged. Typically, collect directories are purged daily, and stage directories are purged weekly or slightly less often (if a disc gets corrupted, older backups may still be available on the master). Some users also choose to purge the configured working directory (which is used for temporary files) to eliminate any leftover files which might have resulted from changes to configuration. The All Action The all action is a pseudo-action which causes all of the actions in a standard backup run to be executed together in order. It cannot be combined with any other actions on the command line. Extensions cannot be executed as part of the all action. If you need to execute an extended action, you must specify the other actions you want to run individually on the command line. Some users find this surprising, because extensions are configured with sequence numbers. I did it this way because I felt that running extensions as part of the all action would sometimes result in surprising behavior. I am not planning to change the way this works. The all action does not have its own configuration. Instead, it relies on the individual configuration sections for all of the other actions. The Validate Action The validate action is used to validate configuration on a particular peer node, either master or client. It cannot be combined with any other actions on the command line. 
The validate action checks that the configuration file can be found, that the configuration file is valid, and that certain portions of the configuration file make sense (for instance, making sure that specified users exist, directories are readable and writable as necessary, etc.). The Initialize Action The initialize action is used to initialize media for use with Cedar Backup. This is an optional step. By default, Cedar Backup does not need to use initialized media and will write to whatever media exists in the writer device. However, if the check media store configuration option is set to true, Cedar Backup will check the media before writing to it and will error out if the media has not been initialized. Initializing the media consists of writing a mostly-empty image using a known media label (the media label will begin with CEDAR BACKUP). Note that only rewritable media (CD-RW, DVD+RW) can be initialized. It doesn't make any sense to initialize media that cannot be rewritten (CD-R, DVD+R), since Cedar Backup would then not be able to use that media for a backup. You can still configure Cedar Backup to check non-rewritable media; in this case, the check will also pass if the media is apparently unused (i.e. has no media label). The Rebuild Action The rebuild action is an exception-handling action that is executed independent of a standard backup run. It cannot be combined with any other actions on the command line. The rebuild action attempts to rebuild this week's disc from any remaining unpurged staging directories. Typically, it is used to make a copy of a backup, replace lost or damaged media, or to switch to new media mid-week for some other reason. To decide what data to write to disc again, the rebuild action looks back and finds the first day of the current week. Then, it finds any remaining staging directories between that date and the current date. If any staging directories are found, they are all written to disc in one big ISO session. 
The rebuild action does not have its own configuration. It relies on configuration for other actions, especially the store action. Coordination between Master and Clients Unless you are using Cedar Backup to manage a pool of one, you will need to set up some coordination between your clients and master to make everything work properly. This coordination isn't difficult — it mostly consists of making sure that operations happen in the right order — but some users are surprised that it is required and want to know why Cedar Backup can't just take care of it for them. Essentially, each client must finish collecting all of its data before the master begins staging it, and the master must finish staging data from a client before that client purges its collected data. Administrators may need to experiment with the time between the collect and purge entries so that the master has enough time to stage data before it is purged. Managed Backups Cedar Backup also supports an optional feature called the managed backup. This feature is intended for use with remote clients where cron is not available. When managed backups are enabled, managed clients must still be configured as usual. However, rather than using a cron job on the client to execute the collect and purge actions, the master executes these actions on the client via a remote shell. To make this happen, first set up one or more managed clients in Cedar Backup configuration. Then, invoke Cedar Backup with the --managed command-line option. Whenever Cedar Backup invokes an action locally, it will invoke the same action on each of the managed clients. Technically, this feature works for any client, not just clients that don't have cron available. Used this way, it can simplify the setup process, because cron only has to be configured on the master. For some users, that may be motivation enough to use this feature all of the time. However, please keep in mind that this feature depends on a stable network.
If your network connection drops, your backup will be interrupted and will not be complete. It is even possible that some of the Cedar Backup metadata (like incremental backup state) will be corrupted. The risk is not high, but it is something you need to be aware of if you choose to use this optional feature. Media and Device Types Cedar Backup is focused on writing backups to CD or DVD media using a standard SCSI or IDE writer. In Cedar Backup terms, the disc itself is referred to as the media, and the CD/DVD drive is referred to as the device or sometimes the backup device. My original backup device was an old Sony CRX140E 4X CD-RW drive. It has since died, and I currently develop using a Lite-On 1673S DVD±RW drive. When using a new enough backup device, a new multisession ISO image is written to the media on the first day of the week, and then additional multisession images are added to the media each day that Cedar Backup runs. (An ISO image is the standard way of creating a filesystem to be copied to a CD or DVD. It is essentially a filesystem-within-a-file, and many UNIX operating systems can actually mount ISO image files just like hard drives, floppy disks or actual CDs.) This way, the media is complete and usable at the end of every backup run, but a single disc can be used all week long. If your backup device does not support multisession images — which is really unusual today — then a new ISO image will be written to the media each time Cedar Backup runs (and you should probably confine yourself to the daily backup mode to avoid losing data). Cedar Backup currently supports four different kinds of CD media:

   cdr-74     74-minute non-rewritable CD media
   cdrw-74    74-minute rewritable CD media
   cdr-80     80-minute non-rewritable CD media
   cdrw-80    80-minute rewritable CD media

I have chosen to support just these four types of CD media because they seem to be the most standard of the various types commonly sold in the U.S. as of this writing (early 2005). If you regularly use an unsupported media type and would like Cedar Backup to support it, send me information about the capacity of the media in megabytes (MB) and whether it is rewritable. Cedar Backup also supports two kinds of DVD media:

   dvd+r      Single-layer non-rewritable DVD+R media
   dvd+rw     Single-layer rewritable DVD+RW media

The underlying growisofs utility does support other kinds of media (including DVD-R, DVD-RW and Blu-ray) which work somewhat differently than standard DVD+R and DVD+RW media. I don't support these other kinds of media because I haven't had any opportunity to work with them. The same goes for dual-layer media of any type. Incremental Backups Cedar Backup supports three different kinds of backups for individual collect directories. These are daily, weekly and incremental backups. Directories using the daily mode are backed up every day. Directories using the weekly mode are only backed up on the first day of the week, or when the --full option is used. Directories using the incremental mode are always backed up on the first day of the week (like a weekly backup), but after that only the files which have changed are actually backed up on a daily basis. In Cedar Backup, incremental backups are not based on date, but are instead based on saved checksums, one for each backed-up file. When a full backup is run, Cedar Backup gathers a checksum value (actually an SHA cryptographic hash) for each backed-up file. The next time an incremental backup is run, Cedar Backup checks its list of file/checksum pairs for each file that might be backed up. If the file's checksum value does not match the saved value, or if the file does not appear in the list of file/checksum pairs, then it will be backed up and a new checksum value will be placed into the list. Otherwise, the file will be ignored and the checksum value will be left unchanged.
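The file/checksum comparison described above can be sketched as follows. This is a simplified illustration, not Cedar Backup's actual code; it assumes SHA-1 via Python's hashlib, and the function names are invented:

```python
import hashlib

def sha_digest(path):
    """Compute the SHA hash of a file's contents, reading in blocks."""
    sha = hashlib.sha1()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(65536), b""):
            sha.update(block)
    return sha.hexdigest()

def select_incremental(paths, saved):
    """Return the files to back up, updating the saved file/checksum map.

    A file is backed up if it is new or its checksum no longer matches
    the saved value; otherwise it is skipped and its saved checksum is
    left unchanged.
    """
    selected = []
    for path in paths:
        digest = sha_digest(path)
        if saved.get(path) != digest:
            saved[path] = digest      # new or changed: record and back up
            selected.append(path)
    return selected
```

Resetting the saved map at the start of the week is what turns the first incremental run of the week into a full backup.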
Cedar Backup stores the file/checksum pairs in .sha files in its working directory, one file per configured collect directory. The mappings in these files are reset at the start of the week or when the --full option is used. Because these files are used for an entire week, you should never purge the working directory more frequently than once per week. Extensions Imagine that there is a third party developer who understands how to back up a certain kind of database repository. This third party might want to integrate his or her specialized backup into the Cedar Backup process, perhaps thinking of the database backup as a sort of collect step. Prior to Cedar Backup version 2, any such integration would have been completely independent of Cedar Backup itself. The external backup functionality would have had to maintain its own configuration and would not have had access to any Cedar Backup configuration. Starting with version 2, Cedar Backup allows extensions to the backup process. An extension is an action that isn't part of the standard backup process (i.e. not collect, stage, store or purge), but can be executed by Cedar Backup when properly configured. Extension authors implement an action process function with a certain interface, and are allowed to add their own sections to the Cedar Backup configuration file, so that all backup configuration can be centralized. Then, the action process function is associated with an action name which can be executed from the cback3 command line like any other action. Hopefully, as the Cedar Backup user community grows, users will contribute their own extensions back to the community. Well-written general-purpose extensions will be accepted into the official codebase. Users should see for more information on how extensions are configured, and for details on all of the officially-supported extensions. Developers may be interested in .
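As a rough illustration, an extension's action process function might look like the sketch below. Everything here is an assumption for illustration: the function name, the "database" scenario, and the signature (taking the configuration path, the parsed command-line options, and the parsed configuration) are invented, since the exact interface is documented elsewhere:

```python
import logging

logger = logging.getLogger(__name__)

def backupDatabase(configPath, options, config):
    """Hypothetical extended action that backs up a database repository.

    Cedar Backup would call this function when the associated action
    name is given on the cback3 command line.  Because the extension
    receives the parsed configuration, it can read its own dedicated
    configuration section rather than maintaining a separate config
    file, keeping all backup configuration centralized.
    """
    logger.info("Executing hypothetical database backup action.")
    # ... dump the repository into the configured collect directory here ...
    return 0
```

In configuration, the action name is then mapped to this function (and given a sequence number) so it can be invoked like any other action.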
Command Line Tools Overview Cedar Backup comes with three command-line programs: cback3, cback3-amazons3-sync, and cback3-span. The cback3 command is the primary command line interface and the only Cedar Backup program that most users will ever need. The cback3-amazons3-sync tool is used for synchronizing entire directories of files up to an Amazon S3 cloud storage bucket, outside of the normal Cedar Backup process. Users who have a lot of data to back up — more than will fit on a single CD or DVD — can use the interactive cback3-span tool to split their data between multiple discs. The cback3 command Introduction Cedar Backup's primary command-line interface is the cback3 command. It controls the entire backup process. Syntax The cback3 command has the following syntax:

Usage: cback3 [switches] action(s)

The following switches are accepted:

   -h, --help           Display this usage/help listing
   -V, --version        Display version information
   -b, --verbose        Print verbose output as well as logging to disk
   -q, --quiet          Run quietly (display no output to the screen)
   -c, --config         Path to config file (default: /etc/cback3.conf)
   -f, --full           Perform a full backup, regardless of configuration
   -M, --managed        Include managed clients when executing actions
   -N, --managed-only   Include ONLY managed clients when executing actions
   -l, --logfile        Path to logfile (default: /var/log/cback3.log)
   -o, --owner          Logfile ownership, user:group (default: root:adm)
   -m, --mode           Octal logfile permissions mode (default: 640)
   -O, --output         Record some sub-command (i.e. cdrecord) output to the log
   -d, --debug          Write debugging information to the log (implies --output)
   -s, --stack          Dump a Python stack trace instead of swallowing exceptions
   -D, --diagnostics    Print runtime diagnostics to the screen and exit

The following actions may be specified:

   all          Take all normal actions (collect, stage, store, purge)
   collect      Take the collect action
   stage        Take the stage action
   store        Take the store action
   purge        Take the purge action
   rebuild      Rebuild "this week's" disc if possible
   validate     Validate configuration only
   initialize   Initialize media for use with Cedar Backup

You may also specify extended actions that have been defined in configuration. You must specify at least one action to take. More than one of the "collect", "stage", "store" or "purge" actions and/or extended actions may be specified in any arbitrary order; they will be executed in a sensible order. The "all", "rebuild", "validate", and "initialize" actions may not be combined with other actions. Note that the all action only executes the standard four actions. It never executes any of the configured extensions. Some users find this surprising, because extensions are configured with sequence numbers. I did it this way because I felt that running extensions as part of the all action would sometimes result in surprising behavior. Better to be definitive than confusing. Switches

-h, --help: Display usage/help listing.

-V, --version: Display version information.

-b, --verbose: Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen.

-q, --quiet: Run quietly (display no output to the screen).

-c, --config: Specify the path to an alternate configuration file. The default configuration file is /etc/cback3.conf.

-f, --full: Perform a full backup, regardless of configuration.
For the collect action, this means that any existing information related to incremental backups will be ignored and rewritten; for the store action, this means that a new disc will be started.

-M, --managed: Include managed clients when executing actions. If the action being executed is listed as a managed action for a managed client, execute the action on that client after executing the action locally.

-N, --managed-only: Include only managed clients when executing actions. If the action being executed is listed as a managed action for a managed client, execute the action on that client — but do not execute the action locally.

-l, --logfile: Specify the path to an alternate logfile. The default logfile is /var/log/cback3.log.

-o, --owner: Specify the ownership of the logfile, in the form user:group. The default ownership is root:adm, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback3 command is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values.

-m, --mode: Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is 0640 (-rw-r-----). This value will only be used when creating a new logfile. If the logfile already exists when the cback3 command is executed, it will retain its existing ownership and mode.

-O, --output: Record some sub-command output to the logfile. When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference.

-d, --debug: Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the --output option, as well.

-s, --stack: Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface.
Under some circumstances, this is useful information to include along with a bug report.

-D, --diagnostics: Display runtime diagnostic information and then exit. This diagnostic information is often useful when filing a bug report.

Actions You can find more information about the various actions in (in ). In general, you may specify any combination of the collect, stage, store or purge actions, and the specified actions will be executed in a sensible order. Or, you can specify one of the all, rebuild, validate, or initialize actions (but these actions may not be combined with other actions). If you have configured any Cedar Backup extensions, then the actions associated with those extensions may also be specified on the command line. If you specify any other actions along with an extended action, the actions will be executed in a sensible order per configuration. The all action never executes extended actions, however. The cback3-amazons3-sync command Introduction The cback3-amazons3-sync tool is used for synchronizing entire directories of files up to an Amazon S3 cloud storage bucket, outside of the normal Cedar Backup process. This might be a good option for some types of data, as long as you understand the limitations around retrieving previous versions of objects that get modified or deleted as part of a sync. S3 does support versioning, but it won't be quite as easy to get at those previous versions as with an explicit incremental backup like cback3 provides. Cedar Backup does not provide any tooling that would help you retrieve previous versions. The underlying functionality relies on the AWS CLI toolset. Before you use this tool, you need to set up your Amazon S3 account and configure AWS CLI as detailed in Amazon's setup guide. The aws command will be executed as the same user that is executing the cback3-amazons3-sync command, so make sure you configure it as the proper user.
(This is different from the amazons3 extension, which is designed to execute as root and switches over to the configured backup user to execute AWS CLI commands.) Syntax The cback3-amazons3-sync command has the following syntax:

Usage: cback3-amazons3-sync [switches] sourceDir s3bucketUrl

Cedar Backup Amazon S3 sync tool.

This Cedar Backup utility synchronizes a local directory to an Amazon S3
bucket. After the sync is complete, a validation step is taken. An
error is reported if the contents of the bucket do not match the
source directory, or if the indicated size for any file differs.
This tool is a wrapper over the AWS CLI command-line tool.

The following arguments are required:

   sourceDir     The local source directory on disk (must exist)
   s3BucketUrl   The URL to the target Amazon S3 bucket

The following switches are accepted:

   -h, --help             Display this usage/help listing
   -V, --version          Display version information
   -b, --verbose          Print verbose output as well as logging to disk
   -q, --quiet            Run quietly (display no output to the screen)
   -l, --logfile          Path to logfile (default: /var/log/cback3.log)
   -o, --owner            Logfile ownership, user:group (default: root:adm)
   -m, --mode             Octal logfile permissions mode (default: 640)
   -O, --output           Record some sub-command (i.e. aws) output to the log
   -d, --debug            Write debugging information to the log (implies --output)
   -s, --stack            Dump Python stack trace instead of swallowing exceptions
   -D, --diagnostics      Print runtime diagnostics to the screen and exit
   -v, --verifyOnly       Only verify the S3 bucket contents, do not make changes
   -w, --ignoreWarnings   Ignore warnings about problematic filename encodings

Typical usage would be something like:

   cback3-amazons3-sync /home/myuser s3://example.com-backup/myuser

This will sync the contents of /home/myuser into the indicated bucket. Switches

-h, --help: Display usage/help listing.

-V, --version: Display version information.

-b, --verbose: Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen.

-q, --quiet: Run quietly (display no output to the screen).

-l, --logfile: Specify the path to an alternate logfile. The default logfile is /var/log/cback3.log.

-o, --owner: Specify the ownership of the logfile, in the form user:group. The default ownership is root:adm, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback3-amazons3-sync command is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values.

-m, --mode: Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is 0640 (-rw-r-----). This value will only be used when creating a new logfile. If the logfile already exists when the cback3-amazons3-sync command is executed, it will retain its existing ownership and mode.

-O, --output: Record some sub-command output to the logfile. When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference.

-d, --debug: Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the --output option, as well.

-s, --stack: Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report.

-D, --diagnostics: Display runtime diagnostic information and then exit. This diagnostic information is often useful when filing a bug report.

-v, --verifyOnly: Only verify the S3 bucket contents against the directory on disk. Do not make any changes to the S3 bucket or transfer any files. This is intended as a quick check to see whether the sync is up-to-date. Although no files are transferred, the tool will still execute the source filename encoding check, discussed below along with the --ignoreWarnings switch.

-w, --ignoreWarnings: The AWS CLI S3 sync process is very picky about filename encoding. Files that the Linux filesystem handles with no problems can cause problems in S3 if the filename cannot be encoded properly in your configured locale. As of this writing, filenames like this will cause the sync process to abort without transferring all files as expected. To avoid confusion, the cback3-amazons3-sync tool tries to guess which files in the source directory will cause problems, and refuses to execute the AWS CLI S3 sync if any problematic files exist. If you'd rather proceed anyway, use --ignoreWarnings. If problematic files are found, then you have basically two options: either correct your locale (i.e. if you have set LANG=C) or rename the file so it can be encoded properly in your locale. The error messages will tell you the expected encoding (from your locale) and the actual detected encoding for the filename.

The cback3-span command Introduction Cedar Backup was designed — and is still primarily focused — around weekly backups to a single CD or DVD. Most users who back up more data than fits on a single disc seem to stop their backup process at the stage step, using Cedar Backup as an easy way to collect data. However, some users have expressed a need to write these large kinds of backups to disc — if not every day, then at least occasionally. The cback3-span tool was written to meet those needs. If you have staged more data than fits on a single CD or DVD, you can use cback3-span to split that data between multiple discs. cback3-span is not a general-purpose disc-splitting tool. It is a specialized program that requires Cedar Backup configuration to run.
All it can do is read Cedar Backup configuration, find any staging directories that have not yet been written to disc, and split the files in those directories between discs. cback3-span accepts many of the same command-line options as cback3, but must be run interactively. It cannot be run from cron. This is intentional. It is intended to be a useful tool, not a new part of the backup process (that is the purpose of an extension). In order to use cback3-span, you must configure your backup such that the largest individual backup file can fit on a single disc. The command will not split a single file onto more than one disc. All it can do is split large directories onto multiple discs. Files in those directories will be arbitrarily split up so that space is utilized most efficiently. Syntax The cback3-span command has the following syntax:

Usage: cback3-span [switches]

Cedar Backup 'span' tool.

This Cedar Backup utility spans staged data between multiple discs.
It is a utility, not an extension, and requires user interaction.

The following switches are accepted, mostly to set up underlying
Cedar Backup functionality:

   -h, --help      Display this usage/help listing
   -V, --version   Display version information
   -b, --verbose   Print verbose output as well as logging to disk
   -c, --config    Path to config file (default: /etc/cback3.conf)
   -l, --logfile   Path to logfile (default: /var/log/cback3.log)
   -o, --owner     Logfile ownership, user:group (default: root:adm)
   -m, --mode      Octal logfile permissions mode (default: 640)
   -O, --output    Record some sub-command (i.e. cdrecord) output to the log
   -d, --debug     Write debugging information to the log (implies --output)
   -s, --stack     Dump a Python stack trace instead of swallowing exceptions

Switches

-h, --help: Display usage/help listing.

-V, --version: Display version information.

-b, --verbose: Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen.

-c, --config: Specify the path to an alternate configuration file. The default configuration file is /etc/cback3.conf.

-l, --logfile: Specify the path to an alternate logfile. The default logfile is /var/log/cback3.log.

-o, --owner: Specify the ownership of the logfile, in the form user:group. The default ownership is root:adm, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback3-span command is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values.

-m, --mode: Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is 0640 (-rw-r-----). This value will only be used when creating a new logfile. If the logfile already exists when the cback3-span command is executed, it will retain its existing ownership and mode.

-O, --output: Record some sub-command output to the logfile. When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference. Cedar Backup uses system commands mostly for dealing with the CD/DVD recorder and its media.

-d, --debug: Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the --output option, as well.

-s, --stack: Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report.

Using cback3-span As discussed above, cback3-span is an interactive command. It cannot be run from cron. You can typically use the default answer for most questions.
The only two questions that you may not want the default answer for are the fit algorithm and the cushion percentage. The cushion percentage is used by cback3-span to determine what capacity to shoot for when splitting up your staging directories. A 650 MB disc cannot actually hold a full 650 MB of data. It's usually more like 627 MB of data. The cushion percentage tells cback3-span how much overhead to reserve for the filesystem. The default of 4% is usually OK, but if you have problems you may need to increase it slightly. The fit algorithm tells cback3-span how it should determine which items should be placed on each disc. If you don't like the result from one algorithm, you can reject that solution and choose a different algorithm. The four available fit algorithms are: worst The worst-fit algorithm. The worst-fit algorithm proceeds through a sorted list of items (sorted from smallest to largest) until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the maximum number of items possible in its search for optimal capacity utilization. It tends to be somewhat slower than either the best-fit or alternate-fit algorithm, probably because on average it has to look at more items before completing. best The best-fit algorithm. The best-fit algorithm proceeds through a sorted list of items (sorted from largest to smallest) until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the minimum number of items possible in its search for optimal capacity utilization. For large lists of mixed-size items, it's not unusual to see the algorithm achieve 100% capacity utilization by including fewer than 1% of the items.
Probably because it often has to look at fewer items before completing, it tends to be a little faster than the worst-fit or alternate-fit algorithms.

first: The first-fit algorithm. The first-fit algorithm proceeds through an unsorted list of items until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. This algorithm generally performs more poorly than the other algorithms both in terms of capacity utilization and item utilization, but can be as much as an order of magnitude faster on large lists of items because it doesn't require any sorting.

alternate: A hybrid algorithm that I call alternate-fit. This algorithm tries to balance small and large items to achieve better end-of-disk performance. Instead of just working one direction through a list, it alternately works from the start and end of a sorted list (sorted from smallest to largest), throwing away any item which causes capacity to be exceeded. The algorithm tends to be slower than the best-fit and first-fit algorithms, and slightly faster than the worst-fit algorithm, probably because of the number of items it considers on average before completing. It often achieves slightly better capacity utilization than the worst-fit algorithm, while including slightly fewer items.

Sample run

Below is a log showing a sample cback3-span run.

================================================
           Cedar Backup 'span' tool
================================================

This is the Cedar Backup span tool. It is used to split up staging data
when that staging data does not fit onto a single disc.

This utility operates using Cedar Backup configuration. Configuration
specifies which staging directory to look at and which writer device
and media type to use.

Continue?
[Y/n]:

===

Cedar Backup store configuration looks like this:

   Source Directory...: /tmp/staging
   Media Type.........: cdrw-74
   Device Type........: cdwriter
   Device Path........: /dev/cdrom
   Device SCSI ID.....: None
   Drive Speed........: None
   Check Data Flag....: True
   No Eject Flag......: False

Is this OK? [Y/n]:

===

Please wait, indexing the source directory (this may take a while)...

===

The following daily staging directories have not yet been written to disc:

   /tmp/staging/2007/02/07
   /tmp/staging/2007/02/08
   /tmp/staging/2007/02/09
   /tmp/staging/2007/02/10
   /tmp/staging/2007/02/11
   /tmp/staging/2007/02/12
   /tmp/staging/2007/02/13
   /tmp/staging/2007/02/14

The total size of the data in these directories is 1.00 GB.

Continue? [Y/n]:

===

Based on configuration, the capacity of your media is 650.00 MB.

Since estimates are not perfect and there is some uncertainty in media
capacity calculations, it is good to have a "cushion", a percentage of
capacity to set aside. The cushion reduces the capacity of your media,
so a 1.5% cushion leaves 98.5% remaining.

What cushion percentage? [4.00]:

===

The real capacity, taking into account the 4.00% cushion, is 627.25 MB.
It will take at least 2 disc(s) to store your 1.00 GB of data.

Continue? [Y/n]:

===

Which algorithm do you want to use to span your data across multiple
discs? The following algorithms are available:

   first....: The "first-fit" algorithm
   best.....: The "best-fit" algorithm
   worst....: The "worst-fit" algorithm
   alternate: The "alternate-fit" algorithm

If you don't like the results you will have a chance to try a different
one later.

Which algorithm? [worst]:

===

Please wait, generating file lists (this may take a while)...

===

Using the "worst-fit" algorithm, Cedar Backup can split your data into
2 discs.

Disc 1: 246 files, 615.97 MB, 98.20% utilization
Disc 2: 8 files, 412.96 MB, 65.84% utilization

Accept this solution? [Y/n]: n

===

Which algorithm do you want to use to span your data across multiple
discs?
The following algorithms are available:

   first....: The "first-fit" algorithm
   best.....: The "best-fit" algorithm
   worst....: The "worst-fit" algorithm
   alternate: The "alternate-fit" algorithm

If you don't like the results you will have a chance to try a different
one later.

Which algorithm? [worst]: alternate

===

Please wait, generating file lists (this may take a while)...

===

Using the "alternate-fit" algorithm, Cedar Backup can split your data
into 2 discs.

Disc 1: 73 files, 627.25 MB, 100.00% utilization
Disc 2: 181 files, 401.68 MB, 64.04% utilization

Accept this solution? [Y/n]: y

===

Please place the first disc in your backup device. Press return when ready.

===

Initializing image...
Writing image to disc...

Securing Password-less SSH Connections

Cedar Backup relies on password-less public key SSH connections to make various parts of its backup process work. Password-less scp is used to stage files from remote clients to the master, and password-less ssh is used to execute actions on managed clients.

Normally, it is a good idea to avoid password-less SSH connections in favor of using an SSH agent. The SSH agent manages your SSH connections so that you don't need to type your passphrase over and over. You get most of the benefits of a password-less connection without the risk. Unfortunately, because Cedar Backup has to execute without human involvement (through a cron job), use of an agent really isn't feasible. We have to rely on true password-less public keys to give the master access to the client peers.

Traditionally, Cedar Backup has relied on a segmenting strategy to minimize the risk. Although the backup typically runs as root — so that all parts of the filesystem can be backed up — we don't use the root user for network connections.
Instead, we use a dedicated backup user on the master to initiate network connections, and dedicated users on each of the remote peers to accept network connections. With this strategy in place, an attacker with access to the backup user on the master (or even root access, really) can at best only get access to the backup user on the remote peers. We still concede a local attack vector, but at least that vector is restricted to an unprivileged user.

Some Cedar Backup users may not be comfortable with this risk, and others may not be able to implement the segmentation strategy — they simply may not have a way to create a login which is only used for backups. So, what are these users to do?

Fortunately there is a solution. The SSH authorized keys file supports a way to put a filter in place on an SSH connection. This excerpt is from the AUTHORIZED_KEYS FILE FORMAT section of man 8 sshd:

   command="command"
      Specifies that the command is executed whenever this key is used
      for authentication. The command supplied by the user (if any) is
      ignored. The command is run on a pty if the client requests a pty;
      otherwise it is run without a tty. If an 8-bit clean channel is
      required, one must not request a pty or should specify no-pty. A
      quote may be included in the command by quoting it with a
      backslash. This option might be useful to restrict certain public
      keys to perform just a specific operation. An example might be a
      key that permits remote backups but nothing else. Note that the
      client may specify TCP and/or X11 forwarding unless they are
      explicitly prohibited. Note that this option applies to shell,
      command or subsystem execution.

Essentially, this gives us a way to authenticate the commands that are being executed. We can either accept or reject commands, and we can even provide a readable error message for commands we reject. The filter is applied on the remote peer, to the key that provides the master access to the remote peer.
So, let's imagine that we have two hosts: master mickey, and peer minnie.

Here is the original ~/.ssh/authorized_keys file for the backup user on minnie (remember, this is all on one line in the file):

ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAxw7EnqVULBFgPcut3WYp3MsSpVB9q9iZ+awek120391k;mm0c221=3=km =m=askdalkS82mlF7SusBTcXiCk1BGsg7axZ2sclgK+FfWV1Jm0/I9yo9FtAZ9U+MmpL901231asdkl;ai1-923ma9s=9= 1-2341=-a0sd=-sa0=1z= backup@mickey

This line is the public key that minnie can use to identify the backup user on mickey. Assuming that there is no passphrase on the private key back on mickey, the backup user on mickey can get direct access to minnie.

To put the filter in place, we add a command option to the key, like this:

command="/opt/backup/validate-backup" ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAxw7EnqVULBFgPcut3WYp 3MsSpVB9q9iZ+awek120391k;mm0c221=3=km=m=askdalkS82mlF7SusBTcXiCk1BGsg7axZ2sclgK+FfWV1Jm0/I9yo9F tAZ9U+MmpL901231asdkl;ai1-923ma9s=9=1-2341=-a0sd=-sa0=1z= backup@mickey

Basically, the command option says that whenever this key is used to successfully initiate a connection, the /opt/backup/validate-backup command will be run instead of the real command that came over the SSH connection. Fortunately, the interface gives the command access to certain shell variables that can be used to invoke the original command if you want to.

A very basic validate-backup script might look something like this:

#!/bin/bash
if [[ "${SSH_ORIGINAL_COMMAND}" == "ls -l" ]] ; then
   ${SSH_ORIGINAL_COMMAND}
else
   echo "Security policy does not allow command [${SSH_ORIGINAL_COMMAND}]."
   exit 1
fi

This script allows exactly ls -l and nothing else. If the user attempts some other command, they get a nice error message telling them that their command has been disallowed.

For remote commands executed over ssh, the original command is exactly what the caller attempted to invoke.
For remote copies, the commands are either scp -f file (copy from the peer to the master) or scp -t file (copy to the peer from the master).

If you want, you can see what command SSH thinks it is executing by using ssh -v or scp -v. The command will be right at the top, something like this:

Executing: program /usr/bin/ssh host mickey, user (unspecified), command scp -v -f .profile
OpenSSH_4.3p2 Debian-9, OpenSSL 0.9.8c 05 Sep 2006
debug1: Reading configuration data /home/backup/.ssh/config
debug1: Applying options for daystrom
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug2: ssh_connect: needpriv 0

Omit the -v and you have your command: scp -f .profile.

For a normal, non-managed setup, you need to allow the following commands, where /path/to/collect/ is replaced with the real path to the collect directory on the remote peer:

scp -f /path/to/collect/cback.collect
scp -f /path/to/collect/*
scp -t /path/to/collect/cback.stage

If you are configuring a managed client, then you also need to list the exact command lines that the master will be invoking on the managed client. You are guaranteed that the master will invoke one action at a time, so if you list two lines per action (full and non-full) you should be fine. Here's an example for the collect action:

/usr/bin/cback3 --full collect
/usr/bin/cback3 collect

Of course, you would have to list the actual path to the cback3 executable — exactly the one listed in the <cback_command> configuration option for your managed peer.

I hope that there is enough information here for interested users to implement something that makes them comfortable. I have resisted providing a complete example script, because I think everyone's setup will be different. However, feel free to write if you are working through this and you have questions.
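The author deliberately stops short of providing a complete script. Purely as an illustration of the approach, here is a hypothetical validate-backup filter written in Python (the project's own language). The allowlist patterns and the /path/to/collect/ path are assumptions you would adapt to your own setup; this is not Cedar Backup's own code:

```python
#!/usr/bin/env python3
"""Hypothetical SSH command filter, in the spirit of validate-backup."""
import fnmatch
import os
import shlex
import sys

# Assumed allowlist for a normal, non-managed setup; adjust
# /path/to/collect/ to the real collect directory on this peer.
ALLOWED_PATTERNS = [
    "scp -f /path/to/collect/cback.collect",
    "scp -f /path/to/collect/*",
    "scp -t /path/to/collect/cback.stage",
]


def is_allowed(command):
    """Return True if command matches one of the allowed patterns."""
    return any(fnmatch.fnmatch(command, p) for p in ALLOWED_PATTERNS)


def main():
    # sshd places the caller's real command line in SSH_ORIGINAL_COMMAND.
    command = os.environ.get("SSH_ORIGINAL_COMMAND", "")
    if not is_allowed(command):
        print("Security policy does not allow command [%s]." % command)
        sys.exit(1)
    argv = shlex.split(command)
    os.execvp(argv[0], argv)  # run the original command in our place


if __name__ == "__main__" and "SSH_ORIGINAL_COMMAND" in os.environ:
    main()
```

Note that fnmatch-style wildcards are permissive (* matches any text, including spaces), so a production filter would probably want stricter matching for the scp -f /path/to/collect/* pattern.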
Dependencies

Python 3.4 (or later). Source URL: upstream, Debian, RPM. If you can't find a package for your system, install from the package source, using the upstream link.

RSH Server and Client. Although Cedar Backup will technically work with any RSH-compatible server and client pair (such as the classic rsh client), most users should only use an SSH (secure shell) server and client. The de facto standard today is OpenSSH. Some systems package the server and the client together, and others package the server and the client separately. Note that master nodes need an SSH client, and client nodes need to run an SSH server. Source URL: upstream, Debian, RPM. If you can't find SSH client or server packages for your system, install from the package source, using the upstream link.

mkisofs. The mkisofs command is used to create ISO filesystem images that can later be written to backup media. On Debian platforms, mkisofs is not distributed and genisoimage is used instead. The Debian package takes care of this for you. Source URL: upstream, RPM. If you can't find a package for your system, install from the package source, using the upstream link.

cdrecord. The cdrecord command is used to write ISO images to CD media in a backup device. On Debian platforms, cdrecord is not distributed and wodim is used instead. The Debian package takes care of this for you. Source URL: upstream, RPM. If you can't find a package for your system, install from the package source, using the upstream link.

dvd+rw-tools. The dvd+rw-tools package provides the growisofs utility, which is used to write ISO images to DVD media in a backup device. Source URL: upstream, Debian, RPM. If you can't find a package for your system, install from the package source, using the upstream link.
eject and volname. The eject command is used to open and close the tray on a backup device (if the backup device has a tray). Sometimes, the tray must be opened and closed in order to "reset" the device so it notices recent changes to a disc. The volname command is used to determine the volume name of media in a backup device. Source URL: upstream, Debian, RPM. If you can't find a package for your system, install from the package source, using the upstream link.

mount and umount. The mount and umount commands are used to mount and unmount CD/DVD media after it has been written, in order to run a consistency check. Source URL: upstream, Debian, RPM. If you can't find a package for your system, install from the package source, using the upstream link.

grepmail. The grepmail command is used by the mbox extension to pull out only recent messages from mbox mail folders. Source URL: upstream, Debian, RPM. If you can't find a package for your system, install from the package source, using the upstream link.

gpg. The gpg command is used by the encrypt extension to encrypt files. Source URL: upstream, Debian, RPM. If you can't find a package for your system, install from the package source, using the upstream link.

split. The split command is used by the split extension to split up large files. This command is typically part of the core operating system install and is not distributed in a separate package.

AWS CLI. AWS CLI is Amazon's official command-line tool for interacting with the Amazon Web Services infrastructure. Cedar Backup uses AWS CLI to copy backup data up to Amazon S3 cloud storage. After you install AWS CLI, you need to configure your connection to AWS with an appropriate access id and access key. Amazon provides a good setup guide. Source URL: upstream, Debian. The initial implementation of the amazons3 extension was written using AWS CLI 1.4. As of this writing, not all Linux distributions include a package for this version.
On these platforms, the easiest way to install it is via PIP: apt-get install python3-pip, and then pip3 install awscli. The Debian package includes an appropriate dependency starting with the jessie release.

Chardet. The cback3-amazons3-sync command relies on the Chardet Python package to check filename encoding. You only need this package if you are going to use the sync tool. Source URL: upstream, Debian.

Introduction
Only wimps use tape backup: real men just upload their important stuff on ftp, and let the rest of the world mirror it.— Linus Torvalds, at the release of Linux 2.0.8 in July of 1996.
What is Cedar Backup? Cedar Backup is a software package designed to manage system backups for a pool of local and remote machines. Cedar Backup understands how to back up filesystem data as well as MySQL and PostgreSQL databases and Subversion repositories. It can also be easily extended to support other kinds of data sources. Cedar Backup is focused around weekly backups to a single CD or DVD disc, with the expectation that the disc will be changed or overwritten at the beginning of each week. If your hardware is new enough (and almost all hardware is today), Cedar Backup can write multisession discs, allowing you to add incremental data to a disc on a daily basis. Alternately, Cedar Backup can write your backups to the Amazon S3 cloud rather than relying on physical media. Besides offering command-line utilities to manage the backup process, Cedar Backup provides a well-organized library of backup-related functionality, written in the Python 3 programming language. There are many different backup software implementations out there in the open source world. Cedar Backup aims to fill a niche: it aims to be a good fit for people who need to back up a limited amount of important data on a regular basis. Cedar Backup isn't for you if you want to back up your huge MP3 collection every night, or if you want to back up a few hundred machines. However, if you administer a small set of machines and you want to run daily incremental backups for things like system configuration, current email, small web sites, Subversion or Mercurial repositories, or small MySQL databases, then Cedar Backup is probably worth your time. Cedar Backup has been developed on a Debian GNU/Linux system and is primarily supported on Debian and other Linux systems. However, since it is written in portable Python 3, it should run without problems on just about any UNIX-like operating system. 
In particular, full Cedar Backup functionality is known to work on Debian and SuSE Linux systems, and client functionality is also known to work on FreeBSD and Mac OS X systems. To run a Cedar Backup client, you really just need a working Python 3 installation. To run a Cedar Backup master, you will also need a set of other executables, most of which are related to building and writing CD/DVD images or talking to the Amazon S3 infrastructure. A full list of dependencies is provided in . Migrating from Version 2 to Version 3 The main difference between Cedar Backup version 2 and Cedar Backup version 3 is the targeted Python interpreter. Cedar Backup version 2 was designed for Python 2, while version 3 is a conversion of the original code to Python 3. Other than that, both versions are functionally equivalent. The configuration format is unchanged, and you can mix-and-match masters and clients of different versions in the same backup pool. Both versions will be fully supported until around the time of the Python 2 end-of-life in 2020, but you should plan to migrate sooner than that if possible. A major design goal for version 3 was to facilitate easy migration testing for users, by making it possible to install version 3 on the same server where version 2 was already in use. A side effect of this design choice is that all of the executables, configuration files, and logs changed names in version 3. Where version 2 used "cback", version 3 uses "cback3": cback3.conf instead of cback.conf, cback3.log instead of cback.log, etc. So, while migrating from version 2 to version 3 is relatively straightforward, you will have to make some changes manually. You will need to create a new configuration file (or soft link to the old one), modify your cron jobs to use the new executable name, etc. 
You can migrate one server at a time in your pool with no ill effects, or even incrementally migrate a single server by using version 2 and version 3 on different days of the week or for different parts of the backup. How to Get Support Cedar Backup is open source software that is provided to you at no cost. It is provided with no warranty, not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. However, that said, someone can usually help you solve whatever problems you might see. If you experience a problem, your best bet is to file an issue in the issue tracker at BitBucket. See . When the source code was hosted at SourceForge, there was a mailing list. However, it was very lightly used in the last years before I abandoned SourceForge, and I have decided not to replace it. If you are not comfortable discussing your problem in public or listing it in a public database, or if you need to send along information that you do not want made public, then you can write support@cedar-solutions.com. That mail will go directly to me. If you write the support address about a bug, a scrubbed bug report will eventually end up in the public bug database anyway, so if at all possible you should use the public reporting mechanisms. One of the strengths of the open-source software development model is its transparency. Regardless of how you report your problem, please try to provide as much information as possible about the behavior you observed and the environment in which the problem behavior occurred. See Simon Tatham's excellent bug reporting tutorial: . In particular, you should provide: the version of Cedar Backup that you are using; how you installed Cedar Backup (i.e. Debian package, source package, etc.); the exact command line that you executed; any error messages you received, including Python stack traces (if any); and relevant sections of the Cedar Backup log. 
It would be even better if you could describe exactly how to reproduce the problem, for instance by including your entire configuration file and/or specific information about your system that might relate to the problem. However, please do not provide huge sections of debugging logs unless you are sure they are relevant or unless someone asks for them.

Sometimes, the error that Cedar Backup displays can be rather cryptic. This is because under internal error conditions, the text related to an exception might get propagated all of the way up to the user interface. If the message you receive doesn't make much sense, or if you suspect that it results from an internal error, you might want to re-run Cedar Backup with the option. This forces Cedar Backup to dump the entire Python stack trace associated with the error, rather than just printing the last message it received. This is good information to include along with a bug report, as well.

History

Cedar Backup began life in late 2000 as a set of Perl scripts called kbackup. These scripts met an immediate need (which was to back up skyjammer.com and some personal machines) but proved to be unstable, overly verbose and rather difficult to maintain. In early 2002, work began on a rewrite of kbackup. The goal was to address many of the shortcomings of the original application, as well as to clean up the code and make it available to the general public. While doing research related to code I could borrow or base the rewrite on, I discovered that there was already an existing backup package with the name kbackup, so I decided to change the name to Cedar Backup instead. Because I had become fed up with the prospect of maintaining a large volume of Perl code, I decided to abandon that language in favor of Python. At the time, I chose Python mostly because I was interested in learning it, but in retrospect it turned out to be a very good decision.
From my perspective, Python has almost all of the strengths of Perl, but few of its inherent weaknesses (I feel that, primarily, Python code often ends up being much more readable than Perl code).

Around this same time, skyjammer.com and cedar-solutions.com were converted to run Debian GNU/Linux (potato; Debian's stable releases are named after characters in the Toy Story movie) and I entered the Debian new maintainer queue, so I also made it a goal to implement Debian packages along with a Python source distribution for the new release.

Version 1.0 of Cedar Backup was released in June of 2002. We immediately began using it to back up skyjammer.com and cedar-solutions.com, where it proved to be much more stable than the original code.

In the meantime, I continued to improve as a Python programmer and also started doing a significant amount of professional development in Java. It soon became obvious that the internal structure of Cedar Backup 1.0, while much better than kbackup, still left something to be desired. In November 2003, I began an attempt at cleaning up the codebase. I converted all of the internal documentation to use Epydoc (a Python code documentation tool) and updated the code to use the newly-released Python logging package, after having a good experience with Java's log4j. However, I was still not satisfied with the code, which did not lend itself to the automated regression testing I had used when working with junit in my Java code. So, rather than releasing the cleaned-up code, I instead began another ground-up rewrite in May 2004. With this rewrite, I applied everything I had learned from other Java and Python projects I had undertaken over the last few years. I structured the code to take advantage of Python's unique ability to blend procedural code with object-oriented code, and I made automated unit testing a primary requirement.
The result was the 2.0 release, which is cleaner, more compact, better focused, and better documented than any release before it. Utility code is less application-specific, and is now usable as a general-purpose library. The 2.0 release also includes a complete regression test suite of over 3000 tests, which will help to ensure that quality is maintained as development continues into the future. Tests are implemented using Python's unit test framework.

The 3.0 release of Cedar Backup is a Python 3 conversion of the 2.0 release, with minimal additional functionality. The conversion from Python 2 to Python 3 started in mid-2015, about 5 years before the anticipated deprecation of Python 2 in 2020. Most users should consider transitioning to the 3.0 release.
Preface

Purpose

This software manual has been written to document version 3 of Cedar Backup, originally released in 2015.

Audience

This manual has been written for computer-literate administrators who need to use and configure Cedar Backup on their Linux or UNIX-like system. The examples in this manual assume the reader is relatively comfortable with UNIX and command-line interfaces.

Conventions Used in This Book

This section covers the various conventions used in this manual.

Typographic Conventions

Term: Used for first use of important terms.
Command: Used for commands, command output, and switches.
Replaceable: Used for replaceable items in code and text.
Filenames: Used for file and directory names.

Icons

This icon designates a note relating to the surrounding text. This icon designates a helpful tip relating to the surrounding text. This icon designates a warning relating to the surrounding text.

Organization of This Manual

Provides some general history about Cedar Backup, what needs it is intended to meet, how to get support, and how to migrate from version 2 to version 3.

Discusses the basic concepts of a Cedar Backup infrastructure, and specifies terms used throughout the rest of the manual.

Explains how to install the Cedar Backup package either from the Python source distribution or from the Debian package.

Discusses the various Cedar Backup command-line tools, including the primary cback3 command.

Provides detailed information about how to configure Cedar Backup.

Describes each of the officially-supported Cedar Backup extensions.

Specifies the Cedar Backup extension architecture interface, through which third party developers can write extensions to Cedar Backup.
Provides some additional information about the packages which Cedar Backup relies on, including information about how to find documentation and packages on non-Debian systems.

Cedar Backup provides no facility for restoring backups, assuming the administrator can handle this infrequent task. This appendix provides some notes for administrators to work from.

Password-less SSH connections are a necessary evil when remote backup processes need to execute without human interaction. This appendix describes some ways that you can reduce the risk to your backup pool should your master machine be compromised.

Acknowledgments

The structure of this manual and some of the basic boilerplate has been taken from the book Version Control with Subversion. Thanks to the authors (and O'Reilly) for making this excellent reference available under a free and open license.

Copyright

Copyright (c) 2004-2011,2013-2015 Kenneth J. Pronovici

This work is free; you can redistribute it and/or modify it under the terms of the GNU General Public License (the "GPL"), Version 2, as published by the Free Software Foundation. For the purposes of the GPL, the "preferred form of modification" for this work is the original Docbook XML text files. If you choose to distribute this work in a compiled form (i.e. if you distribute HTML, PDF or Postscript documents based on the original Docbook XML text files), you must also consider image files to be "source code" if those images are required in order to construct a complete and readable compiled version of the work. This work is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. Copies of the GNU General Public License are available from the Free Software Foundation website, http://www.gnu.org/.
You may also write the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA ==================================================================== GNU GENERAL PUBLIC LICENSE Version 2, June 1991 Copyright (C) 1989, 1991 Free Software Foundation, Inc. 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Library General Public License instead.) You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. 
We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. 
The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. 
(Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. 
You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. 
If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. 
If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. 
The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 12. 
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS ====================================================================
CedarBackup3-3.1.6/manual/src/extensions.xml

Official Extensions

System Information Extension

The System Information Extension is a simple Cedar Backup extension used to save off important system recovery information that might be useful when reconstructing a broken system. It is intended to be run either immediately before or immediately after the standard collect action.

This extension saves off the following information to the configured Cedar Backup collect directory. Saved-off data is always compressed using bzip2.

  - Currently-installed Debian packages, via dpkg --get-selections
  - Disk partition information, via fdisk -l
  - System-wide mounted filesystem contents, via ls -laR

The Debian-specific information is only collected on systems where /usr/bin/dpkg exists.

To enable this extension, add the following section to the Cedar Backup configuration file:

<extensions>
  <action>
    <name>sysinfo</name>
    <module>CedarBackup3.extend.sysinfo</module>
    <function>executeAction</function>
    <index>99</index>
  </action>
</extensions>

This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, but requires no new configuration of its own.

Amazon S3 Extension

The Amazon S3 extension writes data to Amazon S3 cloud storage rather than to physical media. It is intended to replace the store action, but you can also use it alongside the store action if you'd prefer to back up your data in more than one place. This extension must be run after the stage action.

The underlying functionality relies on the AWS CLI toolset. Before you use this extension, you need to set up your Amazon S3 account and configure AWS CLI as detailed in Amazon's setup guide. The extension assumes that the backup is being executed as root, and switches over to the configured backup user to run the aws program.
So, make sure you configure the AWS CLI tools as the backup user and not as root. (This is different from the amazons3 sync tool, which executes AWS CLI commands as the same user that is running the tool.)

When using physical media via the standard store action, there is an implicit limit to the size of a backup, since a backup must fit on a single disc. Since there is no physical media, no such limit exists for Amazon S3 backups. This leaves open the possibility that Cedar Backup might construct an unexpectedly-large backup that the administrator is not aware of. Over time, this might become expensive, either in terms of network bandwidth or in terms of Amazon S3 storage and I/O charges. To mitigate this risk, set a reasonable maximum size using the configuration elements shown below. If the backup fails, you have a chance to review what made the backup larger than you expected, and you can either correct the problem (i.e. remove a large temporary directory that got inadvertently included in the backup) or change configuration to take into account the new "normal" maximum size.

You can optionally configure Cedar Backup to encrypt data before sending it to S3. To do that, provide a complete command line using the ${input} and ${output} variables to represent the original input file and the encrypted output file. This command will be executed as the backup user. For instance, you can use something like this with GPG:

/usr/bin/gpg -c --no-use-agent --batch --yes --passphrase-file /home/backup/.passphrase -o ${output} ${input}

The GPG mechanism depends on a strong passphrase for security. One way to generate a strong passphrase is using your system random number generator, i.e.:

dd if=/dev/urandom count=20 bs=1 | xxd -ps

(See StackExchange for more details about that advice.) If you decide to use encryption, make sure you save off the passphrase in a safe place, so you can get at your backup data later if you need to.
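The ${input} and ${output} placeholders follow shell-style variable syntax, so the expansion can be sketched with Python's standard string.Template. This is only an illustration of the substitution behavior, with a hypothetical helper name; it is not Cedar Backup's actual implementation:

```python
from string import Template

def expand_encrypt_command(template, input_file, output_file):
    """Expand ${input}/${output} placeholders in a configured encrypt command.

    Hypothetical helper for illustration; Cedar Backup's real code may differ.
    """
    return Template(template).substitute(input=input_file, output=output_file)

# The file names here are made up for the example.
cmd = expand_encrypt_command(
    "/usr/bin/gpg -c --batch --yes -o ${output} ${input}",
    "/tmp/daily.tar.gz", "/tmp/daily.tar.gz.gpg")
```

After expansion, cmd holds the concrete command line that would be handed to the shell as the backup user.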
And obviously, make sure to set permissions on the passphrase file so it can only be read by the backup user.

To enable this extension, add the following section to the Cedar Backup configuration file:

<extensions>
  <action>
    <name>amazons3</name>
    <module>CedarBackup3.extend.amazons3</module>
    <function>executeAction</function>
    <index>201</index> <!-- just after stage -->
  </action>
</extensions>

This extension relies on the options and staging configuration sections in the standard Cedar Backup configuration file, and then also requires its own amazons3 configuration section. This is an example configuration section with encryption disabled:

<amazons3>
  <s3_bucket>example.com-backup/staging</s3_bucket>
</amazons3>

The following elements are part of the Amazon S3 configuration section:

warn_midnite
  Whether to generate warnings for crossing midnite. This field indicates whether warnings should be generated if the Amazon S3 operation has to cross a midnite boundary in order to find data to write to the cloud. For instance, a warning would be generated if valid data was only found in the day before or day after the current day. Configuration for some users is such that the amazons3 operation will always cross a midnite boundary, so they will not care about this warning. Other users will expect to never cross a boundary, and want to be notified that something strange might have happened. This field is optional. If it doesn't exist, then N will be assumed. Restrictions: Must be a boolean (Y or N).

s3_bucket
  The name of the Amazon S3 bucket that data will be written to. This field configures the S3 bucket that your data will be written to. In S3, buckets are named globally. For uniqueness, you would typically use the name of your domain followed by some suffix, such as example.com-backup. If you want, you can specify a subdirectory within the bucket, such as example.com-backup/staging. Restrictions: Must be non-empty.
encrypt
  Command used to encrypt backup data before upload to S3. If this field is provided, then data will be encrypted before it is uploaded to Amazon S3. You must provide the entire command used to encrypt a file, including the ${input} and ${output} variables. An example GPG command is shown above, but you can use any mechanism you choose. The command will be run as the configured backup user. Restrictions: If provided, must be non-empty.

full_size_limit
  Maximum size of a full backup. If this field is provided, then a size limit will be applied to full backups. If the total size of the selected staging directory is greater than the limit, then the backup will fail. You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB). Valid examples are 10240, 250 MB or 1.1 GB. Restrictions: Must be a value as described above, greater than zero.

incr_size_limit
  Maximum size of an incremental backup. If this field is provided, then a size limit will be applied to incremental backups. If the total size of the selected staging directory is greater than the limit, then the backup will fail. You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB). Valid examples are 10240, 250 MB or 1.1 GB. Restrictions: Must be a value as described above, greater than zero.

Subversion Extension

The Subversion Extension is a Cedar Backup extension used to back up Subversion version control repositories via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action.
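The size-limit values accepted by full_size_limit and incr_size_limit above ("10240", "250 MB", "1.1 GB") can be parsed along these lines. This is a sketch rather than Cedar Backup's actual ByteQuantity implementation, and it assumes binary units (KB = 1024 bytes):

```python
import re

# Assumed binary multipliers; Cedar Backup's ByteQuantity class may differ.
UNITS = {"KB": 1024, "MB": 1024 ** 2, "GB": 1024 ** 3}

def parse_size_limit(value):
    """Parse '10240', '250 MB' or '1.1 GB' into a byte count."""
    match = re.fullmatch(r"\s*([0-9.]+)\s*(KB|MB|GB)?\s*", value)
    if not match:
        raise ValueError("Invalid size limit: %s" % value)
    number, unit = match.groups()
    size = float(number) * UNITS.get(unit, 1)  # bare numbers are bytes
    if size <= 0:
        raise ValueError("Size limit must be greater than zero")
    return int(size)
```

A bare number is interpreted as bytes, matching the "greater than zero" restriction stated for both fields.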
Each configured Subversion repository can be backed up using the same collect modes allowed for filesystems in the standard Cedar Backup collect action (weekly, daily, incremental), and the output can be compressed using either gzip or bzip2.

There are two different kinds of Subversion repositories at this writing: BDB (Berkeley Database) and FSFS (a "filesystem within a filesystem"). This extension backs up both kinds of repositories in the same way, using svnadmin dump in an incremental mode.

It turns out that FSFS repositories can also be backed up just like any other filesystem directory. If you would rather do the backup that way, then use the normal collect action rather than this extension. If you decide to do that, be sure to consult the Subversion documentation and make sure you understand the limitations of this kind of backup.

To enable this extension, add the following section to the Cedar Backup configuration file:

<extensions>
  <action>
    <name>subversion</name>
    <module>CedarBackup3.extend.subversion</module>
    <function>executeAction</function>
    <index>99</index>
  </action>
</extensions>

This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own subversion configuration section. This is an example Subversion configuration section:

<subversion>
  <collect_mode>incr</collect_mode>
  <compress_mode>bzip2</compress_mode>
  <repository>
    <abs_path>/opt/public/svn/docs</abs_path>
  </repository>
  <repository>
    <abs_path>/opt/public/svn/web</abs_path>
    <compress_mode>gzip</compress_mode>
  </repository>
  <repository_dir>
    <abs_path>/opt/private/svn</abs_path>
    <collect_mode>daily</collect_mode>
  </repository_dir>
</subversion>

The following elements are part of the Subversion configuration section:

collect_mode
  Default collect mode. The collect mode describes how frequently a Subversion repository is backed up.
The Subversion extension recognizes the same collect modes as the standard Cedar Backup collect action. This value is the collect mode that will be used by default during the backup process. Individual repositories (below) may override this value. If all individual repositories provide their own value, then this default value may be omitted from configuration. Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data. Restrictions: Must be one of daily, weekly or incr.

compress_mode
  Default compress mode. Subversion repository backups are just specially-formatted text files, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all. This value is the compress mode that will be used by default during the backup process. Individual repositories (below) may override this value. If all individual repositories provide their own value, then this default value may be omitted from configuration. Restrictions: Must be one of none, gzip or bzip2.

repository
  A Subversion repository to be collected. This is a subsection which contains information about a specific Subversion repository to be backed up. This section can be repeated as many times as is necessary. At least one repository or repository directory must be configured. The repository subsection contains the following fields:

  collect_mode
    Collect mode for this repository. This field is optional. If it doesn't exist, the backup will use the default collect mode. Restrictions: Must be one of daily, weekly or incr.

  compress_mode
    Compress mode for this repository. This field is optional. If it doesn't exist, the backup will use the default compress mode. Restrictions: Must be one of none, gzip or bzip2.

  abs_path
    Absolute path of the Subversion repository to back up. Restrictions: Must be an absolute path.
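The incremental svnadmin dump invocation mentioned earlier can be sketched roughly as follows. The helper is hypothetical (not Cedar Backup's actual code) and assumes the caller tracks the last revision already dumped; --incremental and --revision are real svnadmin dump options:

```python
def build_dump_command(repository, start_rev, end_rev):
    """Build an incremental svnadmin dump command for a revision range.

    Hypothetical helper: assumes start_rev is one past the last revision
    already backed up; the real extension tracks that state itself.
    """
    return [
        "svnadmin", "dump", "--quiet", "--incremental",
        "--revision", "%d:%d" % (start_rev, end_rev),
        repository,
    ]
```

The resulting list would typically be handed to a subprocess runner, with stdout piped through gzip or bzip2 according to the configured compress mode.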
repository_dir
  A Subversion parent repository directory to be collected. This is a subsection which contains information about a Subversion parent repository directory to be backed up. Any subdirectory immediately within this directory is assumed to be a Subversion repository, and will be backed up. This section can be repeated as many times as is necessary. At least one repository or repository directory must be configured. The repository_dir subsection contains the following fields:

  collect_mode
    Collect mode for this repository. This field is optional. If it doesn't exist, the backup will use the default collect mode. Restrictions: Must be one of daily, weekly or incr.

  compress_mode
    Compress mode for this repository. This field is optional. If it doesn't exist, the backup will use the default compress mode. Restrictions: Must be one of none, gzip or bzip2.

  abs_path
    Absolute path of the Subversion parent directory to back up. Restrictions: Must be an absolute path.

  exclude
    List of paths or patterns to exclude from the backup. This is a subsection which contains a set of paths and patterns to be excluded within this Subversion parent directory. This section is entirely optional, and if it exists can also be empty. The exclude subsection can contain one or more of each of the following fields:

    rel_path
      A relative path to be excluded from the backup. The path is assumed to be relative to the Subversion parent directory itself. For instance, if the configured Subversion parent directory is /opt/svn, a configured relative path of software would exclude the path /opt/svn/software. This field can be repeated as many times as is necessary. Restrictions: Must be non-empty.

    pattern
      A pattern to be excluded from the backup. The pattern must be a Python regular expression. It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $). This field can be repeated as many times as is necessary.
Restrictions: Must be non-empty.

MySQL Extension

The MySQL Extension is a Cedar Backup extension used to back up MySQL databases via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action.

This extension always produces a full backup. There is currently no facility for making incremental backups. If/when someone has a need for this and can describe how to do it, I will update this extension or provide another.

The backup is done via the mysqldump command included with the MySQL product. Output can be compressed using gzip or bzip2. Administrators can configure the extension either to back up all databases or to back up only specific databases.

The extension assumes that all configured databases can be backed up by a single user. Often, the root database user will be used. An alternative is to create a separate MySQL backup user and grant that user rights to read (but not write) various databases as needed. This second option is probably your best choice.

The extension accepts a username and password in configuration. However, you probably do not want to list those values in Cedar Backup configuration. This is because Cedar Backup will provide these values to mysqldump via command-line switches, which will be visible to other users in the process listing. Instead, you should configure the username and password in one of MySQL's configuration files. Typically, that would be done by putting a stanza like this in /root/.my.cnf:

[mysqldump]
user     = root
password = <secret>

Of course, if you are executing the backup as a user other than root, then you would create the file in that user's home directory instead.

As a side note, it is also possible to configure .my.cnf such that Cedar Backup can back up a remote database server:

[mysqldump]
host = remote.host

For this to work, you will also need to grant privileges properly for the user which is executing the backup.
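With credentials kept in ~/.my.cnf, the dump command itself stays free of passwords, so nothing sensitive shows up in the process listing. A rough sketch of how such a command line might be assembled (a hypothetical helper, not Cedar Backup's actual implementation; the --single-transaction flag is my own assumption, added for a consistent InnoDB snapshot):

```python
def build_mysqldump_command(all_databases, databases=()):
    """Assemble a mysqldump command; credentials come from ~/.my.cnf,
    so no --user/--password switches appear in the process listing."""
    command = ["mysqldump", "--single-transaction"]  # assumption, not from the manual
    if all_databases:
        command.append("--all-databases")  # one big dump file
    else:
        command.append("--databases")      # one dump per configured database
        command.extend(databases)
    return command
```

This mirrors the all/database configuration choice described below: Y produces a single --all-databases dump, while N enumerates specific databases.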
See your MySQL documentation for more information about how this can be done.

Regardless of whether you are using ~/.my.cnf or /etc/cback3.conf to store database login and password information, you should be careful about who is allowed to view that information. Typically, this means locking down permissions so that only the file owner can read the file contents (i.e. use mode 0600).

To enable this extension, add the following section to the Cedar Backup configuration file:

<extensions>
  <action>
    <name>mysql</name>
    <module>CedarBackup3.extend.mysql</module>
    <function>executeAction</function>
    <index>99</index>
  </action>
</extensions>

This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own mysql configuration section. This is an example MySQL configuration section:

<mysql>
  <compress_mode>bzip2</compress_mode>
  <all>Y</all>
</mysql>

If you have decided to configure login information in Cedar Backup rather than using MySQL configuration, then you would add the username and password fields to configuration:

<mysql>
  <user>root</user>
  <password>password</password>
  <compress_mode>bzip2</compress_mode>
  <all>Y</all>
</mysql>

The following elements are part of the MySQL configuration section:

user
  Database user. The database user that the backup should be executed as. Even if you list more than one database (below), all backups must be done as the same user. Typically, this would be root (i.e. the database root user, not the system root user). This value is optional. You should probably configure the username and password in MySQL configuration instead, as discussed above. Restrictions: If provided, must be non-empty.

password
  Password associated with the database user. This value is optional. You should probably configure the username and password in MySQL configuration instead, as discussed above. Restrictions: If provided, must be non-empty.

compress_mode
  Compress mode.
MySQL database dumps are just specially-formatted text files, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all. Restrictions: Must be one of none, gzip or bzip2.

all
  Indicates whether to back up all databases. If this value is Y, then all MySQL databases will be backed up. If this value is N, then one or more specific databases must be specified (see below). If you choose this option, the entire database backup will go into one big dump file. Restrictions: Must be a boolean (Y or N).

database
  Named database to be backed up. If you choose to specify individual databases rather than all databases, then each database will be backed up into its own dump file. This field can be repeated as many times as is necessary. At least one database must be configured if the all option (above) is set to N. You may not configure any individual databases if the all option is set to Y. Restrictions: Must be non-empty.

PostgreSQL Extension

Community-contributed Extension: This is a community-contributed extension provided by Antoine Beaupre ("The Anarcat"). I have added regression tests around the configuration parsing code, and I will maintain this section in the user manual based on his source code documentation. Unfortunately, I don't have any PostgreSQL databases with which to test the functional code. While I have code-reviewed the code and it looks both sensible and safe, I have to rely on the author to ensure that it works properly.
Administrators can configure the extension either to back up all databases or to back up only specific databases. The extension assumes that the current user has passwordless access to the database since there is no easy way to pass a password to the pg_dump client. This can be accomplished using appropriate configuration in the pg_hba.conf file. This extension always produces a full backup. There is currently no facility for making incremental backups. Once you place PostgreSQL configuration into the Cedar Backup configuration file, you should be careful about who is allowed to see that information. This is because PostgreSQL configuration will contain information about available PostgreSQL databases and usernames. Typically, you might want to lock down permissions so that only the file owner can read the file contents (i.e. use mode 0600). To enable this extension, add the following section to the Cedar Backup configuration file: <extensions> <action> <name>postgresql</name> <module>CedarBackup3.extend.postgresql</module> <function>executeAction</function> <index>99</index> </action> </extensions> This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own postgresql configuration section. This is an example PostgreSQL configuration section: <postgresql> <compress_mode>bzip2</compress_mode> <user>username</user> <all>Y</all> </postgresql> If you decide to back up specific databases, then you would list them individually, like this: <postgresql> <compress_mode>bzip2</compress_mode> <user>username</user> <all>N</all> <database>db1</database> <database>db2</database> </postgresql> The following elements are part of the PostgreSQL configuration section: user Database user. The database user that the backup should be executed as. Even if you list more than one database (below) all backups must be done as the same user. This value is optional. 
Consult your PostgreSQL documentation for information on how to configure a default database user outside of Cedar Backup, and for information on how to specify a database password when you configure a user within Cedar Backup. You will probably want to modify pg_hba.conf. Restrictions: If provided, must be non-empty. compress_mode Compress mode. PostgreSQL database dumps are just specially-formatted text files, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all. Restrictions: Must be one of none, gzip or bzip2. all Indicates whether to back up all databases. If this value is Y, then all PostgreSQL databases will be backed up. If this value is N, then one or more specific databases must be specified (see below). If you choose this option, the entire database backup will go into one big dump file. Restrictions: Must be a boolean (Y or N). database Named database to be backed up. If you choose to specify individual databases rather than all databases, then each database will be backed up into its own dump file. This field can be repeated as many times as is necessary. At least one database must be configured if the all option (above) is set to N. You may not configure any individual databases if the all option is set to Y. Restrictions: Must be non-empty. Mbox Extension The Mbox Extension is a Cedar Backup extension used to incrementally back up UNIX-style mbox mail folders via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action. Mbox mail folders are not well-suited to being backed up by the normal Cedar Backup incremental backup process. This is because active folders are typically appended to on a daily basis. This forces the incremental backup process to back them up every day in order to avoid losing data. This can result in quite a bit of wasted space when backing up large mail folders. 
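This waste can be avoided by selecting only messages received after a cutoff date. As a conceptual illustration only (the extension itself delegates this work to the grepmail utility, described below), the same kind of date-based selection can be expressed with Python's standard mailbox module:

```python
# Conceptual sketch of date-based mbox selection using only the standard
# library.  This is NOT how the mbox extension works internally; it relies
# on grepmail instead.

import mailbox
from email.utils import parsedate_to_datetime

def messagesSince(path, cutoff):
   """Yield messages from the mbox file at path whose Date is after cutoff."""
   for message in mailbox.mbox(path):
      date = message["Date"]                # may be None for malformed messages
      if date is not None and parsedate_to_datetime(date) > cutoff:
         yield message
```

Only the yielded messages would need to be written to the backup, no matter how large the folder has grown.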
What the mbox extension does is leverage the grepmail utility to back up only email messages which have been received since the last incremental backup. This way, even if a folder is added to every day, only the recently-added messages are backed up. This can potentially save a lot of space. Each configured mbox file or directory can be backed up using the same collect modes allowed for filesystems in the standard Cedar Backup collect action (weekly, daily, incremental) and the output can be compressed using either gzip or bzip2. To enable this extension, add the following section to the Cedar Backup configuration file: <extensions> <action> <name>mbox</name> <module>CedarBackup3.extend.mbox</module> <function>executeAction</function> <index>99</index> </action> </extensions> This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own mbox configuration section. This is an example mbox configuration section: <mbox> <collect_mode>incr</collect_mode> <compress_mode>gzip</compress_mode> <file> <abs_path>/home/user1/mail/greylist</abs_path> <collect_mode>daily</collect_mode> </file> <dir> <abs_path>/home/user2/mail</abs_path> </dir> <dir> <abs_path>/home/user3/mail</abs_path> <exclude> <rel_path>spam</rel_path> <pattern>.*debian.*</pattern> </exclude> </dir> </mbox> Configuration is much like the standard collect action. Differences come from the fact that mbox directories are not collected recursively. Unlike collect configuration, exclusion information can only be configured at the mbox directory level (there are no global exclusions). Another difference is that no absolute exclusion paths are allowed — only relative path exclusions and patterns. The following elements are part of the mbox configuration section: collect_mode Default collect mode. The collect mode describes how frequently an mbox file or directory is backed up. 
The mbox extension recognizes the same collect modes as the standard Cedar Backup collect action (see ). This value is the collect mode that will be used by default during the backup process. Individual files or directories (below) may override this value. If all individual files or directories provide their own value, then this default value may be omitted from configuration. Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data. Restrictions: Must be one of daily, weekly or incr. compress_mode Default compress mode. Mbox file or directory backups are just text, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all. This value is the compress mode that will be used by default during the backup process. Individual files or directories (below) may override this value. If all individual files or directories provide their own value, then this default value may be omitted from configuration. Restrictions: Must be one of none, gzip or bzip2. file An individual mbox file to be collected. This is a subsection which contains information about an individual mbox file to be backed up. This section can be repeated as many times as is necessary. At least one mbox file or directory must be configured. The file subsection contains the following fields: collect_mode Collect mode for this file. This field is optional. If it doesn't exist, the backup will use the default collect mode. Restrictions: Must be one of daily, weekly or incr. compress_mode Compress mode for this file. This field is optional. If it doesn't exist, the backup will use the default compress mode. Restrictions: Must be one of none, gzip or bzip2. abs_path Absolute path of the mbox file to back up. Restrictions: Must be an absolute path. dir An mbox directory to be collected. 
This is a subsection which contains information about an mbox directory to be backed up. An mbox directory is a directory containing mbox files. Every file in an mbox directory is assumed to be an mbox file. Mbox directories are not collected recursively. Only the files immediately within the configured directory will be backed up, and any subdirectories will be ignored. This section can be repeated as many times as is necessary. At least one mbox file or directory must be configured. The dir subsection contains the following fields: collect_mode Collect mode for this directory. This field is optional. If it doesn't exist, the backup will use the default collect mode. Restrictions: Must be one of daily, weekly or incr. compress_mode Compress mode for this directory. This field is optional. If it doesn't exist, the backup will use the default compress mode. Restrictions: Must be one of none, gzip or bzip2. abs_path Absolute path of the mbox directory to back up. Restrictions: Must be an absolute path. exclude List of paths or patterns to exclude from the backup. This is a subsection which contains a set of paths and patterns to be excluded within this mbox directory. This section is entirely optional, and if it exists can also be empty. The exclude subsection can contain one or more of each of the following fields: rel_path A relative path to be excluded from the backup. The path is assumed to be relative to the mbox directory itself. For instance, if the configured mbox directory is /home/user2/mail, a configured relative path of SPAM would exclude the path /home/user2/mail/SPAM. This field can be repeated as many times as is necessary. Restrictions: Must be non-empty. pattern A pattern to be excluded from the backup. The pattern must be a Python regular expression. It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $). This field can be repeated as many times as is necessary. 
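The bounded-pattern behavior just described (a pattern treated as if it begins with ^ and ends with $) is equivalent to a whole-string match, which Python's re.fullmatch demonstrates. The helper name below is illustrative only, not the extension's actual code:

```python
# Demonstrates the implicit ^...$ bounding of exclusion patterns via a
# whole-string regular expression match.  Illustrative only.

import re

def patternExcludes(pattern, name):
   """Return True if the pattern matches the entire file name."""
   return re.fullmatch(pattern, name) is not None

print(patternExcludes(r".*debian.*", "lists.debian.user"))   # True: whole string matches
print(patternExcludes(r"debian", "lists.debian.user"))       # False: only a substring matches
```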
Restrictions: Must be non-empty. Encrypt Extension The Encrypt Extension is a Cedar Backup extension used to encrypt backups. It does this by encrypting the contents of a master's staging directory each day after the stage action is run. This way, backed-up data is encrypted both when sitting on the master and when written to disc. This extension must be run before the standard store action, otherwise unencrypted data will be written to disc. There are several different ways encryption could have been built into or layered onto Cedar Backup. I asked the mailing list for opinions on the subject in January 2007 and did not get a lot of feedback, so I chose the option that was simplest to understand and simplest to implement. If other encryption use cases make themselves known in the future, this extension can be enhanced or replaced. Currently, this extension supports only GPG. However, it would be straightforward to support other public-key encryption mechanisms, such as OpenSSL. If you decide to encrypt your backups, be absolutely sure that you have your GPG secret key saved off someplace safe — someplace other than on your backup disc. If you lose your secret key, your backup will be useless. I suggest that before you rely on this extension, you should execute a dry run and make sure you can successfully decrypt the backup that is written to disc. Before configuring the Encrypt extension, you must configure GPG. Either create a new keypair or use an existing one. Determine which user will execute your backup (typically root) and have that user import and lsign the public half of the keypair. Then, save off the secret half of the keypair someplace safe, apart from your backup (i.e. on a floppy disk or USB drive). Make sure you know the recipient name associated with the public key because you'll need it to configure Cedar Backup. (If you can run gpg -e -r "Recipient Name" file.txt and it executes cleanly with no user interaction required, you should be OK.) 
An encrypted backup has the same file structure as a normal backup, so all of the instructions in apply. The only difference is that encrypted files will have an additional .gpg extension (so for instance file.tar.gz becomes file.tar.gz.gpg). To recover the data, simply log on as a user who has access to the secret key and decrypt the .gpg file that you are interested in. Then, recover the data as usual. Note: I am being intentionally vague about how to configure and use GPG, because I do not want to encourage neophytes to blindly use this extension. If you do not already understand GPG well enough to follow the two paragraphs above, do not use this extension. Instead, before encrypting your backups, check out the excellent GNU Privacy Handbook at and gain an understanding of how encryption can help you or hurt you. To enable this extension, add the following section to the Cedar Backup configuration file: <extensions> <action> <name>encrypt</name> <module>CedarBackup3.extend.encrypt</module> <function>executeAction</function> <index>301</index> </action> </extensions> This extension relies on the options and staging configuration sections in the standard Cedar Backup configuration file, and then also requires its own encrypt configuration section. This is an example Encrypt configuration section: <encrypt> <encrypt_mode>gpg</encrypt_mode> <encrypt_target>Backup User</encrypt_target> </encrypt> The following elements are part of the Encrypt configuration section: encrypt_mode Encryption mode. This value specifies which encryption mechanism will be used by the extension. Currently, only the GPG public-key encryption mechanism is supported. Restrictions: Must be gpg. encrypt_target Encryption target. The value in this field is dependent on the encryption mode. For the gpg mode, this is the name of the recipient whose public key will be used to encrypt the backup data, i.e. the value accepted by gpg -r. 
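To make the .gpg naming concrete, here is a hedged sketch of how staged files might be handed to GPG. The helper name and arguments are invented for illustration; only the gpg -e -r usage comes from the text above, and this is not the extension's actual code:

```python
# Hypothetical sketch: build one "gpg -e -r <recipient>" command per staged
# file.  By default gpg writes its output alongside the input with a .gpg
# extension, so file.tar.gz becomes file.tar.gz.gpg.

import os

def buildEncryptCommands(stagingDir, recipient, fileNames):
   """Build a gpg encryption command for each file in the staging directory."""
   return [["gpg", "-e", "-r", recipient, os.path.join(stagingDir, name)]
           for name in fileNames]

for command in buildEncryptCommands("/backup/staging", "Backup User", ["file.tar.gz"]):
   print(command)
```

Here "Backup User" plays the role of the configured encrypt_target, and /backup/staging stands in for the master's staging directory.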
Split Extension The Split Extension is a Cedar Backup extension used to split up large files within staging directories. It is probably only useful in combination with the cback3-span command, which requires individual files within staging directories to each be smaller than a single disc. You would normally run this action immediately after the standard stage action, but you could also choose to run it by hand immediately before running cback3-span. The split extension uses the standard UNIX split tool to split the large files up. This tool simply splits the files at fixed byte boundaries. It has no knowledge of file formats. Note: this means that in order to recover the data in your original large file, you must have every file that the original file was split into. Think carefully about whether this is what you want. It doesn't sound like a huge limitation. However, cback3-span might put an individual file on any disc in a set — the files split from one larger file will not necessarily be together. That means you will probably need every disc in your backup set in order to recover any data from the backup set. To enable this extension, add the following section to the Cedar Backup configuration file: <extensions> <action> <name>split</name> <module>CedarBackup3.extend.split</module> <function>executeAction</function> <index>299</index> </action> </extensions> This extension relies on the options and staging configuration sections in the standard Cedar Backup configuration file, and then also requires its own split configuration section. This is an example Split configuration section: <split> <size_limit>250 MB</size_limit> <split_size>100 MB</split_size> </split> The following elements are part of the Split configuration section: size_limit Size limit. Files with a size strictly larger than this limit will be split by the extension. You can enter this value in two different forms. 
It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB). Valid examples are 10240, 250 MB or 1.1 GB. Restrictions: Must be a size as described above. split_size Split size. This is the size of the chunks that a large file will be split into. The final chunk may be smaller if the split size doesn't divide evenly into the file size. You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB). Valid examples are 10240, 250 MB or 1.1 GB. Restrictions: Must be a size as described above. Capacity Extension The capacity extension checks the current capacity of the media in the writer and prints a warning if the media exceeds an indicated capacity. The capacity is indicated either by a maximum percentage utilized or by a minimum number of bytes that must remain unused. This action can be run at any time, but is probably best run as the last action on any given day, so you get as much notice as possible that your media is full and needs to be replaced. To enable this extension, add the following section to the Cedar Backup configuration file: <extensions> <action> <name>capacity</name> <module>CedarBackup3.extend.capacity</module> <function>executeAction</function> <index>299</index> </action> </extensions> This extension relies on the options and store configuration sections in the standard Cedar Backup configuration file, and then also requires its own capacity configuration section. 
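The byte-quantity form shared by size_limit, split_size, and the capacity extension's min_bytes can be parsed as in this sketch. Binary units (KB = 1024 bytes) are an assumption here, and parseByteQuantity is an invented helper, not Cedar Backup's actual ByteQuantity implementation:

```python
# Sketch of parsing quantities like "10240", "250 MB" or "1.1 GB" into bytes.
# Binary units (KB = 1024) are an assumption; this is NOT the real
# ByteQuantity class from Cedar Backup.

UNITS = {"KB": 1024, "MB": 1024 ** 2, "GB": 1024 ** 3}

def parseByteQuantity(value):
   """Convert a quantity string (bare bytes, or number plus unit) to bytes."""
   parts = value.split()
   if len(parts) == 1:
      return float(parts[0])                     # simple number: already bytes
   number, unit = parts
   return float(number) * UNITS[unit.upper()]    # number followed by KB/MB/GB

print(parseByteQuantity("10240"))     # 10240.0
print(parseByteQuantity("16 MB"))     # 16777216.0
```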
This is an example Capacity configuration section that configures the extension to warn if the media is more than 95.5% full: <capacity> <max_percentage>95.5</max_percentage> </capacity> This example configures the extension to warn if the media has fewer than 16 MB free: <capacity> <min_bytes>16 MB</min_bytes> </capacity> The following elements are part of the Capacity configuration section: max_percentage Maximum percentage of the media that may be utilized. You must provide either this value or the min_bytes value. Restrictions: Must be a floating point number between 0.0 and 100.0 min_bytes Minimum number of free bytes that must be available. You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB). Valid examples are 10240, 250 MB or 1.1 GB. You must provide either this value or the max_percentage value. Restrictions: Must be a byte quantity as described above. CedarBackup3-3.1.6/setup.py0000775000175000017500000000530512560007327017214 0ustar pronovicpronovic00000000000000#!/usr/bin/python3 # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. 
Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Python distutils setup script # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # pylint: disable=C0111,E0611,F0401 ######################################################################## # Imported modules ######################################################################## from distutils.core import setup from CedarBackup3.release import AUTHOR, EMAIL, VERSION, COPYRIGHT, URL ######################################################################## # Setup configuration ######################################################################## LONG_DESCRIPTION = """ Cedar Backup is a software package designed to manage system backups for a pool of local and remote machines. Cedar Backup understands how to back up filesystem data as well as MySQL and PostgreSQL databases and Subversion repositories. It can also be easily extended to support other kinds of data sources. Cedar Backup is focused around weekly backups to a single CD or DVD disc, with the expectation that the disc will be changed or overwritten at the beginning of each week. If your hardware is new enough, Cedar Backup can write multisession discs, allowing you to add incremental data to a disc on a daily basis. Alternately, Cedar Backup can write your backups to the Amazon S3 cloud rather than relying on physical media. Besides offering command-line utilities to manage the backup process, Cedar Backup provides a well-organized library of backup-related functionality, written in the Python 3 programming language. """ setup ( name = 'CedarBackup3', version = VERSION, description = 'Implements local and remote backups to CD/DVD media.', long_description = LONG_DESCRIPTION, keywords = ('local', 'remote', 'backup', 'scp', 'CD-R', 'CD-RW', 'DVD+R', 'DVD+RW',), author = AUTHOR, author_email = EMAIL, url = URL, license = "Copyright (c) %s %s. Licensed under the GNU GPL." 
% (COPYRIGHT, AUTHOR), packages = ['CedarBackup3', 'CedarBackup3.actions', 'CedarBackup3.extend', 'CedarBackup3.tools', 'CedarBackup3.writers', ], scripts = ['cback3', 'util/cback3-span', 'util/cback3-amazons3-sync', ], ) CedarBackup3-3.1.6/util/0002775000175000017500000000000012657665551016473 5ustar pronovicpronovic00000000000000CedarBackup3-3.1.6/util/cback3-span0000775000175000017500000000142212555754026020474 0ustar pronovicpronovic00000000000000#!/usr/bin/python3 # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Implements Cedar Backup cback3-span script. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # """ Implements Cedar Backup cback3-span script. @author: Kenneth J. Pronovici """ import sys from CedarBackup3.tools.span import cli result = cli() sys.exit(result) CedarBackup3-3.1.6/util/docbook/0002775000175000017500000000000012657665551020113 5ustar pronovicpronovic00000000000000CedarBackup3-3.1.6/util/docbook/chunk-stylesheet.xsl0000664000175000017500000000423312555004757024132 0ustar pronovicpronovic00000000000000 styles.css 3 0 CedarBackup3-3.1.6/util/docbook/styles.css0000664000175000017500000000664712555004757022153 0ustar pronovicpronovic00000000000000/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * C E D A R * S O L U T I O N S "Software done right." * S O F T W A R E * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * Author : Kenneth J. Pronovici * Language : XSLT * Project : Cedar Backup, release 3 * Purpose : Custom stylesheet applied to user manual in HTML form. 
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ /* This stylesheet was originally taken from the Subversion project's book (http://svnbook.red-bean.com/). I have not made any modifications to the sheet for use with Cedar Backup. The original stylesheet was (c) 2000-2004 CollabNet (see CREDITS). */ BODY { background: white; margin: 0.5in; font-family: arial,helvetica,sans-serif; } H1.title { font-size: 250%; font-style: normal; font-weight: bold; color: black; } H2.subtitle { font-size: 150%; font-style: italic; color: black; } H2.title { font-size: 150%; font-style: normal; font-weight: bold; color: black; } H3.title { font-size: 125%; font-style: normal; font-weight: bold; color: black; } H4.title { font-size: 100%; font-style: normal; font-weight: bold; color: black; } .toc B { font-size: 125%; font-style: normal; font-weight: bold; color: black; } P,LI,UL,OL,DD,DT { font-style: normal; font-weight: normal; color: black; } TT,PRE { font-family: courier new,courier,fixed; } .command, .screen, .programlisting { font-family: courier new,courier,fixed; font-style: normal; font-weight: normal; } .filename { font-family: arial,helvetica,sans-serif; font-style: italic; } A { color: blue; text-decoration: underline; } A:hover { background: rgb(75%,75%,100%); color: blue; text-decoration: underline; } A:visited { color: purple; text-decoration: underline; } IMG { border: none; } .figure, .example, .table { margin: 0.125in 0.5in; } .table TABLE { border: 1px rgb(180,180,200) solid; border-spacing: 0px; } .table TD { border: 1px rgb(180,180,200) solid; } .table TH { background: rgb(180,180,200); border: 1px rgb(180,180,200) solid; } .table P.title, .figure P.title, .example P.title { text-align: left !important; font-size: 100% !important; } .author { font-size: 100%; font-style: italic; font-weight: normal; color: black; } .sidebar { border: 2px black solid; background: rgb(230,230,235); padding: 0.12in; margin: 0 0.5in; } .sidebar P.title { 
text-align: center; font-size: 125%; } .tip { border: black solid 1px; background: url(./images/info.png) no-repeat; margin: 0.12in 0; padding: 0 55px; } .warning { border: black solid 1px; background: url(./images/warning.png) no-repeat; margin: 0.12in 0; padding: 0 55px; } .note { border: black solid 1px; background: url(./images/note.png) no-repeat; margin: 0.12in 0; padding: 0 55px; } .programlisting, .screen { font-family: courier new,courier,fixed; font-style: normal; font-weight: normal; font-size: 90%; color: black; margin: 0 0.5in; } .navheader, .navfooter { border: black solid 1px; background: rgb(180,180,200); } .navheader HR, .navfooter HR { display: none; } CedarBackup3-3.1.6/util/docbook/dblite.dtd0000664000175000017500000005060312555004757022045 0ustar pronovicpronovic00000000000000 %db; CedarBackup3-3.1.6/util/docbook/html-stylesheet.xsl0000664000175000017500000000424312555004757023767 0ustar pronovicpronovic00000000000000 styles.css 3 0 CedarBackup3-3.1.6/util/test.py0000775000175000017500000002434212560007327020012 0ustar pronovicpronovic00000000000000#!/usr/bin/python3 # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2008,2010,2014,2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. 
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Run all of the unit tests for the project. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Notes ######################################################################## """ Run the CedarBackup3 unit tests. This script runs all of the unit tests at once so we can get one big success or failure result, rather than 20 different smaller results that we somehow have to aggregate together to get the "big picture". This is done by creating and running one big unit test suite based on the suites in the individual unit test modules. The composite suite is always run using the TextTestRunner at verbosity level 1, which prints one dot (".") on the screen for each test run. This output is the same as one would get when using unittest.main() in an individual test. Generally, I'm trying to keep all of the "special" validation logic (i.e. did we find the right Python, did we find the right libraries, etc.) in this code rather than in the individual unit tests so they're more focused on what to test than how their environment should be configured. We want to make sure the tests use the modules in the current source tree, not any versions previously-installed elsewhere, if possible. We don't actually import the modules here, but we warn if the wrong ones would be found. We also want to make sure we are running the correct 'test' package - not one found elsewhere on the user's path - since 'test' could be a relatively common name for a package. Most people will want to run the script with no arguments. This will result in a "reduced feature set" test suite that covers all of the available test suites, but executes only those tests with no surprising system, kernel or network dependencies. 
If "full" is specified as one of the command-line arguments, then all of the unit tests will be run, including those that require a specialized environment. For instance, some tests require remote connectivity, a loopback filesystem, etc. Other arguments on the command line are assumed to be named tests, so for instance passing "config" runs only the tests for config.py. Any number of individual tests may be listed on the command line, and unknown values will simply be ignored. @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## import sys import os import logging import unittest ################## # main() function ################## def main(): """ Main routine for program. @return: Integer 0 upon success, integer 1 upon failure. """ # Check the Python version. We require 3.4 or greater. try: if list(map(int, [sys.version_info[0], sys.version_info[1]])) < [3, 4]: print("Python 3 version 3.4 or greater required, sorry.") return 1 except: # sys.version_info isn't available before 2.0 print("Python 3 version 3.4 or greater required, sorry.") return 1 # Check for the correct CedarBackup3 location and import utilities try: if os.path.exists(os.path.join(".", "CedarBackup3", "filesystem.py")): sys.path.insert(0, ".") elif os.path.basename(os.getcwd()) == "testcase" and os.path.exists(os.path.join("..", "CedarBackup3", "filesystem.py")): sys.path.insert(0, "..") else: print("WARNING: CedarBackup3 modules were not found in the expected") print("location. 
If the import succeeds, you may be using an") print("unexpected version of CedarBackup3.") print("") from CedarBackup3.util import nullDevice, Diagnostics except ImportError as e: print(("Failed to import CedarBackup3 util module: %s" % e)) print("You must either run the unit tests from the CedarBackup3 source") print("tree, or properly set the PYTHONPATH enviroment variable.") return 1 # Import the unit test modules try: if os.path.exists(os.path.join(".", "testcase", "filesystemtests.py")): sys.path.insert(0, ".") elif os.path.basename(os.getcwd()) == "testcase" and os.path.exists(os.path.join("..", "testcase", "filesystemtests.py")): sys.path.insert(0, "..") else: print("WARNING: CedarBackup3 unit test modules were not found in") print("the expected location. If the import succeeds, you may be") print("using an unexpected version of the test suite.") print("") from testcase import utiltests from testcase import knapsacktests from testcase import filesystemtests from testcase import peertests from testcase import actionsutiltests from testcase import writersutiltests from testcase import cdwritertests from testcase import dvdwritertests from testcase import configtests from testcase import clitests from testcase import mysqltests from testcase import postgresqltests from testcase import subversiontests from testcase import mboxtests from testcase import encrypttests from testcase import amazons3tests from testcase import splittests from testcase import spantests from testcase import synctests from testcase import capacitytests from testcase import customizetests except ImportError as e: print(("Failed to import CedarBackup3 unit test module: %s" % e)) print("You must either run the unit tests from the CedarBackup3 source") print("tree, or properly set the PYTHONPATH enviroment variable.") return 1 # Set up logging to discard everything devnull = nullDevice() handler = logging.FileHandler(filename=devnull) handler.setLevel(logging.NOTSET) logger = 
logging.getLogger("CedarBackup3") logger.setLevel(logging.NOTSET) logger.addHandler(handler) # Get a list of program arguments args = sys.argv[1:] # Set flags in the environment to control tests if "full" in args: full = True os.environ["PEERTESTS_FULL"] = "Y" os.environ["WRITERSUTILTESTS_FULL"] = "Y" os.environ["ENCRYPTTESTS_FULL"] = "Y" os.environ["SPLITTESTS_FULL"] = "Y" args.remove("full") # remainder of list will be specific tests to run, if any else: full = False os.environ["PEERTESTS_FULL"] = "N" os.environ["WRITERSUTILTESTS_FULL"] = "N" os.environ["ENCRYPTTESTS_FULL"] = "N" os.environ["SPLITTESTS_FULL"] = "N" # Print a starting banner print("\n*** Running CedarBackup3 unit tests.") if not full: print("*** Using reduced feature set suite with minimum system requirements.") # Make a list of tests to run unittests = { } if args == [] or "util" in args: unittests["util"] = utiltests.suite() if args == [] or "knapsack" in args: unittests["knapsack"] = knapsacktests.suite() if args == [] or "filesystem" in args: unittests["filesystem"] = filesystemtests.suite() if args == [] or "peer" in args: unittests["peer"] = peertests.suite() if args == [] or "actionsutil" in args: unittests["actionsutil"] = actionsutiltests.suite() if args == [] or "writersutil" in args: unittests["writersutil"] = writersutiltests.suite() if args == [] or "cdwriter" in args: unittests["cdwriter"] = cdwritertests.suite() if args == [] or "dvdwriter" in args: unittests["dvdwriter"] = dvdwritertests.suite() if args == [] or "config" in args: unittests["config"] = configtests.suite() if args == [] or "cli" in args: unittests["cli"] = clitests.suite() if args == [] or "mysql" in args: unittests["mysql"] = mysqltests.suite() if args == [] or "postgresql" in args: unittests["postgresql"] = postgresqltests.suite() if args == [] or "subversion" in args: unittests["subversion"] = subversiontests.suite() if args == [] or "mbox" in args: unittests["mbox"] = mboxtests.suite() if args == [] or "split" in 
args: unittests["split"] = splittests.suite() if args == [] or "encrypt" in args: unittests["encrypt"] = encrypttests.suite() if args == [] or "amazons3" in args: unittests["amazons3"] = amazons3tests.suite() if args == [] or "span" in args: unittests["span"] = spantests.suite() if args == [] or "sync" in args: unittests["sync"] = synctests.suite() if args == [] or "capacity" in args: unittests["capacity"] = capacitytests.suite() if args == [] or "customize" in args: unittests["customize"] = customizetests.suite() if args != []: print(("*** Executing specific tests: %s" % list(unittests.keys()))) # Print some diagnostic information print("") Diagnostics().printDiagnostics(prefix="*** ") # Create and run the test suite print("") suite = unittest.TestSuite(list(unittests.values())) suiteResult = unittest.TextTestRunner(verbosity=1).run(suite) print("") if not suiteResult.wasSuccessful(): return 1 else: return 0 ######################################################################## # Module entry point ######################################################################## # Run the main routine if the module is executed rather than sourced if __name__ == '__main__': result = main() sys.exit(result) CedarBackup3-3.1.6/util/cback3-amazons3-sync0000775000175000017500000000145012555754031022235 0ustar pronovicpronovic00000000000000#!/usr/bin/python3 # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Implements Cedar Backup cback3-amazons3-sync script. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # """ Implements Cedar Backup cback3-amazons3-sync script. @author: Kenneth J. 
Pronovici """ import sys from CedarBackup3.tools.amazons3 import cli result = cli() sys.exit(result) CedarBackup3-3.1.6/testcase/0002775000175000017500000000000012657665551017331 5ustar pronovicpronovic00000000000000CedarBackup3-3.1.6/testcase/mboxtests.py0000664000175000017500000023613112560007327021717 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2006,2010,2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Tests mbox extension functionality. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup3/extend/mbox.py. Code Coverage ============= This module contains individual tests for the many of the public functions and classes implemented in extend/mbox.py. There are also tests for several of the private methods. 
Naming Conventions ================== I prefer to avoid large unit tests which validate more than one piece of functionality, and I prefer to avoid using overly descriptive (read: long) test names, as well. Instead, I use lots of very small tests that each validate one specific thing. These small tests are then named with an index number, yielding something like C{testAddDir_001} or C{testValidate_010}. Each method has a docstring describing what it's supposed to accomplish. I feel that this makes it easier to judge how important a given failure is, and also makes it somewhat easier to diagnose and fix individual problems. Testing XML Extraction ====================== It's difficult to validated that generated XML is exactly "right", especially when dealing with pretty-printed XML. We can't just provide a constant string and say "the result must match this". Instead, what we do is extract a node, build some XML from it, and then feed that XML back into another object's constructor. If that parse process succeeds and the old object is equal to the new object, we assume that the extract was successful. It would arguably be better if we could do a completely independent check - but implementing that check would be equivalent to re-implementing all of the existing functionality that we're validating here! After all, the most important thing is that data can move seamlessly from object to XML document and back to object. Full vs. Reduced Tests ====================== All of the tests in this module are considered safe to be run in an average build environment. There is a no need to use a MBOXTESTS_FULL environment variable to provide a "reduced feature set" test suite as for some of the other test modules. @author Kenneth J. 
Pronovici """ ######################################################################## # Import modules and do runtime validations ######################################################################## # System modules import unittest # Cedar Backup modules from CedarBackup3.testutil import findResources, failUnlessAssignRaises from CedarBackup3.xmlutil import createOutputDom, serializeDom from CedarBackup3.extend.mbox import LocalConfig, MboxConfig, MboxFile, MboxDir ####################################################################### # Module-wide configuration and constants ####################################################################### DATA_DIRS = [ "./data", "./testcase/data", ] RESOURCES = [ "mbox.conf.1", "mbox.conf.2", "mbox.conf.3", "mbox.conf.4", ] ####################################################################### # Test Case Classes ####################################################################### ##################### # TestMboxFile class ##################### class TestMboxFile(unittest.TestCase): """Tests for the MboxFile class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = MboxFile() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. 
""" mboxFile = MboxFile() self.assertEqual(None, mboxFile.absolutePath) self.assertEqual(None, mboxFile.collectMode) self.assertEqual(None, mboxFile.compressMode) def testConstructor_002(self): """ Test constructor with all values filled in. """ mboxFile = MboxFile("/path/to/it", "daily", "gzip") self.assertEqual("/path/to/it", mboxFile.absolutePath) self.assertEqual("daily", mboxFile.collectMode) self.assertEqual("gzip", mboxFile.compressMode) def testConstructor_003(self): """ Test assignment of absolutePath attribute, None value. """ mboxFile = MboxFile(absolutePath="/path/to/something") self.assertEqual("/path/to/something", mboxFile.absolutePath) mboxFile.absolutePath = None self.assertEqual(None, mboxFile.absolutePath) def testConstructor_004(self): """ Test assignment of absolutePath attribute, valid value. """ mboxFile = MboxFile() self.assertEqual(None, mboxFile.absolutePath) mboxFile.absolutePath = "/path/to/whatever" self.assertEqual("/path/to/whatever", mboxFile.absolutePath) def testConstructor_005(self): """ Test assignment of absolutePath attribute, invalid value (empty). """ mboxFile = MboxFile() self.assertEqual(None, mboxFile.absolutePath) self.failUnlessAssignRaises(ValueError, mboxFile, "absolutePath", "") self.assertEqual(None, mboxFile.absolutePath) def testConstructor_006(self): """ Test assignment of absolutePath attribute, invalid value (not absolute). """ mboxFile = MboxFile() self.assertEqual(None, mboxFile.absolutePath) self.failUnlessAssignRaises(ValueError, mboxFile, "absolutePath", "relative/path") self.assertEqual(None, mboxFile.absolutePath) def testConstructor_007(self): """ Test assignment of collectMode attribute, None value. """ mboxFile = MboxFile(collectMode="daily") self.assertEqual("daily", mboxFile.collectMode) mboxFile.collectMode = None self.assertEqual(None, mboxFile.collectMode) def testConstructor_008(self): """ Test assignment of collectMode attribute, valid value. 
""" mboxFile = MboxFile() self.assertEqual(None, mboxFile.collectMode) mboxFile.collectMode = "daily" self.assertEqual("daily", mboxFile.collectMode) mboxFile.collectMode = "weekly" self.assertEqual("weekly", mboxFile.collectMode) mboxFile.collectMode = "incr" self.assertEqual("incr", mboxFile.collectMode) def testConstructor_009(self): """ Test assignment of collectMode attribute, invalid value (empty). """ mboxFile = MboxFile() self.assertEqual(None, mboxFile.collectMode) self.failUnlessAssignRaises(ValueError, mboxFile, "collectMode", "") self.assertEqual(None, mboxFile.collectMode) def testConstructor_010(self): """ Test assignment of collectMode attribute, invalid value (not in list). """ mboxFile = MboxFile() self.assertEqual(None, mboxFile.collectMode) self.failUnlessAssignRaises(ValueError, mboxFile, "collectMode", "monthly") self.assertEqual(None, mboxFile.collectMode) def testConstructor_011(self): """ Test assignment of compressMode attribute, None value. """ mboxFile = MboxFile(compressMode="gzip") self.assertEqual("gzip", mboxFile.compressMode) mboxFile.compressMode = None self.assertEqual(None, mboxFile.compressMode) def testConstructor_012(self): """ Test assignment of compressMode attribute, valid value. """ mboxFile = MboxFile() self.assertEqual(None, mboxFile.compressMode) mboxFile.compressMode = "none" self.assertEqual("none", mboxFile.compressMode) mboxFile.compressMode = "bzip2" self.assertEqual("bzip2", mboxFile.compressMode) mboxFile.compressMode = "gzip" self.assertEqual("gzip", mboxFile.compressMode) def testConstructor_013(self): """ Test assignment of compressMode attribute, invalid value (empty). """ mboxFile = MboxFile() self.assertEqual(None, mboxFile.compressMode) self.failUnlessAssignRaises(ValueError, mboxFile, "compressMode", "") self.assertEqual(None, mboxFile.compressMode) def testConstructor_014(self): """ Test assignment of compressMode attribute, invalid value (not in list). 
""" mboxFile = MboxFile() self.assertEqual(None, mboxFile.compressMode) self.failUnlessAssignRaises(ValueError, mboxFile, "compressMode", "compress") self.assertEqual(None, mboxFile.compressMode) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ mboxFile1 = MboxFile() mboxFile2 = MboxFile() self.assertEqual(mboxFile1, mboxFile2) self.assertTrue(mboxFile1 == mboxFile2) self.assertTrue(not mboxFile1 < mboxFile2) self.assertTrue(mboxFile1 <= mboxFile2) self.assertTrue(not mboxFile1 > mboxFile2) self.assertTrue(mboxFile1 >= mboxFile2) self.assertTrue(not mboxFile1 != mboxFile2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ mboxFile1 = MboxFile("/path", "daily", "gzip") mboxFile2 = MboxFile("/path", "daily", "gzip") self.assertEqual(mboxFile1, mboxFile2) self.assertTrue(mboxFile1 == mboxFile2) self.assertTrue(not mboxFile1 < mboxFile2) self.assertTrue(mboxFile1 <= mboxFile2) self.assertTrue(not mboxFile1 > mboxFile2) self.assertTrue(mboxFile1 >= mboxFile2) self.assertTrue(not mboxFile1 != mboxFile2) def testComparison_003(self): """ Test comparison of two differing objects, absolutePath differs (one None). """ mboxFile1 = MboxFile() mboxFile2 = MboxFile(absolutePath="/zippy") self.assertNotEqual(mboxFile1, mboxFile2) self.assertTrue(not mboxFile1 == mboxFile2) self.assertTrue(mboxFile1 < mboxFile2) self.assertTrue(mboxFile1 <= mboxFile2) self.assertTrue(not mboxFile1 > mboxFile2) self.assertTrue(not mboxFile1 >= mboxFile2) self.assertTrue(mboxFile1 != mboxFile2) def testComparison_004(self): """ Test comparison of two differing objects, absolutePath differs. 
""" mboxFile1 = MboxFile("/path", "daily", "gzip") mboxFile2 = MboxFile("/zippy", "daily", "gzip") self.assertNotEqual(mboxFile1, mboxFile2) self.assertTrue(not mboxFile1 == mboxFile2) self.assertTrue(mboxFile1 < mboxFile2) self.assertTrue(mboxFile1 <= mboxFile2) self.assertTrue(not mboxFile1 > mboxFile2) self.assertTrue(not mboxFile1 >= mboxFile2) self.assertTrue(mboxFile1 != mboxFile2) def testComparison_005(self): """ Test comparison of two differing objects, collectMode differs (one None). """ mboxFile1 = MboxFile() mboxFile2 = MboxFile(collectMode="incr") self.assertNotEqual(mboxFile1, mboxFile2) self.assertTrue(not mboxFile1 == mboxFile2) self.assertTrue(mboxFile1 < mboxFile2) self.assertTrue(mboxFile1 <= mboxFile2) self.assertTrue(not mboxFile1 > mboxFile2) self.assertTrue(not mboxFile1 >= mboxFile2) self.assertTrue(mboxFile1 != mboxFile2) def testComparison_006(self): """ Test comparison of two differing objects, collectMode differs. """ mboxFile1 = MboxFile("/path", "daily", "gzip") mboxFile2 = MboxFile("/path", "incr", "gzip") self.assertNotEqual(mboxFile1, mboxFile2) self.assertTrue(not mboxFile1 == mboxFile2) self.assertTrue(mboxFile1 < mboxFile2) self.assertTrue(mboxFile1 <= mboxFile2) self.assertTrue(not mboxFile1 > mboxFile2) self.assertTrue(not mboxFile1 >= mboxFile2) self.assertTrue(mboxFile1 != mboxFile2) def testComparison_007(self): """ Test comparison of two differing objects, compressMode differs (one None). """ mboxFile1 = MboxFile() mboxFile2 = MboxFile(compressMode="gzip") self.assertNotEqual(mboxFile1, mboxFile2) self.assertTrue(not mboxFile1 == mboxFile2) self.assertTrue(mboxFile1 < mboxFile2) self.assertTrue(mboxFile1 <= mboxFile2) self.assertTrue(not mboxFile1 > mboxFile2) self.assertTrue(not mboxFile1 >= mboxFile2) self.assertTrue(mboxFile1 != mboxFile2) def testComparison_008(self): """ Test comparison of two differing objects, compressMode differs. 
""" mboxFile1 = MboxFile("/path", "daily", "bzip2") mboxFile2 = MboxFile("/path", "daily", "gzip") self.assertNotEqual(mboxFile1, mboxFile2) self.assertTrue(not mboxFile1 == mboxFile2) self.assertTrue(mboxFile1 < mboxFile2) self.assertTrue(mboxFile1 <= mboxFile2) self.assertTrue(not mboxFile1 > mboxFile2) self.assertTrue(not mboxFile1 >= mboxFile2) self.assertTrue(mboxFile1 != mboxFile2) ##################### # TestMboxDir class ##################### class TestMboxDir(unittest.TestCase): """Tests for the MboxDir class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = MboxDir() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ mboxDir = MboxDir() self.assertEqual(None, mboxDir.absolutePath) self.assertEqual(None, mboxDir.collectMode) self.assertEqual(None, mboxDir.compressMode) self.assertEqual(None, mboxDir.relativeExcludePaths) self.assertEqual(None, mboxDir.excludePatterns) def testConstructor_002(self): """ Test constructor with all values filled in. 
""" mboxDir = MboxDir("/path/to/it", "daily", "gzip", [ "whatever", ], [ ".*SPAM.*", ] ) self.assertEqual("/path/to/it", mboxDir.absolutePath) self.assertEqual("daily", mboxDir.collectMode) self.assertEqual("gzip", mboxDir.compressMode) self.assertEqual([ "whatever", ], mboxDir.relativeExcludePaths) self.assertEqual([ ".*SPAM.*", ], mboxDir.excludePatterns) def testConstructor_003(self): """ Test assignment of absolutePath attribute, None value. """ mboxDir = MboxDir(absolutePath="/path/to/something") self.assertEqual("/path/to/something", mboxDir.absolutePath) mboxDir.absolutePath = None self.assertEqual(None, mboxDir.absolutePath) def testConstructor_004(self): """ Test assignment of absolutePath attribute, valid value. """ mboxDir = MboxDir() self.assertEqual(None, mboxDir.absolutePath) mboxDir.absolutePath = "/path/to/whatever" self.assertEqual("/path/to/whatever", mboxDir.absolutePath) def testConstructor_005(self): """ Test assignment of absolutePath attribute, invalid value (empty). """ mboxDir = MboxDir() self.assertEqual(None, mboxDir.absolutePath) self.failUnlessAssignRaises(ValueError, mboxDir, "absolutePath", "") self.assertEqual(None, mboxDir.absolutePath) def testConstructor_006(self): """ Test assignment of absolutePath attribute, invalid value (not absolute). """ mboxDir = MboxDir() self.assertEqual(None, mboxDir.absolutePath) self.failUnlessAssignRaises(ValueError, mboxDir, "absolutePath", "relative/path") self.assertEqual(None, mboxDir.absolutePath) def testConstructor_007(self): """ Test assignment of collectMode attribute, None value. """ mboxDir = MboxDir(collectMode="daily") self.assertEqual("daily", mboxDir.collectMode) mboxDir.collectMode = None self.assertEqual(None, mboxDir.collectMode) def testConstructor_008(self): """ Test assignment of collectMode attribute, valid value. 
""" mboxDir = MboxDir() self.assertEqual(None, mboxDir.collectMode) mboxDir.collectMode = "daily" self.assertEqual("daily", mboxDir.collectMode) mboxDir.collectMode = "weekly" self.assertEqual("weekly", mboxDir.collectMode) mboxDir.collectMode = "incr" self.assertEqual("incr", mboxDir.collectMode) def testConstructor_009(self): """ Test assignment of collectMode attribute, invalid value (empty). """ mboxDir = MboxDir() self.assertEqual(None, mboxDir.collectMode) self.failUnlessAssignRaises(ValueError, mboxDir, "collectMode", "") self.assertEqual(None, mboxDir.collectMode) def testConstructor_010(self): """ Test assignment of collectMode attribute, invalid value (not in list). """ mboxDir = MboxDir() self.assertEqual(None, mboxDir.collectMode) self.failUnlessAssignRaises(ValueError, mboxDir, "collectMode", "monthly") self.assertEqual(None, mboxDir.collectMode) def testConstructor_011(self): """ Test assignment of compressMode attribute, None value. """ mboxDir = MboxDir(compressMode="gzip") self.assertEqual("gzip", mboxDir.compressMode) mboxDir.compressMode = None self.assertEqual(None, mboxDir.compressMode) def testConstructor_012(self): """ Test assignment of compressMode attribute, valid value. """ mboxDir = MboxDir() self.assertEqual(None, mboxDir.compressMode) mboxDir.compressMode = "none" self.assertEqual("none", mboxDir.compressMode) mboxDir.compressMode = "bzip2" self.assertEqual("bzip2", mboxDir.compressMode) mboxDir.compressMode = "gzip" self.assertEqual("gzip", mboxDir.compressMode) def testConstructor_013(self): """ Test assignment of compressMode attribute, invalid value (empty). """ mboxDir = MboxDir() self.assertEqual(None, mboxDir.compressMode) self.failUnlessAssignRaises(ValueError, mboxDir, "compressMode", "") self.assertEqual(None, mboxDir.compressMode) def testConstructor_014(self): """ Test assignment of compressMode attribute, invalid value (not in list). 
""" mboxDir = MboxDir() self.assertEqual(None, mboxDir.compressMode) self.failUnlessAssignRaises(ValueError, mboxDir, "compressMode", "compress") self.assertEqual(None, mboxDir.compressMode) def testConstructor_015(self): """ Test assignment of relativeExcludePaths attribute, None value. """ mboxDir = MboxDir(relativeExcludePaths=[]) self.assertEqual([], mboxDir.relativeExcludePaths) mboxDir.relativeExcludePaths = None self.assertEqual(None, mboxDir.relativeExcludePaths) def testConstructor_016(self): """ Test assignment of relativeExcludePaths attribute, [] value. """ mboxDir = MboxDir() self.assertEqual(None, mboxDir.relativeExcludePaths) mboxDir.relativeExcludePaths = [] self.assertEqual([], mboxDir.relativeExcludePaths) def testConstructor_017(self): """ Test assignment of relativeExcludePaths attribute, single valid entry. """ mboxDir = MboxDir() self.assertEqual(None, mboxDir.relativeExcludePaths) mboxDir.relativeExcludePaths = ["stuff", ] self.assertEqual(["stuff", ], mboxDir.relativeExcludePaths) mboxDir.relativeExcludePaths.insert(0, "bogus") self.assertEqual(["bogus", "stuff", ], mboxDir.relativeExcludePaths) def testConstructor_018(self): """ Test assignment of relativeExcludePaths attribute, multiple valid entries. """ mboxDir = MboxDir() self.assertEqual(None, mboxDir.relativeExcludePaths) mboxDir.relativeExcludePaths = ["bogus", "stuff", ] self.assertEqual(["bogus", "stuff", ], mboxDir.relativeExcludePaths) mboxDir.relativeExcludePaths.append("more") self.assertEqual(["bogus", "stuff", "more", ], mboxDir.relativeExcludePaths) def testConstructor_019(self): """ Test assignment of excludePatterns attribute, None value. """ mboxDir = MboxDir(excludePatterns=[]) self.assertEqual([], mboxDir.excludePatterns) mboxDir.excludePatterns = None self.assertEqual(None, mboxDir.excludePatterns) def testConstructor_020(self): """ Test assignment of excludePatterns attribute, [] value. 
""" mboxDir = MboxDir() self.assertEqual(None, mboxDir.excludePatterns) mboxDir.excludePatterns = [] self.assertEqual([], mboxDir.excludePatterns) def testConstructor_021(self): """ Test assignment of excludePatterns attribute, single valid entry. """ mboxDir = MboxDir() self.assertEqual(None, mboxDir.excludePatterns) mboxDir.excludePatterns = ["valid", ] self.assertEqual(["valid", ], mboxDir.excludePatterns) mboxDir.excludePatterns.append("more") self.assertEqual(["valid", "more", ], mboxDir.excludePatterns) def testConstructor_022(self): """ Test assignment of excludePatterns attribute, multiple valid entries. """ mboxDir = MboxDir() self.assertEqual(None, mboxDir.excludePatterns) mboxDir.excludePatterns = ["valid", "more", ] self.assertEqual(["valid", "more", ], mboxDir.excludePatterns) mboxDir.excludePatterns.insert(1, "bogus") self.assertEqual(["valid", "bogus", "more", ], mboxDir.excludePatterns) def testConstructor_023(self): """ Test assignment of excludePatterns attribute, single invalid entry. """ mboxDir = MboxDir() self.assertEqual(None, mboxDir.excludePatterns) self.failUnlessAssignRaises(ValueError, mboxDir, "excludePatterns", ["*.jpg", ]) self.assertEqual(None, mboxDir.excludePatterns) def testConstructor_024(self): """ Test assignment of excludePatterns attribute, multiple invalid entries. """ mboxDir = MboxDir() self.assertEqual(None, mboxDir.excludePatterns) self.failUnlessAssignRaises(ValueError, mboxDir, "excludePatterns", ["*.jpg", "*" ]) self.assertEqual(None, mboxDir.excludePatterns) def testConstructor_025(self): """ Test assignment of excludePatterns attribute, mixed valid and invalid entries. 
""" mboxDir = MboxDir() self.assertEqual(None, mboxDir.excludePatterns) self.failUnlessAssignRaises(ValueError, mboxDir, "excludePatterns", ["*.jpg", "valid" ]) self.assertEqual(None, mboxDir.excludePatterns) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ mboxDir1 = MboxDir() mboxDir2 = MboxDir() self.assertEqual(mboxDir1, mboxDir2) self.assertTrue(mboxDir1 == mboxDir2) self.assertTrue(not mboxDir1 < mboxDir2) self.assertTrue(mboxDir1 <= mboxDir2) self.assertTrue(not mboxDir1 > mboxDir2) self.assertTrue(mboxDir1 >= mboxDir2) self.assertTrue(not mboxDir1 != mboxDir2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ mboxDir1 = MboxDir("/path", "daily", "gzip") mboxDir2 = MboxDir("/path", "daily", "gzip") self.assertEqual(mboxDir1, mboxDir2) self.assertTrue(mboxDir1 == mboxDir2) self.assertTrue(not mboxDir1 < mboxDir2) self.assertTrue(mboxDir1 <= mboxDir2) self.assertTrue(not mboxDir1 > mboxDir2) self.assertTrue(mboxDir1 >= mboxDir2) self.assertTrue(not mboxDir1 != mboxDir2) def testComparison_003(self): """ Test comparison of two differing objects, absolutePath differs (one None). """ mboxDir1 = MboxDir() mboxDir2 = MboxDir(absolutePath="/zippy") self.assertNotEqual(mboxDir1, mboxDir2) self.assertTrue(not mboxDir1 == mboxDir2) self.assertTrue(mboxDir1 < mboxDir2) self.assertTrue(mboxDir1 <= mboxDir2) self.assertTrue(not mboxDir1 > mboxDir2) self.assertTrue(not mboxDir1 >= mboxDir2) self.assertTrue(mboxDir1 != mboxDir2) def testComparison_004(self): """ Test comparison of two differing objects, absolutePath differs. 
""" mboxDir1 = MboxDir("/path", "daily", "gzip") mboxDir2 = MboxDir("/zippy", "daily", "gzip") self.assertNotEqual(mboxDir1, mboxDir2) self.assertTrue(not mboxDir1 == mboxDir2) self.assertTrue(mboxDir1 < mboxDir2) self.assertTrue(mboxDir1 <= mboxDir2) self.assertTrue(not mboxDir1 > mboxDir2) self.assertTrue(not mboxDir1 >= mboxDir2) self.assertTrue(mboxDir1 != mboxDir2) def testComparison_005(self): """ Test comparison of two differing objects, collectMode differs (one None). """ mboxDir1 = MboxDir() mboxDir2 = MboxDir(collectMode="incr") self.assertNotEqual(mboxDir1, mboxDir2) self.assertTrue(not mboxDir1 == mboxDir2) self.assertTrue(mboxDir1 < mboxDir2) self.assertTrue(mboxDir1 <= mboxDir2) self.assertTrue(not mboxDir1 > mboxDir2) self.assertTrue(not mboxDir1 >= mboxDir2) self.assertTrue(mboxDir1 != mboxDir2) def testComparison_006(self): """ Test comparison of two differing objects, collectMode differs. """ mboxDir1 = MboxDir("/path", "daily", "gzip") mboxDir2 = MboxDir("/path", "incr", "gzip") self.assertNotEqual(mboxDir1, mboxDir2) self.assertTrue(not mboxDir1 == mboxDir2) self.assertTrue(mboxDir1 < mboxDir2) self.assertTrue(mboxDir1 <= mboxDir2) self.assertTrue(not mboxDir1 > mboxDir2) self.assertTrue(not mboxDir1 >= mboxDir2) self.assertTrue(mboxDir1 != mboxDir2) def testComparison_007(self): """ Test comparison of two differing objects, compressMode differs (one None). """ mboxDir1 = MboxDir() mboxDir2 = MboxDir(compressMode="gzip") self.assertNotEqual(mboxDir1, mboxDir2) self.assertTrue(not mboxDir1 == mboxDir2) self.assertTrue(mboxDir1 < mboxDir2) self.assertTrue(mboxDir1 <= mboxDir2) self.assertTrue(not mboxDir1 > mboxDir2) self.assertTrue(not mboxDir1 >= mboxDir2) self.assertTrue(mboxDir1 != mboxDir2) def testComparison_008(self): """ Test comparison of two differing objects, compressMode differs. 
""" mboxDir1 = MboxDir("/path", "daily", "bzip2") mboxDir2 = MboxDir("/path", "daily", "gzip") self.assertNotEqual(mboxDir1, mboxDir2) self.assertTrue(not mboxDir1 == mboxDir2) self.assertTrue(mboxDir1 < mboxDir2) self.assertTrue(mboxDir1 <= mboxDir2) self.assertTrue(not mboxDir1 > mboxDir2) self.assertTrue(not mboxDir1 >= mboxDir2) self.assertTrue(mboxDir1 != mboxDir2) def testComparison_009(self): """ Test comparison of two differing objects, relativeExcludePaths differs (one None, one empty). """ mboxDir1 = MboxDir() mboxDir2 = MboxDir(relativeExcludePaths=[]) self.assertNotEqual(mboxDir1, mboxDir2) self.assertTrue(not mboxDir1 == mboxDir2) self.assertTrue(mboxDir1 < mboxDir2) self.assertTrue(mboxDir1 <= mboxDir2) self.assertTrue(not mboxDir1 > mboxDir2) self.assertTrue(not mboxDir1 >= mboxDir2) self.assertTrue(mboxDir1 != mboxDir2) def testComparison_010(self): """ Test comparison of two differing objects, relativeExcludePaths differs (one None, one not empty). """ mboxDir1 = MboxDir() mboxDir2 = MboxDir(relativeExcludePaths=["stuff", "other", ]) self.assertNotEqual(mboxDir1, mboxDir2) self.assertTrue(not mboxDir1 == mboxDir2) self.assertTrue(mboxDir1 < mboxDir2) self.assertTrue(mboxDir1 <= mboxDir2) self.assertTrue(not mboxDir1 > mboxDir2) self.assertTrue(not mboxDir1 >= mboxDir2) self.assertTrue(mboxDir1 != mboxDir2) def testComparison_011(self): """ Test comparison of two differing objects, relativeExcludePaths differs (one empty, one not empty). 
""" mboxDir1 = MboxDir("/etc/whatever", "incr", "none", ["one", ], []) mboxDir2 = MboxDir("/etc/whatever", "incr", "none", [], []) self.assertNotEqual(mboxDir1, mboxDir2) self.assertTrue(not mboxDir1 == mboxDir2) self.assertTrue(not mboxDir1 < mboxDir2) self.assertTrue(not mboxDir1 <= mboxDir2) self.assertTrue(mboxDir1 > mboxDir2) self.assertTrue(mboxDir1 >= mboxDir2) self.assertTrue(mboxDir1 != mboxDir2) def testComparison_012(self): """ Test comparison of two differing objects, relativeExcludePaths differs (both not empty). """ mboxDir1 = MboxDir("/etc/whatever", "incr", "none", ["one", ], []) mboxDir2 = MboxDir("/etc/whatever", "incr", "none", ["two", ], []) self.assertNotEqual(mboxDir1, mboxDir2) self.assertTrue(not mboxDir1 == mboxDir2) self.assertTrue(mboxDir1 < mboxDir2) self.assertTrue(mboxDir1 <= mboxDir2) self.assertTrue(not mboxDir1 > mboxDir2) self.assertTrue(not mboxDir1 >= mboxDir2) self.assertTrue(mboxDir1 != mboxDir2) def testComparison_013(self): """ Test comparison of two differing objects, excludePatterns differs (one None, one empty). """ mboxDir1 = MboxDir() mboxDir2 = MboxDir(excludePatterns=[]) self.assertNotEqual(mboxDir1, mboxDir2) self.assertTrue(not mboxDir1 == mboxDir2) self.assertTrue(mboxDir1 < mboxDir2) self.assertTrue(mboxDir1 <= mboxDir2) self.assertTrue(not mboxDir1 > mboxDir2) self.assertTrue(not mboxDir1 >= mboxDir2) self.assertTrue(mboxDir1 != mboxDir2) def testComparison_014(self): """ Test comparison of two differing objects, excludePatterns differs (one None, one not empty). 
""" mboxDir1 = MboxDir() mboxDir2 = MboxDir(excludePatterns=["one", "two", "three", ]) self.assertNotEqual(mboxDir1, mboxDir2) self.assertTrue(not mboxDir1 == mboxDir2) self.assertTrue(mboxDir1 < mboxDir2) self.assertTrue(mboxDir1 <= mboxDir2) self.assertTrue(not mboxDir1 > mboxDir2) self.assertTrue(not mboxDir1 >= mboxDir2) self.assertTrue(mboxDir1 != mboxDir2) def testComparison_015(self): """ Test comparison of two differing objects, excludePatterns differs (one empty, one not empty). """ mboxDir1 = MboxDir("/etc/whatever", "incr", "none", [], []) mboxDir2 = MboxDir("/etc/whatever", "incr", "none", [], ["pattern", ]) self.assertNotEqual(mboxDir1, mboxDir2) self.assertTrue(not mboxDir1 == mboxDir2) self.assertTrue(mboxDir1 < mboxDir2) self.assertTrue(mboxDir1 <= mboxDir2) self.assertTrue(not mboxDir1 > mboxDir2) self.assertTrue(not mboxDir1 >= mboxDir2) self.assertTrue(mboxDir1 != mboxDir2) def testComparison_016(self): """ Test comparison of two differing objects, excludePatterns differs (both not empty). 
""" mboxDir1 = MboxDir("/etc/whatever", "incr", "none", [], ["p1", ]) mboxDir2 = MboxDir("/etc/whatever", "incr", "none", [], ["p2", ]) self.assertNotEqual(mboxDir1, mboxDir2) self.assertTrue(not mboxDir1 == mboxDir2) self.assertTrue(mboxDir1 < mboxDir2) self.assertTrue(mboxDir1 <= mboxDir2) self.assertTrue(not mboxDir1 > mboxDir2) self.assertTrue(not mboxDir1 >= mboxDir2) self.assertTrue(mboxDir1 != mboxDir2) ####################### # TestMboxConfig class ####################### class TestMboxConfig(unittest.TestCase): """Tests for the MboxConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = MboxConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ mbox = MboxConfig() self.assertEqual(None, mbox.collectMode) self.assertEqual(None, mbox.compressMode) self.assertEqual(None, mbox.mboxFiles) self.assertEqual(None, mbox.mboxDirs) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values, mboxFiles=None and mboxDirs=None. """ mbox = MboxConfig("daily", "gzip", None, None) self.assertEqual("daily", mbox.collectMode) self.assertEqual("gzip", mbox.compressMode) self.assertEqual(None, mbox.mboxFiles) self.assertEqual(None, mbox.mboxDirs) def testConstructor_003(self): """ Test constructor with all values filled in, with valid values, no mboxFiles, no mboxDirs. 
""" mbox = MboxConfig("daily", "gzip", [], []) self.assertEqual("daily", mbox.collectMode) self.assertEqual("gzip", mbox.compressMode) self.assertEqual([], mbox.mboxFiles) self.assertEqual([], mbox.mboxDirs) def testConstructor_004(self): """ Test constructor with all values filled in, with valid values, with one mboxFile, no mboxDirs. """ mboxFiles = [ MboxFile(), ] mbox = MboxConfig("daily", "gzip", mboxFiles, []) self.assertEqual("daily", mbox.collectMode) self.assertEqual("gzip", mbox.compressMode) self.assertEqual(mboxFiles, mbox.mboxFiles) self.assertEqual([], mbox.mboxDirs) def testConstructor_005(self): """ Test constructor with all values filled in, with valid values, with no mboxFiles, one mboxDir. """ mboxDirs = [ MboxDir(), ] mbox = MboxConfig("daily", "gzip", [], mboxDirs) self.assertEqual("daily", mbox.collectMode) self.assertEqual("gzip", mbox.compressMode) self.assertEqual([], mbox.mboxFiles) self.assertEqual(mboxDirs, mbox.mboxDirs) def testConstructor_006(self): """ Test constructor with all values filled in, with valid values, with multiple mboxFiles and mboxDirs. """ mboxFiles = [ MboxFile(collectMode="daily"), MboxFile(collectMode="weekly"), ] mboxDirs = [ MboxDir(collectMode="weekly"), MboxDir(collectMode="incr"), ] mbox = MboxConfig("daily", "gzip", mboxFiles=mboxFiles, mboxDirs=mboxDirs) self.assertEqual("daily", mbox.collectMode) self.assertEqual("gzip", mbox.compressMode) self.assertEqual(mboxFiles, mbox.mboxFiles) self.assertEqual(mboxDirs, mbox.mboxDirs) def testConstructor_007(self): """ Test assignment of collectMode attribute, None value. """ mbox = MboxConfig(collectMode="daily") self.assertEqual("daily", mbox.collectMode) mbox.collectMode = None self.assertEqual(None, mbox.collectMode) def testConstructor_008(self): """ Test assignment of collectMode attribute, valid value. 
""" mbox = MboxConfig() self.assertEqual(None, mbox.collectMode) mbox.collectMode = "weekly" self.assertEqual("weekly", mbox.collectMode) def testConstructor_009(self): """ Test assignment of collectMode attribute, invalid value (empty). """ mbox = MboxConfig() self.assertEqual(None, mbox.collectMode) self.failUnlessAssignRaises(ValueError, mbox, "collectMode", "") self.assertEqual(None, mbox.collectMode) def testConstructor_010(self): """ Test assignment of compressMode attribute, None value. """ mbox = MboxConfig(compressMode="gzip") self.assertEqual("gzip", mbox.compressMode) mbox.compressMode = None self.assertEqual(None, mbox.compressMode) def testConstructor_011(self): """ Test assignment of compressMode attribute, valid value. """ mbox = MboxConfig() self.assertEqual(None, mbox.compressMode) mbox.compressMode = "bzip2" self.assertEqual("bzip2", mbox.compressMode) def testConstructor_012(self): """ Test assignment of compressMode attribute, invalid value (empty). """ mbox = MboxConfig() self.assertEqual(None, mbox.compressMode) self.failUnlessAssignRaises(ValueError, mbox, "compressMode", "") self.assertEqual(None, mbox.compressMode) def testConstructor_013(self): """ Test assignment of mboxFiles attribute, None value. """ mbox = MboxConfig(mboxFiles=[]) self.assertEqual([], mbox.mboxFiles) mbox.mboxFiles = None self.assertEqual(None, mbox.mboxFiles) def testConstructor_014(self): """ Test assignment of mboxFiles attribute, [] value. """ mbox = MboxConfig() self.assertEqual(None, mbox.mboxFiles) mbox.mboxFiles = [] self.assertEqual([], mbox.mboxFiles) def testConstructor_015(self): """ Test assignment of mboxFiles attribute, single valid entry. 
""" mbox = MboxConfig() self.assertEqual(None, mbox.mboxFiles) mbox.mboxFiles = [ MboxFile(), ] self.assertEqual([ MboxFile(), ], mbox.mboxFiles) mbox.mboxFiles.append(MboxFile(collectMode="daily")) self.assertEqual([ MboxFile(), MboxFile(collectMode="daily"), ], mbox.mboxFiles) def testConstructor_016(self): """ Test assignment of mboxFiles attribute, multiple valid entries. """ mbox = MboxConfig() self.assertEqual(None, mbox.mboxFiles) mbox.mboxFiles = [ MboxFile(collectMode="daily"), MboxFile(collectMode="weekly"), ] self.assertEqual([ MboxFile(collectMode="daily"), MboxFile(collectMode="weekly"), ], mbox.mboxFiles) mbox.mboxFiles.append(MboxFile(collectMode="incr")) self.assertEqual([ MboxFile(collectMode="daily"), MboxFile(collectMode="weekly"), MboxFile(collectMode="incr"), ], mbox.mboxFiles) def testConstructor_017(self): """ Test assignment of mboxFiles attribute, single invalid entry (None). """ mbox = MboxConfig() self.assertEqual(None, mbox.mboxFiles) self.failUnlessAssignRaises(ValueError, mbox, "mboxFiles", [None, ]) self.assertEqual(None, mbox.mboxFiles) def testConstructor_018(self): """ Test assignment of mboxFiles attribute, single invalid entry (wrong type). """ mbox = MboxConfig() self.assertEqual(None, mbox.mboxFiles) self.failUnlessAssignRaises(ValueError, mbox, "mboxFiles", [MboxDir(), ]) self.assertEqual(None, mbox.mboxFiles) def testConstructor_019(self): """ Test assignment of mboxFiles attribute, mixed valid and invalid entries. """ mbox = MboxConfig() self.assertEqual(None, mbox.mboxFiles) self.failUnlessAssignRaises(ValueError, mbox, "mboxFiles", [MboxFile(), MboxDir(), ]) self.assertEqual(None, mbox.mboxFiles) def testConstructor_020(self): """ Test assignment of mboxDirs attribute, None value. """ mbox = MboxConfig(mboxDirs=[]) self.assertEqual([], mbox.mboxDirs) mbox.mboxDirs = None self.assertEqual(None, mbox.mboxDirs) def testConstructor_021(self): """ Test assignment of mboxDirs attribute, [] value. 
""" mbox = MboxConfig() self.assertEqual(None, mbox.mboxDirs) mbox.mboxDirs = [] self.assertEqual([], mbox.mboxDirs) def testConstructor_022(self): """ Test assignment of mboxDirs attribute, single valid entry. """ mbox = MboxConfig() self.assertEqual(None, mbox.mboxDirs) mbox.mboxDirs = [ MboxDir(), ] self.assertEqual([ MboxDir(), ], mbox.mboxDirs) mbox.mboxDirs.append(MboxDir(collectMode="daily")) self.assertEqual([ MboxDir(), MboxDir(collectMode="daily"), ], mbox.mboxDirs) def testConstructor_023(self): """ Test assignment of mboxDirs attribute, multiple valid entries. """ mbox = MboxConfig() self.assertEqual(None, mbox.mboxDirs) mbox.mboxDirs = [ MboxDir(collectMode="daily"), MboxDir(collectMode="weekly"), ] self.assertEqual([ MboxDir(collectMode="daily"), MboxDir(collectMode="weekly"), ], mbox.mboxDirs) mbox.mboxDirs.append(MboxDir(collectMode="incr")) self.assertEqual([ MboxDir(collectMode="daily"), MboxDir(collectMode="weekly"), MboxDir(collectMode="incr"), ], mbox.mboxDirs) def testConstructor_024(self): """ Test assignment of mboxDirs attribute, single invalid entry (None). """ mbox = MboxConfig() self.assertEqual(None, mbox.mboxDirs) self.failUnlessAssignRaises(ValueError, mbox, "mboxDirs", [None, ]) self.assertEqual(None, mbox.mboxDirs) def testConstructor_025(self): """ Test assignment of mboxDirs attribute, single invalid entry (wrong type). """ mbox = MboxConfig() self.assertEqual(None, mbox.mboxDirs) self.failUnlessAssignRaises(ValueError, mbox, "mboxDirs", [MboxFile(), ]) self.assertEqual(None, mbox.mboxDirs) def testConstructor_026(self): """ Test assignment of mboxDirs attribute, mixed valid and invalid entries. 
""" mbox = MboxConfig() self.assertEqual(None, mbox.mboxDirs) self.failUnlessAssignRaises(ValueError, mbox, "mboxDirs", [MboxDir(), MboxFile(), ]) self.assertEqual(None, mbox.mboxDirs) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ mbox1 = MboxConfig() mbox2 = MboxConfig() self.assertEqual(mbox1, mbox2) self.assertTrue(mbox1 == mbox2) self.assertTrue(not mbox1 < mbox2) self.assertTrue(mbox1 <= mbox2) self.assertTrue(not mbox1 > mbox2) self.assertTrue(mbox1 >= mbox2) self.assertTrue(not mbox1 != mbox2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None, lists None. """ mbox1 = MboxConfig("daily", "gzip", None, None) mbox2 = MboxConfig("daily", "gzip", None, None) self.assertEqual(mbox1, mbox2) self.assertTrue(mbox1 == mbox2) self.assertTrue(not mbox1 < mbox2) self.assertTrue(mbox1 <= mbox2) self.assertTrue(not mbox1 > mbox2) self.assertTrue(mbox1 >= mbox2) self.assertTrue(not mbox1 != mbox2) def testComparison_003(self): """ Test comparison of two identical objects, all attributes non-None, lists empty. """ mbox1 = MboxConfig("daily", "gzip", [], []) mbox2 = MboxConfig("daily", "gzip", [], []) self.assertEqual(mbox1, mbox2) self.assertTrue(mbox1 == mbox2) self.assertTrue(not mbox1 < mbox2) self.assertTrue(mbox1 <= mbox2) self.assertTrue(not mbox1 > mbox2) self.assertTrue(mbox1 >= mbox2) self.assertTrue(not mbox1 != mbox2) def testComparison_004(self): """ Test comparison of two identical objects, all attributes non-None, lists non-empty. 
""" mbox1 = MboxConfig("daily", "gzip", [ MboxFile(), ], [MboxDir(), ]) mbox2 = MboxConfig("daily", "gzip", [ MboxFile(), ], [MboxDir(), ]) self.assertEqual(mbox1, mbox2) self.assertTrue(mbox1 == mbox2) self.assertTrue(not mbox1 < mbox2) self.assertTrue(mbox1 <= mbox2) self.assertTrue(not mbox1 > mbox2) self.assertTrue(mbox1 >= mbox2) self.assertTrue(not mbox1 != mbox2) def testComparison_005(self): """ Test comparison of two differing objects, collectMode differs (one None). """ mbox1 = MboxConfig() mbox2 = MboxConfig(collectMode="daily") self.assertNotEqual(mbox1, mbox2) self.assertTrue(not mbox1 == mbox2) self.assertTrue(mbox1 < mbox2) self.assertTrue(mbox1 <= mbox2) self.assertTrue(not mbox1 > mbox2) self.assertTrue(not mbox1 >= mbox2) self.assertTrue(mbox1 != mbox2) def testComparison_006(self): """ Test comparison of two differing objects, collectMode differs. """ mbox1 = MboxConfig("daily", "gzip", [ MboxFile(), ]) mbox2 = MboxConfig("weekly", "gzip", [ MboxFile(), ]) self.assertNotEqual(mbox1, mbox2) self.assertTrue(not mbox1 == mbox2) self.assertTrue(mbox1 < mbox2) self.assertTrue(mbox1 <= mbox2) self.assertTrue(not mbox1 > mbox2) self.assertTrue(not mbox1 >= mbox2) self.assertTrue(mbox1 != mbox2) def testComparison_007(self): """ Test comparison of two differing objects, compressMode differs (one None). """ mbox1 = MboxConfig() mbox2 = MboxConfig(compressMode="bzip2") self.assertNotEqual(mbox1, mbox2) self.assertTrue(not mbox1 == mbox2) self.assertTrue(mbox1 < mbox2) self.assertTrue(mbox1 <= mbox2) self.assertTrue(not mbox1 > mbox2) self.assertTrue(not mbox1 >= mbox2) self.assertTrue(mbox1 != mbox2) def testComparison_008(self): """ Test comparison of two differing objects, compressMode differs. 
""" mbox1 = MboxConfig("daily", "bzip2", [ MboxFile(), ]) mbox2 = MboxConfig("daily", "gzip", [ MboxFile(), ]) self.assertNotEqual(mbox1, mbox2) self.assertTrue(not mbox1 == mbox2) self.assertTrue(mbox1 < mbox2) self.assertTrue(mbox1 <= mbox2) self.assertTrue(not mbox1 > mbox2) self.assertTrue(not mbox1 >= mbox2) self.assertTrue(mbox1 != mbox2) def testComparison_009(self): """ Test comparison of two differing objects, mboxFiles differs (one None, one empty). """ mbox1 = MboxConfig() mbox2 = MboxConfig(mboxFiles=[]) self.assertNotEqual(mbox1, mbox2) self.assertTrue(not mbox1 == mbox2) self.assertTrue(mbox1 < mbox2) self.assertTrue(mbox1 <= mbox2) self.assertTrue(not mbox1 > mbox2) self.assertTrue(not mbox1 >= mbox2) self.assertTrue(mbox1 != mbox2) def testComparison_010(self): """ Test comparison of two differing objects, mboxFiles differs (one None, one not empty). """ mbox1 = MboxConfig() mbox2 = MboxConfig(mboxFiles=[MboxFile(), ]) self.assertNotEqual(mbox1, mbox2) self.assertTrue(not mbox1 == mbox2) self.assertTrue(mbox1 < mbox2) self.assertTrue(mbox1 <= mbox2) self.assertTrue(not mbox1 > mbox2) self.assertTrue(not mbox1 >= mbox2) self.assertTrue(mbox1 != mbox2) def testComparison_011(self): """ Test comparison of two differing objects, mboxFiles differs (one empty, one not empty). """ mbox1 = MboxConfig("daily", "gzip", [ ], None) mbox2 = MboxConfig("daily", "gzip", [ MboxFile(), ], None) self.assertNotEqual(mbox1, mbox2) self.assertTrue(not mbox1 == mbox2) self.assertTrue(mbox1 < mbox2) self.assertTrue(mbox1 <= mbox2) self.assertTrue(not mbox1 > mbox2) self.assertTrue(not mbox1 >= mbox2) self.assertTrue(mbox1 != mbox2) def testComparison_012(self): """ Test comparison of two differing objects, mboxFiles differs (both not empty). 
""" mbox1 = MboxConfig("daily", "gzip", [ MboxFile(), ], None) mbox2 = MboxConfig("daily", "gzip", [ MboxFile(), MboxFile(), ], None) self.assertNotEqual(mbox1, mbox2) self.assertTrue(not mbox1 == mbox2) self.assertTrue(mbox1 < mbox2) self.assertTrue(mbox1 <= mbox2) self.assertTrue(not mbox1 > mbox2) self.assertTrue(not mbox1 >= mbox2) self.assertTrue(mbox1 != mbox2) def testComparison_013(self): """ Test comparison of two differing objects, mboxDirs differs (one None, one empty). """ mbox1 = MboxConfig() mbox2 = MboxConfig(mboxDirs=[]) self.assertNotEqual(mbox1, mbox2) self.assertTrue(not mbox1 == mbox2) self.assertTrue(mbox1 < mbox2) self.assertTrue(mbox1 <= mbox2) self.assertTrue(not mbox1 > mbox2) self.assertTrue(not mbox1 >= mbox2) self.assertTrue(mbox1 != mbox2) def testComparison_014(self): """ Test comparison of two differing objects, mboxDirs differs (one None, one not empty). """ mbox1 = MboxConfig() mbox2 = MboxConfig(mboxDirs=[MboxDir(), ]) self.assertNotEqual(mbox1, mbox2) self.assertTrue(not mbox1 == mbox2) self.assertTrue(mbox1 < mbox2) self.assertTrue(mbox1 <= mbox2) self.assertTrue(not mbox1 > mbox2) self.assertTrue(not mbox1 >= mbox2) self.assertTrue(mbox1 != mbox2) def testComparison_015(self): """ Test comparison of two differing objects, mboxDirs differs (one empty, one not empty). """ mbox1 = MboxConfig("daily", "gzip", None, [ ]) mbox2 = MboxConfig("daily", "gzip", None, [ MboxDir(), ]) self.assertNotEqual(mbox1, mbox2) self.assertTrue(not mbox1 == mbox2) self.assertTrue(mbox1 < mbox2) self.assertTrue(mbox1 <= mbox2) self.assertTrue(not mbox1 > mbox2) self.assertTrue(not mbox1 >= mbox2) self.assertTrue(mbox1 != mbox2) def testComparison_016(self): """ Test comparison of two differing objects, mboxDirs differs (both not empty). 
""" mbox1 = MboxConfig("daily", "gzip", None, [ MboxDir(), ]) mbox2 = MboxConfig("daily", "gzip", None, [ MboxDir(), MboxDir(), ]) self.assertNotEqual(mbox1, mbox2) self.assertTrue(not mbox1 == mbox2) self.assertTrue(mbox1 < mbox2) self.assertTrue(mbox1 <= mbox2) self.assertTrue(not mbox1 > mbox2) self.assertTrue(not mbox1 >= mbox2) self.assertTrue(mbox1 != mbox2) ######################## # TestLocalConfig class ######################## class TestLocalConfig(unittest.TestCase): """Tests for the LocalConfig class.""" ################ # Setup methods ################ def setUp(self): try: self.resources = findResources(RESOURCES, DATA_DIRS) except Exception as e: self.fail(e) def tearDown(self): pass ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) def validateAddConfig(self, origConfig): """ Validates that document dumped from C{LocalConfig.addConfig} results in identical object. We dump a document containing just the mbox configuration, and then make sure that if we push that document back into the C{LocalConfig} object, that the resulting object matches the original. The C{self.failUnlessEqual} method is used for the validation, so if the method call returns normally, everything is OK. @param origConfig: Original configuration. """ (xmlDom, parentNode) = createOutputDom() origConfig.addConfig(xmlDom, parentNode) xmlData = serializeDom(xmlDom) newConfig = LocalConfig(xmlData=xmlData, validate=False) self.assertEqual(origConfig, newConfig) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). 
""" obj = LocalConfig() obj.__repr__() obj.__str__() ##################################################### # Test basic constructor and attribute functionality ##################################################### def testConstructor_001(self): """ Test empty constructor, validate=False. """ config = LocalConfig(validate=False) self.assertEqual(None, config.mbox) def testConstructor_002(self): """ Test empty constructor, validate=True. """ config = LocalConfig(validate=True) self.assertEqual(None, config.mbox) def testConstructor_003(self): """ Test with empty config document as both data and file, validate=False. """ path = self.resources["mbox.conf.1"] with open(path) as f: contents = f.read() self.assertRaises(ValueError, LocalConfig, xmlData=contents, xmlPath=path, validate=False) def testConstructor_004(self): """ Test assignment of mbox attribute, None value. """ config = LocalConfig() config.mbox = None self.assertEqual(None, config.mbox) def testConstructor_005(self): """ Test assignment of mbox attribute, valid value. """ config = LocalConfig() config.mbox = MboxConfig() self.assertEqual(MboxConfig(), config.mbox) def testConstructor_006(self): """ Test assignment of mbox attribute, invalid value (not MboxConfig). """ config = LocalConfig() self.failUnlessAssignRaises(ValueError, config, "mbox", "STRING!") ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ config1 = LocalConfig() config2 = LocalConfig() self.assertEqual(config1, config2) self.assertTrue(config1 == config2) self.assertTrue(not config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(config1 >= config2) self.assertTrue(not config1 != config2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. 
""" config1 = LocalConfig() config1.mbox = MboxConfig() config2 = LocalConfig() config2.mbox = MboxConfig() self.assertEqual(config1, config2) self.assertTrue(config1 == config2) self.assertTrue(not config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(config1 >= config2) self.assertTrue(not config1 != config2) def testComparison_003(self): """ Test comparison of two differing objects, mbox differs (one None). """ config1 = LocalConfig() config2 = LocalConfig() config2.mbox = MboxConfig() self.assertNotEqual(config1, config2) self.assertTrue(not config1 == config2) self.assertTrue(config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(not config1 >= config2) self.assertTrue(config1 != config2) def testComparison_004(self): """ Test comparison of two differing objects, mbox differs. """ config1 = LocalConfig() config1.mbox = MboxConfig(collectMode="daily") config2 = LocalConfig() config2.mbox = MboxConfig(collectMode="weekly") self.assertNotEqual(config1, config2) self.assertTrue(not config1 == config2) self.assertTrue(config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(not config1 >= config2) self.assertTrue(config1 != config2) ###################### # Test validate logic ###################### def testValidate_001(self): """ Test validate on a None mbox section. """ config = LocalConfig() config.mbox = None self.assertRaises(ValueError, config.validate) def testValidate_002(self): """ Test validate on an empty mbox section. """ config = LocalConfig() config.mbox = MboxConfig() self.assertRaises(ValueError, config.validate) def testValidate_003(self): """ Test validate on a non-empty mbox section, mboxFiles=None and mboxDirs=None. 
""" config = LocalConfig() config.mbox = MboxConfig("weekly", "gzip", None, None) self.assertRaises(ValueError, config.validate) def testValidate_004(self): """ Test validate on a non-empty mbox section, mboxFiles=[] and mboxDirs=[]. """ config = LocalConfig() config.mbox = MboxConfig("weekly", "gzip", [], []) self.assertRaises(ValueError, config.validate) def testValidate_005(self): """ Test validate on a non-empty mbox section, non-empty mboxFiles, defaults set, no values on files. """ mboxFiles = [ MboxFile(absolutePath="/one"), MboxFile(absolutePath="/two") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.collectMode = "daily" config.mbox.compressMode = "gzip" config.mbox.mboxFiles = mboxFiles config.mbox.mboxDirs = None config.validate() def testValidate_006(self): """ Test validate on a non-empty mbox section, non-empty mboxDirs, defaults set, no values on directories. """ mboxDirs = [ MboxDir(absolutePath="/one"), MboxDir(absolutePath="/two") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.collectMode = "daily" config.mbox.compressMode = "gzip" config.mbox.mboxFiles = None config.mbox.mboxDirs = mboxDirs config.validate() def testValidate_007(self): """ Test validate on a non-empty mbox section, non-empty mboxFiles, no defaults set, no values on files. """ mboxFiles = [ MboxFile(absolutePath="/one"), MboxFile(absolutePath="/two") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.mboxFiles = mboxFiles config.mbox.mboxDirs = None self.assertRaises(ValueError, config.validate) def testValidate_008(self): """ Test validate on a non-empty mbox section, non-empty mboxDirs, no defaults set, no values on directories. 
""" mboxDirs = [ MboxDir(absolutePath="/one"), MboxDir(absolutePath="/two") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.mboxFiles = None config.mbox.mboxDirs = mboxDirs self.assertRaises(ValueError, config.validate) def testValidate_009(self): """ Test validate on a non-empty mbox section, non-empty mboxFiles, no defaults set, both values on files. """ mboxFiles = [ MboxFile(absolutePath="/two", collectMode="weekly", compressMode="gzip") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.mboxFiles = mboxFiles config.mbox.mboxDirs = None config.validate() def testValidate_010(self): """ Test validate on a non-empty mbox section, non-empty mboxDirs, no defaults set, both values on directories. """ mboxDirs = [ MboxDir(absolutePath="/two", collectMode="weekly", compressMode="gzip") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.mboxFiles = None config.mbox.mboxDirs = mboxDirs config.validate() def testValidate_011(self): """ Test validate on a non-empty mbox section, non-empty mboxFiles, collectMode only on files. """ mboxFiles = [ MboxFile(absolutePath="/two", collectMode="weekly") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.compressMode = "gzip" config.mbox.mboxFiles = mboxFiles config.mbox.mboxDirs = None config.validate() def testValidate_012(self): """ Test validate on a non-empty mbox section, non-empty mboxDirs, collectMode only on directories. """ mboxDirs = [ MboxDir(absolutePath="/two", collectMode="weekly") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.compressMode = "gzip" config.mbox.mboxFiles = None config.mbox.mboxDirs = mboxDirs config.validate() def testValidate_013(self): """ Test validate on a non-empty mbox section, non-empty mboxFiles, compressMode only on files. 
""" mboxFiles = [ MboxFile(absolutePath="/two", compressMode="bzip2") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.collectMode = "weekly" config.mbox.mboxFiles = mboxFiles config.mbox.mboxDirs = None config.validate() def testValidate_014(self): """ Test validate on a non-empty mbox section, non-empty mboxDirs, compressMode only on directories. """ mboxDirs = [ MboxDir(absolutePath="/two", compressMode="bzip2") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.collectMode = "weekly" config.mbox.mboxFiles = None config.mbox.mboxDirs = mboxDirs config.validate() def testValidate_015(self): """ Test validate on a non-empty mbox section, non-empty mboxFiles, compressMode default and on files. """ mboxFiles = [ MboxFile(absolutePath="/two", compressMode="bzip2") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.collectMode = "daily" config.mbox.compressMode = "gzip" config.mbox.mboxFiles = mboxFiles config.mbox.mboxDirs = None config.validate() def testValidate_016(self): """ Test validate on a non-empty mbox section, non-empty mboxDirs, compressMode default and on directories. """ mboxDirs = [ MboxDir(absolutePath="/two", compressMode="bzip2") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.collectMode = "daily" config.mbox.compressMode = "gzip" config.mbox.mboxFiles = None config.mbox.mboxDirs = mboxDirs config.validate() def testValidate_017(self): """ Test validate on a non-empty mbox section, non-empty mboxFiles, collectMode default and on files. """ mboxFiles = [ MboxFile(absolutePath="/two", collectMode="daily") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.collectMode = "daily" config.mbox.compressMode = "gzip" config.mbox.mboxFiles = mboxFiles config.mbox.mboxDirs = None config.validate() def testValidate_018(self): """ Test validate on a non-empty mbox section, non-empty mboxDirs, collectMode default and on directories. 
""" mboxDirs = [ MboxDir(absolutePath="/two", collectMode="daily") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.collectMode = "daily" config.mbox.compressMode = "gzip" config.mbox.mboxFiles = None config.mbox.mboxDirs = mboxDirs config.validate() def testValidate_019(self): """ Test validate on a non-empty mbox section, non-empty mboxFiles, collectMode and compressMode default and on files. """ mboxFiles = [ MboxFile(absolutePath="/two", collectMode="daily", compressMode="bzip2") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.collectMode = "daily" config.mbox.compressMode = "gzip" config.mbox.mboxFiles = mboxFiles config.mbox.mboxDirs = None config.validate() def testValidate_020(self): """ Test validate on a non-empty mbox section, non-empty mboxDirs, collectMode and compressMode default and on directories. """ mboxDirs = [ MboxDir(absolutePath="/two", collectMode="daily", compressMode="bzip2") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.collectMode = "daily" config.mbox.compressMode = "gzip" config.mbox.mboxFiles = None config.mbox.mboxDirs = mboxDirs config.validate() ############################ # Test parsing of documents ############################ def testParse_001(self): """ Parse empty config document. """ path = self.resources["mbox.conf.1"] with open(path) as f: contents = f.read() self.assertRaises(ValueError, LocalConfig, xmlPath=path, validate=True) self.assertRaises(ValueError, LocalConfig, xmlData=contents, validate=True) config = LocalConfig(xmlPath=path, validate=False) self.assertEqual(None, config.mbox) config = LocalConfig(xmlData=contents, validate=False) self.assertEqual(None, config.mbox) def testParse_002(self): """ Parse config document with default modes, one collect file and one collect dir. 
""" mboxFiles = [ MboxFile(absolutePath="/home/joebob/mail/cedar-backup-users"), ] mboxDirs = [ MboxDir(absolutePath="/home/billiejoe/mail"), ] path = self.resources["mbox.conf.2"] with open(path) as f: contents = f.read() config = LocalConfig(xmlPath=path, validate=False) self.assertNotEqual(None, config.mbox) self.assertEqual("daily", config.mbox.collectMode) self.assertEqual("gzip", config.mbox.compressMode) self.assertEqual(mboxFiles, config.mbox.mboxFiles) self.assertEqual(mboxDirs, config.mbox.mboxDirs) config = LocalConfig(xmlData=contents, validate=False) self.assertNotEqual(None, config.mbox) self.assertEqual("daily", config.mbox.collectMode) self.assertEqual("gzip", config.mbox.compressMode) self.assertEqual(mboxFiles, config.mbox.mboxFiles) self.assertEqual(mboxDirs, config.mbox.mboxDirs) def testParse_003(self): """ Parse config document with no default modes, one collect file and one collect dir. """ mboxFiles = [ MboxFile(absolutePath="/home/joebob/mail/cedar-backup-users", collectMode="daily", compressMode="gzip"), ] mboxDirs = [ MboxDir(absolutePath="/home/billiejoe/mail", collectMode="weekly", compressMode="bzip2"), ] path = self.resources["mbox.conf.3"] with open(path) as f: contents = f.read() config = LocalConfig(xmlPath=path, validate=False) self.assertNotEqual(None, config.mbox) self.assertEqual(None, config.mbox.collectMode) self.assertEqual(None, config.mbox.compressMode) self.assertEqual(mboxFiles, config.mbox.mboxFiles) self.assertEqual(mboxDirs, config.mbox.mboxDirs) config = LocalConfig(xmlData=contents, validate=False) self.assertNotEqual(None, config.mbox) self.assertEqual(None, config.mbox.collectMode) self.assertEqual(None, config.mbox.compressMode) self.assertEqual(mboxFiles, config.mbox.mboxFiles) self.assertEqual(mboxDirs, config.mbox.mboxDirs) def testParse_004(self): """ Parse config document with default modes, several files with various overrides and exclusions. 
""" mboxFiles = [] mboxFile = MboxFile(absolutePath="/home/jimbo/mail/cedar-backup-users") mboxFiles.append(mboxFile) mboxFile = MboxFile(absolutePath="/home/joebob/mail/cedar-backup-users", collectMode="daily", compressMode="gzip") mboxFiles.append(mboxFile) mboxDirs = [] mboxDir = MboxDir(absolutePath="/home/frank/mail/cedar-backup-users") mboxDirs.append(mboxDir) mboxDir = MboxDir(absolutePath="/home/jimbob/mail", compressMode="bzip2", relativeExcludePaths=["logomachy-devel"]) mboxDirs.append(mboxDir) mboxDir = MboxDir(absolutePath="/home/billiejoe/mail", collectMode="weekly", compressMode="bzip2", excludePatterns=[".*SPAM.*"]) mboxDirs.append(mboxDir) mboxDir = MboxDir(absolutePath="/home/billybob/mail", relativeExcludePaths=["debian-devel", "debian-python", ], excludePatterns=[".*SPAM.*", ".*JUNK.*", ]) mboxDirs.append(mboxDir) path = self.resources["mbox.conf.4"] with open(path) as f: contents = f.read() config = LocalConfig(xmlPath=path, validate=False) self.assertNotEqual(None, config.mbox) self.assertEqual("incr", config.mbox.collectMode) self.assertEqual("none", config.mbox.compressMode) self.assertEqual(mboxFiles, config.mbox.mboxFiles) self.assertEqual(mboxDirs, config.mbox.mboxDirs) config = LocalConfig(xmlData=contents, validate=False) self.assertNotEqual(None, config.mbox) self.assertEqual("incr", config.mbox.collectMode) self.assertEqual("none", config.mbox.compressMode) self.assertEqual(mboxFiles, config.mbox.mboxFiles) self.assertEqual(mboxDirs, config.mbox.mboxDirs) ################### # Test addConfig() ################### def testAddConfig_001(self): """ Test with empty config document. """ mbox = MboxConfig() config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_002(self): """ Test with defaults set, single mbox file with no optional values. 
""" mboxFiles = [] mboxFiles.append(MboxFile(absolutePath="/path")) mbox = MboxConfig(collectMode="daily", compressMode="gzip", mboxFiles=mboxFiles) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_003(self): """ Test with defaults set, single mbox directory with no optional values. """ mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path")) mbox = MboxConfig(collectMode="daily", compressMode="gzip", mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_004(self): """ Test with defaults set, single mbox file with collectMode set. """ mboxFiles = [] mboxFiles.append(MboxFile(absolutePath="/path", collectMode="incr")) mbox = MboxConfig(collectMode="daily", compressMode="gzip", mboxFiles=mboxFiles) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_005(self): """ Test with defaults set, single mbox directory with collectMode set. """ mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path", collectMode="incr")) mbox = MboxConfig(collectMode="daily", compressMode="gzip", mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_006(self): """ Test with defaults set, single mbox file with compressMode set. """ mboxFiles = [] mboxFiles.append(MboxFile(absolutePath="/path", compressMode="bzip2")) mbox = MboxConfig(collectMode="daily", compressMode="gzip", mboxFiles=mboxFiles) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_007(self): """ Test with defaults set, single mbox directory with compressMode set. 
""" mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path", compressMode="bzip2")) mbox = MboxConfig(collectMode="daily", compressMode="gzip", mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_008(self): """ Test with defaults set, single mbox file with collectMode and compressMode set. """ mboxFiles = [] mboxFiles.append(MboxFile(absolutePath="/path", collectMode="weekly", compressMode="bzip2")) mbox = MboxConfig(collectMode="daily", compressMode="gzip", mboxFiles=mboxFiles) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_009(self): """ Test with defaults set, single mbox directory with collectMode and compressMode set. """ mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path", collectMode="weekly", compressMode="bzip2")) mbox = MboxConfig(collectMode="daily", compressMode="gzip", mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_010(self): """ Test with no defaults set, single mbox file with collectMode and compressMode set. """ mboxFiles = [] mboxFiles.append(MboxFile(absolutePath="/path", collectMode="weekly", compressMode="bzip2")) mbox = MboxConfig(mboxFiles=mboxFiles) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_011(self): """ Test with no defaults set, single mbox directory with collectMode and compressMode set. """ mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path", collectMode="weekly", compressMode="bzip2")) mbox = MboxConfig(mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_012(self): """ Test with compressMode set, single mbox file with collectMode set. 
""" mboxFiles = [] mboxFiles.append(MboxFile(absolutePath="/path", collectMode="weekly")) mbox = MboxConfig(compressMode="gzip", mboxFiles=mboxFiles) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_013(self): """ Test with compressMode set, single mbox directory with collectMode set. """ mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path", collectMode="weekly")) mbox = MboxConfig(compressMode="gzip", mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_014(self): """ Test with collectMode set, single mbox file with compressMode set. """ mboxFiles = [] mboxFiles.append(MboxFile(absolutePath="/path", compressMode="gzip")) mbox = MboxConfig(collectMode="weekly", mboxFiles=mboxFiles) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_015(self): """ Test with collectMode set, single mbox directory with compressMode set. """ mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path", compressMode="gzip")) mbox = MboxConfig(collectMode="weekly", mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_016(self): """ Test with compressMode set, single mbox file with collectMode and compressMode set. """ mboxFiles = [] mboxFiles.append(MboxFile(absolutePath="/path", collectMode="incr", compressMode="gzip")) mbox = MboxConfig(compressMode="bzip2", mboxFiles=mboxFiles) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_017(self): """ Test with compressMode set, single mbox directory with collectMode and compressMode set. 
""" mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path", collectMode="incr", compressMode="gzip")) mbox = MboxConfig(compressMode="bzip2", mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_018(self): """ Test with collectMode set, single mbox file with collectMode and compressMode set. """ mboxFiles = [] mboxFiles.append(MboxFile(absolutePath="/path", collectMode="weekly", compressMode="gzip")) mbox = MboxConfig(collectMode="incr", mboxFiles=mboxFiles) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_019(self): """ Test with collectMode set, single mbox directory with collectMode and compressMode set. """ mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path", collectMode="weekly", compressMode="gzip")) mbox = MboxConfig(collectMode="incr", mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_020(self): """ Test with defaults set, single mbox directory with relativeExcludePaths set. """ mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path", relativeExcludePaths=["one", "two", ])) mbox = MboxConfig(collectMode="daily", compressMode="gzip", mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_021(self): """ Test with defaults set, single mbox directory with excludePatterns set. """ mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path", excludePatterns=["one", "two", ])) mbox = MboxConfig(collectMode="daily", compressMode="gzip", mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_022(self): """ Test with defaults set, multiple mbox files and directories with collectMode and compressMode set. 
""" mboxFiles = [] mboxFiles.append(MboxFile(absolutePath="/path1", collectMode="daily", compressMode="gzip")) mboxFiles.append(MboxFile(absolutePath="/path2", collectMode="weekly", compressMode="gzip")) mboxFiles.append(MboxFile(absolutePath="/path3", collectMode="incr", compressMode="gzip")) mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path1", collectMode="daily", compressMode="bzip2")) mboxDirs.append(MboxDir(absolutePath="/path2", collectMode="weekly", compressMode="bzip2")) mboxDirs.append(MboxDir(absolutePath="/path3", collectMode="incr", compressMode="bzip2")) mbox = MboxConfig(collectMode="incr", compressMode="bzip2", mboxFiles=mboxFiles, mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) ####################################################################### # Suite definition ####################################################################### def suite(): """Returns a suite containing all the test cases in this module.""" tests = [ ] tests.append(unittest.makeSuite(TestMboxFile, 'test')) tests.append(unittest.makeSuite(TestMboxDir, 'test')) tests.append(unittest.makeSuite(TestMboxConfig, 'test')) tests.append(unittest.makeSuite(TestLocalConfig, 'test')) return unittest.TestSuite(tests) CedarBackup3-3.1.6/testcase/cdwritertests.py0000664000175000017500000023315312560007327022576 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2007,2010,2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. 
# # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Tests CD writer functionality. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup3/writers/cdwriter.py. This code was consolidated from writertests.py and imagetests.py at the same time cdwriter.py was created. Code Coverage ============= This module contains individual tests for the public classes implemented in cdwriter.py. Unfortunately, it's rather difficult to test this code in an automated fashion, even if you have access to a physical CD writer drive. It's even more difficult to test it if you are running on some build daemon (think of a Debian autobuilder) which can't be expected to have any hardware or any media that you could write to. Because of this, there aren't any tests below that actually cause CD media to be written to. As a compromise, much of the implementation is in terms of private static methods that have well-defined behaviors. Normally, I prefer to only test the public interface to class, but in this case, testing the private methods will help give us some reasonable confidence in the code, even if we can't write a physical disc or can't run all of the tests. This isn't perfect, but it's better than nothing. 
Naming Conventions ================== I prefer to avoid large unit tests which validate more than one piece of functionality, and I prefer to avoid using overly descriptive (read: long) test names, as well. Instead, I use lots of very small tests that each validate one specific thing. These small tests are then named with an index number, yielding something like C{testAddDir_001} or C{testValidate_010}. Each method has a docstring describing what it's supposed to accomplish. I feel that this makes it easier to judge how important a given failure is, and also makes it somewhat easier to diagnose and fix individual problems. Full vs. Reduced Tests ====================== Some Cedar Backup regression tests require a specialized environment in order to run successfully. This environment won't necessarily be available on every build system out there (for instance, on a Debian autobuilder). Because of this, the default behavior is to run a "reduced feature set" test suite that has no surprising system, kernel or network requirements. There are no special dependencies for these tests. I used to try and run tests against an actual device, to make sure that this worked. However, those tests ended up being kind of bogus, because my main development environment doesn't have a writer, and even if it had one, any device with the same name on another user's system wouldn't necessarily return sensible results. That's just pointless. We'll just have to rely on the other tests to make sure that things seem sensible. @author Kenneth J. 
Pronovici """ ######################################################################## # Import modules and do runtime validations ######################################################################## import unittest from CedarBackup3.writers.cdwriter import MediaDefinition, MediaCapacity, CdWriter from CedarBackup3.writers.cdwriter import MEDIA_CDR_74, MEDIA_CDRW_74, MEDIA_CDR_80, MEDIA_CDRW_80 ####################################################################### # Module-wide configuration and constants ####################################################################### MB650 = (650.0*1024.0*1024.0) # 650 MB MB700 = (700.0*1024.0*1024.0) # 700 MB ILEAD = (11400.0*2048.0) # Initial lead-in SLEAD = (6900.0*2048.0) # Session lead-in DATA_DIRS = [ "./data", "./testcase/data", ] RESOURCES = [ "tree9.tar.gz", ] SUDO_CMD = [ "sudo", ] HDIUTIL_CMD = [ "hdiutil", ] INVALID_FILE = "bogus" # This file name should never exist ####################################################################### # Test Case Classes ####################################################################### ############################ # TestMediaDefinition class ############################ class TestMediaDefinition(unittest.TestCase): """Tests for the MediaDefinition class.""" def testConstructor_001(self): """ Test the constructor with an invalid media type. """ self.assertRaises(ValueError, MediaDefinition, 100) def testConstructor_002(self): """ Test the constructor with the C{MEDIA_CDR_74} media type. """ media = MediaDefinition(MEDIA_CDR_74) self.assertEqual(MEDIA_CDR_74, media.mediaType) self.assertEqual(False, media.rewritable) self.assertNotEqual(0, media.initialLeadIn) # just care that it's set, not what its value is self.assertNotEqual(0, media.leadIn) # just care that it's set, not what its value is self.assertEqual(332800, media.capacity) def testConstructor_003(self): """ Test the constructor with the C{MEDIA_CDRW_74} media type. 
""" media = MediaDefinition(MEDIA_CDRW_74) self.assertEqual(MEDIA_CDRW_74, media.mediaType) self.assertEqual(True, media.rewritable) self.assertNotEqual(0, media.initialLeadIn) # just care that it's set, not what its value is self.assertNotEqual(0, media.leadIn) # just care that it's set, not what its value is self.assertEqual(332800, media.capacity) def testConstructor_004(self): """ Test the constructor with the C{MEDIA_CDR_80} media type. """ media = MediaDefinition(MEDIA_CDR_80) self.assertEqual(MEDIA_CDR_80, media.mediaType) self.assertEqual(False, media.rewritable) self.assertNotEqual(0, media.initialLeadIn) # just care that it's set, not what its value is self.assertNotEqual(0, media.leadIn) # just care that it's set, not what its value is self.assertEqual(358400, media.capacity) def testConstructor_005(self): """ Test the constructor with the C{MEDIA_CDRW_80} media type. """ media = MediaDefinition(MEDIA_CDRW_80) self.assertEqual(MEDIA_CDRW_80, media.mediaType) self.assertEqual(True, media.rewritable) self.assertNotEqual(0, media.initialLeadIn) # just care that it's set, not what its value is self.assertNotEqual(0, media.leadIn) # just care that it's set, not what its value is self.assertEqual(358400, media.capacity) ############################ # TestMediaCapacity class ############################ class TestMediaCapacity(unittest.TestCase): """Tests for the MediaCapacity class.""" def testConstructor_001(self): """ Test the constructor. 
""" capacity = MediaCapacity(100, 200, (300, 400)) self.assertEqual(100, capacity.bytesUsed) self.assertEqual(200, capacity.bytesAvailable) self.assertEqual((300, 400), capacity.boundaries) ##################### # TestCdWriter class ##################### class TestCdWriter(unittest.TestCase): """Tests for the CdWriter class.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ################### # Test constructor ################### def testConstructor_001(self): """ Test the constructor with device C{/dev/null}, which is writable and exists. Use a valid non-ATA SCSI id and defaults for the remaining arguments. Make sure that C{unittest=True} """ writer = CdWriter(device="/dev/null", scsiId="0,0,0", unittest=True) self.assertEqual("/dev/null", writer.device) self.assertEqual("0,0,0", writer.scsiId) self.assertEqual("0,0,0", writer.hardwareId) self.assertEqual(None, writer.driveSpeed) self.assertEqual(MEDIA_CDRW_74, writer.media.mediaType) self.assertEqual(True, writer.isRewritable()) self.assertEqual(False, writer._noEject) def testConstructor_002(self): """ Test the constructor with device C{/dev/null}, which is writable and exists. Use a valid ATA SCSI id and defaults for the remaining arguments. Make sure that C{unittest=True}. """ writer = CdWriter(device="/dev/null", scsiId="ATA:0,0,0", unittest=True) self.assertEqual("/dev/null", writer.device) self.assertEqual("ATA:0,0,0", writer.scsiId) self.assertEqual("ATA:0,0,0", writer.hardwareId) self.assertEqual(None, writer.driveSpeed) self.assertEqual(MEDIA_CDRW_74, writer.media.mediaType) self.assertEqual(True, writer.isRewritable()) self.assertEqual(False, writer._noEject) def testConstructor_003(self): """ Test the constructor with device C{/dev/null}, which is writable and exists. Use a valid ATAPI SCSI id and defaults for the remaining arguments. Make sure that C{unittest=True}. 
""" writer = CdWriter(device="/dev/null", scsiId="ATAPI:0,0,0", unittest=True) self.assertEqual("/dev/null", writer.device) self.assertEqual("ATAPI:0,0,0", writer.scsiId) self.assertEqual("ATAPI:0,0,0", writer.hardwareId) self.assertEqual(None, writer.driveSpeed) self.assertEqual(MEDIA_CDRW_74, writer.media.mediaType) self.assertEqual(True, writer.isRewritable()) self.assertEqual(False, writer._noEject) def testConstructor_004(self): """ Test the constructor with device C{/dev/null} (which is writable and exists). Use an invalid SCSI id and defaults for the remaining arguments. Make sure that C{unittest=False}. """ self.assertRaises(ValueError, CdWriter, device="/dev/null", scsiId="blech", unittest=False) def testConstructor_005(self): """ Test the constructor with device C{/dev/null} (which is writable and exists). Use an invalid SCSI id and defaults for the remaining arguments. Make sure that C{unittest=True}. """ self.assertRaises(ValueError, CdWriter, device="/dev/null", scsiId="blech", unittest=True) def testConstructor_006(self): """ Test the constructor with a non-absolute device path. Use a valid SCSI id and defaults for the remaining arguments. Make sure that C{unittest=False}. """ self.assertRaises(ValueError, CdWriter, device="dev/null", scsiId="0,0,0", unittest=False) def testConstructor_007(self): """ Test the constructor with a non-absolute device path. Use a valid SCSI id and defaults for the remaining arguments. Make sure that C{unittest=True}. """ self.assertRaises(ValueError, CdWriter, device="dev/null", scsiId="0,0,0", unittest=True) def testConstructor_008(self): """ Test the constructor with an absolute device path that does not exist. Use a valid SCSI id and defaults for the remaining arguments. Make sure that C{unittest=False}. """ self.assertRaises(ValueError, CdWriter, device="/bogus", scsiId="0,0,0", unittest=False) def testConstructor_009(self): """ Test the constructor with an absolute device path that does not exist. 
Use a valid SCSI id and defaults for the remaining arguments. Make sure that C{unittest=True}. """ writer = CdWriter(device="/bogus", scsiId="0,0,0", unittest=True) self.assertEqual("/bogus", writer.device) self.assertEqual("0,0,0", writer.scsiId) self.assertEqual("0,0,0", writer.hardwareId) self.assertEqual(None, writer.driveSpeed) self.assertEqual(MEDIA_CDRW_74, writer.media.mediaType) self.assertEqual(True, writer.isRewritable()) self.assertEqual(False, writer._noEject) def testConstructor_010(self): """ Test the constructor with device C{/dev/null}, which is writable and exists. Use a valid SCSI id and a value of 0 for the drive speed. Make sure that C{unittest=False}. """ self.assertRaises(ValueError, CdWriter, device="/dev/null", scsiId="0,0,0", driveSpeed=0, unittest=False) def testConstructor_011(self): """ Test the constructor with device C{/dev/null}, which is writable and exists. Use a valid SCSI id and a value of 0 for the drive speed. Make sure that C{unittest=True}. """ self.assertRaises(ValueError, CdWriter, device="/dev/null", scsiId="0,0,0", driveSpeed=0, unittest=True) def testConstructor_012(self): """ Test the constructor with device C{/dev/null}, which is writable and exists. Use a valid SCSI id and a value of 1 for the drive speed. Make sure that C{unittest=True}. """ writer = CdWriter(device="/dev/null", scsiId="0,0,0", driveSpeed=1, unittest=True) self.assertEqual("/dev/null", writer.device) self.assertEqual("0,0,0", writer.scsiId) self.assertEqual("0,0,0", writer.hardwareId) self.assertEqual(1, writer.driveSpeed) self.assertEqual(MEDIA_CDRW_74, writer.media.mediaType) self.assertEqual(True, writer.isRewritable()) self.assertEqual(False, writer._noEject) def testConstructor_013(self): """ Test the constructor with device C{/dev/null}, which is writable and exists. Use a valid SCSI id and a value of 5 for the drive speed. Make sure that C{unittest=True}. 
""" writer = CdWriter(device="/dev/null", scsiId="0,0,0", driveSpeed=5, unittest=True) self.assertEqual("/dev/null", writer.device) self.assertEqual("0,0,0", writer.scsiId) self.assertEqual("0,0,0", writer.hardwareId) self.assertEqual(5, writer.driveSpeed) self.assertEqual(MEDIA_CDRW_74, writer.media.mediaType) self.assertEqual(True, writer.isRewritable()) self.assertEqual(False, writer._noEject) def testConstructor_014(self): """ Test the constructor with device C{/dev/null}, which is writable and exists. Use a valid SCSI id and an invalid media type. Make sure that C{unittest=False}. """ self.assertRaises(ValueError, CdWriter, device="/dev/null", scsiId="0,0,0", mediaType=42, unittest=False) def testConstructor_015(self): """ Test the constructor with device C{/dev/null}, which is writable and exists. Use a valid SCSI id and an invalid media type. Make sure that C{unittest=True}. """ self.assertRaises(ValueError, CdWriter, device="/dev/null", scsiId="0,0,0", mediaType=42, unittest=True) def testConstructor_016(self): """ Test the constructor with device C{/dev/null}, which is writable and exists. Use a valid SCSI id and a media type of MEDIA_CDR_74. Make sure that C{unittest=True}. """ writer = CdWriter(device="/dev/null", scsiId="0,0,0", mediaType=MEDIA_CDR_74, unittest=True) self.assertEqual("/dev/null", writer.device) self.assertEqual("0,0,0", writer.scsiId) self.assertEqual("0,0,0", writer.hardwareId) self.assertEqual(None, writer.driveSpeed) self.assertEqual(MEDIA_CDR_74, writer.media.mediaType) self.assertEqual(False, writer.isRewritable()) self.assertEqual(False, writer._noEject) def testConstructor_017(self): """ Test the constructor with device C{/dev/null}, which is writable and exists. Use a valid SCSI id and a media type of MEDIA_CDRW_74. Make sure that C{unittest=True}. 
""" writer = CdWriter(device="/dev/null", scsiId="0,0,0", mediaType=MEDIA_CDRW_74, unittest=True) self.assertEqual("/dev/null", writer.device) self.assertEqual("0,0,0", writer.scsiId) self.assertEqual("0,0,0", writer.hardwareId) self.assertEqual(None, writer.driveSpeed) self.assertEqual(MEDIA_CDRW_74, writer.media.mediaType) self.assertEqual(True, writer.isRewritable()) self.assertEqual(False, writer._noEject) def testConstructor_018(self): """ Test the constructor with device C{/dev/null}, which is writable and exists. Use a valid SCSI id and a media type of MEDIA_CDR_80. Make sure that C{unittest=True}. """ writer = CdWriter(device="/dev/null", scsiId="0,0,0", mediaType=MEDIA_CDR_80, unittest=True) self.assertEqual("/dev/null", writer.device) self.assertEqual("0,0,0", writer.scsiId) self.assertEqual("0,0,0", writer.hardwareId) self.assertEqual(None, writer.driveSpeed) self.assertEqual(MEDIA_CDR_80, writer.media.mediaType) self.assertEqual(False, writer.isRewritable()) self.assertEqual(False, writer._noEject) def testConstructor_019(self): """ Test the constructor with device C{/dev/null}, which is writable and exists. Use a valid SCSI id and a media type of MEDIA_CDRW_80. Make sure that C{unittest=True}. """ writer = CdWriter(device="/dev/null", scsiId="0,0,0", mediaType=MEDIA_CDRW_80, unittest=True) self.assertEqual("/dev/null", writer.device) self.assertEqual("0,0,0", writer.scsiId) self.assertEqual("0,0,0", writer.hardwareId) self.assertEqual(None, writer.driveSpeed) self.assertEqual(MEDIA_CDRW_80, writer.media.mediaType) self.assertEqual(True, writer.isRewritable()) self.assertEqual(False, writer._noEject) def testConstructor_020(self): """ Test the constructor with device C{/dev/null}, which is writable and exists. Use None for SCSI id and a media type of MEDIA_CDRW_80. Make sure that C{unittest=True}. 
""" writer = CdWriter(device="/dev/null", scsiId=None, mediaType=MEDIA_CDRW_80, unittest=True) self.assertEqual("/dev/null", writer.device) self.assertEqual(None, writer.scsiId) self.assertEqual("/dev/null", writer.hardwareId) self.assertEqual(None, writer.driveSpeed) self.assertEqual(MEDIA_CDRW_80, writer.media.mediaType) self.assertEqual(True, writer.isRewritable()) self.assertEqual(False, writer._noEject) def testConstructor_021(self): """ Test the constructor with device C{/dev/null}, which is writable and exists. Use None for SCSI id and a media type of MEDIA_CDRW_80. Make sure that C{unittest=True}. Use C{noEject=True}. """ writer = CdWriter(device="/dev/null", scsiId=None, mediaType=MEDIA_CDRW_80, noEject=True, unittest=True) self.assertEqual("/dev/null", writer.device) self.assertEqual(None, writer.scsiId) self.assertEqual("/dev/null", writer.hardwareId) self.assertEqual(None, writer.driveSpeed) self.assertEqual(MEDIA_CDRW_80, writer.media.mediaType) self.assertEqual(True, writer.isRewritable()) self.assertEqual(True, writer._noEject) #################################### # Test the capacity-related methods #################################### def testCapacity_001(self): """ Test _calculateCapacity for boundaries of None and MEDIA_CDR_74. """ expectedAvailable = MB650-ILEAD # 650 MB, minus initial lead-in media = MediaDefinition(MEDIA_CDR_74) boundaries = None capacity = CdWriter._calculateCapacity(media, boundaries) self.assertEqual(0, capacity.bytesUsed) self.assertEqual(expectedAvailable, capacity.bytesAvailable) self.assertEqual(None, capacity.boundaries) def testCapacity_002(self): """ Test _calculateCapacity for boundaries of None and MEDIA_CDRW_74. 
""" expectedAvailable = MB650-ILEAD # 650 MB, minus initial lead-in media = MediaDefinition(MEDIA_CDRW_74) boundaries = None capacity = CdWriter._calculateCapacity(media, boundaries) self.assertEqual(0, capacity.bytesUsed) self.assertEqual(expectedAvailable, capacity.bytesAvailable) self.assertEqual(None, capacity.boundaries) def testCapacity_003(self): """ Test _calculateCapacity for boundaries of None and MEDIA_CDR_80. """ expectedAvailable = MB700-ILEAD # 700 MB, minus initial lead-in media = MediaDefinition(MEDIA_CDR_80) boundaries = None capacity = CdWriter._calculateCapacity(media, boundaries) self.assertEqual(0, capacity.bytesUsed) self.assertEqual(expectedAvailable, capacity.bytesAvailable) self.assertEqual(None, capacity.boundaries) def testCapacity_004(self): """ Test _calculateCapacity for boundaries of None and MEDIA_CDRW_80. """ expectedAvailable = MB700-ILEAD # 700 MB, minus initial lead-in media = MediaDefinition(MEDIA_CDRW_80) boundaries = None capacity = CdWriter._calculateCapacity(media, boundaries) self.assertEqual(0, capacity.bytesUsed) self.assertEqual(expectedAvailable, capacity.bytesAvailable) self.assertEqual(None, capacity.boundaries) def testCapacity_005(self): """ Test _calculateCapacity for boundaries of (0, 1) and MEDIA_CDR_74. """ expectedUsed = (1*2048.0) # 1 sector expectedAvailable = MB650-SLEAD-expectedUsed # 650 MB, minus session lead-in, minus 1 sector media = MediaDefinition(MEDIA_CDR_74) boundaries = (0, 1) capacity = CdWriter._calculateCapacity(media, boundaries) self.assertEqual(expectedUsed, capacity.bytesUsed) self.assertEqual(expectedAvailable, capacity.bytesAvailable) self.assertEqual((0, 1), capacity.boundaries) def testCapacity_006(self): """ Test _calculateCapacity for boundaries of (0, 1) and MEDIA_CDRW_74. 
""" expectedUsed = (1*2048.0) # 1 sector expectedAvailable = MB650-SLEAD-expectedUsed # 650 MB, minus session lead-in, minus 1 sector media = MediaDefinition(MEDIA_CDRW_74) boundaries = (0, 1) capacity = CdWriter._calculateCapacity(media, boundaries) self.assertEqual(expectedUsed, capacity.bytesUsed) self.assertEqual(expectedAvailable, capacity.bytesAvailable) self.assertEqual((0, 1), capacity.boundaries) def testCapacity_007(self): """ Test _calculateCapacity for boundaries of (0, 1) and MEDIA_CDR_80. """ expectedUsed = (1*2048.0) # 1 sector expectedAvailable = MB700-SLEAD-expectedUsed # 700 MB, minus session lead-in, minus 1 sector media = MediaDefinition(MEDIA_CDR_80) boundaries = (0, 1) capacity = CdWriter._calculateCapacity(media, boundaries) self.assertEqual(expectedUsed, capacity.bytesUsed) self.assertEqual(expectedAvailable, capacity.bytesAvailable) # 700 MB - lead-in - 1 sector self.assertEqual((0, 1), capacity.boundaries) def testCapacity_008(self): """ Test _calculateCapacity for boundaries of (0, 1) and MEDIA_CDRW_80. """ expectedUsed = (1*2048.0) # 1 sector expectedAvailable = MB700-SLEAD-expectedUsed # 700 MB, minus session lead-in, minus 1 sector media = MediaDefinition(MEDIA_CDRW_80) boundaries = (0, 1) capacity = CdWriter._calculateCapacity(media, boundaries) self.assertEqual(expectedUsed, capacity.bytesUsed) self.assertEqual(expectedAvailable, capacity.bytesAvailable) self.assertEqual((0, 1), capacity.boundaries) def testCapacity_009(self): """ Test _calculateCapacity for boundaries of (0, 999) and MEDIA_CDR_74. 
""" expectedUsed = (999*2048.0) # 999 sectors expectedAvailable = MB650-SLEAD-expectedUsed # 650 MB, minus session lead-in, minus 999 sectors media = MediaDefinition(MEDIA_CDR_74) boundaries = (0, 999) capacity = CdWriter._calculateCapacity(media, boundaries) self.assertEqual(expectedUsed, capacity.bytesUsed) self.assertEqual(expectedAvailable, capacity.bytesAvailable) self.assertEqual((0, 999), capacity.boundaries) def testCapacity_010(self): """ Test _calculateCapacity for boundaries of (0, 999) and MEDIA_CDRW_74. """ expectedUsed = (999*2048.0) # 999 sectors expectedAvailable = MB650-SLEAD-expectedUsed # 650 MB, minus session lead-in, minus 999 sectors media = MediaDefinition(MEDIA_CDRW_74) boundaries = (0, 999) capacity = CdWriter._calculateCapacity(media, boundaries) self.assertEqual(expectedUsed, capacity.bytesUsed) self.assertEqual(expectedAvailable, capacity.bytesAvailable) self.assertEqual((0, 999), capacity.boundaries) def testCapacity_011(self): """ Test _calculateCapacity for boundaries of (0, 999) and MEDIA_CDR_80. """ expectedUsed = (999*2048.0) # 999 sectors expectedAvailable = MB700-SLEAD-expectedUsed # 700 MB, minus session lead-in, minus 999 sectors media = MediaDefinition(MEDIA_CDR_80) boundaries = (0, 999) capacity = CdWriter._calculateCapacity(media, boundaries) self.assertEqual(expectedUsed, capacity.bytesUsed) self.assertEqual(expectedAvailable, capacity.bytesAvailable) self.assertEqual((0, 999), capacity.boundaries) def testCapacity_012(self): """ Test _calculateCapacity for boundaries of (0, 999) and MEDIA_CDRW_80. 
""" expectedUsed = (999*2048.0) # 999 sectors expectedAvailable = MB700-SLEAD-expectedUsed # 700 MB, minus session lead-in, minus 999 sectors media = MediaDefinition(MEDIA_CDRW_80) boundaries = (0, 999) capacity = CdWriter._calculateCapacity(media, boundaries) self.assertEqual(expectedUsed, capacity.bytesUsed) self.assertEqual(expectedAvailable, capacity.bytesAvailable) self.assertEqual((0, 999), capacity.boundaries) def testCapacity_013(self): """ Test _calculateCapacity for boundaries of (500, 1000) and MEDIA_CDR_74. """ expectedUsed = (1000*2048.0) # 1000 sectors expectedAvailable = MB650-SLEAD-expectedUsed # 650 MB, minus session lead-in, minus 1000 sectors media = MediaDefinition(MEDIA_CDR_74) boundaries = (500, 1000) capacity = CdWriter._calculateCapacity(media, boundaries) self.assertEqual(expectedUsed, capacity.bytesUsed) self.assertEqual(expectedAvailable, capacity.bytesAvailable) self.assertEqual((500, 1000), capacity.boundaries) def testCapacity_014(self): """ Test _calculateCapacity for boundaries of (500, 1000) and MEDIA_CDRW_74. """ expectedUsed = (1000*2048.0) # 1000 sectors expectedAvailable = MB650-SLEAD-expectedUsed # 650 MB, minus session lead-in, minus 1000 sectors media = MediaDefinition(MEDIA_CDRW_74) boundaries = (500, 1000) capacity = CdWriter._calculateCapacity(media, boundaries) self.assertEqual(expectedUsed, capacity.bytesUsed) self.assertEqual(expectedAvailable, capacity.bytesAvailable) self.assertEqual((500, 1000), capacity.boundaries) def testCapacity_015(self): """ Test _calculateCapacity for boundaries of (500, 1000) and MEDIA_CDR_80. 
""" expectedUsed = (1000*2048.0) # 1000 sectors expectedAvailable = MB700-SLEAD-expectedUsed # 700 MB, minus session lead-in, minus 1000 sectors media = MediaDefinition(MEDIA_CDR_80) boundaries = (500, 1000) capacity = CdWriter._calculateCapacity(media, boundaries) self.assertEqual(expectedUsed, capacity.bytesUsed) self.assertEqual(expectedAvailable, capacity.bytesAvailable) self.assertEqual((500, 1000), capacity.boundaries) def testCapacity_016(self): """ Test _calculateCapacity for boundaries of (500, 1000) and MEDIA_CDRW_80. """ expectedUsed = (1000*2048.0) # 1000 sectors expectedAvailable = MB700-SLEAD-expectedUsed # 700 MB, minus session lead-in, minus 1000 sectors media = MediaDefinition(MEDIA_CDRW_80) boundaries = (500, 1000) capacity = CdWriter._calculateCapacity(media, boundaries) self.assertEqual(expectedUsed, capacity.bytesUsed) self.assertEqual(expectedAvailable, capacity.bytesAvailable) # 650 MB minus lead-in self.assertEqual((500, 1000), capacity.boundaries) def testCapacity_017(self): """ Test _getBoundaries when self.deviceSupportsMulti is False; entireDisc=False, useMulti=True. """ writer = CdWriter(device="/dev/cdrw", scsiId="0,0,0", unittest=True) writer._deviceSupportsMulti = False boundaries = writer._getBoundaries(entireDisc=False, useMulti=True) self.assertEqual(None, boundaries) def testCapacity_018(self): """ Test _getBoundaries when self.deviceSupportsMulti is False; entireDisc=True, useMulti=True. """ writer = CdWriter(device="/dev/cdrw", scsiId="0,0,0", unittest=True) writer._deviceSupportsMulti = False boundaries = writer._getBoundaries(entireDisc=True, useMulti=True) self.assertEqual(None, boundaries) def testCapacity_019(self): """ Test _getBoundaries when self.deviceSupportsMulti is False; entireDisc=True, useMulti=False. 
""" writer = CdWriter(device="/dev/cdrw", scsiId="0,0,0", unittest=True) writer._deviceSupportsMulti = False boundaries = writer._getBoundaries(entireDisc=False, useMulti=False) self.assertEqual(None, boundaries) def testCapacity_020(self): """ Test _getBoundaries when self.deviceSupportsMulti is False; entireDisc=False, useMulti=False. """ writer = CdWriter(device="/dev/cdrw", scsiId="0,0,0", unittest=True) writer._deviceSupportsMulti = False boundaries = writer._getBoundaries(entireDisc=False, useMulti=False) self.assertEqual(None, boundaries) def testCapacity_021(self): """ Test _getBoundaries when self.deviceSupportsMulti is True; entireDisc=True, useMulti=True. """ writer = CdWriter(device="/dev/cdrw", scsiId="0,0,0", unittest=True) writer._deviceSupportsMulti = True boundaries = writer._getBoundaries(entireDisc=True, useMulti=True) self.assertEqual(None, boundaries) def testCapacity_022(self): """ Test _getBoundaries when self.deviceSupportsMulti is True; entireDisc=True, useMulti=False. """ writer = CdWriter(device="/dev/cdrw", scsiId="0,0,0", unittest=True) writer._deviceSupportsMulti = True boundaries = writer._getBoundaries(entireDisc=True, useMulti=False) self.assertEqual(None, boundaries) def testCapacity_023(self): """ Test _calculateCapacity for boundaries of (321342, 330042) and MEDIA_CDRW_74. This was a bug fixed for v2.1.2. """ expectedUsed = (330042*2048.0) # 330042 sectors expectedAvailable = 0 # nothing should be available media = MediaDefinition(MEDIA_CDRW_74) boundaries = (321342, 330042) capacity = CdWriter._calculateCapacity(media, boundaries) self.assertEqual(expectedUsed, capacity.bytesUsed) self.assertEqual(expectedAvailable, capacity.bytesAvailable) self.assertEqual((321342, 330042), capacity.boundaries) def testCapacity_024(self): """ Test _calculateCapacity for boundaries of (0, 330042) and MEDIA_CDRW_74. This was a bug fixed for v2.1.3. 
""" expectedUsed = (330042*2048.0) # 330042 sectors expectedAvailable = 0 # nothing should be available media = MediaDefinition(MEDIA_CDRW_74) boundaries = (0, 330042) capacity = CdWriter._calculateCapacity(media, boundaries) self.assertEqual(expectedUsed, capacity.bytesUsed) self.assertEqual(expectedAvailable, capacity.bytesAvailable) self.assertEqual((0, 330042), capacity.boundaries) ######################################### # Test methods that build argument lists ######################################### def testBuildArgs_001(self): """ Test _buildOpenTrayArgs(). """ args = CdWriter._buildOpenTrayArgs(device="/dev/stuff") self.assertEqual(["/dev/stuff", ], args) def testBuildArgs_002(self): """ Test _buildCloseTrayArgs(). """ args = CdWriter._buildCloseTrayArgs(device="/dev/stuff") self.assertEqual(["-t", "/dev/stuff", ], args) def testBuildArgs_003(self): """ Test _buildPropertiesArgs(). """ args = CdWriter._buildPropertiesArgs(hardwareId="0,0,0") self.assertEqual(["-prcap", "dev=0,0,0", ], args) def testBuildArgs_004(self): """ Test _buildBoundariesArgs(). """ args = CdWriter._buildBoundariesArgs(hardwareId="ATA:0,0,0") self.assertEqual(["-msinfo", "dev=ATA:0,0,0", ], args) def testBuildArgs_005(self): """ Test _buildBoundariesArgs(). """ args = CdWriter._buildBoundariesArgs(hardwareId="ATAPI:0,0,0") self.assertEqual(["-msinfo", "dev=ATAPI:0,0,0", ], args) def testBuildArgs_006(self): """ Test _buildBlankArgs(), default drive speed. """ args = CdWriter._buildBlankArgs(hardwareId="ATA:0,0,0") self.assertEqual(["-v", "blank=fast", "dev=ATA:0,0,0", ], args) def testBuildArgs_007(self): """ Test _buildBlankArgs(), default drive speed. """ args = CdWriter._buildBlankArgs(hardwareId="ATAPI:0,0,0") self.assertEqual(["-v", "blank=fast", "dev=ATAPI:0,0,0", ], args) def testBuildArgs_008(self): """ Test _buildBlankArgs(), with None for drive speed. 
""" args = CdWriter._buildBlankArgs(hardwareId="0,0,0", driveSpeed=None) self.assertEqual(["-v", "blank=fast", "dev=0,0,0", ], args) def testBuildArgs_009(self): """ Test _buildBlankArgs(), with 1 for drive speed. """ args = CdWriter._buildBlankArgs(hardwareId="0,0,0", driveSpeed=1) self.assertEqual(["-v", "blank=fast", "speed=1", "dev=0,0,0", ], args) def testBuildArgs_010(self): """ Test _buildBlankArgs(), with 5 for drive speed. """ args = CdWriter._buildBlankArgs(hardwareId="ATA:1,2,3", driveSpeed=5) self.assertEqual(["-v", "blank=fast", "speed=5", "dev=ATA:1,2,3", ], args) def testBuildArgs_011(self): """ Test _buildBlankArgs(), with 5 for drive speed. """ args = CdWriter._buildBlankArgs(hardwareId="ATAPI:1,2,3", driveSpeed=5) self.assertEqual(["-v", "blank=fast", "speed=5", "dev=ATAPI:1,2,3", ], args) def testBuildArgs_012(self): """ Test _buildWriteArgs(), default drive speed and writeMulti. """ args = CdWriter._buildWriteArgs(hardwareId="0,0,0", imagePath="/whatever") self.assertEqual(["-v", "dev=0,0,0", "-multi", "-data", "/whatever" ], args) def testBuildArgs_013(self): """ Test _buildWriteArgs(), None for drive speed, True for writeMulti. """ args = CdWriter._buildWriteArgs(hardwareId="0,0,0", imagePath="/whatever", driveSpeed=None, writeMulti=True) self.assertEqual(["-v", "dev=0,0,0", "-multi", "-data", "/whatever" ], args) def testBuildArgs_014(self): """ Test _buildWriteArgs(), None for drive speed, False for writeMulti. """ args = CdWriter._buildWriteArgs(hardwareId="0,0,0", imagePath="/whatever", driveSpeed=None, writeMulti=False) self.assertEqual(["-v", "dev=0,0,0", "-data", "/whatever" ], args) def testBuildArgs_015(self): """ Test _buildWriteArgs(), 1 for drive speed, True for writeMulti. 
""" args = CdWriter._buildWriteArgs(hardwareId="0,0,0", imagePath="/whatever", driveSpeed=1, writeMulti=True) self.assertEqual(["-v", "speed=1", "dev=0,0,0", "-multi", "-data", "/whatever" ], args) def testBuildArgs_016(self): """ Test _buildWriteArgs(), 5 for drive speed, True for writeMulti. """ args = CdWriter._buildWriteArgs(hardwareId="0,1,2", imagePath="/whatever", driveSpeed=5, writeMulti=True) self.assertEqual(["-v", "speed=5", "dev=0,1,2", "-multi", "-data", "/whatever" ], args) def testBuildArgs_017(self): """ Test _buildWriteArgs(), 1 for drive speed, False for writeMulti. """ args = CdWriter._buildWriteArgs(hardwareId="0,0,0", imagePath="/dvl/stuff/whatever/more", driveSpeed=1, writeMulti=False) self.assertEqual(["-v", "speed=1", "dev=0,0,0", "-data", "/dvl/stuff/whatever/more" ], args) def testBuildArgs_018(self): """ Test _buildWriteArgs(), 5 for drive speed, False for writeMulti. """ args = CdWriter._buildWriteArgs(hardwareId="ATA:1,2,3", imagePath="/whatever", driveSpeed=5, writeMulti=False) self.assertEqual(["-v", "speed=5", "dev=ATA:1,2,3", "-data", "/whatever" ], args) def testBuildArgs_019(self): """ Test _buildWriteArgs(), 5 for drive speed, False for writeMulti. """ args = CdWriter._buildWriteArgs(hardwareId="ATAPI:1,2,3", imagePath="/whatever", driveSpeed=5, writeMulti=False) self.assertEqual(["-v", "speed=5", "dev=ATAPI:1,2,3", "-data", "/whatever" ], args) ########################################## # Test methods that parse cdrecord output ########################################## def testParseOutput_001(self): """ Test _parseBoundariesOutput() for valid data, taken from a real example. """ output = [ "268582,302230\n", ] boundaries = CdWriter._parseBoundariesOutput(output) self.assertEqual((268582, 302230), boundaries) def testParseOutput_002(self): """ Test _parseBoundariesOutput() for valid data, taken from a real example, lots of extra whitespace around the values. 
""" output = [ " 268582 , 302230 \n", ] boundaries = CdWriter._parseBoundariesOutput(output) self.assertEqual((268582, 302230), boundaries) def testParseOutput_003(self): """ Test _parseBoundariesOutput() for valid data, taken from a real example, lots of extra garbage after the first line. """ output = [ "268582,302230\n", "more\n", "bogus\n", "crap\n", "here\n", "to\n", "confuse\n", "things\n", ] boundaries = CdWriter._parseBoundariesOutput(output) self.assertEqual((268582, 302230), boundaries) def testParseOutput_004(self): """ Test _parseBoundariesOutput() for valid data, taken from a real example, lots of extra garbage before the first line. """ output = [ "more\n", "bogus\n", "crap\n", "here\n", "to\n", "confuse\n", "things\n", "268582,302230\n", ] self.assertRaises(IOError, CdWriter._parseBoundariesOutput, output) def testParseOutput_005(self): """ Test _parseBoundariesOutput() for valid data, taken from a real example, with first value converted to negative. """ output = [ "-268582,302230\n", ] self.assertRaises(IOError, CdWriter._parseBoundariesOutput, output) def testParseOutput_006(self): """ Test _parseBoundariesOutput() for valid data, taken from a real example, with second value converted to negative. """ output = [ "268582,-302230\n", ] self.assertRaises(IOError, CdWriter._parseBoundariesOutput, output) def testParseOutput_007(self): """ Test _parseBoundariesOutput() for valid data, taken from a real example, with first value converted to zero. """ output = [ "0,302230\n", ] boundaries = CdWriter._parseBoundariesOutput(output) self.assertEqual((0, 302230), boundaries) def testParseOutput_008(self): """ Test _parseBoundariesOutput() for valid data, taken from a real example, with second value converted to zero. 
""" output = [ "268582,0\n", ] boundaries = CdWriter._parseBoundariesOutput(output) self.assertEqual((268582, 0), boundaries) def testParseOutput_009(self): """ Test _parseBoundariesOutput() for valid data, taken from a real example, with first value converted to negative and second value converted to zero. """ output = [ "-268582,0\n", ] self.assertRaises(IOError, CdWriter._parseBoundariesOutput, output) def testParseOutput_010(self): """ Test _parseBoundariesOutput() for valid data, taken from a real example, with first value converted to zero and second value converted to negative. """ output = [ "0,-302230\n", ] self.assertRaises(IOError, CdWriter._parseBoundariesOutput, output) def testParseOutput_011(self): """ Test _parsePropertiesOutput() for valid data, taken from a real example, including stderr and stdout mixed together. """ output = ["scsidev: '0,0,0'\n", 'scsibus: 0 target: 0 lun: 0\n', 'Linux sg driver version: 3.1.22\n', 'Cdrecord 1.10 (i686-pc-linux-gnu) Copyright (C) 1995-2001 J\xf6rg Schilling\n', "Using libscg version 'schily-0.5'\n", 'Device type : Removable CD-ROM\n', 'Version : 0\n', 'Response Format: 1\n', "Vendor_info : 'SONY '\n", "Identifikation : 'CD-RW CRX140E '\n", "Revision : '1.0n'\n", 'Device seems to be: Generic mmc CD-RW.\n', '\n', 'Drive capabilities, per page 2A:\n', '\n', ' Does read CD-R media\n', ' Does write CD-R media\n', ' Does read CD-RW media\n', ' Does write CD-RW media\n', ' Does not read DVD-ROM media\n', ' Does not read DVD-R media\n', ' Does not write DVD-R media\n', ' Does not read DVD-RAM media\n', ' Does not write DVD-RAM media\n', ' Does support test writing\n', '\n', ' Does read Mode 2 Form 1 blocks\n', ' Does read Mode 2 Form 2 blocks\n', ' Does read digital audio blocks\n', ' Does restart non-streamed digital audio reads accurately\n', ' Does not support BURN-Proof (Sanyo)\n', ' Does read multi-session CDs\n', ' Does read fixed-packet CD media using Method 2\n', ' Does not read CD bar code\n', ' Does not read 
R-W subcode information\n', ' Does read raw P-W subcode data from lead in\n', ' Does return CD media catalog number\n', ' Does return CD ISRC information\n', ' Does not support C2 error pointers\n', ' Does not deliver composite A/V data\n', '\n', ' Does play audio CDs\n', ' Number of volume control levels: 256\n', ' Does support individual volume control setting for each channel\n', ' Does support independent mute setting for each channel\n', ' Does not support digital output on port 1\n', ' Does not support digital output on port 2\n', '\n', ' Loading mechanism type: tray\n', ' Does support ejection of CD via START/STOP command\n', ' Does not lock media on power up via prevent jumper\n', ' Does allow media to be locked in the drive via PREVENT/ALLOW command\n', ' Is not currently in a media-locked state\n', ' Does not support changing side of disk\n', ' Does not have load-empty-slot-in-changer feature\n', ' Does not support Individual Disk Present feature\n', '\n', ' Maximum read speed in kB/s: 5645\n', ' Current read speed in kB/s: 3528\n', ' Maximum write speed in kB/s: 1411\n', ' Current write speed in kB/s: 706\n', ' Buffer size in KB: 4096\n', ] (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) = CdWriter._parsePropertiesOutput(output) self.assertEqual("Removable CD-ROM", deviceType) self.assertEqual("SONY", deviceVendor) self.assertEqual("CD-RW CRX140E", deviceId) self.assertEqual(4096.0*1024.0, deviceBufferSize) self.assertEqual(True, deviceSupportsMulti) self.assertEqual(True, deviceHasTray) self.assertEqual(True, deviceCanEject) def testParseOutput_012(self): """ Test _parsePropertiesOutput() for valid data, taken from a real example, including only stdout. 
""" output = ['Cdrecord 1.10 (i686-pc-linux-gnu) Copyright (C) 1995-2001 J\xf6rg Schilling\n', "Using libscg version 'schily-0.5'\n", 'Device type : Removable CD-ROM\n', 'Version : 0\n', 'Response Format: 1\n', "Vendor_info : 'SONY '\n", "Identifikation : 'CD-RW CRX140E '\n", "Revision : '1.0n'\n", 'Device seems to be: Generic mmc CD-RW.\n', '\n', 'Drive capabilities, per page 2A:\n', '\n', ' Does read CD-R media\n', ' Does write CD-R media\n', ' Does read CD-RW media\n', ' Does write CD-RW media\n', ' Does not read DVD-ROM media\n', ' Does not read DVD-R media\n', ' Does not write DVD-R media\n', ' Does not read DVD-RAM media\n', ' Does not write DVD-RAM media\n', ' Does support test writing\n', '\n', ' Does read Mode 2 Form 1 blocks\n', ' Does read Mode 2 Form 2 blocks\n', ' Does read digital audio blocks\n', ' Does restart non-streamed digital audio reads accurately\n', ' Does not support BURN-Proof (Sanyo)\n', ' Does read multi-session CDs\n', ' Does read fixed-packet CD media using Method 2\n', ' Does not read CD bar code\n', ' Does not read R-W subcode information\n', ' Does read raw P-W subcode data from lead in\n', ' Does return CD media catalog number\n', ' Does return CD ISRC information\n', ' Does not support C2 error pointers\n', ' Does not deliver composite A/V data\n', '\n', ' Does play audio CDs\n', ' Number of volume control levels: 256\n', ' Does support individual volume control setting for each channel\n', ' Does support independent mute setting for each channel\n', ' Does not support digital output on port 1\n', ' Does not support digital output on port 2\n', '\n', ' Loading mechanism type: tray\n', ' Does support ejection of CD via START/STOP command\n', ' Does not lock media on power up via prevent jumper\n', ' Does allow media to be locked in the drive via PREVENT/ALLOW command\n', ' Is not currently in a media-locked state\n', ' Does not support changing side of disk\n', ' Does not have load-empty-slot-in-changer feature\n', ' Does not 
support Individual Disk Present feature\n', '\n', ' Maximum read speed in kB/s: 5645\n', ' Current read speed in kB/s: 3528\n', ' Maximum write speed in kB/s: 1411\n', ' Current write speed in kB/s: 706\n', ' Buffer size in KB: 4096\n', ] (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) = CdWriter._parsePropertiesOutput(output) self.assertEqual("Removable CD-ROM", deviceType) self.assertEqual("SONY", deviceVendor) self.assertEqual("CD-RW CRX140E", deviceId) self.assertEqual(4096.0*1024.0, deviceBufferSize) self.assertEqual(True, deviceSupportsMulti) self.assertEqual(True, deviceHasTray) self.assertEqual(True, deviceCanEject) def testParseOutput_013(self): """ Test _parsePropertiesOutput() for valid data, taken from a real example, including stderr and stdout mixed together, device type removed. """ output = ["scsidev: '0,0,0'\n", 'scsibus: 0 target: 0 lun: 0\n', 'Linux sg driver version: 3.1.22\n', 'Cdrecord 1.10 (i686-pc-linux-gnu) Copyright (C) 1995-2001 J\xf6rg Schilling\n', "Using libscg version 'schily-0.5'\n", 'Version : 0\n', 'Response Format: 1\n', "Vendor_info : 'SONY '\n", "Identifikation : 'CD-RW CRX140E '\n", "Revision : '1.0n'\n", 'Device seems to be: Generic mmc CD-RW.\n', '\n', 'Drive capabilities, per page 2A:\n', '\n', ' Does read CD-R media\n', ' Does write CD-R media\n', ' Does read CD-RW media\n', ' Does write CD-RW media\n', ' Does not read DVD-ROM media\n', ' Does not read DVD-R media\n', ' Does not write DVD-R media\n', ' Does not read DVD-RAM media\n', ' Does not write DVD-RAM media\n', ' Does support test writing\n', '\n', ' Does read Mode 2 Form 1 blocks\n', ' Does read Mode 2 Form 2 blocks\n', ' Does read digital audio blocks\n', ' Does restart non-streamed digital audio reads accurately\n', ' Does not support BURN-Proof (Sanyo)\n', ' Does read multi-session CDs\n', ' Does read fixed-packet CD media using Method 2\n', ' Does not read CD bar code\n', ' Does not read R-W subcode 
information\n', ' Does read raw P-W subcode data from lead in\n', ' Does return CD media catalog number\n', ' Does return CD ISRC information\n', ' Does not support C2 error pointers\n', ' Does not deliver composite A/V data\n', '\n', ' Does play audio CDs\n', ' Number of volume control levels: 256\n', ' Does support individual volume control setting for each channel\n', ' Does support independent mute setting for each channel\n', ' Does not support digital output on port 1\n', ' Does not support digital output on port 2\n', '\n', ' Loading mechanism type: tray\n', ' Does support ejection of CD via START/STOP command\n', ' Does not lock media on power up via prevent jumper\n', ' Does allow media to be locked in the drive via PREVENT/ALLOW command\n', ' Is not currently in a media-locked state\n', ' Does not support changing side of disk\n', ' Does not have load-empty-slot-in-changer feature\n', ' Does not support Individual Disk Present feature\n', '\n', ' Maximum read speed in kB/s: 5645\n', ' Current read speed in kB/s: 3528\n', ' Maximum write speed in kB/s: 1411\n', ' Current write speed in kB/s: 706\n', ' Buffer size in KB: 4096\n', ] (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) = CdWriter._parsePropertiesOutput(output) self.assertEqual(None, deviceType) self.assertEqual("SONY", deviceVendor) self.assertEqual("CD-RW CRX140E", deviceId) self.assertEqual(4096.0*1024.0, deviceBufferSize) self.assertEqual(True, deviceSupportsMulti) self.assertEqual(True, deviceHasTray) self.assertEqual(True, deviceCanEject) def testParseOutput_014(self): """ Test _parsePropertiesOutput() for valid data, taken from a real example, including stderr and stdout mixed together, device vendor removed. 
""" output = ["scsidev: '0,0,0'\n", 'scsibus: 0 target: 0 lun: 0\n', 'Linux sg driver version: 3.1.22\n', 'Cdrecord 1.10 (i686-pc-linux-gnu) Copyright (C) 1995-2001 J\xf6rg Schilling\n', "Using libscg version 'schily-0.5'\n", 'Device type : Removable CD-ROM\n', 'Version : 0\n', 'Response Format: 1\n', "Identifikation : 'CD-RW CRX140E '\n", "Revision : '1.0n'\n", 'Device seems to be: Generic mmc CD-RW.\n', '\n', 'Drive capabilities, per page 2A:\n', '\n', ' Does read CD-R media\n', ' Does write CD-R media\n', ' Does read CD-RW media\n', ' Does write CD-RW media\n', ' Does not read DVD-ROM media\n', ' Does not read DVD-R media\n', ' Does not write DVD-R media\n', ' Does not read DVD-RAM media\n', ' Does not write DVD-RAM media\n', ' Does support test writing\n', '\n', ' Does read Mode 2 Form 1 blocks\n', ' Does read Mode 2 Form 2 blocks\n', ' Does read digital audio blocks\n', ' Does restart non-streamed digital audio reads accurately\n', ' Does not support BURN-Proof (Sanyo)\n', ' Does read multi-session CDs\n', ' Does read fixed-packet CD media using Method 2\n', ' Does not read CD bar code\n', ' Does not read R-W subcode information\n', ' Does read raw P-W subcode data from lead in\n', ' Does return CD media catalog number\n', ' Does return CD ISRC information\n', ' Does not support C2 error pointers\n', ' Does not deliver composite A/V data\n', '\n', ' Does play audio CDs\n', ' Number of volume control levels: 256\n', ' Does support individual volume control setting for each channel\n', ' Does support independent mute setting for each channel\n', ' Does not support digital output on port 1\n', ' Does not support digital output on port 2\n', '\n', ' Loading mechanism type: tray\n', ' Does support ejection of CD via START/STOP command\n', ' Does not lock media on power up via prevent jumper\n', ' Does allow media to be locked in the drive via PREVENT/ALLOW command\n', ' Is not currently in a media-locked state\n', ' Does not support changing side of disk\n', ' Does 
not have load-empty-slot-in-changer feature\n', ' Does not support Individual Disk Present feature\n', '\n', ' Maximum read speed in kB/s: 5645\n', ' Current read speed in kB/s: 3528\n', ' Maximum write speed in kB/s: 1411\n', ' Current write speed in kB/s: 706\n', ' Buffer size in KB: 4096\n', ] (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) = CdWriter._parsePropertiesOutput(output) self.assertEqual("Removable CD-ROM", deviceType) self.assertEqual(None, deviceVendor) self.assertEqual("CD-RW CRX140E", deviceId) self.assertEqual(4096.0*1024.0, deviceBufferSize) self.assertEqual(True, deviceSupportsMulti) self.assertEqual(True, deviceHasTray) self.assertEqual(True, deviceCanEject) def testParseOutput_015(self): """ Test _parsePropertiesOutput() for valid data, taken from a real example, including stderr and stdout mixed together, device id removed. """ output = ["scsidev: '0,0,0'\n", 'scsibus: 0 target: 0 lun: 0\n', 'Linux sg driver version: 3.1.22\n', 'Cdrecord 1.10 (i686-pc-linux-gnu) Copyright (C) 1995-2001 J\xf6rg Schilling\n', "Using libscg version 'schily-0.5'\n", 'Device type : Removable CD-ROM\n', 'Version : 0\n', 'Response Format: 1\n', "Vendor_info : 'SONY '\n", "Revision : '1.0n'\n", 'Device seems to be: Generic mmc CD-RW.\n', '\n', 'Drive capabilities, per page 2A:\n', '\n', ' Does read CD-R media\n', ' Does write CD-R media\n', ' Does read CD-RW media\n', ' Does write CD-RW media\n', ' Does not read DVD-ROM media\n', ' Does not read DVD-R media\n', ' Does not write DVD-R media\n', ' Does not read DVD-RAM media\n', ' Does not write DVD-RAM media\n', ' Does support test writing\n', '\n', ' Does read Mode 2 Form 1 blocks\n', ' Does read Mode 2 Form 2 blocks\n', ' Does read digital audio blocks\n', ' Does restart non-streamed digital audio reads accurately\n', ' Does not support BURN-Proof (Sanyo)\n', ' Does read multi-session CDs\n', ' Does read fixed-packet CD media using Method 2\n', ' Does not 
read CD bar code\n', ' Does not read R-W subcode information\n', ' Does read raw P-W subcode data from lead in\n', ' Does return CD media catalog number\n', ' Does return CD ISRC information\n', ' Does not support C2 error pointers\n', ' Does not deliver composite A/V data\n', '\n', ' Does play audio CDs\n', ' Number of volume control levels: 256\n', ' Does support individual volume control setting for each channel\n', ' Does support independent mute setting for each channel\n', ' Does not support digital output on port 1\n', ' Does not support digital output on port 2\n', '\n', ' Loading mechanism type: tray\n', ' Does support ejection of CD via START/STOP command\n', ' Does not lock media on power up via prevent jumper\n', ' Does allow media to be locked in the drive via PREVENT/ALLOW command\n', ' Is not currently in a media-locked state\n', ' Does not support changing side of disk\n', ' Does not have load-empty-slot-in-changer feature\n', ' Does not support Individual Disk Present feature\n', '\n', ' Maximum read speed in kB/s: 5645\n', ' Current read speed in kB/s: 3528\n', ' Maximum write speed in kB/s: 1411\n', ' Current write speed in kB/s: 706\n', ' Buffer size in KB: 4096\n', ] (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) = CdWriter._parsePropertiesOutput(output) self.assertEqual("Removable CD-ROM", deviceType) self.assertEqual("SONY", deviceVendor) self.assertEqual(None, deviceId) self.assertEqual(4096.0*1024.0, deviceBufferSize) self.assertEqual(True, deviceSupportsMulti) self.assertEqual(True, deviceHasTray) self.assertEqual(True, deviceCanEject) def testParseOutput_016(self): """ Test _parsePropertiesOutput() for valid data, taken from a real example, including stderr and stdout mixed together, buffer size removed. 
""" output = ["scsidev: '0,0,0'\n", 'scsibus: 0 target: 0 lun: 0\n', 'Linux sg driver version: 3.1.22\n', 'Cdrecord 1.10 (i686-pc-linux-gnu) Copyright (C) 1995-2001 J\xf6rg Schilling\n', "Using libscg version 'schily-0.5'\n", 'Device type : Removable CD-ROM\n', 'Version : 0\n', 'Response Format: 1\n', "Vendor_info : 'SONY '\n", "Identifikation : 'CD-RW CRX140E '\n", "Revision : '1.0n'\n", 'Device seems to be: Generic mmc CD-RW.\n', '\n', 'Drive capabilities, per page 2A:\n', '\n', ' Does read CD-R media\n', ' Does write CD-R media\n', ' Does read CD-RW media\n', ' Does write CD-RW media\n', ' Does not read DVD-ROM media\n', ' Does not read DVD-R media\n', ' Does not write DVD-R media\n', ' Does not read DVD-RAM media\n', ' Does not write DVD-RAM media\n', ' Does support test writing\n', '\n', ' Does read Mode 2 Form 1 blocks\n', ' Does read Mode 2 Form 2 blocks\n', ' Does read digital audio blocks\n', ' Does restart non-streamed digital audio reads accurately\n', ' Does not support BURN-Proof (Sanyo)\n', ' Does read multi-session CDs\n', ' Does read fixed-packet CD media using Method 2\n', ' Does not read CD bar code\n', ' Does not read R-W subcode information\n', ' Does read raw P-W subcode data from lead in\n', ' Does return CD media catalog number\n', ' Does return CD ISRC information\n', ' Does not support C2 error pointers\n', ' Does not deliver composite A/V data\n', '\n', ' Does play audio CDs\n', ' Number of volume control levels: 256\n', ' Does support individual volume control setting for each channel\n', ' Does support independent mute setting for each channel\n', ' Does not support digital output on port 1\n', ' Does not support digital output on port 2\n', '\n', ' Loading mechanism type: tray\n', ' Does support ejection of CD via START/STOP command\n', ' Does not lock media on power up via prevent jumper\n', ' Does allow media to be locked in the drive via PREVENT/ALLOW command\n', ' Is not currently in a media-locked state\n', ' Does not support 
changing side of disk\n', ' Does not have load-empty-slot-in-changer feature\n', ' Does not support Individual Disk Present feature\n', '\n', ' Maximum read speed in kB/s: 5645\n', ' Current read speed in kB/s: 3528\n', ' Maximum write speed in kB/s: 1411\n', ' Current write speed in kB/s: 706\n', ] (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) = CdWriter._parsePropertiesOutput(output) self.assertEqual("Removable CD-ROM", deviceType) self.assertEqual("SONY", deviceVendor) self.assertEqual("CD-RW CRX140E", deviceId) self.assertEqual(None, deviceBufferSize) self.assertEqual(True, deviceSupportsMulti) self.assertEqual(True, deviceHasTray) self.assertEqual(True, deviceCanEject) def testParseOutput_017(self): """ Test _parsePropertiesOutput() for valid data, taken from a real example, including stderr and stdout mixed together, "supports multi" removed. """ output = ["scsidev: '0,0,0'\n", 'scsibus: 0 target: 0 lun: 0\n', 'Linux sg driver version: 3.1.22\n', 'Cdrecord 1.10 (i686-pc-linux-gnu) Copyright (C) 1995-2001 J\xf6rg Schilling\n', "Using libscg version 'schily-0.5'\n", 'Device type : Removable CD-ROM\n', 'Version : 0\n', 'Response Format: 1\n', "Vendor_info : 'SONY '\n", "Identifikation : 'CD-RW CRX140E '\n", "Revision : '1.0n'\n", 'Device seems to be: Generic mmc CD-RW.\n', '\n', 'Drive capabilities, per page 2A:\n', '\n', ' Does read CD-R media\n', ' Does write CD-R media\n', ' Does read CD-RW media\n', ' Does write CD-RW media\n', ' Does not read DVD-ROM media\n', ' Does not read DVD-R media\n', ' Does not write DVD-R media\n', ' Does not read DVD-RAM media\n', ' Does not write DVD-RAM media\n', ' Does support test writing\n', '\n', ' Does read Mode 2 Form 1 blocks\n', ' Does read Mode 2 Form 2 blocks\n', ' Does read digital audio blocks\n', ' Does restart non-streamed digital audio reads accurately\n', ' Does not support BURN-Proof (Sanyo)\n', ' Does read fixed-packet CD media using Method 2\n', ' 
Does not read CD bar code\n', ' Does not read R-W subcode information\n', ' Does read raw P-W subcode data from lead in\n', ' Does return CD media catalog number\n', ' Does return CD ISRC information\n', ' Does not support C2 error pointers\n', ' Does not deliver composite A/V data\n', '\n', ' Does play audio CDs\n', ' Number of volume control levels: 256\n', ' Does support individual volume control setting for each channel\n', ' Does support independent mute setting for each channel\n', ' Does not support digital output on port 1\n', ' Does not support digital output on port 2\n', '\n', ' Loading mechanism type: tray\n', ' Does support ejection of CD via START/STOP command\n', ' Does not lock media on power up via prevent jumper\n', ' Does allow media to be locked in the drive via PREVENT/ALLOW command\n', ' Is not currently in a media-locked state\n', ' Does not support changing side of disk\n', ' Does not have load-empty-slot-in-changer feature\n', ' Does not support Individual Disk Present feature\n', '\n', ' Maximum read speed in kB/s: 5645\n', ' Current read speed in kB/s: 3528\n', ' Maximum write speed in kB/s: 1411\n', ' Current write speed in kB/s: 706\n', ' Buffer size in KB: 4096\n', ] (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) = CdWriter._parsePropertiesOutput(output) self.assertEqual("Removable CD-ROM", deviceType) self.assertEqual("SONY", deviceVendor) self.assertEqual("CD-RW CRX140E", deviceId) self.assertEqual(4096.0*1024.0, deviceBufferSize) self.assertEqual(False, deviceSupportsMulti) self.assertEqual(True, deviceHasTray) self.assertEqual(True, deviceCanEject) def testParseOutput_018(self): """ Test _parsePropertiesOutput() for valid data, taken from a real example, including stderr and stdout mixed together, "has tray" removed. 
""" output = ["scsidev: '0,0,0'\n", 'scsibus: 0 target: 0 lun: 0\n', 'Linux sg driver version: 3.1.22\n', 'Cdrecord 1.10 (i686-pc-linux-gnu) Copyright (C) 1995-2001 J\xf6rg Schilling\n', "Using libscg version 'schily-0.5'\n", 'Device type : Removable CD-ROM\n', 'Version : 0\n', 'Response Format: 1\n', "Vendor_info : 'SONY '\n", "Identifikation : 'CD-RW CRX140E '\n", "Revision : '1.0n'\n", 'Device seems to be: Generic mmc CD-RW.\n', '\n', 'Drive capabilities, per page 2A:\n', '\n', ' Does read CD-R media\n', ' Does write CD-R media\n', ' Does read CD-RW media\n', ' Does write CD-RW media\n', ' Does not read DVD-ROM media\n', ' Does not read DVD-R media\n', ' Does not write DVD-R media\n', ' Does not read DVD-RAM media\n', ' Does not write DVD-RAM media\n', ' Does support test writing\n', '\n', ' Does read Mode 2 Form 1 blocks\n', ' Does read Mode 2 Form 2 blocks\n', ' Does read digital audio blocks\n', ' Does restart non-streamed digital audio reads accurately\n', ' Does not support BURN-Proof (Sanyo)\n', ' Does read multi-session CDs\n', ' Does read fixed-packet CD media using Method 2\n', ' Does not read CD bar code\n', ' Does not read R-W subcode information\n', ' Does read raw P-W subcode data from lead in\n', ' Does return CD media catalog number\n', ' Does return CD ISRC information\n', ' Does not support C2 error pointers\n', ' Does not deliver composite A/V data\n', '\n', ' Does play audio CDs\n', ' Number of volume control levels: 256\n', ' Does support individual volume control setting for each channel\n', ' Does support independent mute setting for each channel\n', ' Does not support digital output on port 1\n', ' Does not support digital output on port 2\n', '\n', ' Does support ejection of CD via START/STOP command\n', ' Does not lock media on power up via prevent jumper\n', ' Does allow media to be locked in the drive via PREVENT/ALLOW command\n', ' Is not currently in a media-locked state\n', ' Does not support changing side of disk\n', ' Does not 
have load-empty-slot-in-changer feature\n', ' Does not support Individual Disk Present feature\n', '\n', ' Maximum read speed in kB/s: 5645\n', ' Current read speed in kB/s: 3528\n', ' Maximum write speed in kB/s: 1411\n', ' Current write speed in kB/s: 706\n', ' Buffer size in KB: 4096\n', ] (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) = CdWriter._parsePropertiesOutput(output) self.assertEqual("Removable CD-ROM", deviceType) self.assertEqual("SONY", deviceVendor) self.assertEqual("CD-RW CRX140E", deviceId) self.assertEqual(4096.0*1024.0, deviceBufferSize) self.assertEqual(True, deviceSupportsMulti) self.assertEqual(False, deviceHasTray) self.assertEqual(True, deviceCanEject) def testParseOutput_019(self): """ Test _parsePropertiesOutput() for valid data, taken from a real example, including stderr and stdout mixed together, "can eject" removed. """ output = ["scsidev: '0,0,0'\n", 'scsibus: 0 target: 0 lun: 0\n', 'Linux sg driver version: 3.1.22\n', 'Cdrecord 1.10 (i686-pc-linux-gnu) Copyright (C) 1995-2001 J\xf6rg Schilling\n', "Using libscg version 'schily-0.5'\n", 'Device type : Removable CD-ROM\n', 'Version : 0\n', 'Response Format: 1\n', "Vendor_info : 'SONY '\n", "Identifikation : 'CD-RW CRX140E '\n", "Revision : '1.0n'\n", 'Device seems to be: Generic mmc CD-RW.\n', '\n', 'Drive capabilities, per page 2A:\n', '\n', ' Does read CD-R media\n', ' Does write CD-R media\n', ' Does read CD-RW media\n', ' Does write CD-RW media\n', ' Does not read DVD-ROM media\n', ' Does not read DVD-R media\n', ' Does not write DVD-R media\n', ' Does not read DVD-RAM media\n', ' Does not write DVD-RAM media\n', ' Does support test writing\n', '\n', ' Does read Mode 2 Form 1 blocks\n', ' Does read Mode 2 Form 2 blocks\n', ' Does read digital audio blocks\n', ' Does restart non-streamed digital audio reads accurately\n', ' Does not support BURN-Proof (Sanyo)\n', ' Does read multi-session CDs\n', ' Does read fixed-packet 
CD media using Method 2\n', ' Does not read CD bar code\n', ' Does not read R-W subcode information\n', ' Does read raw P-W subcode data from lead in\n', ' Does return CD media catalog number\n', ' Does return CD ISRC information\n', ' Does not support C2 error pointers\n', ' Does not deliver composite A/V data\n', '\n', ' Does play audio CDs\n', ' Number of volume control levels: 256\n', ' Does support individual volume control setting for each channel\n', ' Does support independent mute setting for each channel\n', ' Does not support digital output on port 1\n', ' Does not support digital output on port 2\n', '\n', ' Loading mechanism type: tray\n', ' Does not lock media on power up via prevent jumper\n', ' Does allow media to be locked in the drive via PREVENT/ALLOW command\n', ' Is not currently in a media-locked state\n', ' Does not support changing side of disk\n', ' Does not have load-empty-slot-in-changer feature\n', ' Does not support Individual Disk Present feature\n', '\n', ' Maximum read speed in kB/s: 5645\n', ' Current read speed in kB/s: 3528\n', ' Maximum write speed in kB/s: 1411\n', ' Current write speed in kB/s: 706\n', ' Buffer size in KB: 4096\n', ] (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) = CdWriter._parsePropertiesOutput(output) self.assertEqual("Removable CD-ROM", deviceType) self.assertEqual("SONY", deviceVendor) self.assertEqual("CD-RW CRX140E", deviceId) self.assertEqual(4096.0*1024.0, deviceBufferSize) self.assertEqual(True, deviceSupportsMulti) self.assertEqual(True, deviceHasTray) self.assertEqual(False, deviceCanEject) def testParseOutput_020(self): """ Test _parsePropertiesOutput() for nonsensical data, just a bunch of empty lines. 
""" output = [ '\n', '\n', '\n', '\n', '\n', '\n', '\n', '\n', '\n', '\n', '\n', '\n', '\n', '\n', '\n', ] (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) = CdWriter._parsePropertiesOutput(output) self.assertEqual(None, deviceType) self.assertEqual(None, deviceVendor) self.assertEqual(None, deviceId) self.assertEqual(None, deviceBufferSize) self.assertEqual(False, deviceSupportsMulti) self.assertEqual(False, deviceHasTray) self.assertEqual(False, deviceCanEject) def testParseOutput_021(self): """ Test _parsePropertiesOutput() for nonsensical data, just an empty list. """ output = [ ] (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) = CdWriter._parsePropertiesOutput(output) self.assertEqual(None, deviceType) self.assertEqual(None, deviceVendor) self.assertEqual(None, deviceId) self.assertEqual(None, deviceBufferSize) self.assertEqual(False, deviceSupportsMulti) self.assertEqual(False, deviceHasTray) self.assertEqual(False, deviceCanEject) ####################################################################### # Suite definition ####################################################################### def suite(): """Returns a suite containing all the test cases in this module.""" tests = [ ] tests.append(unittest.makeSuite(TestMediaDefinition, 'test')) tests.append(unittest.makeSuite(TestMediaCapacity, 'test')) tests.append(unittest.makeSuite(TestCdWriter, 'test')) return unittest.TestSuite(tests) CedarBackup3-3.1.6/testcase/peertests.py0000664000175000017500000017113212560007327021704 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2007,2010,2015 Kenneth J. 
Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Tests peer functionality. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup3/peer.py. Code Coverage ============= This module contains individual tests for most of the public functions and classes implemented in peer.py, including the C{LocalPeer} and C{RemotePeer} classes. Unfortunately, some of the code can't be tested. In particular, the stage code allows the caller to change ownership on files. Generally, this can only be done by root, and most people won't be running these tests as root. As such, we can't test this functionality. There are also some other pieces of functionality that can only be tested in certain environments (see below). Naming Conventions ================== I prefer to avoid large unit tests which validate more than one piece of functionality, and I prefer to avoid using overly descriptive (read: long) test names, as well. Instead, I use lots of very small tests that each validate one specific thing. 
These small tests are then named with an index number, yielding something like C{testAddDir_001} or C{testValidate_010}. Each method has a docstring describing what it's supposed to accomplish. I feel that this makes it easier to judge how important a given failure is, and also makes it somewhat easier to diagnose and fix individual problems. Full vs. Reduced Tests ====================== Some Cedar Backup regression tests require a specialized environment in order to run successfully. This environment won't necessarily be available on every build system out there (for instance, on a Debian autobuilder). Because of this, the default behavior is to run a "reduced feature set" test suite that has no surprising system, kernel or network requirements. If you want to run all of the tests, set PEERTESTS_FULL to "Y" in the environment. In this module, network-related testing is what causes us our biggest problems. In order to test the RemotePeer, we need a "remote" host that we can rcp to and from. We want to fall back on using localhost and the current user, but that might not be safe or appropriate. As such, we'll only run these tests if PEERTESTS_FULL is set to "Y" in the environment. @author Kenneth J. 
Pronovici """ ######################################################################## # Import modules and do runtime validations ######################################################################## # Import standard modules import os import stat import unittest import tempfile from CedarBackup3.testutil import findResources, buildPath, removedir, extractTar from CedarBackup3.testutil import getMaskAsMode, getLogin, runningAsRoot, failUnlessAssignRaises from CedarBackup3.peer import LocalPeer, RemotePeer from CedarBackup3.peer import DEF_RCP_COMMAND, DEF_RSH_COMMAND from CedarBackup3.peer import DEF_COLLECT_INDICATOR, DEF_STAGE_INDICATOR ####################################################################### # Module-wide configuration and constants ####################################################################### DATA_DIRS = [ "./data", "./testcase/data", ] RESOURCES = [ "tree1.tar.gz", "tree2.tar.gz", "tree9.tar.gz", ] REMOTE_HOST = "localhost" # Always use login@localhost as our "remote" host NONEXISTENT_FILE = "bogus" # This file name should never exist NONEXISTENT_HOST = "hostname.invalid" # RFC 2606 reserves the ".invalid" TLD for "obviously invalid" names NONEXISTENT_USER = "unittestuser" # This user name should never exist on localhost NONEXISTENT_CMD = "/bogus/~~~ZZZZ/bad/not/there" # This command should never exist in the filesystem ####################################################################### # Utility functions ####################################################################### def runAllTests(): """Returns true/false depending on whether the full test suite should be run.""" if "PEERTESTS_FULL" in os.environ: return os.environ["PEERTESTS_FULL"] == "Y" else: return False ####################################################################### # Test Case Classes ####################################################################### ###################### # TestLocalPeer class ###################### class 
TestLocalPeer(unittest.TestCase): """Tests for the LocalPeer class.""" ################ # Setup methods ################ def setUp(self): try: self.tmpdir = tempfile.mkdtemp() self.resources = findResources(RESOURCES, DATA_DIRS) except Exception as e: self.fail(e) def tearDown(self): try: removedir(self.tmpdir) except: pass ################## # Utility methods ################## def extractTar(self, tarname): """Extracts a tarfile with a particular name.""" extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname]) def buildPath(self, components): """Builds a complete search path from a list of components.""" components.insert(0, self.tmpdir) return buildPath(components) def getFileMode(self, components): """Calls buildPath on components and then returns file mode for the file.""" return stat.S_IMODE(os.stat(self.buildPath(components)).st_mode) def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ########################### # Test basic functionality ########################### def testBasic_001(self): """ Make sure exception is thrown for non-absolute collect directory. """ name = "peer1" collectDir = "whatever/something/else/not/absolute" self.assertRaises(ValueError, LocalPeer, name, collectDir) def testBasic_002(self): """ Make sure attributes are set properly for valid constructor input. """ name = "peer1" collectDir = "/absolute/path/name" ignoreFailureMode = "all" peer = LocalPeer(name, collectDir, ignoreFailureMode) self.assertEqual(name, peer.name) self.assertEqual(collectDir, peer.collectDir) self.assertEqual(ignoreFailureMode, peer.ignoreFailureMode) def testBasic_003(self): """ Make sure attributes are set properly for valid constructor input, with spaces in the collect directory path. 
""" name = "peer1" collectDir = "/ absolute / path/ name " peer = LocalPeer(name, collectDir) self.assertEqual(name, peer.name) self.assertEqual(collectDir, peer.collectDir) def testBasic_004(self): """ Make sure assignment works for all valid failure modes. """ name = "peer1" collectDir = "/absolute/path/name" ignoreFailureMode = "all" peer = LocalPeer(name, collectDir, ignoreFailureMode) self.assertEqual("all", peer.ignoreFailureMode) peer.ignoreFailureMode = "none" self.assertEqual("none", peer.ignoreFailureMode) peer.ignoreFailureMode = "daily" self.assertEqual("daily", peer.ignoreFailureMode) peer.ignoreFailureMode = "weekly" self.assertEqual("weekly", peer.ignoreFailureMode) self.failUnlessAssignRaises(ValueError, peer, "ignoreFailureMode", "bogus") ############################### # Test checkCollectIndicator() ############################### def testCheckCollectIndicator_001(self): """ Attempt to check collect indicator with non-existent collect directory. """ name = "peer1" collectDir = self.buildPath([NONEXISTENT_FILE, ]) self.assertTrue(not os.path.exists(collectDir)) peer = LocalPeer(name, collectDir) result = peer.checkCollectIndicator() self.assertEqual(False, result) def testCheckCollectIndicator_002(self): """ Attempt to check collect indicator with non-readable collect directory. """ name = "peer1" collectDir = self.buildPath(["collect", ]) os.mkdir(collectDir) self.assertTrue(os.path.exists(collectDir)) os.chmod(collectDir, 0o200) # user can't read his own directory peer = LocalPeer(name, collectDir) result = peer.checkCollectIndicator() self.assertEqual(False, result) os.chmod(collectDir, 0o777) # so we can remove it safely def testCheckCollectIndicator_003(self): """ Attempt to check collect indicator collect indicator file that does not exist. 
""" name = "peer1" collectDir = self.buildPath(["collect", ]) collectIndicator = self.buildPath(["collect", DEF_COLLECT_INDICATOR, ]) os.mkdir(collectDir) self.assertTrue(os.path.exists(collectDir)) self.assertTrue(not os.path.exists(collectIndicator)) peer = LocalPeer(name, collectDir) result = peer.checkCollectIndicator() self.assertEqual(False, result) def testCheckCollectIndicator_004(self): """ Attempt to check collect indicator collect indicator file that does not exist, custom name. """ name = "peer1" collectDir = self.buildPath(["collect", ]) collectIndicator = self.buildPath(["collect", NONEXISTENT_FILE, ]) os.mkdir(collectDir) self.assertTrue(os.path.exists(collectDir)) self.assertTrue(not os.path.exists(collectIndicator)) peer = LocalPeer(name, collectDir) result = peer.checkCollectIndicator(collectIndicator=NONEXISTENT_FILE) self.assertEqual(False, result) def testCheckCollectIndicator_005(self): """ Attempt to check collect indicator collect indicator file that does exist. """ name = "peer1" collectDir = self.buildPath(["collect", ]) collectIndicator = self.buildPath(["collect", DEF_COLLECT_INDICATOR, ]) os.mkdir(collectDir) with open(collectIndicator, "w") as f: f.write("") # touch the file self.assertTrue(os.path.exists(collectDir)) self.assertTrue(os.path.exists(collectIndicator)) peer = LocalPeer(name, collectDir) result = peer.checkCollectIndicator() self.assertEqual(True, result) def testCheckCollectIndicator_006(self): """ Attempt to check collect indicator collect indicator file that does exist, custom name. 
""" name = "peer1" collectDir = self.buildPath(["collect", ]) collectIndicator = self.buildPath(["collect", "different", ]) os.mkdir(collectDir) with open(collectIndicator, "w") as f: f.write("") # touch the file self.assertTrue(os.path.exists(collectDir)) self.assertTrue(os.path.exists(collectIndicator)) peer = LocalPeer(name, collectDir) result = peer.checkCollectIndicator(collectIndicator="different") self.assertEqual(True, result) def testCheckCollectIndicator_007(self): """ Attempt to check collect indicator collect indicator file that does exist, with spaces in the collect directory path. """ name = "peer1" collectDir = self.buildPath(["collect directory here", ]) collectIndicator = self.buildPath(["collect directory here", DEF_COLLECT_INDICATOR, ]) os.mkdir(collectDir) with open(collectIndicator, "w") as f: f.write("") # touch the file self.assertTrue(os.path.exists(collectDir)) self.assertTrue(os.path.exists(collectIndicator)) peer = LocalPeer(name, collectDir) result = peer.checkCollectIndicator() self.assertEqual(True, result) def testCheckCollectIndicator_008(self): """ Attempt to check collect indicator collect indicator file that does exist, custom name, with spaces in the collect directory path and collect indicator file name. """ name = "peer1" collectDir = self.buildPath([" collect dir ", ]) collectIndicator = self.buildPath([" collect dir ", "different, file", ]) os.mkdir(collectDir) with open(collectIndicator, "w") as f: f.write("") # touch the file self.assertTrue(os.path.exists(collectDir)) self.assertTrue(os.path.exists(collectIndicator)) peer = LocalPeer(name, collectDir) result = peer.checkCollectIndicator(collectIndicator="different, file") self.assertEqual(True, result) ############################# # Test writeStageIndicator() ############################# def testWriteStageIndicator_001(self): """ Attempt to write stage indicator with non-existent collect directory. 
""" name = "peer1" collectDir = self.buildPath([NONEXISTENT_FILE, ]) self.assertTrue(not os.path.exists(collectDir)) peer = LocalPeer(name, collectDir) self.assertRaises(ValueError, peer.writeStageIndicator) def testWriteStageIndicator_002(self): """ Attempt to write stage indicator with non-writable collect directory. """ if not runningAsRoot(): # root doesn't get this error name = "peer1" collectDir = self.buildPath(["collect", ]) os.mkdir(collectDir) self.assertTrue(os.path.exists(collectDir)) os.chmod(collectDir, 0o500) # read-only for user peer = LocalPeer(name, collectDir) self.assertRaises((IOError, OSError), peer.writeStageIndicator) os.chmod(collectDir, 0o777) # so we can remove it safely def testWriteStageIndicator_003(self): """ Attempt to write stage indicator with non-writable collect directory, custom name. """ if not runningAsRoot(): # root doesn't get this error name = "peer1" collectDir = self.buildPath(["collect", ]) os.mkdir(collectDir) self.assertTrue(os.path.exists(collectDir)) os.chmod(collectDir, 0o500) # read-only for user peer = LocalPeer(name, collectDir) self.assertRaises((IOError, OSError), peer.writeStageIndicator, stageIndicator="something") os.chmod(collectDir, 0o777) # so we can remove it safely def testWriteStageIndicator_004(self): """ Attempt to write stage indicator in a valid directory. """ name = "peer1" collectDir = self.buildPath(["collect", ]) stageIndicator = self.buildPath(["collect", DEF_STAGE_INDICATOR, ]) os.mkdir(collectDir) self.assertTrue(os.path.exists(collectDir)) peer = LocalPeer(name, collectDir) peer.writeStageIndicator() self.assertTrue(os.path.exists(stageIndicator)) def testWriteStageIndicator_005(self): """ Attempt to write stage indicator in a valid directory, custom name. 
""" name = "peer1" collectDir = self.buildPath(["collect", ]) stageIndicator = self.buildPath(["collect", "whatever", ]) os.mkdir(collectDir) self.assertTrue(os.path.exists(collectDir)) peer = LocalPeer(name, collectDir) peer.writeStageIndicator(stageIndicator="whatever") self.assertTrue(os.path.exists(stageIndicator)) def testWriteStageIndicator_006(self): """ Attempt to write stage indicator in a valid directory, with spaces in the directory name. """ name = "peer1" collectDir = self.buildPath(["collect from this directory", ]) stageIndicator = self.buildPath(["collect from this directory", DEF_STAGE_INDICATOR, ]) os.mkdir(collectDir) self.assertTrue(os.path.exists(collectDir)) peer = LocalPeer(name, collectDir) peer.writeStageIndicator() self.assertTrue(os.path.exists(stageIndicator)) def testWriteStageIndicator_007(self): """ Attempt to write stage indicator in a valid directory, custom name, with spaces in the directory name and the file name. """ name = "peer1" collectDir = self.buildPath(["collect ME", ]) stageIndicator = self.buildPath(["collect ME", " whatever-it-takes you", ]) os.mkdir(collectDir) self.assertTrue(os.path.exists(collectDir)) peer = LocalPeer(name, collectDir) peer.writeStageIndicator(stageIndicator=" whatever-it-takes you") self.assertTrue(os.path.exists(stageIndicator)) ################### # Test stagePeer() ################### def testStagePeer_001(self): """ Attempt to stage files with non-existent collect directory. """ name = "peer1" collectDir = self.buildPath([NONEXISTENT_FILE, ]) targetDir = self.buildPath(["target", ]) os.mkdir(targetDir) self.assertTrue(not os.path.exists(collectDir)) self.assertTrue(os.path.exists(targetDir)) peer = LocalPeer(name, collectDir) self.assertRaises(ValueError, peer.stagePeer, targetDir=targetDir) def testStagePeer_002(self): """ Attempt to stage files with non-readable collect directory. 
""" name = "peer1" collectDir = self.buildPath(["collect", ]) targetDir = self.buildPath(["target", ]) os.mkdir(collectDir) os.mkdir(targetDir) self.assertTrue(os.path.exists(collectDir)) self.assertTrue(os.path.exists(targetDir)) os.chmod(collectDir, 0o200) # user can't read his own directory peer = LocalPeer(name, collectDir) self.assertRaises((IOError, OSError), peer.stagePeer, targetDir=targetDir) os.chmod(collectDir, 0o777) # so we can remove it safely def testStagePeer_003(self): """ Attempt to stage files with non-absolute target directory. """ name = "peer1" collectDir = self.buildPath(["collect", ]) targetDir = "this/is/not/absolute" os.mkdir(collectDir) self.assertTrue(os.path.exists(collectDir)) peer = LocalPeer(name, collectDir) self.assertRaises(ValueError, peer.stagePeer, targetDir=targetDir) def testStagePeer_004(self): """ Attempt to stage files with non-existent target directory. """ name = "peer1" collectDir = self.buildPath(["collect", ]) targetDir = self.buildPath(["target", ]) os.mkdir(collectDir) self.assertTrue(os.path.exists(collectDir)) self.assertTrue(not os.path.exists(targetDir)) peer = LocalPeer(name, collectDir) self.assertRaises(ValueError, peer.stagePeer, targetDir=targetDir) def testStagePeer_005(self): """ Attempt to stage files with non-writable target directory. """ if not runningAsRoot(): # root doesn't get this error self.extractTar("tree1") name = "peer1" collectDir = self.buildPath(["tree1"]) targetDir = self.buildPath(["target", ]) os.mkdir(targetDir) self.assertTrue(os.path.exists(collectDir)) self.assertTrue(os.path.exists(targetDir)) os.chmod(targetDir, 0o500) # read-only for user peer = LocalPeer(name, collectDir) self.assertRaises((IOError, OSError), peer.stagePeer, targetDir=targetDir) os.chmod(targetDir, 0o777) # so we can remove it safely self.assertEqual(0, len(os.listdir(targetDir))) def testStagePeer_006(self): """ Attempt to stage files with empty collect directory. 
@note: This test assumes that scp returns an error if the directory is empty. """ self.extractTar("tree2") name = "peer1" collectDir = self.buildPath(["tree2", "dir001", ]) targetDir = self.buildPath(["target", ]) os.mkdir(targetDir) self.assertTrue(os.path.exists(collectDir)) self.assertTrue(os.path.exists(targetDir)) peer = LocalPeer(name, collectDir) self.assertRaises(IOError, peer.stagePeer, targetDir=targetDir) stagedFiles = os.listdir(targetDir) self.assertEqual([], stagedFiles) def testStagePeer_007(self): """ Attempt to stage files with empty collect directory, where the target directory name contains spaces. """ self.extractTar("tree2") name = "peer1" collectDir = self.buildPath(["tree2", "dir001", ]) targetDir = self.buildPath([" target directory ", ]) os.mkdir(targetDir) self.assertTrue(os.path.exists(collectDir)) self.assertTrue(os.path.exists(targetDir)) peer = LocalPeer(name, collectDir) self.assertRaises(IOError, peer.stagePeer, targetDir=targetDir) stagedFiles = os.listdir(targetDir) self.assertEqual([], stagedFiles) def testStagePeer_008(self): """ Attempt to stage files with non-empty collect directory. 
""" self.extractTar("tree1") name = "peer1" collectDir = self.buildPath(["tree1", ]) targetDir = self.buildPath(["target", ]) os.mkdir(targetDir) self.assertTrue(os.path.exists(collectDir)) self.assertTrue(os.path.exists(targetDir)) self.assertEqual(0, len(os.listdir(targetDir))) peer = LocalPeer(name, collectDir) count = peer.stagePeer(targetDir=targetDir) self.assertEqual(7, count) stagedFiles = os.listdir(targetDir) self.assertEqual(7, len(stagedFiles)) self.assertTrue("file001" in stagedFiles) self.assertTrue("file002" in stagedFiles) self.assertTrue("file003" in stagedFiles) self.assertTrue("file004" in stagedFiles) self.assertTrue("file005" in stagedFiles) self.assertTrue("file006" in stagedFiles) self.assertTrue("file007" in stagedFiles) def testStagePeer_009(self): """ Attempt to stage files with non-empty collect directory, where the target directory name contains spaces. """ self.extractTar("tree1") name = "peer1" collectDir = self.buildPath(["tree1", ]) targetDir = self.buildPath(["target directory place", ]) os.mkdir(targetDir) self.assertTrue(os.path.exists(collectDir)) self.assertTrue(os.path.exists(targetDir)) self.assertEqual(0, len(os.listdir(targetDir))) peer = LocalPeer(name, collectDir) count = peer.stagePeer(targetDir=targetDir) self.assertEqual(7, count) stagedFiles = os.listdir(targetDir) self.assertEqual(7, len(stagedFiles)) self.assertTrue("file001" in stagedFiles) self.assertTrue("file002" in stagedFiles) self.assertTrue("file003" in stagedFiles) self.assertTrue("file004" in stagedFiles) self.assertTrue("file005" in stagedFiles) self.assertTrue("file006" in stagedFiles) self.assertTrue("file007" in stagedFiles) def testStagePeer_010(self): """ Attempt to stage files with non-empty collect directory containing links and directories. 
""" self.extractTar("tree9") name = "peer1" collectDir = self.buildPath(["tree9", ]) targetDir = self.buildPath(["target", ]) os.mkdir(targetDir) self.assertTrue(os.path.exists(collectDir)) self.assertTrue(os.path.exists(targetDir)) self.assertEqual(0, len(os.listdir(targetDir))) peer = LocalPeer(name, collectDir) self.assertRaises(ValueError, peer.stagePeer, targetDir=targetDir) def testStagePeer_011(self): """ Attempt to stage files with non-empty collect directory and attempt to set valid permissions. """ self.extractTar("tree1") name = "peer1" collectDir = self.buildPath(["tree1", ]) targetDir = self.buildPath(["target", ]) os.mkdir(targetDir) self.assertTrue(os.path.exists(collectDir)) self.assertTrue(os.path.exists(targetDir)) self.assertEqual(0, len(os.listdir(targetDir))) peer = LocalPeer(name, collectDir) if getMaskAsMode() == 0o400: permissions = 0o642 # arbitrary, but different than umask would give else: permissions = 0o400 # arbitrary count = peer.stagePeer(targetDir=targetDir, permissions=permissions) self.assertEqual(7, count) stagedFiles = os.listdir(targetDir) self.assertEqual(7, len(stagedFiles)) self.assertTrue("file001" in stagedFiles) self.assertTrue("file002" in stagedFiles) self.assertTrue("file003" in stagedFiles) self.assertTrue("file004" in stagedFiles) self.assertTrue("file005" in stagedFiles) self.assertTrue("file006" in stagedFiles) self.assertTrue("file007" in stagedFiles) self.assertEqual(permissions, self.getFileMode(["target", "file001", ])) self.assertEqual(permissions, self.getFileMode(["target", "file002", ])) self.assertEqual(permissions, self.getFileMode(["target", "file003", ])) self.assertEqual(permissions, self.getFileMode(["target", "file004", ])) self.assertEqual(permissions, self.getFileMode(["target", "file005", ])) self.assertEqual(permissions, self.getFileMode(["target", "file006", ])) self.assertEqual(permissions, self.getFileMode(["target", "file007", ])) ###################### # TestRemotePeer class 
###################### class TestRemotePeer(unittest.TestCase): """Tests for the RemotePeer class.""" ################ # Setup methods ################ def setUp(self): try: self.tmpdir = tempfile.mkdtemp() self.resources = findResources(RESOURCES, DATA_DIRS) except Exception as e: self.fail(e) def tearDown(self): try: removedir(self.tmpdir) except: pass ################## # Utility methods ################## def extractTar(self, tarname): """Extracts a tarfile with a particular name.""" extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname]) def buildPath(self, components): """Builds a complete search path from a list of components.""" components.insert(0, self.tmpdir) return buildPath(components) def getFileMode(self, components): """Calls buildPath on components and then returns file mode for the file.""" return stat.S_IMODE(os.stat(self.buildPath(components)).st_mode) def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Tests basic functionality ############################ def testBasic_001(self): """ Make sure exception is thrown for non-absolute collect or working directory. """ name = REMOTE_HOST collectDir = "whatever/something/else/not/absolute" workingDir = "/tmp" remoteUser = getLogin() self.assertRaises(ValueError, RemotePeer, name, collectDir, workingDir, remoteUser) name = REMOTE_HOST collectDir = "/whatever/something/else/not/absolute" workingDir = "tmp" remoteUser = getLogin() self.assertRaises(ValueError, RemotePeer, name, collectDir, workingDir, remoteUser) def testBasic_002(self): """ Make sure attributes are set properly for valid constructor input. 
""" name = REMOTE_HOST collectDir = "/absolute/path/name" workingDir = "/tmp" remoteUser = getLogin() peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.assertEqual(name, peer.name) self.assertEqual(collectDir, peer.collectDir) self.assertEqual(workingDir, peer.workingDir) self.assertEqual(remoteUser, peer.remoteUser) self.assertEqual(None, peer.localUser) self.assertEqual(None, peer.rcpCommand) self.assertEqual(None, peer.rshCommand) self.assertEqual(None, peer.cbackCommand) self.assertEqual(DEF_RCP_COMMAND, peer._rcpCommandList) self.assertEqual(DEF_RSH_COMMAND, peer._rshCommandList) self.assertEqual(None, peer.ignoreFailureMode) def testBasic_003(self): """ Make sure attributes are set properly for valid constructor input, where the collect directory contains spaces. """ name = REMOTE_HOST collectDir = "/absolute/path/to/ a large directory" workingDir = "/tmp" remoteUser = getLogin() peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.assertEqual(name, peer.name) self.assertEqual(collectDir, peer.collectDir) self.assertEqual(workingDir, peer.workingDir) self.assertEqual(remoteUser, peer.remoteUser) self.assertEqual(None, peer.localUser) self.assertEqual(None, peer.rcpCommand) self.assertEqual(None, peer.rshCommand) self.assertEqual(None, peer.cbackCommand) self.assertEqual(DEF_RCP_COMMAND, peer._rcpCommandList) self.assertEqual(DEF_RSH_COMMAND, peer._rshCommandList) def testBasic_004(self): """ Make sure attributes are set properly for valid constructor input, custom rcp command. 
""" name = REMOTE_HOST collectDir = "/absolute/path/name" workingDir = "/tmp" remoteUser = getLogin() rcpCommand = "rcp -one --two three \"four five\" 'six seven' eight" peer = RemotePeer(name, collectDir, workingDir, remoteUser, rcpCommand) self.assertEqual(name, peer.name) self.assertEqual(collectDir, peer.collectDir) self.assertEqual(workingDir, peer.workingDir) self.assertEqual(remoteUser, peer.remoteUser) self.assertEqual(None, peer.localUser) self.assertEqual(rcpCommand, peer.rcpCommand) self.assertEqual(None, peer.rshCommand) self.assertEqual(None, peer.cbackCommand) self.assertEqual(["rcp", "-one", "--two", "three", "four five", "'six", "seven'", "eight", ], peer._rcpCommandList) self.assertEqual(DEF_RSH_COMMAND, peer._rshCommandList) def testBasic_005(self): """ Make sure attributes are set properly for valid constructor input, custom local user command. """ name = REMOTE_HOST collectDir = "/absolute/path/to/ a large directory" workingDir = "/tmp" remoteUser = getLogin() localUser = "pronovic" peer = RemotePeer(name, collectDir, workingDir, remoteUser, localUser=localUser) self.assertEqual(name, peer.name) self.assertEqual(collectDir, peer.collectDir) self.assertEqual(workingDir, peer.workingDir) self.assertEqual(remoteUser, peer.remoteUser) self.assertEqual(localUser, peer.localUser) self.assertEqual(None, peer.rcpCommand) self.assertEqual(DEF_RCP_COMMAND, peer._rcpCommandList) self.assertEqual(DEF_RSH_COMMAND, peer._rshCommandList) def testBasic_006(self): """ Make sure attributes are set properly for valid constructor input, custom rsh command. 
""" name = REMOTE_HOST remoteUser = getLogin() rshCommand = "rsh --whatever -something \"a b\" else" peer = RemotePeer(name, remoteUser=remoteUser, rshCommand=rshCommand) self.assertEqual(name, peer.name) self.assertEqual(None, peer.collectDir) self.assertEqual(None, peer.workingDir) self.assertEqual(remoteUser, peer.remoteUser) self.assertEqual(None, peer.localUser) self.assertEqual(None, peer.rcpCommand) self.assertEqual(rshCommand, peer.rshCommand) self.assertEqual(None, peer.cbackCommand) self.assertEqual(DEF_RCP_COMMAND, peer._rcpCommandList) self.assertEqual(DEF_RCP_COMMAND, peer._rcpCommandList) self.assertEqual(["rsh", "--whatever", "-something", "a b", "else", ], peer._rshCommandList) def testBasic_007(self): """ Make sure attributes are set properly for valid constructor input, custom cback command. """ name = REMOTE_HOST remoteUser = getLogin() cbackCommand = "cback --config=whatever --logfile=whatever --mode=064" peer = RemotePeer(name, remoteUser=remoteUser, cbackCommand=cbackCommand) self.assertEqual(name, peer.name) self.assertEqual(None, peer.collectDir) self.assertEqual(None, peer.workingDir) self.assertEqual(remoteUser, peer.remoteUser) self.assertEqual(None, peer.localUser) self.assertEqual(None, peer.rcpCommand) self.assertEqual(None, peer.rshCommand) self.assertEqual(cbackCommand, peer.cbackCommand) def testBasic_008(self): """ Make sure assignment works for all valid failure modes. 
""" peer = RemotePeer(name="name", remoteUser="user", ignoreFailureMode="all") self.assertEqual("all", peer.ignoreFailureMode) peer.ignoreFailureMode = "none" self.assertEqual("none", peer.ignoreFailureMode) peer.ignoreFailureMode = "daily" self.assertEqual("daily", peer.ignoreFailureMode) peer.ignoreFailureMode = "weekly" self.assertEqual("weekly", peer.ignoreFailureMode) self.failUnlessAssignRaises(ValueError, peer, "ignoreFailureMode", "bogus") ############################### # Test checkCollectIndicator() ############################### def testCheckCollectIndicator_001(self): """ Attempt to check collect indicator with invalid hostname. """ name = NONEXISTENT_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" os.mkdir(collectDir) self.assertTrue(os.path.exists(collectDir)) remoteUser = getLogin() peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator() self.assertEqual(False, result) def testCheckCollectIndicator_002(self): """ Attempt to check collect indicator with invalid remote user. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" remoteUser = NONEXISTENT_USER os.mkdir(collectDir) self.assertTrue(os.path.exists(collectDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator() self.assertEqual(False, result) def testCheckCollectIndicator_003(self): """ Attempt to check collect indicator with invalid rcp command. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" remoteUser = getLogin() rcpCommand = NONEXISTENT_CMD os.mkdir(collectDir) self.assertTrue(os.path.exists(collectDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser, rcpCommand) result = peer.checkCollectIndicator() self.assertEqual(False, result) def testCheckCollectIndicator_004(self): """ Attempt to check collect indicator with non-existent collect directory. 
""" name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" remoteUser = getLogin() self.assertTrue(not os.path.exists(collectDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator() self.assertEqual(False, result) def testCheckCollectIndicator_005(self): """ Attempt to check collect indicator with non-readable collect directory. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" remoteUser = getLogin() os.mkdir(collectDir) self.assertTrue(os.path.exists(collectDir)) os.chmod(collectDir, 0o200) # user can't read his own directory peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator() self.assertEqual(False, result) os.chmod(collectDir, 0o777) # so we can remove it safely def testCheckCollectIndicator_006(self): """ Attempt to check collect indicator collect indicator file that does not exist. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" collectIndicator = self.buildPath(["collect", DEF_COLLECT_INDICATOR, ]) remoteUser = getLogin() os.mkdir(collectDir) self.assertTrue(os.path.exists(collectDir)) self.assertTrue(not os.path.exists(collectIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator() self.assertEqual(False, result) def testCheckCollectIndicator_007(self): """ Attempt to check collect indicator collect indicator file that does not exist, custom name. 
""" name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" collectIndicator = self.buildPath(["collect", NONEXISTENT_FILE, ]) remoteUser = getLogin() os.mkdir(collectDir) self.assertTrue(os.path.exists(collectDir)) self.assertTrue(not os.path.exists(collectIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator() self.assertEqual(False, result) def testCheckCollectIndicator_008(self): """ Attempt to check collect indicator collect indicator file that does not exist, where the collect directory contains spaces. """ name = REMOTE_HOST collectDir = self.buildPath(["collect directory path", ]) workingDir = "/tmp" collectIndicator = self.buildPath(["collect directory path", DEF_COLLECT_INDICATOR, ]) remoteUser = getLogin() os.mkdir(collectDir) self.assertTrue(os.path.exists(collectDir)) self.assertTrue(not os.path.exists(collectIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator() self.assertEqual(False, result) def testCheckCollectIndicator_009(self): """ Attempt to check collect indicator collect indicator file that does not exist, custom name, where the collect directory contains spaces. """ name = REMOTE_HOST collectDir = self.buildPath([" you collect here ", ]) workingDir = "/tmp" collectIndicator = self.buildPath([" you collect here ", NONEXISTENT_FILE, ]) remoteUser = getLogin() os.mkdir(collectDir) self.assertTrue(os.path.exists(collectDir)) self.assertTrue(not os.path.exists(collectIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator() self.assertEqual(False, result) def testCheckCollectIndicator_010(self): """ Attempt to check collect indicator collect indicator file that does exist. 
""" name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" collectIndicator = self.buildPath(["collect", DEF_COLLECT_INDICATOR, ]) remoteUser = getLogin() os.mkdir(collectDir) self.assertTrue(os.path.exists(collectDir)) with open(collectIndicator, "w") as f: f.write("") # touch the file self.assertTrue(os.path.exists(collectIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator() self.assertEqual(True, result) def testCheckCollectIndicator_011(self): """ Attempt to check collect indicator collect indicator file that does exist, custom name. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" collectIndicator = self.buildPath(["collect", "whatever", ]) remoteUser = getLogin() os.mkdir(collectDir) self.assertTrue(os.path.exists(collectDir)) with open(collectIndicator, "w") as f: f.write("") # touch the file self.assertTrue(os.path.exists(collectIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator(collectIndicator="whatever") self.assertEqual(True, result) def testCheckCollectIndicator_012(self): """ Attempt to check collect indicator collect indicator file that does exist, where the collect directory contains spaces. """ name = REMOTE_HOST collectDir = self.buildPath(["collect NOT", ]) workingDir = "/tmp" collectIndicator = self.buildPath(["collect NOT", DEF_COLLECT_INDICATOR, ]) remoteUser = getLogin() os.mkdir(collectDir) self.assertTrue(os.path.exists(collectDir)) with open(collectIndicator, "w") as f: f.write("") # touch the file self.assertTrue(os.path.exists(collectIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator() self.assertEqual(True, result) def testCheckCollectIndicator_013(self): """ Attempt to check collect indicator collect indicator file that does exist, custom name, where the collect directory and indicator file contain spaces. 
""" name = REMOTE_HOST collectDir = self.buildPath([" from here collect!", ]) workingDir = "/tmp" collectIndicator = self.buildPath([" from here collect!", "whatever, dude", ]) remoteUser = getLogin() os.mkdir(collectDir) self.assertTrue(os.path.exists(collectDir)) with open(collectIndicator, "w") as f: f.write("") # touch the file self.assertTrue(os.path.exists(collectIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator(collectIndicator="whatever, dude") self.assertEqual(True, result) ############################# # Test writeStageIndicator() ############################# def testWriteStageIndicator_001(self): """ Attempt to write stage indicator with invalid hostname. """ name = NONEXISTENT_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" os.mkdir(collectDir) self.assertTrue(os.path.exists(collectDir)) remoteUser = getLogin() peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.assertRaises((IOError, OSError), peer.writeStageIndicator) def testWriteStageIndicator_002(self): """ Attempt to write stage indicator with invalid remote user. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" remoteUser = NONEXISTENT_USER os.mkdir(collectDir) self.assertTrue(os.path.exists(collectDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.assertRaises((IOError, OSError), peer.writeStageIndicator) def testWriteStageIndicator_003(self): """ Attempt to write stage indicator with invalid rcp command. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" remoteUser = getLogin() rcpCommand = NONEXISTENT_CMD os.mkdir(collectDir) self.assertTrue(os.path.exists(collectDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser, rcpCommand) self.assertRaises((IOError, OSError), peer.writeStageIndicator) def testWriteStageIndicator_004(self): """ Attempt to write stage indicator with non-existent collect directory. 
""" name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" remoteUser = getLogin() self.assertTrue(not os.path.exists(collectDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.assertRaises(IOError, peer.writeStageIndicator) def testWriteStageIndicator_005(self): """ Attempt to write stage indicator with non-writable collect directory. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" stageIndicator = self.buildPath(["collect", DEF_STAGE_INDICATOR, ]) remoteUser = getLogin() os.mkdir(collectDir) self.assertTrue(os.path.exists(collectDir)) self.assertTrue(not os.path.exists(stageIndicator)) os.chmod(collectDir, 0o400) # read-only for user peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.assertRaises((IOError, OSError), peer.writeStageIndicator) self.assertTrue(not os.path.exists(stageIndicator)) os.chmod(collectDir, 0o777) # so we can remove it safely def testWriteStageIndicator_006(self): """ Attempt to write stage indicator in a valid directory. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" stageIndicator = self.buildPath(["collect", DEF_STAGE_INDICATOR, ]) remoteUser = getLogin() os.mkdir(collectDir) self.assertTrue(os.path.exists(collectDir)) self.assertTrue(not os.path.exists(stageIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) peer.writeStageIndicator() self.assertTrue(os.path.exists(stageIndicator)) def testWriteStageIndicator_007(self): """ Attempt to write stage indicator in a valid directory, custom name. 
""" name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" stageIndicator = self.buildPath(["collect", "newname", ]) remoteUser = getLogin() os.mkdir(collectDir) self.assertTrue(os.path.exists(collectDir)) self.assertTrue(not os.path.exists(stageIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) peer.writeStageIndicator(stageIndicator="newname") self.assertTrue(os.path.exists(stageIndicator)) def testWriteStageIndicator_008(self): """ Attempt to write stage indicator in a valid directory that contains spaces. """ name = REMOTE_HOST collectDir = self.buildPath(["with spaces collect", ]) workingDir = "/tmp" stageIndicator = self.buildPath(["with spaces collect", DEF_STAGE_INDICATOR, ]) remoteUser = getLogin() os.mkdir(collectDir) self.assertTrue(os.path.exists(collectDir)) self.assertTrue(not os.path.exists(stageIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) peer.writeStageIndicator() self.assertTrue(os.path.exists(stageIndicator)) def testWriteStageIndicator_009(self): """ Attempt to write stage indicator in a valid directory, custom name, where the collect directory and the custom name contain spaces. """ name = REMOTE_HOST collectDir = self.buildPath(["collect, soon", ]) workingDir = "/tmp" stageIndicator = self.buildPath(["collect, soon", "new name with spaces", ]) remoteUser = getLogin() os.mkdir(collectDir) self.assertTrue(os.path.exists(collectDir)) self.assertTrue(not os.path.exists(stageIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) peer.writeStageIndicator(stageIndicator="new name with spaces") self.assertTrue(os.path.exists(stageIndicator)) ################### # Test stagePeer() ################### def testStagePeer_001(self): """ Attempt to stage files with invalid hostname. 
""" name = NONEXISTENT_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" targetDir = self.buildPath(["target", ]) remoteUser = getLogin() os.mkdir(collectDir) os.mkdir(targetDir) self.assertTrue(os.path.exists(collectDir)) self.assertTrue(os.path.exists(targetDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.assertRaises((IOError, OSError), peer.stagePeer, targetDir=targetDir) def testStagePeer_002(self): """ Attempt to stage files with invalid remote user. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" targetDir = self.buildPath(["target", ]) remoteUser = NONEXISTENT_USER os.mkdir(collectDir) os.mkdir(targetDir) self.assertTrue(os.path.exists(collectDir)) self.assertTrue(os.path.exists(targetDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.assertRaises((IOError, OSError), peer.stagePeer, targetDir=targetDir) def testStagePeer_003(self): """ Attempt to stage files with invalid rcp command. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" targetDir = self.buildPath(["target", ]) remoteUser = getLogin() rcpCommand = NONEXISTENT_CMD os.mkdir(collectDir) os.mkdir(targetDir) self.assertTrue(os.path.exists(collectDir)) self.assertTrue(os.path.exists(targetDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser, rcpCommand) self.assertRaises((IOError, OSError), peer.stagePeer, targetDir=targetDir) def testStagePeer_004(self): """ Attempt to stage files with non-existent collect directory. 
""" name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" targetDir = self.buildPath(["target", ]) remoteUser = getLogin() os.mkdir(targetDir) self.assertTrue(not os.path.exists(collectDir)) self.assertTrue(os.path.exists(targetDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.assertRaises((IOError, OSError), peer.stagePeer, targetDir=targetDir) def testStagePeer_005(self): """ Attempt to stage files with non-readable collect directory. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" targetDir = self.buildPath(["target", ]) remoteUser = getLogin() os.mkdir(collectDir) os.mkdir(targetDir) self.assertTrue(os.path.exists(collectDir)) self.assertTrue(os.path.exists(targetDir)) os.chmod(collectDir, 0o200) # user can't read his own directory peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.assertRaises((IOError, OSError), peer.stagePeer, targetDir=targetDir) os.chmod(collectDir, 0o777) # so we can remove it safely def testStagePeer_006(self): """ Attempt to stage files with non-absolute target directory. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" targetDir = "non/absolute/target" remoteUser = getLogin() self.assertTrue(not os.path.exists(collectDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.assertRaises(ValueError, peer.stagePeer, targetDir=targetDir) def testStagePeer_007(self): """ Attempt to stage files with non-existent target directory. 
""" name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" targetDir = self.buildPath(["target", ]) remoteUser = getLogin() os.mkdir(collectDir) self.assertTrue(os.path.exists(collectDir)) self.assertTrue(not os.path.exists(targetDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.assertRaises(ValueError, peer.stagePeer, targetDir=targetDir) def testStagePeer_008(self): """ Attempt to stage files with non-writable target directory. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" targetDir = self.buildPath(["target", ]) remoteUser = getLogin() os.mkdir(collectDir) os.mkdir(targetDir) self.assertTrue(os.path.exists(collectDir)) self.assertTrue(os.path.exists(targetDir)) os.chmod(targetDir, 0o400) # read-only for user peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.assertRaises((IOError, OSError), peer.stagePeer, targetDir=targetDir) os.chmod(collectDir, 0o777) # so we can remove it safely self.assertEqual(0, len(os.listdir(targetDir))) def testStagePeer_009(self): """ Attempt to stage files with empty collect directory. @note: This test assumes that scp returns an error if the directory is empty. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" targetDir = self.buildPath(["target", ]) remoteUser = getLogin() os.mkdir(collectDir) os.mkdir(targetDir) self.assertTrue(os.path.exists(collectDir)) self.assertTrue(os.path.exists(targetDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.assertRaises((IOError, OSError), peer.stagePeer, targetDir=targetDir) stagedFiles = os.listdir(targetDir) self.assertEqual([], stagedFiles) def testStagePeer_010(self): """ Attempt to stage files with empty collect directory, with a target directory that contains spaces. @note: This test assumes that scp returns an error if the directory is empty. 
""" name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" targetDir = self.buildPath(["target DIR", ]) remoteUser = getLogin() os.mkdir(collectDir) os.mkdir(targetDir) self.assertTrue(os.path.exists(collectDir)) self.assertTrue(os.path.exists(targetDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.assertRaises((IOError, OSError), peer.stagePeer, targetDir=targetDir) stagedFiles = os.listdir(targetDir) self.assertEqual([], stagedFiles) def testStagePeer_011(self): """ Attempt to stage files with non-empty collect directory. """ self.extractTar("tree1") name = REMOTE_HOST collectDir = self.buildPath(["tree1", ]) workingDir = "/tmp" targetDir = self.buildPath(["target", ]) remoteUser = getLogin() os.mkdir(targetDir) self.assertTrue(os.path.exists(collectDir)) self.assertTrue(os.path.exists(targetDir)) self.assertEqual(0, len(os.listdir(targetDir))) peer = RemotePeer(name, collectDir, workingDir, remoteUser) count = peer.stagePeer(targetDir=targetDir) self.assertEqual(7, count) stagedFiles = os.listdir(targetDir) self.assertEqual(7, len(stagedFiles)) self.assertTrue("file001" in stagedFiles) self.assertTrue("file002" in stagedFiles) self.assertTrue("file003" in stagedFiles) self.assertTrue("file004" in stagedFiles) self.assertTrue("file005" in stagedFiles) self.assertTrue("file006" in stagedFiles) self.assertTrue("file007" in stagedFiles) def testStagePeer_012(self): """ Attempt to stage files with non-empty collect directory, with a target directory that contains spaces. 
""" self.extractTar("tree1") name = REMOTE_HOST collectDir = self.buildPath(["tree1", ]) workingDir = "/tmp" targetDir = self.buildPath(["write the target here, now!", ]) remoteUser = getLogin() os.mkdir(targetDir) self.assertTrue(os.path.exists(collectDir)) self.assertTrue(os.path.exists(targetDir)) self.assertEqual(0, len(os.listdir(targetDir))) peer = RemotePeer(name, collectDir, workingDir, remoteUser) count = peer.stagePeer(targetDir=targetDir) self.assertEqual(7, count) stagedFiles = os.listdir(targetDir) self.assertEqual(7, len(stagedFiles)) self.assertTrue("file001" in stagedFiles) self.assertTrue("file002" in stagedFiles) self.assertTrue("file003" in stagedFiles) self.assertTrue("file004" in stagedFiles) self.assertTrue("file005" in stagedFiles) self.assertTrue("file006" in stagedFiles) self.assertTrue("file007" in stagedFiles) def testStagePeer_013(self): """ Attempt to stage files with non-empty collect directory containing links and directories. @note: We assume that scp copies the files even though it returns an error due to directories. """ self.extractTar("tree9") name = REMOTE_HOST collectDir = self.buildPath(["tree9", ]) workingDir = "/tmp" targetDir = self.buildPath(["target", ]) remoteUser = getLogin() os.mkdir(targetDir) self.assertTrue(os.path.exists(collectDir)) self.assertTrue(os.path.exists(targetDir)) self.assertEqual(0, len(os.listdir(targetDir))) peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.assertRaises((IOError, OSError), peer.stagePeer, targetDir=targetDir) stagedFiles = os.listdir(targetDir) self.assertEqual(2, len(stagedFiles)) self.assertTrue("file001" in stagedFiles) self.assertTrue("file002" in stagedFiles) def testStagePeer_014(self): """ Attempt to stage files with non-empty collect directory and attempt to set valid permissions. 
""" self.extractTar("tree1") name = REMOTE_HOST collectDir = self.buildPath(["tree1", ]) workingDir = "/tmp" targetDir = self.buildPath(["target", ]) remoteUser = getLogin() os.mkdir(targetDir) self.assertTrue(os.path.exists(collectDir)) self.assertTrue(os.path.exists(targetDir)) self.assertEqual(0, len(os.listdir(targetDir))) peer = RemotePeer(name, collectDir, workingDir, remoteUser) if getMaskAsMode() == 0o400: permissions = 0o642 # arbitrary, but different than umask would give else: permissions = 0o400 # arbitrary count = peer.stagePeer(targetDir=targetDir, permissions=permissions) self.assertEqual(7, count) stagedFiles = os.listdir(targetDir) self.assertEqual(7, len(stagedFiles)) self.assertTrue("file001" in stagedFiles) self.assertTrue("file002" in stagedFiles) self.assertTrue("file003" in stagedFiles) self.assertTrue("file004" in stagedFiles) self.assertTrue("file005" in stagedFiles) self.assertTrue("file006" in stagedFiles) self.assertTrue("file007" in stagedFiles) self.assertEqual(permissions, self.getFileMode(["target", "file001", ])) self.assertEqual(permissions, self.getFileMode(["target", "file002", ])) self.assertEqual(permissions, self.getFileMode(["target", "file003", ])) self.assertEqual(permissions, self.getFileMode(["target", "file004", ])) self.assertEqual(permissions, self.getFileMode(["target", "file005", ])) self.assertEqual(permissions, self.getFileMode(["target", "file006", ])) self.assertEqual(permissions, self.getFileMode(["target", "file007", ])) ############################## # Test executeRemoteCommand() ############################## def testExecuteRemoteCommand(self): """ Test that a simple remote command succeeds. 
""" target = self.buildPath(["test.txt", ]) name = REMOTE_HOST remoteUser = getLogin() command = "touch %s" % target self.assertFalse(os.path.exists(target)) peer = RemotePeer(name=name, remoteUser=remoteUser) peer.executeRemoteCommand(command) self.assertTrue(os.path.exists(target)) ############################ # Test _buildCbackCommand() ############################ def testBuildCbackCommand_001(self): """ Test with None for cbackCommand and action, False for fullBackup. """ self.assertRaises(ValueError, RemotePeer._buildCbackCommand, None, None, False) def testBuildCbackCommand_002(self): """ Test with None for cbackCommand, "collect" for action, False for fullBackup. """ result = RemotePeer._buildCbackCommand(None, "collect", False) self.assertEqual("/usr/bin/cback3 collect", result) def testBuildCbackCommand_003(self): """ Test with "cback" for cbackCommand, "collect" for action, False for fullBackup. """ result = RemotePeer._buildCbackCommand("cback", "collect", False) self.assertEqual("cback collect", result) def testBuildCbackCommand_004(self): """ Test with "cback" for cbackCommand, "collect" for action, True for fullBackup. 
""" result = RemotePeer._buildCbackCommand("cback", "collect", True) self.assertEqual("cback --full collect", result) ####################################################################### # Suite definition ####################################################################### def suite(): """Returns a suite containing all the test cases in this module.""" if runAllTests(): tests = [ ] tests.append(unittest.makeSuite(TestLocalPeer, 'test')) tests.append(unittest.makeSuite(TestRemotePeer, 'test')) return unittest.TestSuite(tests) else: tests = [ ] tests.append(unittest.makeSuite(TestLocalPeer, 'test')) tests.append(unittest.makeSuite(TestRemotePeer, 'testBasic')) return unittest.TestSuite(tests) CedarBackup3-3.1.6/testcase/synctests.py0000664000175000017500000041202612560007327021725 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2007,2010,2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Tests Amazon S3 sync tool functionality. 
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Unit tests for CedarBackup3/tools/amazons3.py.

Code Coverage
=============

   This module contains individual tests for many of the public functions
   and classes implemented in tools/amazons3.py.

   Where possible, we test functions that print output by passing a custom
   file descriptor.  Sometimes, we only ensure that a function or method runs
   without failure, and we don't validate what its result is or what it
   prints out.

Naming Conventions
==================

   I prefer to avoid large unit tests which validate more than one piece of
   functionality, and I prefer to avoid using overly descriptive (read: long)
   test names, as well.  Instead, I use lots of very small tests that each
   validate one specific thing.  These small tests are then named with an
   index number, yielding something like C{testAddDir_001} or
   C{testValidate_010}.  Each method has a docstring describing what it's
   supposed to accomplish.  I feel that this makes it easier to judge how
   important a given failure is, and also makes it somewhat easier to
   diagnose and fix individual problems.

Full vs. Reduced Tests
======================

   All of the tests in this module are considered safe to be run in an
   average build environment.  There is no need to use a SYNCTESTS_FULL
   environment variable to provide a "reduced feature set" test suite as for
   some of the other test modules.

@author Kenneth J.
Pronovici """ ######################################################################## # Import modules and do runtime validations ######################################################################## import unittest from getopt import GetoptError from CedarBackup3.testutil import failUnlessAssignRaises, captureOutput from CedarBackup3.tools.amazons3 import _usage, _version from CedarBackup3.tools.amazons3 import Options ####################################################################### # Test Case Classes ####################################################################### ###################### # TestFunctions class ###################### class TestFunctions(unittest.TestCase): """Tests for the public functions.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ######################## # Test simple functions ######################## def testSimpleFuncs_001(self): """ Test that the _usage() function runs without errors. We don't care what the output is, and we don't check. """ captureOutput(_usage) def testSimpleFuncs_002(self): """ Test that the _version() function runs without errors. We don't care what the output is, and we don't check. """ captureOutput(_version) #################### # TestOptions class #################### class TestOptions(unittest.TestCase): """Tests for the Options class.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). 
""" obj = Options() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no arguments. """ options = Options() self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_002(self): """ Test constructor with validate=False, no other arguments. """ options = Options(validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_003(self): """ Test constructor with argumentList=[], validate=False. 
""" options = Options(argumentList=[], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_004(self): """ Test constructor with argumentString="", validate=False. """ options = Options(argumentString="", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_005(self): """ Test constructor with argumentList=["--help", ], validate=False. 
""" options = Options(argumentList=["--help", ], validate=False) self.assertEqual(True, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_006(self): """ Test constructor with argumentString="--help", validate=False. """ options = Options(argumentString="--help", validate=False) self.assertEqual(True, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_007(self): """ Test constructor with argumentList=["-h", ], validate=False. 
""" options = Options(argumentList=["-h", ], validate=False) self.assertEqual(True, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_008(self): """ Test constructor with argumentString="-h", validate=False. """ options = Options(argumentString="-h", validate=False) self.assertEqual(True, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_009(self): """ Test constructor with argumentList=["--version", ], validate=False. 
""" options = Options(argumentList=["--version", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(True, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_010(self): """ Test constructor with argumentString="--version", validate=False. """ options = Options(argumentString="--version", validate=False) self.assertEqual(False, options.help) self.assertEqual(True, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_011(self): """ Test constructor with argumentList=["-V", ], validate=False. 
""" options = Options(argumentList=["-V", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(True, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_012(self): """ Test constructor with argumentString="-V", validate=False. """ options = Options(argumentString="-V", validate=False) self.assertEqual(False, options.help) self.assertEqual(True, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_013(self): """ Test constructor with argumentList=["--verbose", ], validate=False. 
""" options = Options(argumentList=["--verbose", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(True, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_014(self): """ Test constructor with argumentString="--verbose", validate=False. """ options = Options(argumentString="--verbose", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(True, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_015(self): """ Test constructor with argumentList=["-b", ], validate=False. 
""" options = Options(argumentList=["-b", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(True, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_016(self): """ Test constructor with argumentString="-b", validate=False. """ options = Options(argumentString="-b", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(True, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_017(self): """ Test constructor with argumentList=["--quiet", ], validate=False. 
""" options = Options(argumentList=["--quiet", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(True, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_018(self): """ Test constructor with argumentString="--quiet", validate=False. """ options = Options(argumentString="--quiet", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(True, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_019(self): """ Test constructor with argumentList=["-q", ], validate=False. 
""" options = Options(argumentList=["-q", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(True, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_020(self): """ Test constructor with argumentString="-q", validate=False. """ options = Options(argumentString="-q", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(True, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_021(self): """ Test constructor with argumentList=["--logfile", ], validate=False. """ self.assertRaises(GetoptError, Options, argumentList=["--logfile", ], validate=False) def testConstructor_022(self): """ Test constructor with argumentString="--logfile", validate=False. """ self.assertRaises(GetoptError, Options, argumentString="--logfile", validate=False) def testConstructor_023(self): """ Test constructor with argumentList=["-l", ], validate=False. 
""" self.assertRaises(GetoptError, Options, argumentList=["-l", ], validate=False) def testConstructor_024(self): """ Test constructor with argumentString="-l", validate=False. """ self.assertRaises(GetoptError, Options, argumentString="-l", validate=False) def testConstructor_025(self): """ Test constructor with argumentList=["--logfile", "something", ], validate=False. """ options = Options(argumentList=["--logfile", "something", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual("something", options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_026(self): """ Test constructor with argumentString="--logfile something", validate=False. 
""" options = Options(argumentString="--logfile something", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual("something", options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_027(self): """ Test constructor with argumentList=["-l", "something", ], validate=False. """ options = Options(argumentList=["-l", "something", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual("something", options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_028(self): """ Test constructor with argumentString="-l something", validate=False. 
""" options = Options(argumentString="-l something", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual("something", options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_029(self): """ Test constructor with argumentList=["--owner", ], validate=False. """ self.assertRaises(GetoptError, Options, argumentList=["--owner", ], validate=False) def testConstructor_030(self): """ Test constructor with argumentString="--owner", validate=False. """ self.assertRaises(GetoptError, Options, argumentString="--owner", validate=False) def testConstructor_040(self): """ Test constructor with argumentList=["-o", ], validate=False. """ self.assertRaises(GetoptError, Options, argumentList=["-o", ], validate=False) def testConstructor_041(self): """ Test constructor with argumentString="-o", validate=False. """ self.assertRaises(GetoptError, Options, argumentString="-o", validate=False) def testConstructor_042(self): """ Test constructor with argumentList=["--owner", "something", ], validate=False. """ self.assertRaises(ValueError, Options, argumentList=["--owner", "something", ], validate=False) def testConstructor_043(self): """ Test constructor with argumentString="--owner something", validate=False. """ self.assertRaises(ValueError, Options, argumentString="--owner something", validate=False) def testConstructor_044(self): """ Test constructor with argumentList=["-o", "something", ], validate=False. 
""" self.assertRaises(ValueError, Options, argumentList=["-o", "something", ], validate=False) def testConstructor_045(self): """ Test constructor with argumentString="-o something", validate=False. """ self.assertRaises(ValueError, Options, argumentString="-o something", validate=False) def testConstructor_046(self): """ Test constructor with argumentList=["--owner", "a:b", ], validate=False. """ options = Options(argumentList=["--owner", "a:b", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(("a", "b"), options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_047(self): """ Test constructor with argumentString="--owner a:b", validate=False. """ options = Options(argumentString="--owner a:b", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(("a", "b"), options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_048(self): """ Test constructor with argumentList=["-o", "a:b", ], validate=False. 
""" options = Options(argumentList=["-o", "a:b", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(("a", "b"), options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_049(self): """ Test constructor with argumentString="-o a:b", validate=False. """ options = Options(argumentString="-o a:b", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(("a", "b"), options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_050(self): """ Test constructor with argumentList=["--mode", ], validate=False. """ self.assertRaises(GetoptError, Options, argumentList=["--mode", ], validate=False) def testConstructor_051(self): """ Test constructor with argumentString="--mode", validate=False. """ self.assertRaises(GetoptError, Options, argumentString="--mode", validate=False) def testConstructor_052(self): """ Test constructor with argumentList=["-m", ], validate=False. 
""" self.assertRaises(GetoptError, Options, argumentList=["-m", ], validate=False) def testConstructor_053(self): """ Test constructor with argumentString="-m", validate=False. """ self.assertRaises(GetoptError, Options, argumentString="-m", validate=False) def testConstructor_054(self): """ Test constructor with argumentList=["--mode", "something", ], validate=False. """ self.assertRaises(ValueError, Options, argumentList=["--mode", "something", ], validate=False) def testConstructor_055(self): """ Test constructor with argumentString="--mode something", validate=False. """ self.assertRaises(ValueError, Options, argumentString="--mode something", validate=False) def testConstructor_056(self): """ Test constructor with argumentList=["-m", "something", ], validate=False. """ self.assertRaises(ValueError, Options, argumentList=["-m", "something", ], validate=False) def testConstructor_057(self): """ Test constructor with argumentString="-m something", validate=False. """ self.assertRaises(ValueError, Options, argumentString="-m something", validate=False) def testConstructor_058(self): """ Test constructor with argumentList=["--mode", "631", ], validate=False. """ options = Options(argumentList=["--mode", "631", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(0o631, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_059(self): """ Test constructor with argumentString="--mode 631", validate=False. 
""" options = Options(argumentString="--mode 631", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(0o631, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_060(self): """ Test constructor with argumentList=["-m", "631", ], validate=False. """ options = Options(argumentList=["-m", "631", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(0o631, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_061(self): """ Test constructor with argumentString="-m 631", validate=False. 
""" options = Options(argumentString="-m 631", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(0o631, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_062(self): """ Test constructor with argumentList=["--output", ], validate=False. """ options = Options(argumentList=["--output", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(True, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_063(self): """ Test constructor with argumentString="--output", validate=False. 
""" options = Options(argumentString="--output", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(True, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_064(self): """ Test constructor with argumentList=["-O", ], validate=False. """ options = Options(argumentList=["-O", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(True, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_065(self): """ Test constructor with argumentString="-O", validate=False. 
""" options = Options(argumentString="-O", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(True, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_066(self): """ Test constructor with argumentList=["--debug", ], validate=False. """ options = Options(argumentList=["--debug", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(True, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_067(self): """ Test constructor with argumentString="--debug", validate=False. 
""" options = Options(argumentString="--debug", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(True, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_068(self): """ Test constructor with argumentList=["-d", ], validate=False. """ options = Options(argumentList=["-d", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(True, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_069(self): """ Test constructor with argumentString="-d", validate=False. 
""" options = Options(argumentString="-d", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(True, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_070(self): """ Test constructor with argumentList=["--stack", ], validate=False. """ options = Options(argumentList=["--stack", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(True, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_071(self): """ Test constructor with argumentString="--stack", validate=False. 
""" options = Options(argumentString="--stack", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(True, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_072(self): """ Test constructor with argumentList=["-s", ], validate=False. """ options = Options(argumentList=["-s", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(True, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_073(self): """ Test constructor with argumentString="-s", validate=False. 
""" options = Options(argumentString="-s", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(True, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_074(self): """ Test constructor with argumentList=["--diagnostics", ], validate=False. """ options = Options(argumentList=["--diagnostics", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(True, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_075(self): """ Test constructor with argumentString="--diagnostics", validate=False. 
""" options = Options(argumentString="--diagnostics", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(True, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_076(self): """ Test constructor with argumentList=["-D", ], validate=False. """ options = Options(argumentList=["-D", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(True, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_077(self): """ Test constructor with argumentString="-D", validate=False. 
""" options = Options(argumentString="-D", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(True, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_078(self): """ Test constructor with argumentList=["--verifyOnly", ], validate=False. """ options = Options(argumentList=["--verifyOnly", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(True, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_079(self): """ Test constructor with argumentString="--verifyOnly", validate=False. 
""" options = Options(argumentString="--verifyOnly", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(True, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_080(self): """ Test constructor with argumentList=["-v", ], validate=False. """ options = Options(argumentList=["-v", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(True, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_081(self): """ Test constructor with argumentString="-v", validate=False. 
""" options = Options(argumentString="-v", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(True, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_082(self): """ Test constructor with argumentList=["--ignoreWarnings", ], validate=False. """ options = Options(argumentList=["--ignoreWarnings", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(True, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_083(self): """ Test constructor with argumentString="--ignoreWarnings", validate=False. 
""" options = Options(argumentString="--ignoreWarnings", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(True, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_084(self): """ Test constructor with argumentList=["-w", ], validate=False. """ options = Options(argumentList=["-w", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(True, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_085(self): """ Test constructor with argumentString="-w", validate=False. 
""" options = Options(argumentString="-w", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(True, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_086(self): """ Test constructor with argumentList=["source", "bucket", ], validate=False. """ options = Options(argumentList=[ "source", "bucket", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual("source", options.sourceDir) self.assertEqual("bucket", options.s3BucketUrl) def testConstructor_087(self): """ Test constructor with argumentString="source bucket", validate=False. 
""" options = Options(argumentString="source bucket", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual("source", options.sourceDir) self.assertEqual("bucket", options.s3BucketUrl) def testConstructor_088(self): """ Test constructor with argumentList=["-d", "--verbose", "-O", "--mode", "600", "source", "bucket", ], validate=False. """ options = Options(argumentList=["-d", "--verbose", "-O", "--mode", "600", "source", "bucket", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(True, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(0o600, options.mode) self.assertEqual(True, options.output) self.assertEqual(True, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual("source", options.sourceDir) self.assertEqual("bucket", options.s3BucketUrl) def testConstructor_089(self): """ Test constructor with argumentString="-d --verbose -O --mode 600 source bucket", validate=False. 
""" options = Options(argumentString="-d --verbose -O --mode 600 source bucket", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(True, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(0o600, options.mode) self.assertEqual(True, options.output) self.assertEqual(True, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual("source", options.sourceDir) self.assertEqual("bucket", options.s3BucketUrl) def testConstructor_090(self): """ Test constructor with argumentList=[], validate=True. """ self.assertRaises(ValueError, Options, argumentList=[], validate=True) def testConstructor_091(self): """ Test constructor with argumentString="", validate=True. """ self.assertRaises(ValueError, Options, argumentString="", validate=True) def testConstructor_092(self): """ Test constructor with argumentList=["--help", ], validate=True. """ options = Options(argumentList=["--help", ], validate=True) self.assertEqual(True, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_093(self): """ Test constructor with argumentString="--help", validate=True. 
""" options = Options(argumentString="--help", validate=True) self.assertEqual(True, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_094(self): """ Test constructor with argumentList=["-h", ], validate=True. """ options = Options(argumentList=["-h", ], validate=True) self.assertEqual(True, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_095(self): """ Test constructor with argumentString="-h", validate=True. 
""" options = Options(argumentString="-h", validate=True) self.assertEqual(True, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_096(self): """ Test constructor with argumentList=["--version", ], validate=True. """ options = Options(argumentList=["--version", ], validate=True) self.assertEqual(False, options.help) self.assertEqual(True, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_097(self): """ Test constructor with argumentString="--version", validate=True. 
""" options = Options(argumentString="--version", validate=True) self.assertEqual(False, options.help) self.assertEqual(True, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_098(self): """ Test constructor with argumentList=["-V", ], validate=True. """ options = Options(argumentList=["-V", ], validate=True) self.assertEqual(False, options.help) self.assertEqual(True, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_099(self): """ Test constructor with argumentString="-V", validate=True. 
""" options = Options(argumentString="-V", validate=True) self.assertEqual(False, options.help) self.assertEqual(True, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_100(self): """ Test constructor with argumentList=["--verbose", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["--verbose", ], validate=True) def testConstructor_101(self): """ Test constructor with argumentString="--verbose", validate=True. """ self.assertRaises(ValueError, Options, argumentString="--verbose", validate=True) def testConstructor_102(self): """ Test constructor with argumentList=["-b", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["-b", ], validate=True) def testConstructor_103(self): """ Test constructor with argumentString="-b", validate=True. """ self.assertRaises(ValueError, Options, argumentString="-b", validate=True) def testConstructor_104(self): """ Test constructor with argumentList=["--quiet", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["--quiet", ], validate=True) def testConstructor_105(self): """ Test constructor with argumentString="--quiet", validate=True. """ self.assertRaises(ValueError, Options, argumentString="--quiet", validate=True) def testConstructor_106(self): """ Test constructor with argumentList=["-q", ], validate=True. 
""" self.assertRaises(ValueError, Options, argumentList=["-q", ], validate=True) def testConstructor_107(self): """ Test constructor with argumentString="-q", validate=True. """ self.assertRaises(ValueError, Options, argumentString="-q", validate=True) def testConstructor_108(self): """ Test constructor with argumentList=["--logfile", ], validate=True. """ self.assertRaises(GetoptError, Options, argumentList=["--logfile", ], validate=True) def testConstructor_109(self): """ Test constructor with argumentString="--logfile", validate=True. """ self.assertRaises(GetoptError, Options, argumentString="--logfile", validate=True) def testConstructor_110(self): """ Test constructor with argumentList=["-l", ], validate=True. """ self.assertRaises(GetoptError, Options, argumentList=["-l", ], validate=True) def testConstructor_111(self): """ Test constructor with argumentString="-l", validate=True. """ self.assertRaises(GetoptError, Options, argumentString="-l", validate=True) def testConstructor_112(self): """ Test constructor with argumentList=["--logfile", "something", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["--logfile", "something", ], validate=True) def testConstructor_113(self): """ Test constructor with argumentString="--logfile something", validate=True. """ self.assertRaises(ValueError, Options, argumentString="--logfile something", validate=True) def testConstructor_114(self): """ Test constructor with argumentList=["-l", "something", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["-l", "something", ], validate=True) def testConstructor_115(self): """ Test constructor with argumentString="-l something", validate=True. """ self.assertRaises(ValueError, Options, argumentString="-l something", validate=True) def testConstructor_116(self): """ Test constructor with argumentList=["--owner", ], validate=True. 
""" self.assertRaises(GetoptError, Options, argumentList=["--owner", ], validate=True) def testConstructor_117(self): """ Test constructor with argumentString="--owner", validate=True. """ self.assertRaises(GetoptError, Options, argumentString="--owner", validate=True) def testConstructor_118(self): """ Test constructor with argumentList=["-o", ], validate=True. """ self.assertRaises(GetoptError, Options, argumentList=["-o", ], validate=True) def testConstructor_119(self): """ Test constructor with argumentString="-o", validate=True. """ self.assertRaises(GetoptError, Options, argumentString="-o", validate=True) def testConstructor_120(self): """ Test constructor with argumentList=["--owner", "something", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["--owner", "something", ], validate=True) def testConstructor_121(self): """ Test constructor with argumentString="--owner something", validate=True. """ self.assertRaises(ValueError, Options, argumentString="--owner something", validate=True) def testConstructor_122(self): """ Test constructor with argumentList=["-o", "something", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["-o", "something", ], validate=True) def testConstructor_123(self): """ Test constructor with argumentString="-o something", validate=True. """ self.assertRaises(ValueError, Options, argumentString="-o something", validate=True) def testConstructor_124(self): """ Test constructor with argumentList=["--owner", "a:b", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["--owner", "a:b", ], validate=True) def testConstructor_125(self): """ Test constructor with argumentString="--owner a:b", validate=True. """ self.assertRaises(ValueError, Options, argumentString="--owner a:b", validate=True) def testConstructor_126(self): """ Test constructor with argumentList=["-o", "a:b", ], validate=True. 
""" self.assertRaises(ValueError, Options, argumentList=["-o", "a:b", ], validate=True) def testConstructor_127(self): """ Test constructor with argumentString="-o a:b", validate=True. """ self.assertRaises(ValueError, Options, argumentString="-o a:b", validate=True) def testConstructor_128(self): """ Test constructor with argumentList=["--mode", ], validate=True. """ self.assertRaises(GetoptError, Options, argumentList=["--mode", ], validate=True) def testConstructor_129(self): """ Test constructor with argumentString="--mode", validate=True. """ self.assertRaises(GetoptError, Options, argumentString="--mode", validate=True) def testConstructor_130(self): """ Test constructor with argumentList=["-m", ], validate=True. """ self.assertRaises(GetoptError, Options, argumentList=["-m", ], validate=True) def testConstructor_131(self): """ Test constructor with argumentString="-m", validate=True. """ self.assertRaises(GetoptError, Options, argumentString="-m", validate=True) def testConstructor_132(self): """ Test constructor with argumentList=["--mode", "something", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["--mode", "something", ], validate=True) def testConstructor_133(self): """ Test constructor with argumentString="--mode something", validate=True. """ self.assertRaises(ValueError, Options, argumentString="--mode something", validate=True) def testConstructor_134(self): """ Test constructor with argumentList=["-m", "something", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["-m", "something", ], validate=True) def testConstructor_135(self): """ Test constructor with argumentString="-m something", validate=True. """ self.assertRaises(ValueError, Options, argumentString="-m something", validate=True) def testConstructor_136(self): """ Test constructor with argumentList=["--mode", "631", ], validate=True. 
""" self.assertRaises(ValueError, Options, argumentList=["--mode", "631", ], validate=True) def testConstructor_137(self): """ Test constructor with argumentString="--mode 631", validate=True. """ self.assertRaises(ValueError, Options, argumentString="--mode 631", validate=True) def testConstructor_138(self): """ Test constructor with argumentList=["-m", "631", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["-m", "631", ], validate=True) def testConstructor_139(self): """ Test constructor with argumentString="-m 631", validate=True. """ self.assertRaises(ValueError, Options, argumentString="-m 631", validate=True) def testConstructor_140(self): """ Test constructor with argumentList=["--output", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["--output", ], validate=True) def testConstructor_141(self): """ Test constructor with argumentString="--output", validate=True. """ self.assertRaises(ValueError, Options, argumentString="--output", validate=True) def testConstructor_142(self): """ Test constructor with argumentList=["-O", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["-O", ], validate=True) def testConstructor_143(self): """ Test constructor with argumentString="-O", validate=True. """ self.assertRaises(ValueError, Options, argumentString="-O", validate=True) def testConstructor_144(self): """ Test constructor with argumentList=["--debug", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["--debug", ], validate=True) def testConstructor_145(self): """ Test constructor with argumentString="--debug", validate=True. """ self.assertRaises(ValueError, Options, argumentString="--debug", validate=True) def testConstructor_146(self): """ Test constructor with argumentList=["-d", ], validate=True. 
""" self.assertRaises(ValueError, Options, argumentList=["-d", ], validate=True) def testConstructor_147(self): """ Test constructor with argumentString="-d", validate=True. """ self.assertRaises(ValueError, Options, argumentString="-d", validate=True) def testConstructor_148(self): """ Test constructor with argumentList=["--stack", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["--stack", ], validate=True) def testConstructor_149(self): """ Test constructor with argumentString="--stack", validate=True. """ self.assertRaises(ValueError, Options, argumentString="--stack", validate=True) def testConstructor_150(self): """ Test constructor with argumentList=["-s", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["-s", ], validate=True) def testConstructor_151(self): """ Test constructor with argumentString="-s", validate=True. """ self.assertRaises(ValueError, Options, argumentString="-s", validate=True) def testConstructor_152(self): """ Test constructor with argumentList=["--diagnostics", ], validate=True. """ options = Options(argumentList=["--diagnostics", ], validate=True) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(True, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_153(self): """ Test constructor with argumentString="--diagnostics", validate=True. 
""" options = Options(argumentString="--diagnostics", validate=True) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(True, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_154(self): """ Test constructor with argumentList=["-D", ], validate=True. """ options = Options(argumentList=["-D", ], validate=True) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(True, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_155(self): """ Test constructor with argumentString="-D", validate=True. 
""" options = Options(argumentString="-D", validate=True) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(True, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual(None, options.sourceDir) self.assertEqual(None, options.s3BucketUrl) def testConstructor_156(self): """ Test constructor with argumentList=["--verifyOnly", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["--verifyOnly", ], validate=True) def testConstructor_157(self): """ Test constructor with argumentString="--verifyOnly", validate=True. """ self.assertRaises(ValueError, Options, argumentString="--verifyOnly", validate=True) def testConstructor_158(self): """ Test constructor with argumentList=["-v", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["-v", ], validate=True) def testConstructor_159(self): """ Test constructor with argumentString="-v", validate=True. """ self.assertRaises(ValueError, Options, argumentString="-v", validate=True) def testConstructor_160(self): """ Test constructor with argumentList=["--ignoreWarnings", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["--ignoreWarnings", ], validate=True) def testConstructor_161(self): """ Test constructor with argumentString="--ignoreWarnings", validate=True. """ self.assertRaises(ValueError, Options, argumentString="--ignoreWarnings", validate=True) def testConstructor_162(self): """ Test constructor with argumentList=["-w", ], validate=True. 
""" self.assertRaises(ValueError, Options, argumentList=["-w", ], validate=True) def testConstructor_163(self): """ Test constructor with argumentString="-w", validate=True. """ self.assertRaises(ValueError, Options, argumentString="-w", validate=True) def testConstructor_164(self): """ Test constructor with argumentList=["source", "bucket", ], validate=True. """ options = Options(argumentList=["source", "bucket", ], validate=True) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual("source", options.sourceDir) self.assertEqual("bucket", options.s3BucketUrl) def testConstructor_165(self): """ Test constructor with argumentString="source bucket", validate=True. """ options = Options(argumentString="source bucket", validate=True) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual("source", options.sourceDir) self.assertEqual("bucket", options.s3BucketUrl) def testConstructor_166(self): """ Test constructor with argumentList=["source", "bucket", ], validate=True. 
""" options = Options(argumentList=["source", "bucket", ], validate=True) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual("source", options.sourceDir) self.assertEqual("bucket", options.s3BucketUrl) def testConstructor_167(self): """ Test constructor with argumentString="source bucket", validate=True. """ options = Options(argumentString="source bucket", validate=True) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual("source", options.sourceDir) self.assertEqual("bucket", options.s3BucketUrl) def testConstructor_168(self): """ Test constructor with argumentList=["-d", "--verbose", "-O", "--mode", "600", "source", "bucket", ], validate=True. 
""" options = Options(argumentList=["-d", "--verbose", "-O", "--mode", "600", "source", "bucket", ], validate=True) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(True, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(0o600, options.mode) self.assertEqual(True, options.output) self.assertEqual(True, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual("source", options.sourceDir) self.assertEqual("bucket", options.s3BucketUrl) def testConstructor_169(self): """ Test constructor with argumentString="-d --verbose -O --mode 600 source bucket", validate=True. """ options = Options(argumentString="-d --verbose -O --mode 600 source bucket", validate=True) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(True, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(0o600, options.mode) self.assertEqual(True, options.output) self.assertEqual(True, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.verifyOnly) self.assertEqual(False, options.ignoreWarnings) self.assertEqual("source", options.sourceDir) self.assertEqual("bucket", options.s3BucketUrl) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes at defaults. 
""" options1 = Options() options2 = Options() self.assertEqual(options1, options2) self.assertTrue(options1 == options2) self.assertTrue(not options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(options1 >= options2) self.assertTrue(not options1 != options2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes filled in and same. """ options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = True options1.diagnostics = True options1.verifyOnly = True options1.ignoreWarnings = True options1.sourceDir = "source" options1.s3BucketUrl = "bucket" options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = "631" options2.output = True options2.debug = True options2.stacktrace = True options2.diagnostics = True options2.verifyOnly = True options2.ignoreWarnings = True options2.sourceDir = "source" options2.s3BucketUrl = "bucket" self.assertEqual(options1, options2) self.assertTrue(options1 == options2) self.assertTrue(not options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(options1 >= options2) self.assertTrue(not options1 != options2) def testComparison_003(self): """ Test comparison of two identical objects, all attributes filled in, help different. 
""" options1 = Options() options2 = Options() options1.help = False options1.version = True options1.verbose = True options1.quiet = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = True options1.diagnostics = True options1.verifyOnly = True options1.ignoreWarnings = True options1.sourceDir = "source" options1.s3BucketUrl = "bucket" options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = "631" options2.output = True options2.debug = True options2.stacktrace = True options2.diagnostics = True options2.verifyOnly = True options2.ignoreWarnings = True options2.sourceDir = "source" options2.s3BucketUrl = "bucket" self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_004(self): """ Test comparison of two identical objects, all attributes filled in, version different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = False options1.verbose = True options1.quiet = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = True options1.diagnostics = True options1.verifyOnly = True options1.ignoreWarnings = True options1.sourceDir = "source" options1.s3BucketUrl = "bucket" options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = "631" options2.output = True options2.debug = True options2.stacktrace = True options2.diagnostics = True options2.verifyOnly = True options2.ignoreWarnings = True options2.sourceDir = "source" options2.s3BucketUrl = "bucket" self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_005(self): """ Test comparison of two identical objects, all attributes filled in, verbose different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = False options1.quiet = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = True options1.diagnostics = True options1.verifyOnly = True options1.ignoreWarnings = True options1.sourceDir = "source" options1.s3BucketUrl = "bucket" options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = "631" options2.output = True options2.debug = True options2.stacktrace = True options2.diagnostics = True options2.verifyOnly = True options2.ignoreWarnings = True options2.sourceDir = "source" options2.s3BucketUrl = "bucket" self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_006(self): """ Test comparison of two identical objects, all attributes filled in, quiet different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = False options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = True options1.diagnostics = True options1.verifyOnly = True options1.ignoreWarnings = True options1.sourceDir = "source" options1.s3BucketUrl = "bucket" options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = "631" options2.output = True options2.debug = True options2.stacktrace = True options2.diagnostics = True options2.verifyOnly = True options2.ignoreWarnings = True options2.sourceDir = "source" options2.s3BucketUrl = "bucket" self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_007(self): """ Test comparison of two identical objects, all attributes filled in, logfile different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.logfile = None options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = True options1.diagnostics = True options1.verifyOnly = True options1.ignoreWarnings = True options1.sourceDir = "source" options1.s3BucketUrl = "bucket" options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = "631" options2.output = True options2.debug = True options2.stacktrace = True options2.diagnostics = True options2.verifyOnly = True options2.ignoreWarnings = True options2.sourceDir = "source" options2.s3BucketUrl = "bucket" self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_008(self): """ Test comparison of two identical objects, all attributes filled in, owner different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.logfile = "logfile" options1.owner = None options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = True options1.diagnostics = True options1.verifyOnly = True options1.ignoreWarnings = True options1.sourceDir = "source" options1.s3BucketUrl = "bucket" options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = "631" options2.output = True options2.debug = True options2.stacktrace = True options2.diagnostics = True options2.verifyOnly = True options2.ignoreWarnings = True options2.sourceDir = "source" options2.s3BucketUrl = "bucket" self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_009(self): """ Test comparison of two identical objects, all attributes filled in, mode different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = None options1.output = True options1.debug = True options1.stacktrace = True options1.diagnostics = True options1.verifyOnly = True options1.ignoreWarnings = True options1.sourceDir = "source" options1.s3BucketUrl = "bucket" options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = "631" options2.output = True options2.debug = True options2.stacktrace = True options2.diagnostics = True options2.verifyOnly = True options2.ignoreWarnings = True options2.sourceDir = "source" options2.s3BucketUrl = "bucket" self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_010(self): """ Test comparison of two identical objects, all attributes filled in, output different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = False options1.debug = True options1.stacktrace = True options1.diagnostics = True options1.verifyOnly = True options1.ignoreWarnings = True options1.sourceDir = "source" options1.s3BucketUrl = "bucket" options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = "631" options2.output = True options2.debug = True options2.stacktrace = True options2.diagnostics = True options2.verifyOnly = True options2.ignoreWarnings = True options2.sourceDir = "source" options2.s3BucketUrl = "bucket" self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_011(self): """ Test comparison of two identical objects, all attributes filled in, debug different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = False options1.stacktrace = True options1.diagnostics = True options1.verifyOnly = True options1.ignoreWarnings = True options1.sourceDir = "source" options1.s3BucketUrl = "bucket" options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = "631" options2.output = True options2.debug = True options2.stacktrace = True options2.diagnostics = True options2.verifyOnly = True options2.ignoreWarnings = True options2.sourceDir = "source" options2.s3BucketUrl = "bucket" self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_012(self): """ Test comparison of two identical objects, all attributes filled in, stacktrace different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = True options1.verifyOnly = True options1.ignoreWarnings = True options1.sourceDir = "source" options1.s3BucketUrl = "bucket" options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = "631" options2.output = True options2.debug = True options2.stacktrace = True options2.diagnostics = True options2.verifyOnly = True options2.ignoreWarnings = True options2.sourceDir = "source" options2.s3BucketUrl = "bucket" self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_013(self): """ Test comparison of two identical objects, all attributes filled in, diagnostics different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = True options1.diagnostics = False options1.verifyOnly = True options1.ignoreWarnings = True options1.sourceDir = "source" options1.s3BucketUrl = "bucket" options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = "631" options2.output = True options2.debug = True options2.stacktrace = True options2.diagnostics = True options2.verifyOnly = True options2.ignoreWarnings = True options2.sourceDir = "source" options2.s3BucketUrl = "bucket" self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_014(self): """ Test comparison of two identical objects, all attributes filled in, verifyOnly different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = True options1.diagnostics = True options1.verifyOnly = False options1.ignoreWarnings = True options1.sourceDir = "source" options1.s3BucketUrl = "bucket" options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = "631" options2.output = True options2.debug = True options2.stacktrace = True options2.diagnostics = True options2.verifyOnly = True options2.ignoreWarnings = True options2.sourceDir = "source" options2.s3BucketUrl = "bucket" self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_015(self): """ Test comparison of two identical objects, all attributes filled in, sourceDir different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = True options1.diagnostics = True options1.verifyOnly = True options1.ignoreWarnings = True options1.sourceDir = None options1.s3BucketUrl = "bucket" options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = "631" options2.output = True options2.debug = True options2.stacktrace = True options2.diagnostics = True options2.verifyOnly = True options2.ignoreWarnings = True options2.sourceDir = "source" options2.s3BucketUrl = "bucket" self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_016(self): """ Test comparison of two identical objects, all attributes filled in, s3BucketUrl different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = True options1.diagnostics = True options1.verifyOnly = True options1.ignoreWarnings = True options1.sourceDir = "source" options1.s3BucketUrl = None options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = "631" options2.output = True options2.debug = True options2.stacktrace = True options2.diagnostics = True options2.verifyOnly = True options2.ignoreWarnings = True options2.sourceDir = "source" options2.s3BucketUrl = "bucket" self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) ########################### # Test buildArgumentList() ########################### def testBuildArgumentList_001(self): """Test with no values set, validate=False.""" options = Options() argumentList = options.buildArgumentList(validate=False) self.assertEqual([], argumentList) def testBuildArgumentList_002(self): """Test with help set, validate=False.""" options = Options() options.help = True argumentList = options.buildArgumentList(validate=False) self.assertEqual(["--help", ], argumentList) def testBuildArgumentList_003(self): """Test with version set, validate=False.""" options = Options() options.version = True argumentList = options.buildArgumentList(validate=False) self.assertEqual(["--version", ], argumentList) def testBuildArgumentList_004(self): """Test with verbose set, validate=False.""" options = Options() options.verbose = True argumentList = 
options.buildArgumentList(validate=False) self.assertEqual(["--verbose", ], argumentList) def testBuildArgumentList_005(self): """Test with quiet set, validate=False.""" options = Options() options.quiet = True argumentList = options.buildArgumentList(validate=False) self.assertEqual(["--quiet", ], argumentList) def testBuildArgumentList_006(self): """Test with logfile set, validate=False.""" options = Options() options.logfile = "bogus" argumentList = options.buildArgumentList(validate=False) self.assertEqual(["--logfile", "bogus", ], argumentList) def testBuildArgumentList_007(self): """Test with owner set, validate=False.""" options = Options() options.owner = ("ken", "group") argumentList = options.buildArgumentList(validate=False) self.assertEqual(["--owner", "ken:group", ], argumentList) def testBuildArgumentList_008(self): """Test with mode set, validate=False.""" options = Options() options.mode = 0o644 argumentList = options.buildArgumentList(validate=False) self.assertEqual(["--mode", "644", ], argumentList) def testBuildArgumentList_009(self): """Test with output set, validate=False.""" options = Options() options.output = True argumentList = options.buildArgumentList(validate=False) self.assertEqual(["--output", ], argumentList) def testBuildArgumentList_010(self): """Test with debug set, validate=False.""" options = Options() options.debug = True argumentList = options.buildArgumentList(validate=False) self.assertEqual(["--debug", ], argumentList) def testBuildArgumentList_011(self): """Test with stacktrace set, validate=False.""" options = Options() options.stacktrace = True argumentList = options.buildArgumentList(validate=False) self.assertEqual(["--stack", ], argumentList) def testBuildArgumentList_012(self): """Test with diagnostics set, validate=False.""" options = Options() options.diagnostics = True argumentList = options.buildArgumentList(validate=False) self.assertEqual(["--diagnostics", ], argumentList) def testBuildArgumentList_013(self): 
"""Test with verifyOnly set, validate=False.""" options = Options() options.verifyOnly = True argumentList = options.buildArgumentList(validate=False) self.assertEqual(["--verifyOnly", ], argumentList) def testBuildArgumentList_014(self): """Test with ignoreWarnings set, validate=False.""" options = Options() options.ignoreWarnings = True argumentList = options.buildArgumentList(validate=False) self.assertEqual(["--ignoreWarnings", ], argumentList) def testBuildArgumentList_015(self): """Test with valid source and target, validate=False.""" options = Options() options.sourceDir = "source" options.s3BucketUrl = "bucket" argumentList = options.buildArgumentList(validate=False) self.assertEqual(["source", "bucket", ], argumentList) def testBuildArgumentList_016(self): """Test with all values set, actions containing one item, validate=False.""" options = Options() options.help = True options.version = True options.verbose = True options.quiet = True options.logfile = "logfile" options.owner = ("a", "b") options.mode = "631" options.output = True options.debug = True options.stacktrace = True options.diagnostics = True options.verifyOnly = True options.ignoreWarnings = True options.sourceDir = "source" options.s3BucketUrl = "bucket" argumentList = options.buildArgumentList(validate=False) self.assertEqual(["--help", "--version", "--verbose", "--quiet", "--logfile", "logfile", "--owner", "a:b", "--mode", "631", "--output", "--debug", "--stack", "--diagnostics", "--verifyOnly", "--ignoreWarnings", "source", "bucket", ], argumentList) def testBuildArgumentList_017(self): """Test with no values set, validate=True.""" options = Options() self.assertRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_018(self): """Test with help set, validate=True.""" options = Options() options.help = True argumentList = options.buildArgumentList(validate=True) self.assertEqual(["--help", ], argumentList) def testBuildArgumentList_019(self): """Test with 
version set, validate=True.""" options = Options() options.version = True argumentList = options.buildArgumentList(validate=True) self.assertEqual(["--version", ], argumentList) def testBuildArgumentList_020(self): """Test with verbose set, validate=True.""" options = Options() options.verbose = True self.assertRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_021(self): """Test with quiet set, validate=True.""" options = Options() options.quiet = True self.assertRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_022(self): """Test with logfile set, validate=True.""" options = Options() options.logfile = "bogus" self.assertRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_023(self): """Test with owner set, validate=True.""" options = Options() options.owner = ("ken", "group") self.assertRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_024(self): """Test with mode set, validate=True.""" options = Options() options.mode = 0o644 self.assertRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_025(self): """Test with output set, validate=True.""" options = Options() options.output = True self.assertRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_026(self): """Test with debug set, validate=True.""" options = Options() options.debug = True self.assertRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_027(self): """Test with stacktrace set, validate=True.""" options = Options() options.stacktrace = True self.assertRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_028(self): """Test with diagnostics set, validate=True.""" options = Options() options.diagnostics = True argumentList = options.buildArgumentList(validate=True) self.assertEqual(["--diagnostics", ], argumentList) def 
testBuildArgumentList_029(self): """Test with verifyOnly set, validate=True.""" options = Options() options.verifyOnly = True self.assertRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_030(self): """Test with ignoreWarnings set, validate=True.""" options = Options() options.ignoreWarnings = True self.assertRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_031(self): """Test with valid source and target, validate=True.""" options = Options() options.sourceDir = "source" options.s3BucketUrl = "bucket" argumentList = options.buildArgumentList(validate=True) self.assertEqual(["source", "bucket", ], argumentList) def testBuildArgumentList_032(self): """Test with all values set (except managed ones), actions containing one item, validate=True.""" options = Options() options.help = True options.version = True options.verbose = True options.quiet = True options.logfile = "logfile" options.owner = ("a", "b") options.mode = "631" options.output = True options.debug = True options.stacktrace = True options.diagnostics = True options.verifyOnly = True options.ignoreWarnings = True options.sourceDir = "source" options.s3BucketUrl = "bucket" argumentList = options.buildArgumentList(validate=True) self.assertEqual(["--help", "--version", "--verbose", "--quiet", "--logfile", "logfile", "--owner", "a:b", "--mode", "631", "--output", "--debug", "--stack", "--diagnostics", "--verifyOnly", "--ignoreWarnings", "source", "bucket", ], argumentList) ############################# # Test buildArgumentString() ############################# def testBuildArgumentString_001(self): """Test with no values set, validate=False.""" options = Options() argumentString = options.buildArgumentString(validate=False) self.assertEqual("", argumentString) def testBuildArgumentString_002(self): """Test with help set, validate=False.""" options = Options() options.help = True argumentString = 
options.buildArgumentString(validate=False) self.assertEqual("--help ", argumentString) def testBuildArgumentString_003(self): """Test with version set, validate=False.""" options = Options() options.version = True argumentString = options.buildArgumentString(validate=False) self.assertEqual("--version ", argumentString) def testBuildArgumentString_004(self): """Test with verbose set, validate=False.""" options = Options() options.verbose = True argumentString = options.buildArgumentString(validate=False) self.assertEqual("--verbose ", argumentString) def testBuildArgumentString_005(self): """Test with quiet set, validate=False.""" options = Options() options.quiet = True argumentString = options.buildArgumentString(validate=False) self.assertEqual("--quiet ", argumentString) def testBuildArgumentString_006(self): """Test with logfile set, validate=False.""" options = Options() options.logfile = "bogus" argumentString = options.buildArgumentString(validate=False) self.assertEqual('--logfile "bogus" ', argumentString) def testBuildArgumentString_007(self): """Test with owner set, validate=False.""" options = Options() options.owner = ("ken", "group") argumentString = options.buildArgumentString(validate=False) self.assertEqual('--owner "ken:group" ', argumentString) def testBuildArgumentString_008(self): """Test with mode set, validate=False.""" options = Options() options.mode = 0o644 argumentString = options.buildArgumentString(validate=False) self.assertEqual('--mode 644 ', argumentString) def testBuildArgumentString_009(self): """Test with output set, validate=False.""" options = Options() options.output = True argumentString = options.buildArgumentString(validate=False) self.assertEqual("--output ", argumentString) def testBuildArgumentString_010(self): """Test with debug set, validate=False.""" options = Options() options.debug = True argumentString = options.buildArgumentString(validate=False) self.assertEqual("--debug ", argumentString) def 
testBuildArgumentString_011(self): """Test with stacktrace set, validate=False.""" options = Options() options.stacktrace = True argumentString = options.buildArgumentString(validate=False) self.assertEqual("--stack ", argumentString) def testBuildArgumentString_012(self): """Test with diagnostics set, validate=False.""" options = Options() options.diagnostics = True argumentString = options.buildArgumentString(validate=False) self.assertEqual("--diagnostics ", argumentString) def testBuildArgumentString_013(self): """Test with verifyOnly set, validate=False.""" options = Options() options.verifyOnly = True argumentString = options.buildArgumentString(validate=False) self.assertEqual("--verifyOnly ", argumentString) def testBuildArgumentString_014(self): """Test with ignoreWarnings set, validate=False.""" options = Options() options.ignoreWarnings = True argumentString = options.buildArgumentString(validate=False) self.assertEqual("--ignoreWarnings ", argumentString) def testBuildArgumentString_015(self): """Test with valid source and target, validate=False.""" options = Options() options.sourceDir = "source" options.s3BucketUrl = "bucket" argumentString = options.buildArgumentString(validate=False) self.assertEqual('"source" "bucket" ', argumentString) def testBuildArgumentString_016(self): """Test with all values set, actions containing one item, validate=False.""" options = Options() options.help = True options.version = True options.verbose = True options.quiet = True options.logfile = "logfile" options.owner = ("a", "b") options.mode = "631" options.output = True options.debug = True options.stacktrace = True options.diagnostics = True options.verifyOnly = True options.ignoreWarnings = True options.sourceDir = "source" options.s3BucketUrl = "bucket" argumentString = options.buildArgumentString(validate=False) self.assertEqual('--help --version --verbose --quiet --logfile "logfile" --owner "a:b" --mode 631 --output --debug --stack --diagnostics --verifyOnly 
--ignoreWarnings "source" "bucket" ', argumentString) def testBuildArgumentString_017(self): """Test with no values set, validate=True.""" options = Options() self.assertRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_018(self): """Test with help set, validate=True.""" options = Options() options.help = True argumentString = options.buildArgumentString(validate=True) self.assertEqual("--help ", argumentString) def testBuildArgumentString_019(self): """Test with version set, validate=True.""" options = Options() options.version = True argumentString = options.buildArgumentString(validate=True) self.assertEqual("--version ", argumentString) def testBuildArgumentString_020(self): """Test with verbose set, validate=True.""" options = Options() options.verbose = True self.assertRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_021(self): """Test with quiet set, validate=True.""" options = Options() options.quiet = True self.assertRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_022(self): """Test with logfile set, validate=True.""" options = Options() options.logfile = "bogus" self.assertRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_023(self): """Test with owner set, validate=True.""" options = Options() options.owner = ("ken", "group") self.assertRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_024(self): """Test with mode set, validate=True.""" options = Options() options.mode = 0o644 self.assertRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_025(self): """Test with output set, validate=True.""" options = Options() options.output = True self.assertRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_026(self): """Test with debug set, validate=True.""" options = Options() 
options.debug = True self.assertRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_027(self): """Test with stacktrace set, validate=True.""" options = Options() options.stacktrace = True self.assertRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_028(self): """Test with diagnostics set, validate=True.""" options = Options() options.diagnostics = True argumentString = options.buildArgumentString(validate=True) self.assertEqual("--diagnostics ", argumentString) def testBuildArgumentString_029(self): """Test with verifyOnly set, validate=True.""" options = Options() options.verifyOnly = True self.assertRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_030(self): """Test with ignoreWarnings set, validate=True.""" options = Options() options.ignoreWarnings = True self.assertRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_031(self): """Test with valid source and target, validate=True.""" options = Options() options.sourceDir = "source" options.s3BucketUrl = "bucket" argumentString = options.buildArgumentString(validate=True) self.assertEqual('"source" "bucket" ', argumentString) def testBuildArgumentString_032(self): """Test with all values set (except managed ones), actions containing one item, validate=True.""" options = Options() options.help = True options.version = True options.verbose = True options.quiet = True options.logfile = "logfile" options.owner = ("a", "b") options.mode = "631" options.output = True options.debug = True options.stacktrace = True options.diagnostics = True options.verifyOnly = True options.ignoreWarnings = True options.sourceDir = "source" options.s3BucketUrl = "bucket" argumentString = options.buildArgumentString(validate=True) self.assertEqual('--help --version --verbose --quiet --logfile "logfile" --owner "a:b" --mode 631 --output --debug --stack --diagnostics --verifyOnly 
--ignoreWarnings "source" "bucket" ', argumentString) ####################################################################### # Suite definition ####################################################################### def suite(): """Returns a suite containing all the test cases in this module.""" tests = [ ] tests.append(unittest.makeSuite(TestFunctions, 'test')) tests.append(unittest.makeSuite(TestOptions, 'test')) return unittest.TestSuite(tests) CedarBackup3-3.1.6/testcase/postgresqltests.py0000664000175000017500000011224312642032421023144 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2006,2010,2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Tests PostgreSQL extension functionality. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup3/extend/postgresql.py. 
Code Coverage
=============

This module contains individual tests for many of the public functions
and classes implemented in extend/postgresql.py.  There are also tests
for several of the private methods.

Unfortunately, it's rather difficult to test this code in an automated
fashion, even if you have access to PostgreSQL, since the actual dump
would need to have access to a real database.  Because of this, there
aren't any tests below that actually talk to a database.

As a compromise, I test some of the private methods in the
implementation.  Normally, I don't like to test private methods, but in
this case, testing the private methods will help give us some
reasonable confidence in the code even if we can't talk to a database.
This isn't perfect, but it's better than nothing.

Naming Conventions
==================

I prefer to avoid large unit tests which validate more than one piece
of functionality, and I prefer to avoid using overly descriptive (read:
long) test names, as well.  Instead, I use lots of very small tests that
each validate one specific thing.  These small tests are then named with
an index number, yielding something like C{testAddDir_001} or
C{testValidate_010}.  Each method has a docstring describing what it's
supposed to accomplish.  I feel that this makes it easier to judge how
important a given failure is, and also makes it somewhat easier to
diagnose and fix individual problems.

Testing XML Extraction
======================

It's difficult to validate that generated XML is exactly "right",
especially when dealing with pretty-printed XML.  We can't just provide
a constant string and say "the result must match this".  Instead, what
we do is extract a node, build some XML from it, and then feed that XML
back into another object's constructor.  If that parse process succeeds
and the old object is equal to the new object, we assume that the
extract was successful.

It would arguably be better if we could do a completely independent
check - but implementing that check would be equivalent to
re-implementing all of the existing functionality that we're validating
here!  After all, the most important thing is that data can move
seamlessly from object to XML document and back to object.

Full vs. Reduced Tests
======================

All of the tests in this module are considered safe to be run in an
average build environment.  There is no need to use a
POSTGRESQLTESTS_FULL environment variable to provide a "reduced feature
set" test suite as for some of the other test modules.

@author Kenneth J. Pronovici
"""


########################################################################
# Import modules and do runtime validations
########################################################################

# System modules
import unittest

# Cedar Backup modules
from CedarBackup3.testutil import findResources, failUnlessAssignRaises
from CedarBackup3.xmlutil import createOutputDom, serializeDom
from CedarBackup3.extend.postgresql import LocalConfig, PostgresqlConfig


#######################################################################
# Module-wide configuration and constants
#######################################################################

DATA_DIRS = [ "./data", "./testcase/data", ]
RESOURCES = [ "postgresql.conf.1", "postgresql.conf.2", "postgresql.conf.3",
              "postgresql.conf.4", "postgresql.conf.5", ]


#######################################################################
# Test Case Classes
#######################################################################

#############################
# TestPostgresqlConfig class
#############################

class TestPostgresqlConfig(unittest.TestCase):

   """Tests for the PostgresqlConfig class."""

   ##################
   # Utility methods
   ##################

   def failUnlessAssignRaises(self, exception, obj, prop, value):
      """Equivalent of L{failUnlessRaises}, but used for property assignments instead."""
      failUnlessAssignRaises(self, exception, obj, prop, value)

   ############################
   # Test __repr__ and __str__
   ############################

   def testStringFuncs_001(self):
      """
      Just make sure that the string functions don't have errors (i.e. bad variable names).
      """
      obj = PostgresqlConfig()
      obj.__repr__()
      obj.__str__()

   ##################################
   # Test constructor and attributes
   ##################################

   def testConstructor_001(self):
      """
      Test constructor with no values filled in.
      """
      postgresql = PostgresqlConfig()
      self.assertEqual(None, postgresql.user)
      self.assertEqual(None, postgresql.compressMode)
      self.assertEqual(False, postgresql.all)
      self.assertEqual(None, postgresql.databases)

   def testConstructor_002(self):
      """
      Test constructor with all values filled in, with valid values, databases=None.
      """
      postgresql = PostgresqlConfig("user", "none", False, None)
      self.assertEqual("user", postgresql.user)
      self.assertEqual("none", postgresql.compressMode)
      self.assertEqual(False, postgresql.all)
      self.assertEqual(None, postgresql.databases)

   def testConstructor_003(self):
      """
      Test constructor with all values filled in, with valid values, no databases.
      """
      postgresql = PostgresqlConfig("user", "none", True, [])
      self.assertEqual("user", postgresql.user)
      self.assertEqual("none", postgresql.compressMode)
      self.assertEqual(True, postgresql.all)
      self.assertEqual([], postgresql.databases)

   def testConstructor_004(self):
      """
      Test constructor with all values filled in, with valid values, with one database.
      """
      postgresql = PostgresqlConfig("user", "gzip", True, [ "one", ])
      self.assertEqual("user", postgresql.user)
      self.assertEqual("gzip", postgresql.compressMode)
      self.assertEqual(True, postgresql.all)
      self.assertEqual([ "one", ], postgresql.databases)

   def testConstructor_005(self):
      """
      Test constructor with all values filled in, with valid values, with multiple databases.
      """
      postgresql = PostgresqlConfig("user", "bzip2", True, [ "one", "two", ])
      self.assertEqual("user", postgresql.user)
      self.assertEqual("bzip2", postgresql.compressMode)
      self.assertEqual(True, postgresql.all)
      self.assertEqual([ "one", "two", ], postgresql.databases)

   def testConstructor_006(self):
      """
      Test assignment of user attribute, None value.
      """
      postgresql = PostgresqlConfig(user="user")
      self.assertEqual("user", postgresql.user)
      postgresql.user = None
      self.assertEqual(None, postgresql.user)

   def testConstructor_007(self):
      """
      Test assignment of user attribute, valid value.
      """
      postgresql = PostgresqlConfig()
      self.assertEqual(None, postgresql.user)
      postgresql.user = "user"
      self.assertEqual("user", postgresql.user)

   def testConstructor_008(self):
      """
      Test assignment of user attribute, invalid value (empty).
      """
      postgresql = PostgresqlConfig()
      self.assertEqual(None, postgresql.user)
      self.failUnlessAssignRaises(ValueError, postgresql, "user", "")
      self.assertEqual(None, postgresql.user)

   def testConstructor_009(self):
      """
      Test assignment of compressMode attribute, None value.
      """
      postgresql = PostgresqlConfig(compressMode="none")
      self.assertEqual("none", postgresql.compressMode)
      postgresql.compressMode = None
      self.assertEqual(None, postgresql.compressMode)

   def testConstructor_010(self):
      """
      Test assignment of compressMode attribute, valid value.
      """
      postgresql = PostgresqlConfig()
      self.assertEqual(None, postgresql.compressMode)
      postgresql.compressMode = "none"
      self.assertEqual("none", postgresql.compressMode)
      postgresql.compressMode = "gzip"
      self.assertEqual("gzip", postgresql.compressMode)
      postgresql.compressMode = "bzip2"
      self.assertEqual("bzip2", postgresql.compressMode)

   def testConstructor_011(self):
      """
      Test assignment of compressMode attribute, invalid value (empty).
      """
      postgresql = PostgresqlConfig()
      self.assertEqual(None, postgresql.compressMode)
      self.failUnlessAssignRaises(ValueError, postgresql, "compressMode", "")
      self.assertEqual(None, postgresql.compressMode)

   def testConstructor_012(self):
      """
      Test assignment of compressMode attribute, invalid value (not in list).
      """
      postgresql = PostgresqlConfig()
      self.assertEqual(None, postgresql.compressMode)
      self.failUnlessAssignRaises(ValueError, postgresql, "compressMode", "bogus")
      self.assertEqual(None, postgresql.compressMode)

   def testConstructor_013(self):
      """
      Test assignment of all attribute, None value.
      """
      postgresql = PostgresqlConfig(all=True)
      self.assertEqual(True, postgresql.all)
      postgresql.all = None
      self.assertEqual(False, postgresql.all)

   def testConstructor_014(self):
      """
      Test assignment of all attribute, valid value (real boolean).
      """
      postgresql = PostgresqlConfig()
      self.assertEqual(False, postgresql.all)
      postgresql.all = True
      self.assertEqual(True, postgresql.all)
      postgresql.all = False
      self.assertEqual(False, postgresql.all)

   #pylint: disable=R0204
   def testConstructor_015(self):
      """
      Test assignment of all attribute, valid value (expression).
      """
      postgresql = PostgresqlConfig()
      self.assertEqual(False, postgresql.all)
      postgresql.all = 0
      self.assertEqual(False, postgresql.all)
      postgresql.all = []
      self.assertEqual(False, postgresql.all)
      postgresql.all = None
      self.assertEqual(False, postgresql.all)
      postgresql.all = ['a']
      self.assertEqual(True, postgresql.all)
      postgresql.all = 3
      self.assertEqual(True, postgresql.all)

   def testConstructor_016(self):
      """
      Test assignment of databases attribute, None value.
      """
      postgresql = PostgresqlConfig(databases=[])
      self.assertEqual([], postgresql.databases)
      postgresql.databases = None
      self.assertEqual(None, postgresql.databases)

   def testConstructor_017(self):
      """
      Test assignment of databases attribute, [] value.
      """
      postgresql = PostgresqlConfig()
      self.assertEqual(None, postgresql.databases)
      postgresql.databases = []
      self.assertEqual([], postgresql.databases)

   def testConstructor_018(self):
      """
      Test assignment of databases attribute, single valid entry.
      """
      postgresql = PostgresqlConfig()
      self.assertEqual(None, postgresql.databases)
      postgresql.databases = ["/whatever", ]
      self.assertEqual(["/whatever", ], postgresql.databases)
      postgresql.databases.append("/stuff")
      self.assertEqual(["/whatever", "/stuff", ], postgresql.databases)

   def testConstructor_019(self):
      """
      Test assignment of databases attribute, multiple valid entries.
      """
      postgresql = PostgresqlConfig()
      self.assertEqual(None, postgresql.databases)
      postgresql.databases = ["/whatever", "/stuff", ]
      self.assertEqual(["/whatever", "/stuff", ], postgresql.databases)
      postgresql.databases.append("/etc/X11")
      self.assertEqual(["/whatever", "/stuff", "/etc/X11", ], postgresql.databases)

   def testConstructor_020(self):
      """
      Test assignment of databases attribute, single invalid entry (empty).
      """
      postgresql = PostgresqlConfig()
      self.assertEqual(None, postgresql.databases)
      self.failUnlessAssignRaises(ValueError, postgresql, "databases", ["", ])
      self.assertEqual(None, postgresql.databases)

   def testConstructor_021(self):
      """
      Test assignment of databases attribute, mixed valid and invalid entries.
      """
      postgresql = PostgresqlConfig()
      self.assertEqual(None, postgresql.databases)
      self.failUnlessAssignRaises(ValueError, postgresql, "databases", ["good", "", "alsogood", ])
      self.assertEqual(None, postgresql.databases)

   ############################
   # Test comparison operators
   ############################

   def testComparison_001(self):
      """
      Test comparison of two identical objects, all attributes None.
""" postgresql1 = PostgresqlConfig() postgresql2 = PostgresqlConfig() self.assertEqual(postgresql1, postgresql2) self.assertTrue(postgresql1 == postgresql2) self.assertTrue(not postgresql1 < postgresql2) self.assertTrue(postgresql1 <= postgresql2) self.assertTrue(not postgresql1 > postgresql2) self.assertTrue(postgresql1 >= postgresql2) self.assertTrue(not postgresql1 != postgresql2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None, list None. """ postgresql1 = PostgresqlConfig("user", "gzip", True, None) postgresql2 = PostgresqlConfig("user", "gzip", True, None) self.assertEqual(postgresql1, postgresql2) self.assertTrue(postgresql1 == postgresql2) self.assertTrue(not postgresql1 < postgresql2) self.assertTrue(postgresql1 <= postgresql2) self.assertTrue(not postgresql1 > postgresql2) self.assertTrue(postgresql1 >= postgresql2) self.assertTrue(not postgresql1 != postgresql2) def testComparison_003(self): """ Test comparison of two identical objects, all attributes non-None, list empty. """ postgresql1 = PostgresqlConfig("user", "bzip2", True, []) postgresql2 = PostgresqlConfig("user", "bzip2", True, []) self.assertEqual(postgresql1, postgresql2) self.assertTrue(postgresql1 == postgresql2) self.assertTrue(not postgresql1 < postgresql2) self.assertTrue(postgresql1 <= postgresql2) self.assertTrue(not postgresql1 > postgresql2) self.assertTrue(postgresql1 >= postgresql2) self.assertTrue(not postgresql1 != postgresql2) def testComparison_004(self): """ Test comparison of two identical objects, all attributes non-None, list non-empty. 
""" postgresql1 = PostgresqlConfig("user", "none", True, [ "whatever", ]) postgresql2 = PostgresqlConfig("user", "none", True, [ "whatever", ]) self.assertEqual(postgresql1, postgresql2) self.assertTrue(postgresql1 == postgresql2) self.assertTrue(not postgresql1 < postgresql2) self.assertTrue(postgresql1 <= postgresql2) self.assertTrue(not postgresql1 > postgresql2) self.assertTrue(postgresql1 >= postgresql2) self.assertTrue(not postgresql1 != postgresql2) def testComparison_005(self): """ Test comparison of two differing objects, user differs (one None). """ postgresql1 = PostgresqlConfig() postgresql2 = PostgresqlConfig(user="user") self.assertNotEqual(postgresql1, postgresql2) self.assertTrue(not postgresql1 == postgresql2) self.assertTrue(postgresql1 < postgresql2) self.assertTrue(postgresql1 <= postgresql2) self.assertTrue(not postgresql1 > postgresql2) self.assertTrue(not postgresql1 >= postgresql2) self.assertTrue(postgresql1 != postgresql2) def testComparison_006(self): """ Test comparison of two differing objects, user differs. """ postgresql1 = PostgresqlConfig("user1", "gzip", True, [ "whatever", ]) postgresql2 = PostgresqlConfig("user2", "gzip", True, [ "whatever", ]) self.assertNotEqual(postgresql1, postgresql2) self.assertTrue(not postgresql1 == postgresql2) self.assertTrue(postgresql1 < postgresql2) self.assertTrue(postgresql1 <= postgresql2) self.assertTrue(not postgresql1 > postgresql2) self.assertTrue(not postgresql1 >= postgresql2) self.assertTrue(postgresql1 != postgresql2) def testComparison_007(self): """ Test comparison of two differing objects, compressMode differs (one None). 
""" postgresql1 = PostgresqlConfig() postgresql2 = PostgresqlConfig(compressMode="gzip") self.assertNotEqual(postgresql1, postgresql2) self.assertTrue(not postgresql1 == postgresql2) self.assertTrue(postgresql1 < postgresql2) self.assertTrue(postgresql1 <= postgresql2) self.assertTrue(not postgresql1 > postgresql2) self.assertTrue(not postgresql1 >= postgresql2) self.assertTrue(postgresql1 != postgresql2) def testComparison_008(self): """ Test comparison of two differing objects, compressMode differs. """ postgresql1 = PostgresqlConfig("user", "bzip2", True, [ "whatever", ]) postgresql2 = PostgresqlConfig("user", "gzip", True, [ "whatever", ]) self.assertNotEqual(postgresql1, postgresql2) self.assertTrue(not postgresql1 == postgresql2) self.assertTrue(postgresql1 < postgresql2) self.assertTrue(postgresql1 <= postgresql2) self.assertTrue(not postgresql1 > postgresql2) self.assertTrue(not postgresql1 >= postgresql2) self.assertTrue(postgresql1 != postgresql2) def testComparison_009(self): """ Test comparison of two differing objects, all differs (one None). """ postgresql1 = PostgresqlConfig() postgresql2 = PostgresqlConfig(all=True) self.assertNotEqual(postgresql1, postgresql2) self.assertTrue(not postgresql1 == postgresql2) self.assertTrue(postgresql1 < postgresql2) self.assertTrue(postgresql1 <= postgresql2) self.assertTrue(not postgresql1 > postgresql2) self.assertTrue(not postgresql1 >= postgresql2) self.assertTrue(postgresql1 != postgresql2) def testComparison_010(self): """ Test comparison of two differing objects, all differs. 
""" postgresql1 = PostgresqlConfig("user", "gzip", False, [ "whatever", ]) postgresql2 = PostgresqlConfig("user", "gzip", True, [ "whatever", ]) self.assertNotEqual(postgresql1, postgresql2) self.assertTrue(not postgresql1 == postgresql2) self.assertTrue(postgresql1 < postgresql2) self.assertTrue(postgresql1 <= postgresql2) self.assertTrue(not postgresql1 > postgresql2) self.assertTrue(not postgresql1 >= postgresql2) self.assertTrue(postgresql1 != postgresql2) def testComparison_011(self): """ Test comparison of two differing objects, databases differs (one None, one empty). """ postgresql1 = PostgresqlConfig() postgresql2 = PostgresqlConfig(databases=[]) self.assertNotEqual(postgresql1, postgresql2) self.assertTrue(not postgresql1 == postgresql2) self.assertTrue(postgresql1 < postgresql2) self.assertTrue(postgresql1 <= postgresql2) self.assertTrue(not postgresql1 > postgresql2) self.assertTrue(not postgresql1 >= postgresql2) self.assertTrue(postgresql1 != postgresql2) def testComparison_012(self): """ Test comparison of two differing objects, databases differs (one None, one not empty). """ postgresql1 = PostgresqlConfig() postgresql2 = PostgresqlConfig(databases=["whatever", ]) self.assertNotEqual(postgresql1, postgresql2) self.assertTrue(not postgresql1 == postgresql2) self.assertTrue(postgresql1 < postgresql2) self.assertTrue(postgresql1 <= postgresql2) self.assertTrue(not postgresql1 > postgresql2) self.assertTrue(not postgresql1 >= postgresql2) self.assertTrue(postgresql1 != postgresql2) def testComparison_013(self): """ Test comparison of two differing objects, databases differs (one empty, one not empty). 
""" postgresql1 = PostgresqlConfig("user", "gzip", True, [ ]) postgresql2 = PostgresqlConfig("user", "gzip", True, [ "whatever", ]) self.assertNotEqual(postgresql1, postgresql2) self.assertTrue(not postgresql1 == postgresql2) self.assertTrue(postgresql1 < postgresql2) self.assertTrue(postgresql1 <= postgresql2) self.assertTrue(not postgresql1 > postgresql2) self.assertTrue(not postgresql1 >= postgresql2) self.assertTrue(postgresql1 != postgresql2) def testComparison_014(self): """ Test comparison of two differing objects, databases differs (both not empty). """ postgresql1 = PostgresqlConfig("user", "gzip", True, [ "whatever", ]) postgresql2 = PostgresqlConfig("user", "gzip", True, [ "whatever", "bogus", ]) self.assertNotEqual(postgresql1, postgresql2) self.assertTrue(not postgresql1 == postgresql2) self.assertTrue(not postgresql1 < postgresql2) # note: different than standard due to unsorted list self.assertTrue(not postgresql1 <= postgresql2) # note: different than standard due to unsorted list self.assertTrue(postgresql1 > postgresql2) # note: different than standard due to unsorted list self.assertTrue(postgresql1 >= postgresql2) # note: different than standard due to unsorted list self.assertTrue(postgresql1 != postgresql2) ######################## # TestLocalConfig class ######################## class TestLocalConfig(unittest.TestCase): """Tests for the LocalConfig class.""" ################ # Setup methods ################ def setUp(self): try: self.resources = findResources(RESOURCES, DATA_DIRS) except Exception as e: self.fail(e) def tearDown(self): pass ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) def validateAddConfig(self, origConfig): """ Validates that document dumped from C{LocalConfig.addConfig} results in identical object. 
We dump a document containing just the postgresql configuration, and then make sure that if we push that document back into the C{LocalConfig} object, that the resulting object matches the original. The C{self.failUnlessEqual} method is used for the validation, so if the method call returns normally, everything is OK. @param origConfig: Original configuration. """ (xmlDom, parentNode) = createOutputDom() origConfig.addConfig(xmlDom, parentNode) xmlData = serializeDom(xmlDom) newConfig = LocalConfig(xmlData=xmlData, validate=False) self.assertEqual(origConfig, newConfig) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = LocalConfig() obj.__repr__() obj.__str__() ##################################################### # Test basic constructor and attribute functionality ##################################################### def testConstructor_001(self): """ Test empty constructor, validate=False. """ config = LocalConfig(validate=False) self.assertEqual(None, config.postgresql) def testConstructor_002(self): """ Test empty constructor, validate=True. """ config = LocalConfig(validate=True) self.assertEqual(None, config.postgresql) def testConstructor_003(self): """ Test with empty config document as both data and file, validate=False. """ path = self.resources["postgresql.conf.1"] with open(path) as f: contents = f.read() self.assertRaises(ValueError, LocalConfig, xmlData=contents, xmlPath=path, validate=False) def testConstructor_004(self): """ Test assignment of postgresql attribute, None value. """ config = LocalConfig() config.postgresql = None self.assertEqual(None, config.postgresql) def testConstructor_005(self): """ Test assignment of postgresql attribute, valid value. 
""" config = LocalConfig() config.postgresql = PostgresqlConfig() self.assertEqual(PostgresqlConfig(), config.postgresql) def testConstructor_006(self): """ Test assignment of postgresql attribute, invalid value (not PostgresqlConfig). """ config = LocalConfig() self.failUnlessAssignRaises(ValueError, config, "postgresql", "STRING!") ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ config1 = LocalConfig() config2 = LocalConfig() self.assertEqual(config1, config2) self.assertTrue(config1 == config2) self.assertTrue(not config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(config1 >= config2) self.assertTrue(not config1 != config2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ config1 = LocalConfig() config1.postgresql = PostgresqlConfig() config2 = LocalConfig() config2.postgresql = PostgresqlConfig() self.assertEqual(config1, config2) self.assertTrue(config1 == config2) self.assertTrue(not config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(config1 >= config2) self.assertTrue(not config1 != config2) def testComparison_003(self): """ Test comparison of two differing objects, postgresql differs (one None). """ config1 = LocalConfig() config2 = LocalConfig() config2.postgresql = PostgresqlConfig() self.assertNotEqual(config1, config2) self.assertTrue(not config1 == config2) self.assertTrue(config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(not config1 >= config2) self.assertTrue(config1 != config2) def testComparison_004(self): """ Test comparison of two differing objects, postgresql differs. 
""" config1 = LocalConfig() config1.postgresql = PostgresqlConfig(user="one") config2 = LocalConfig() config2.postgresql = PostgresqlConfig(user="two") self.assertNotEqual(config1, config2) self.assertTrue(not config1 == config2) self.assertTrue(config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(not config1 >= config2) self.assertTrue(config1 != config2) ###################### # Test validate logic ###################### def testValidate_001(self): """ Test validate on a None postgresql section. """ config = LocalConfig() config.postgresql = None self.assertRaises(ValueError, config.validate) def testValidate_002(self): """ Test validate on an empty postgresql section. """ config = LocalConfig() config.postgresql = PostgresqlConfig() self.assertRaises(ValueError, config.validate) def testValidate_003(self): """ Test validate on a non-empty postgresql section, all=True, databases=None. """ config = LocalConfig() config.postgresql = PostgresqlConfig("user", "gzip", True, None) config.validate() def testValidate_004(self): """ Test validate on a non-empty postgresql section, all=True, empty databases. """ config = LocalConfig() config.postgresql = PostgresqlConfig("user", "none", True, []) config.validate() def testValidate_005(self): """ Test validate on a non-empty postgresql section, all=True, non-empty databases. """ config = LocalConfig() config.postgresql = PostgresqlConfig("user", "bzip2", True, ["whatever", ]) self.assertRaises(ValueError, config.validate) def testValidate_006(self): """ Test validate on a non-empty postgresql section, all=False, databases=None. """ config = LocalConfig() config.postgresql = PostgresqlConfig("user", "gzip", False, None) self.assertRaises(ValueError, config.validate) def testValidate_007(self): """ Test validate on a non-empty postgresql section, all=False, empty databases. 
""" config = LocalConfig() config.postgresql = PostgresqlConfig("user", "bzip2", False, []) self.assertRaises(ValueError, config.validate) def testValidate_008(self): """ Test validate on a non-empty postgresql section, all=False, non-empty databases. """ config = LocalConfig() config.postgresql = PostgresqlConfig("user", "gzip", False, ["whatever", ]) config.validate() def testValidate_009(self): """ Test validate on a non-empty postgresql section, with user=None. """ config = LocalConfig() config.postgresql = PostgresqlConfig(None, "gzip", True, None) config.validate() ############################ # Test parsing of documents ############################ def testParse_001(self): """ Parse empty config document. """ path = self.resources["postgresql.conf.1"] with open(path) as f: contents = f.read() self.assertRaises(ValueError, LocalConfig, xmlPath=path, validate=True) self.assertRaises(ValueError, LocalConfig, xmlData=contents, validate=True) config = LocalConfig(xmlPath=path, validate=False) self.assertEqual(None, config.postgresql) config = LocalConfig(xmlData=contents, validate=False) self.assertEqual(None, config.postgresql) def testParse_003(self): """ Parse config document containing only a postgresql section, no databases, all=True. 
""" path = self.resources["postgresql.conf.2"] with open(path) as f: contents = f.read() config = LocalConfig(xmlPath=path, validate=False) self.assertNotEqual(None, config.postgresql) self.assertEqual("user", config.postgresql.user) self.assertEqual("none", config.postgresql.compressMode) self.assertEqual(True, config.postgresql.all) self.assertEqual(None, config.postgresql.databases) config = LocalConfig(xmlData=contents, validate=False) self.assertEqual("user", config.postgresql.user) self.assertEqual("none", config.postgresql.compressMode) self.assertEqual(True, config.postgresql.all) self.assertEqual(None, config.postgresql.databases) def testParse_004(self): """ Parse config document containing only a postgresql section, single database, all=False. """ path = self.resources["postgresql.conf.3"] with open(path) as f: contents = f.read() config = LocalConfig(xmlPath=path, validate=False) self.assertNotEqual(None, config.postgresql) self.assertEqual("user", config.postgresql.user) self.assertEqual("gzip", config.postgresql.compressMode) self.assertEqual(False, config.postgresql.all) self.assertEqual(["database", ], config.postgresql.databases) config = LocalConfig(xmlData=contents, validate=False) self.assertNotEqual(None, config.postgresql) self.assertEqual("user", config.postgresql.user) self.assertEqual("gzip", config.postgresql.compressMode) self.assertEqual(False, config.postgresql.all) self.assertEqual(["database", ], config.postgresql.databases) def testParse_005(self): """ Parse config document containing only a postgresql section, multiple databases, all=False. 
""" path = self.resources["postgresql.conf.4"] with open(path) as f: contents = f.read() config = LocalConfig(xmlPath=path, validate=False) self.assertNotEqual(None, config.postgresql) self.assertEqual("user", config.postgresql.user) self.assertEqual("bzip2", config.postgresql.compressMode) self.assertEqual(False, config.postgresql.all) self.assertEqual(["database1", "database2", ], config.postgresql.databases) config = LocalConfig(xmlData=contents, validate=False) self.assertNotEqual(None, config.postgresql) self.assertEqual("user", config.postgresql.user) self.assertEqual("bzip2", config.postgresql.compressMode) self.assertEqual(False, config.postgresql.all) self.assertEqual(["database1", "database2", ], config.postgresql.databases) def testParse_006(self): """ Parse config document containing only a postgresql section, no user, multiple databases, all=False. """ path = self.resources["postgresql.conf.5"] with open(path) as f: contents = f.read() config = LocalConfig(xmlPath=path, validate=False) self.assertNotEqual(None, config.postgresql) self.assertEqual(None, config.postgresql.user) self.assertEqual("bzip2", config.postgresql.compressMode) self.assertEqual(False, config.postgresql.all) self.assertEqual(["database1", "database2", ], config.postgresql.databases) config = LocalConfig(xmlData=contents, validate=False) self.assertNotEqual(None, config.postgresql) self.assertEqual(None, config.postgresql.user) self.assertEqual("bzip2", config.postgresql.compressMode) self.assertEqual(False, config.postgresql.all) self.assertEqual(["database1", "database2", ], config.postgresql.databases) ################### # Test addConfig() ################### def testAddConfig_001(self): """ Test with empty config document """ config = LocalConfig() self.validateAddConfig(config) def testAddConfig_003(self): """ Test with no databases, all other values filled in, all=True. 
""" config = LocalConfig() config.postgresql = PostgresqlConfig("user", "none", True, None) self.validateAddConfig(config) def testAddConfig_004(self): """ Test with no databases, all other values filled in, all=False. """ config = LocalConfig() config.postgresql = PostgresqlConfig("user", "gzip", False, None) self.validateAddConfig(config) def testAddConfig_005(self): """ Test with single database, all other values filled in, all=True. """ config = LocalConfig() config.postgresql = PostgresqlConfig("user", "bzip2", True, [ "database", ]) self.validateAddConfig(config) def testAddConfig_006(self): """ Test with single database, all other values filled in, all=False. """ config = LocalConfig() config.postgresql = PostgresqlConfig("user", "none", False, [ "database", ]) self.validateAddConfig(config) def testAddConfig_007(self): """ Test with multiple databases, all other values filled in, all=True. """ config = LocalConfig() config.postgresql = PostgresqlConfig("user", "bzip2", True, [ "database1", "database2", ]) self.validateAddConfig(config) def testAddConfig_008(self): """ Test with multiple databases, all other values filled in, all=False. """ config = LocalConfig() config.postgresql = PostgresqlConfig("user", "gzip", True, [ "database1", "database2", ]) self.validateAddConfig(config) def testAddConfig_009(self): """ Test with multiple databases, user=None but all other values filled in, all=False. 
      """
      config = LocalConfig()
      config.postgresql = PostgresqlConfig(None, "gzip", True, [ "database1", "database2", ])
      self.validateAddConfig(config)


#######################################################################
# Suite definition
#######################################################################

def suite():
   """Returns a suite containing all the test cases in this module."""
   tests = [ ]
   tests.append(unittest.makeSuite(TestPostgresqlConfig, 'test'))
   tests.append(unittest.makeSuite(TestLocalConfig, 'test'))
   return unittest.TestSuite(tests)


CedarBackup3-3.1.6/testcase/splittests.py

# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Copyright (c) 2007-2008,2010,2015 Kenneth J. Pronovici.
# All rights reserved.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Language : Python 3 (>= 3.4)
# Project  : Cedar Backup, release 3
# Purpose  : Tests split extension functionality.
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Unit tests for CedarBackup3/extend/split.py.

Code Coverage
=============

   This module contains individual tests for the public classes implemented
   in extend/split.py.  There are also tests for some of the private
   functions.

Naming Conventions
==================

   I prefer to avoid large unit tests which validate more than one piece of
   functionality, and I prefer to avoid using overly descriptive (read: long)
   test names, as well.  Instead, I use lots of very small tests that each
   validate one specific thing.  These small tests are then named with an
   index number, yielding something like C{testAddDir_001} or
   C{testValidate_010}.  Each method has a docstring describing what it's
   supposed to accomplish.  I feel that this makes it easier to judge how
   important a given failure is, and also makes it somewhat easier to
   diagnose and fix individual problems.

Testing XML Extraction
======================

   It's difficult to validate that generated XML is exactly "right",
   especially when dealing with pretty-printed XML.  We can't just provide a
   constant string and say "the result must match this".  Instead, what we
   do is extract a node, build some XML from it, and then feed that XML back
   into another object's constructor.  If that parse process succeeds and
   the old object is equal to the new object, we assume that the extract was
   successful.

   It would arguably be better if we could do a completely independent
   check, but implementing that check would be equivalent to re-implementing
   all of the existing functionality that we're validating here!  After all,
   the most important thing is that data can move seamlessly from object to
   XML document and back to object.

Full vs. Reduced Tests
======================

   Some Cedar Backup regression tests require a specialized environment in
   order to run successfully.  This environment won't necessarily be
   available on every build system out there (for instance, on a Debian
   autobuilder).  Because of this, the default behavior is to run a "reduced
   feature set" test suite that has no surprising system, kernel or network
   requirements.  If you want to run all of the tests, set SPLITTESTS_FULL
   to "Y" in the environment.

   In this module, the primary dependency is that the split utility must be
   available.  There is also one test that wants at least one non-English
   locale (fr_FR, ru_RU or pt_PT) available to check localization issues
   (but that test will just automatically be skipped if such a locale is not
   available).

@author Kenneth J. Pronovici
"""

########################################################################
# Import modules and do runtime validations
########################################################################

# System modules
import unittest
import os
import tempfile

# Cedar Backup modules
from CedarBackup3.util import UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES
from CedarBackup3.testutil import findResources, buildPath, removedir, extractTar
from CedarBackup3.testutil import failUnlessAssignRaises, availableLocales
from CedarBackup3.xmlutil import createOutputDom, serializeDom
from CedarBackup3.extend.split import LocalConfig, SplitConfig, ByteQuantity
from CedarBackup3.extend.split import _splitFile, _splitDailyDir


#######################################################################
# Module-wide configuration and constants
#######################################################################

DATA_DIRS = [ "./data", "./testcase/data", ]
RESOURCES = [ "split.conf.1", "split.conf.2", "split.conf.3", "split.conf.4",
              "split.conf.5", "tree21.tar.gz", ]

INVALID_PATH = "bogus"  # This path name should never exist
#######################################################################
# Utility functions
#######################################################################

def runAllTests():
   """Returns true/false depending on whether the full test suite should be run."""
   if "SPLITTESTS_FULL" in os.environ:
      return os.environ["SPLITTESTS_FULL"] == "Y"
   else:
      return False


#######################################################################
# Test Case Classes
#######################################################################

##########################
# TestSplitConfig class
##########################

class TestSplitConfig(unittest.TestCase):

   """Tests for the SplitConfig class."""

   ##################
   # Utility methods
   ##################

   def failUnlessAssignRaises(self, exception, obj, prop, value):
      """Equivalent of L{failUnlessRaises}, but used for property assignments instead."""
      failUnlessAssignRaises(self, exception, obj, prop, value)

   ############################
   # Test __repr__ and __str__
   ############################

   def testStringFuncs_001(self):
      """
      Just make sure that the string functions don't have errors (i.e. bad variable names).
      """
      obj = SplitConfig()
      obj.__repr__()
      obj.__str__()

   ##################################
   # Test constructor and attributes
   ##################################

   def testConstructor_001(self):
      """
      Test constructor with no values filled in.
      """
      split = SplitConfig()
      self.assertEqual(None, split.sizeLimit)
      self.assertEqual(None, split.splitSize)

   def testConstructor_002(self):
      """
      Test constructor with all values filled in, with valid values.
      """
      split = SplitConfig(ByteQuantity("1.0", UNIT_BYTES), ByteQuantity("2.0", UNIT_KBYTES))
      self.assertEqual(ByteQuantity("1.0", UNIT_BYTES), split.sizeLimit)
      self.assertEqual(ByteQuantity("2.0", UNIT_KBYTES), split.splitSize)

   def testConstructor_003(self):
      """
      Test assignment of sizeLimit attribute, None value.
""" split = SplitConfig(sizeLimit=ByteQuantity("1.0", UNIT_BYTES)) self.assertEqual(ByteQuantity("1.0", UNIT_BYTES), split.sizeLimit) split.sizeLimit = None self.assertEqual(None, split.sizeLimit) def testConstructor_004(self): """ Test assignment of sizeLimit attribute, valid value. """ split = SplitConfig() self.assertEqual(None, split.sizeLimit) split.sizeLimit = ByteQuantity("1.0", UNIT_BYTES) self.assertEqual(ByteQuantity("1.0", UNIT_BYTES), split.sizeLimit) def testConstructor_005(self): """ Test assignment of sizeLimit attribute, invalid value (empty). """ split = SplitConfig() self.assertEqual(None, split.sizeLimit) self.failUnlessAssignRaises(ValueError, split, "sizeLimit", "") self.assertEqual(None, split.sizeLimit) def testConstructor_006(self): """ Test assignment of sizeLimit attribute, invalid value (not a ByteQuantity). """ split = SplitConfig() self.assertEqual(None, split.sizeLimit) self.failUnlessAssignRaises(ValueError, split, "sizeLimit", "1.0 GB") self.assertEqual(None, split.sizeLimit) def testConstructor_007(self): """ Test assignment of splitSize attribute, None value. """ split = SplitConfig(splitSize=ByteQuantity("1.00", UNIT_KBYTES)) self.assertEqual(ByteQuantity("1.00", UNIT_KBYTES), split.splitSize) split.splitSize = None self.assertEqual(None, split.splitSize) def testConstructor_008(self): """ Test assignment of splitSize attribute, valid value. """ split = SplitConfig() self.assertEqual(None, split.splitSize) split.splitSize = ByteQuantity("1.00", UNIT_KBYTES) self.assertEqual(ByteQuantity("1.00", UNIT_KBYTES), split.splitSize) def testConstructor_009(self): """ Test assignment of splitSize attribute, invalid value (empty). """ split = SplitConfig() self.assertEqual(None, split.splitSize) self.failUnlessAssignRaises(ValueError, split, "splitSize", "") self.assertEqual(None, split.splitSize) def testConstructor_010(self): """ Test assignment of splitSize attribute, invalid value (not a ByteQuantity). 
""" split = SplitConfig() self.assertEqual(None, split.splitSize) self.failUnlessAssignRaises(ValueError, split, "splitSize", 12) self.assertEqual(None, split.splitSize) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ split1 = SplitConfig() split2 = SplitConfig() self.assertEqual(split1, split2) self.assertTrue(split1 == split2) self.assertTrue(not split1 < split2) self.assertTrue(split1 <= split2) self.assertTrue(not split1 > split2) self.assertTrue(split1 >= split2) self.assertTrue(not split1 != split2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ split1 = SplitConfig(ByteQuantity("99", UNIT_KBYTES), ByteQuantity("1.00", UNIT_MBYTES)) split2 = SplitConfig(ByteQuantity("99", UNIT_KBYTES), ByteQuantity("1.00", UNIT_MBYTES)) self.assertEqual(split1, split2) self.assertTrue(split1 == split2) self.assertTrue(not split1 < split2) self.assertTrue(split1 <= split2) self.assertTrue(not split1 > split2) self.assertTrue(split1 >= split2) self.assertTrue(not split1 != split2) def testComparison_003(self): """ Test comparison of two differing objects, sizeLimit differs (one None). """ split1 = SplitConfig() split2 = SplitConfig(sizeLimit=ByteQuantity("99", UNIT_KBYTES)) self.assertNotEqual(split1, split2) self.assertTrue(not split1 == split2) self.assertTrue(split1 < split2) self.assertTrue(split1 <= split2) self.assertTrue(not split1 > split2) self.assertTrue(not split1 >= split2) self.assertTrue(split1 != split2) def testComparison_004(self): """ Test comparison of two differing objects, sizeLimit differs. 
""" split1 = SplitConfig(ByteQuantity("99", UNIT_BYTES), ByteQuantity("1.00", UNIT_MBYTES)) split2 = SplitConfig(ByteQuantity("99", UNIT_KBYTES), ByteQuantity("1.00", UNIT_MBYTES)) self.assertNotEqual(split1, split2) self.assertTrue(not split1 == split2) self.assertTrue(split1 < split2) self.assertTrue(split1 <= split2) self.assertTrue(not split1 > split2) self.assertTrue(not split1 >= split2) self.assertTrue(split1 != split2) def testComparison_005(self): """ Test comparison of two differing objects, splitSize differs (one None). """ split1 = SplitConfig() split2 = SplitConfig(splitSize=ByteQuantity("1.00", UNIT_MBYTES)) self.assertNotEqual(split1, split2) self.assertTrue(not split1 == split2) self.assertTrue(split1 < split2) self.assertTrue(split1 <= split2) self.assertTrue(not split1 > split2) self.assertTrue(not split1 >= split2) self.assertTrue(split1 != split2) def testComparison_006(self): """ Test comparison of two differing objects, splitSize differs. """ split1 = SplitConfig(ByteQuantity("99", UNIT_KBYTES), ByteQuantity("0.5", UNIT_MBYTES)) split2 = SplitConfig(ByteQuantity("99", UNIT_KBYTES), ByteQuantity("1.00", UNIT_MBYTES)) self.assertNotEqual(split1, split2) self.assertTrue(not split1 == split2) self.assertTrue(split1 < split2) self.assertTrue(split1 <= split2) self.assertTrue(not split1 > split2) self.assertTrue(not split1 >= split2) self.assertTrue(split1 != split2) ######################## # TestLocalConfig class ######################## class TestLocalConfig(unittest.TestCase): """Tests for the LocalConfig class.""" ################ # Setup methods ################ def setUp(self): try: self.resources = findResources(RESOURCES, DATA_DIRS) except Exception as e: self.fail(e) def tearDown(self): pass ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, 
prop, value) def validateAddConfig(self, origConfig): """ Validates that document dumped from C{LocalConfig.addConfig} results in identical object. We dump a document containing just the split configuration, and then make sure that if we push that document back into the C{LocalConfig} object, that the resulting object matches the original. The C{self.failUnlessEqual} method is used for the validation, so if the method call returns normally, everything is OK. @param origConfig: Original configuration. """ (xmlDom, parentNode) = createOutputDom() origConfig.addConfig(xmlDom, parentNode) xmlData = serializeDom(xmlDom) newConfig = LocalConfig(xmlData=xmlData, validate=False) self.assertEqual(origConfig, newConfig) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = LocalConfig() obj.__repr__() obj.__str__() ##################################################### # Test basic constructor and attribute functionality ##################################################### def testConstructor_001(self): """ Test empty constructor, validate=False. """ config = LocalConfig(validate=False) self.assertEqual(None, config.split) def testConstructor_002(self): """ Test empty constructor, validate=True. """ config = LocalConfig(validate=True) self.assertEqual(None, config.split) def testConstructor_003(self): """ Test with empty config document as both data and file, validate=False. """ path = self.resources["split.conf.1"] with open(path) as f: contents = f.read() self.assertRaises(ValueError, LocalConfig, xmlData=contents, xmlPath=path, validate=False) def testConstructor_004(self): """ Test assignment of split attribute, None value. """ config = LocalConfig() config.split = None self.assertEqual(None, config.split) def testConstructor_005(self): """ Test assignment of split attribute, valid value. 
""" config = LocalConfig() config.split = SplitConfig() self.assertEqual(SplitConfig(), config.split) def testConstructor_006(self): """ Test assignment of split attribute, invalid value (not SplitConfig). """ config = LocalConfig() self.failUnlessAssignRaises(ValueError, config, "split", "STRING!") ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ config1 = LocalConfig() config2 = LocalConfig() self.assertEqual(config1, config2) self.assertTrue(config1 == config2) self.assertTrue(not config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(config1 >= config2) self.assertTrue(not config1 != config2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ config1 = LocalConfig() config1.split = SplitConfig() config2 = LocalConfig() config2.split = SplitConfig() self.assertEqual(config1, config2) self.assertTrue(config1 == config2) self.assertTrue(not config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(config1 >= config2) self.assertTrue(not config1 != config2) def testComparison_003(self): """ Test comparison of two differing objects, split differs (one None). """ config1 = LocalConfig() config2 = LocalConfig() config2.split = SplitConfig() self.assertNotEqual(config1, config2) self.assertTrue(not config1 == config2) self.assertTrue(config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(not config1 >= config2) self.assertTrue(config1 != config2) def testComparison_004(self): """ Test comparison of two differing objects, split differs. 
""" config1 = LocalConfig() config1.split = SplitConfig(sizeLimit=ByteQuantity("0.1", UNIT_MBYTES)) config2 = LocalConfig() config2.split = SplitConfig(sizeLimit=ByteQuantity("1.00", UNIT_MBYTES)) self.assertNotEqual(config1, config2) self.assertTrue(not config1 == config2) self.assertTrue(config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(not config1 >= config2) self.assertTrue(config1 != config2) ###################### # Test validate logic ###################### def testValidate_001(self): """ Test validate on a None split section. """ config = LocalConfig() config.split = None self.assertRaises(ValueError, config.validate) def testValidate_002(self): """ Test validate on an empty split section. """ config = LocalConfig() config.split = SplitConfig() self.assertRaises(ValueError, config.validate) def testValidate_003(self): """ Test validate on a non-empty split section with no values filled in. """ config = LocalConfig() config.split = SplitConfig(None, None) self.assertRaises(ValueError, config.validate) def testValidate_004(self): """ Test validate on a non-empty split section with only one value filled in. """ config = LocalConfig() config.split = SplitConfig(ByteQuantity("1.00", UNIT_MBYTES), None) self.assertRaises(ValueError, config.validate) config.split = SplitConfig(None, ByteQuantity("1.00", UNIT_MBYTES)) self.assertRaises(ValueError, config.validate) def testValidate_005(self): """ Test validate on a non-empty split section with valid values filled in. """ config = LocalConfig() config.split = SplitConfig(ByteQuantity("1.00", UNIT_MBYTES), ByteQuantity("1.00", UNIT_MBYTES)) config.validate() ############################ # Test parsing of documents ############################ def testParse_001(self): """ Parse empty config document. 
""" path = self.resources["split.conf.1"] with open(path) as f: contents = f.read() self.assertRaises(ValueError, LocalConfig, xmlPath=path, validate=True) self.assertRaises(ValueError, LocalConfig, xmlData=contents, validate=True) config = LocalConfig(xmlPath=path, validate=False) self.assertEqual(None, config.split) config = LocalConfig(xmlData=contents, validate=False) self.assertEqual(None, config.split) def testParse_002(self): """ Parse config document with filled-in values, size in bytes. """ path = self.resources["split.conf.2"] with open(path) as f: contents = f.read() config = LocalConfig(xmlPath=path, validate=False) self.assertNotEqual(None, config.split) self.assertEqual(ByteQuantity("12345", UNIT_BYTES), config.split.sizeLimit) self.assertEqual(ByteQuantity("67890.0", UNIT_BYTES), config.split.splitSize) config = LocalConfig(xmlData=contents, validate=False) self.assertNotEqual(None, config.split) self.assertEqual(ByteQuantity("12345", UNIT_BYTES), config.split.sizeLimit) self.assertEqual(ByteQuantity("67890.0", UNIT_BYTES), config.split.splitSize) def testParse_003(self): """ Parse config document with filled-in values, size in KB. """ path = self.resources["split.conf.3"] with open(path) as f: contents = f.read() config = LocalConfig(xmlPath=path, validate=False) self.assertNotEqual(None, config.split) self.assertEqual(ByteQuantity("1.25", UNIT_KBYTES), config.split.sizeLimit) self.assertEqual(ByteQuantity("0.6", UNIT_KBYTES), config.split.splitSize) config = LocalConfig(xmlData=contents, validate=False) self.assertNotEqual(None, config.split) self.assertEqual(ByteQuantity("1.25", UNIT_KBYTES), config.split.sizeLimit) self.assertEqual(ByteQuantity("0.6", UNIT_KBYTES), config.split.splitSize) def testParse_004(self): """ Parse config document with filled-in values, size in MB. 
""" path = self.resources["split.conf.4"] with open(path) as f: contents = f.read() config = LocalConfig(xmlPath=path, validate=False) self.assertNotEqual(None, config.split) self.assertEqual(ByteQuantity("1.25", UNIT_MBYTES), config.split.sizeLimit) self.assertEqual(ByteQuantity("0.6", UNIT_MBYTES), config.split.splitSize) config = LocalConfig(xmlData=contents, validate=False) self.assertNotEqual(None, config.split) self.assertEqual(ByteQuantity("1.25", UNIT_MBYTES), config.split.sizeLimit) self.assertEqual(ByteQuantity("0.6", UNIT_MBYTES), config.split.splitSize) def testParse_005(self): """ Parse config document with filled-in values, size in GB. """ path = self.resources["split.conf.5"] with open(path) as f: contents = f.read() config = LocalConfig(xmlPath=path, validate=False) self.assertNotEqual(None, config.split) self.assertEqual(ByteQuantity("1.25", UNIT_GBYTES), config.split.sizeLimit) self.assertEqual(ByteQuantity("0.6", UNIT_GBYTES), config.split.splitSize) config = LocalConfig(xmlData=contents, validate=False) self.assertNotEqual(None, config.split) self.assertEqual(ByteQuantity("1.25", UNIT_GBYTES), config.split.sizeLimit) self.assertEqual(ByteQuantity("0.6", UNIT_GBYTES), config.split.splitSize) ################### # Test addConfig() ################### def testAddConfig_001(self): """ Test with empty config document. """ split = SplitConfig() config = LocalConfig() config.split = split self.validateAddConfig(config) def testAddConfig_002(self): """ Test with values set, byte values. """ split = SplitConfig(ByteQuantity("57521.0", UNIT_BYTES), ByteQuantity("121231", UNIT_BYTES)) config = LocalConfig() config.split = split self.validateAddConfig(config) def testAddConfig_003(self): """ Test with values set, KB values. """ split = SplitConfig(ByteQuantity("12", UNIT_KBYTES), ByteQuantity("63352", UNIT_KBYTES)) config = LocalConfig() config.split = split self.validateAddConfig(config) def testAddConfig_004(self): """ Test with values set, MB values. 
""" split = SplitConfig(ByteQuantity("12", UNIT_MBYTES), ByteQuantity("63352", UNIT_MBYTES)) config = LocalConfig() config.split = split self.validateAddConfig(config) def testAddConfig_005(self): """ Test with values set, GB values. """ split = SplitConfig(ByteQuantity("12", UNIT_GBYTES), ByteQuantity("63352", UNIT_GBYTES)) config = LocalConfig() config.split = split self.validateAddConfig(config) ###################### # TestFunctions class ###################### class TestFunctions(unittest.TestCase): """Tests for the functions in split.py.""" ################ # Setup methods ################ def setUp(self): try: self.tmpdir = tempfile.mkdtemp() self.resources = findResources(RESOURCES, DATA_DIRS) except Exception as e: self.fail(e) def tearDown(self): try: removedir(self.tmpdir) except: pass ################## # Utility methods ################## def extractTar(self, tarname): """Extracts a tarfile with a particular name.""" extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname]) def buildPath(self, components): """Builds a complete search path from a list of components.""" components.insert(0, self.tmpdir) return buildPath(components) def checkSplit(self, sourcePath, origSize, splitSize): """Checks that a file was split properly.""" wholeFiles = int(float(origSize) / float(splitSize)) leftoverBytes = int(float(origSize) % float(splitSize)) for i in range(0, wholeFiles): splitPath = "%s_%05d" % (sourcePath, i) self.assertTrue(os.path.exists(splitPath)) self.assertEqual(splitSize, os.stat(splitPath).st_size) if leftoverBytes > 0: splitPath = "%s_%05d" % (sourcePath, wholeFiles) self.assertTrue(os.path.exists(splitPath)) self.assertEqual(leftoverBytes, os.stat(splitPath).st_size) def findBadLocale(self): """ The split command localizes its output for certain locales. This breaks the parsing code in split.py. 
This method returns a list of the locales (if any) that are currently configured which could be expected to cause a failure if the localization-fixing code doesn't work. """ locales = availableLocales() if 'fr_FR' in locales: return 'fr_FR' if 'pl_PL' in locales: return 'pl_PL' if 'ru_RU' in locales: return 'ru_RU' return None #################### # Test _splitFile() #################### def testSplitFile_001(self): """ Test with a nonexistent file. """ self.extractTar("tree21") sourcePath = self.buildPath(["tree21", "2007", "01", "01", INVALID_PATH ]) self.assertFalse(os.path.exists(sourcePath)) splitSize = ByteQuantity("320", UNIT_BYTES) self.assertRaises(ValueError, _splitFile, sourcePath, splitSize, None, None, removeSource=False) def testSplitFile_002(self): """ Test with integer split size, removeSource=False. """ self.extractTar("tree21") sourcePath = self.buildPath(["tree21", "2007", "01", "01", "system1", "file001.a.b", ]) self.assertTrue(os.path.exists(sourcePath)) splitSize = ByteQuantity("320", UNIT_BYTES) _splitFile(sourcePath, splitSize, None, None, removeSource=False) self.assertTrue(os.path.exists(sourcePath)) self.checkSplit(sourcePath, 3200, 320) def testSplitFile_003(self): """ Test with floating point split size, removeSource=False. """ self.extractTar("tree21") sourcePath = self.buildPath(["tree21", "2007", "01", "01", "system1", "file001.a.b", ]) self.assertTrue(os.path.exists(sourcePath)) splitSize = ByteQuantity("320.1", UNIT_BYTES) _splitFile(sourcePath, splitSize, None, None, removeSource=False) self.assertTrue(os.path.exists(sourcePath)) self.checkSplit(sourcePath, 3200, 320) def testSplitFile_004(self): """ Test with integer split size, removeSource=True. 
""" self.extractTar("tree21") sourcePath = self.buildPath(["tree21", "2007", "01", "01", "system1", "file001.a.b", ]) self.assertTrue(os.path.exists(sourcePath)) splitSize = ByteQuantity("320", UNIT_BYTES) _splitFile(sourcePath, splitSize, None, None, removeSource=True) self.assertFalse(os.path.exists(sourcePath)) self.checkSplit(sourcePath, 3200, 320) def testSplitFile_005(self): """ Test with a local other than "C" or "en_US" set. """ locale = self.findBadLocale() if locale is not None: os.environ["LANG"] = locale os.environ["LC_ADDRESS"] = locale os.environ["LC_ALL"] = locale os.environ["LC_COLLATE"] = locale os.environ["LC_CTYPE"] = locale os.environ["LC_IDENTIFICATION"] = locale os.environ["LC_MEASUREMENT"] = locale os.environ["LC_MESSAGES"] = locale os.environ["LC_MONETARY"] = locale os.environ["LC_NAME"] = locale os.environ["LC_NUMERIC"] = locale os.environ["LC_PAPER"] = locale os.environ["LC_TELEPHONE"] = locale os.environ["LC_TIME"] = locale self.extractTar("tree21") sourcePath = self.buildPath(["tree21", "2007", "01", "01", "system1", "file001.a.b", ]) self.assertTrue(os.path.exists(sourcePath)) splitSize = ByteQuantity("320", UNIT_BYTES) _splitFile(sourcePath, splitSize, None, None, removeSource=True) self.assertFalse(os.path.exists(sourcePath)) self.checkSplit(sourcePath, 3200, 320) ########################## # Test _splitDailyDir() ########################## def testSplitDailyDir_001(self): """ Test with a nonexistent daily staging directory. """ self.extractTar("tree21") dailyDir = self.buildPath(["tree21", "2007", "01", INVALID_PATH, ]) self.assertFalse(os.path.exists(dailyDir)) sizeLimit = ByteQuantity("1.0", UNIT_MBYTES) splitSize = ByteQuantity("100000", UNIT_BYTES) self.assertRaises(ValueError, _splitDailyDir, dailyDir, sizeLimit, splitSize, None, None) def testSplitDailyDir_002(self): """ Test with 1.0 MB limit. 
""" self.extractTar("tree21") dailyDir = self.buildPath(["tree21", "2007", "01", "01", ]) self.assertTrue(os.path.exists(dailyDir) and os.path.isdir(dailyDir)) self.assertTrue(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b"))) self.assertTrue(os.path.exists(os.path.join(dailyDir, "system1", "file002"))) self.assertTrue(os.path.exists(os.path.join(dailyDir, "system1", "file003"))) self.assertTrue(os.path.exists(os.path.join(dailyDir, "system2", "file001"))) self.assertTrue(os.path.exists(os.path.join(dailyDir, "system2", "file002"))) self.assertTrue(os.path.exists(os.path.join(dailyDir, "system2", "file003"))) self.assertTrue(os.path.exists(os.path.join(dailyDir, "system3", "file001"))) self.assertTrue(os.path.exists(os.path.join(dailyDir, "system3", "file002"))) self.assertTrue(os.path.exists(os.path.join(dailyDir, "system3", "file003"))) sizeLimit = ByteQuantity("1.0", UNIT_MBYTES) splitSize = ByteQuantity("100000", UNIT_BYTES) _splitDailyDir(dailyDir, sizeLimit, splitSize, None, None) self.assertTrue(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b"))) self.assertTrue(os.path.exists(os.path.join(dailyDir, "system1", "file002"))) self.assertTrue(os.path.exists(os.path.join(dailyDir, "system1", "file003"))) self.assertTrue(os.path.exists(os.path.join(dailyDir, "system2", "file001"))) self.assertTrue(os.path.exists(os.path.join(dailyDir, "system2", "file002"))) self.assertTrue(os.path.exists(os.path.join(dailyDir, "system2", "file003"))) self.assertTrue(os.path.exists(os.path.join(dailyDir, "system3", "file001"))) self.assertTrue(os.path.exists(os.path.join(dailyDir, "system3", "file002"))) self.assertTrue(os.path.exists(os.path.join(dailyDir, "system3", "file003"))) def testSplitDailyDir_003(self): """ Test with 100,000 byte limit, chopped down to 10 KB """ self.extractTar("tree21") dailyDir = self.buildPath(["tree21", "2007", "01", "01", ]) self.assertTrue(os.path.exists(dailyDir) and os.path.isdir(dailyDir)) 
self.assertTrue(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b"))) self.assertTrue(os.path.exists(os.path.join(dailyDir, "system1", "file002"))) self.assertTrue(os.path.exists(os.path.join(dailyDir, "system1", "file003"))) self.assertTrue(os.path.exists(os.path.join(dailyDir, "system2", "file001"))) self.assertTrue(os.path.exists(os.path.join(dailyDir, "system2", "file002"))) self.assertTrue(os.path.exists(os.path.join(dailyDir, "system2", "file003"))) self.assertTrue(os.path.exists(os.path.join(dailyDir, "system3", "file001"))) self.assertTrue(os.path.exists(os.path.join(dailyDir, "system3", "file002"))) self.assertTrue(os.path.exists(os.path.join(dailyDir, "system3", "file003"))) sizeLimit = ByteQuantity("100000", UNIT_BYTES) splitSize = ByteQuantity("10", UNIT_KBYTES) _splitDailyDir(dailyDir, sizeLimit, splitSize, None, None) self.assertTrue(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b"))) self.assertTrue(os.path.exists(os.path.join(dailyDir, "system1", "file002"))) self.assertFalse(os.path.exists(os.path.join(dailyDir, "system1", "file003"))) self.assertTrue(os.path.exists(os.path.join(dailyDir, "system2", "file001"))) self.assertTrue(os.path.exists(os.path.join(dailyDir, "system2", "file002"))) self.assertTrue(os.path.exists(os.path.join(dailyDir, "system2", "file003"))) self.assertTrue(os.path.exists(os.path.join(dailyDir, "system3", "file001"))) self.assertTrue(os.path.exists(os.path.join(dailyDir, "system3", "file002"))) self.assertFalse(os.path.exists(os.path.join(dailyDir, "system3", "file003"))) self.checkSplit(os.path.join(dailyDir, "system1", "file003"), 320000, 10*1024) self.checkSplit(os.path.join(dailyDir, "system3", "file003"), 100001, 10*1024) def testSplitDailyDir_004(self): """ Test with 99,999 byte limit, chopped down to 5,000 bytes """ self.extractTar("tree21") dailyDir = self.buildPath(["tree21", "2007", "01", "01", ]) self.assertTrue(os.path.exists(dailyDir) and os.path.isdir(dailyDir)) 
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system1", "file002")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system1", "file003")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system2", "file001")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system2", "file002")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system2", "file003")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system3", "file001")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system3", "file002")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system3", "file003")))
      sizeLimit = ByteQuantity("99999", UNIT_BYTES)
      splitSize = ByteQuantity("5000", UNIT_BYTES)
      _splitDailyDir(dailyDir, sizeLimit, splitSize, None, None)
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system1", "file002")))
      self.assertFalse(os.path.exists(os.path.join(dailyDir, "system1", "file003")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system2", "file001")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system2", "file002")))
      self.assertFalse(os.path.exists(os.path.join(dailyDir, "system2", "file003")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system3", "file001")))
      self.assertFalse(os.path.exists(os.path.join(dailyDir, "system3", "file002")))
      self.assertFalse(os.path.exists(os.path.join(dailyDir, "system3", "file003")))
      self.checkSplit(os.path.join(dailyDir, "system1", "file003"), 320000, 5000)
      self.checkSplit(os.path.join(dailyDir, "system2", "file003"), 100000, 5000)
      self.checkSplit(os.path.join(dailyDir, "system3", "file002"), 100000, 5000)
      self.checkSplit(os.path.join(dailyDir, "system3", "file003"), 100001, 5000)

   def testSplitDailyDir_005(self):
      """
      Test with 10,000 byte limit (as a float), chopped down to 2500 bytes.
      """
      self.extractTar("tree21")
      dailyDir = self.buildPath(["tree21", "2007", "01", "01", ])
      self.assertTrue(os.path.exists(dailyDir) and os.path.isdir(dailyDir))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system1", "file002")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system1", "file003")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system2", "file001")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system2", "file002")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system2", "file003")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system3", "file001")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system3", "file002")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system3", "file003")))
      sizeLimit = ByteQuantity("10000.0", UNIT_BYTES)
      splitSize = ByteQuantity("2500", UNIT_BYTES)
      _splitDailyDir(dailyDir, sizeLimit, splitSize, None, None)
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b")))
      self.assertFalse(os.path.exists(os.path.join(dailyDir, "system1", "file002")))
      self.assertFalse(os.path.exists(os.path.join(dailyDir, "system1", "file003")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system2", "file001")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system2", "file002")))
      self.assertFalse(os.path.exists(os.path.join(dailyDir, "system2", "file003")))
      self.assertFalse(os.path.exists(os.path.join(dailyDir, "system3", "file001")))
      self.assertFalse(os.path.exists(os.path.join(dailyDir, "system3", "file002")))
      self.assertFalse(os.path.exists(os.path.join(dailyDir, "system3", "file003")))
      self.checkSplit(os.path.join(dailyDir, "system1", "file002"), 32000, 2500)
      self.checkSplit(os.path.join(dailyDir, "system1", "file003"), 320000, 2500)
      self.checkSplit(os.path.join(dailyDir, "system2", "file003"), 100000, 2500)
      self.checkSplit(os.path.join(dailyDir, "system3", "file001"), 99999, 2500)
      self.checkSplit(os.path.join(dailyDir, "system3", "file002"), 100000, 2500)
      self.checkSplit(os.path.join(dailyDir, "system3", "file003"), 100001, 2500)

   def testSplitDailyDir_006(self):
      """
      Test with 10,000 byte limit, chopped down to 1024 bytes.
      """
      self.extractTar("tree21")
      dailyDir = self.buildPath(["tree21", "2007", "01", "01", ])
      self.assertTrue(os.path.exists(dailyDir) and os.path.isdir(dailyDir))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system1", "file002")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system1", "file003")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system2", "file001")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system2", "file002")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system2", "file003")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system3", "file001")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system3", "file002")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system3", "file003")))
      sizeLimit = ByteQuantity("10000", UNIT_BYTES)
      splitSize = ByteQuantity("1.0", UNIT_KBYTES)
      _splitDailyDir(dailyDir, sizeLimit, splitSize, None, None)
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b")))
      self.assertFalse(os.path.exists(os.path.join(dailyDir, "system1", "file002")))
      self.assertFalse(os.path.exists(os.path.join(dailyDir, "system1", "file003")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system2", "file001")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system2", "file002")))
      self.assertFalse(os.path.exists(os.path.join(dailyDir, "system2", "file003")))
      self.assertFalse(os.path.exists(os.path.join(dailyDir, "system3", "file001")))
      self.assertFalse(os.path.exists(os.path.join(dailyDir, "system3", "file002")))
      self.assertFalse(os.path.exists(os.path.join(dailyDir, "system3", "file003")))
      self.checkSplit(os.path.join(dailyDir, "system1", "file002"), 32000, 1*1024)
      self.checkSplit(os.path.join(dailyDir, "system1", "file003"), 320000, 1*1024)
      self.checkSplit(os.path.join(dailyDir, "system2", "file003"), 100000, 1*1024)
      self.checkSplit(os.path.join(dailyDir, "system3", "file001"), 99999, 1*1024)
      self.checkSplit(os.path.join(dailyDir, "system3", "file002"), 100000, 1*1024)
      self.checkSplit(os.path.join(dailyDir, "system3", "file003"), 100001, 1*1024)

   def testSplitDailyDir_007(self):
      """
      Test with 9,999 byte limit, chopped down to 1000 bytes.
      """
      self.extractTar("tree21")
      dailyDir = self.buildPath(["tree21", "2007", "01", "01", ])
      self.assertTrue(os.path.exists(dailyDir) and os.path.isdir(dailyDir))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system1", "file002")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system1", "file003")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system2", "file001")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system2", "file002")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system2", "file003")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system3", "file001")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system3", "file002")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system3", "file003")))
      sizeLimit = ByteQuantity("9999", UNIT_BYTES)
      splitSize = ByteQuantity("1000", UNIT_BYTES)
      _splitDailyDir(dailyDir, sizeLimit, splitSize, None, None)
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b")))
      self.assertFalse(os.path.exists(os.path.join(dailyDir, "system1", "file002")))
      self.assertFalse(os.path.exists(os.path.join(dailyDir, "system1", "file003")))
      self.assertTrue(os.path.exists(os.path.join(dailyDir, "system2", "file001")))
      self.assertFalse(os.path.exists(os.path.join(dailyDir, "system2", "file002")))
      self.assertFalse(os.path.exists(os.path.join(dailyDir, "system2", "file003")))
      self.assertFalse(os.path.exists(os.path.join(dailyDir, "system3", "file001")))
      self.assertFalse(os.path.exists(os.path.join(dailyDir, "system3", "file002")))
      self.assertFalse(os.path.exists(os.path.join(dailyDir, "system3", "file003")))
      self.checkSplit(os.path.join(dailyDir, "system1", "file002"), 32000, 1000)
      self.checkSplit(os.path.join(dailyDir, "system1", "file003"), 320000, 1000)
      self.checkSplit(os.path.join(dailyDir, "system2", "file002"), 10000, 1000)
      self.checkSplit(os.path.join(dailyDir, "system2", "file003"), 100000, 1000)
      self.checkSplit(os.path.join(dailyDir, "system3", "file001"), 99999, 1000)
      self.checkSplit(os.path.join(dailyDir, "system3", "file002"), 100000, 1000)
      self.checkSplit(os.path.join(dailyDir, "system3", "file003"), 100001, 1000)


#######################################################################
# Suite definition
#######################################################################

def suite():
   """Returns a suite containing all the test cases in this module."""
   if runAllTests():
      tests = [ ]
      tests.append(unittest.makeSuite(TestSplitConfig, 'test'))
      tests.append(unittest.makeSuite(TestLocalConfig, 'test'))
      tests.append(unittest.makeSuite(TestFunctions, 'test'))
      return unittest.TestSuite(tests)
   else:
      tests = [ ]
      tests.append(unittest.makeSuite(TestSplitConfig, 'test'))
      tests.append(unittest.makeSuite(TestLocalConfig, 'test'))
      return unittest.TestSuite(tests)

CedarBackup3-3.1.6/testcase/configtests.py

# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# Copyright (c) 2004-2008,2010,2011,2015 Kenneth J. Pronovici.
# All rights reserved.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# Author   : Kenneth J. Pronovici
# Language : Python 3 (>= 3.4)
# Project  : Cedar Backup, release 3
# Purpose  : Tests configuration functionality.

########################################################################
# Module documentation
########################################################################

"""
Unit tests for CedarBackup3/config.py.

Code Coverage
=============

   This module contains individual tests for the public functions and classes
   implemented in config.py.

   I usually prefer to test only the public interface to a class, because
   that way the regression tests don't depend on the internal implementation.
   In this case, I've decided to test some of the private methods, because
   their "privateness" is more a matter of presenting a clean external
   interface than anything else.  In particular, this is the case with the
   private validation functions (I use the private functions so I can test
   just the validations for one specific case, even if the public interface
   only exposes one broad validation).

Naming Conventions
==================

   I prefer to avoid large unit tests which validate more than one piece of
   functionality, and I prefer to avoid using overly descriptive (read: long)
   test names, as well.  Instead, I use lots of very small tests that each
   validate one specific thing.  These small tests are then named with an
   index number, yielding something like C{testAddDir_001} or
   C{testValidate_010}.  Each method has a docstring describing what it's
   supposed to accomplish.  I feel that this makes it easier to judge how
   important a given failure is, and also makes it somewhat easier to
   diagnose and fix individual problems.

Testing XML Extraction
======================

   It's difficult to validate that generated XML is exactly "right",
   especially when dealing with pretty-printed XML.  We can't just provide a
   constant string and say "the result must match this".  Instead, what we do
   is extract the XML and then feed it back into another object's
   constructor.  If that parse process succeeds and the old object is equal
   to the new object, we assume that the extract was successful.

   It would arguably be better if we could do a completely independent check
   - but implementing that check would be equivalent to re-implementing all
   of the existing functionality that we're validating here!  After all, the
   most important thing is that data can move seamlessly from object to XML
   document and back to object.

Full vs. Reduced Tests
======================

   All of the tests in this module are considered safe to be run in an
   average build environment.  There is no need to use a CONFIGTESTS_FULL
   environment variable to provide a "reduced feature set" test suite as for
   some of the other test modules.

@author Kenneth J. Pronovici
"""

########################################################################
# Import modules and do runtime validations
########################################################################

import unittest

from CedarBackup3.util import UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES
from CedarBackup3.testutil import findResources, failUnlessAssignRaises
from CedarBackup3.config import ActionHook, PreActionHook, PostActionHook, CommandOverride
from CedarBackup3.config import ExtendedAction, ActionDependencies, BlankBehavior
from CedarBackup3.config import CollectFile, CollectDir, PurgeDir, LocalPeer, RemotePeer
from CedarBackup3.config import ReferenceConfig, ExtensionsConfig, OptionsConfig, PeersConfig
from CedarBackup3.config import CollectConfig, StageConfig, StoreConfig, PurgeConfig, Config
from CedarBackup3.config import ByteQuantity


#######################################################################
# Module-wide configuration and constants
#######################################################################

DATA_DIRS = [ "./data", "./testcase/data", ]
RESOURCES = [ "cback.conf.1", "cback.conf.2", "cback.conf.3", "cback.conf.4",
              "cback.conf.5", "cback.conf.6", "cback.conf.7", "cback.conf.8",
              "cback.conf.9", "cback.conf.10", "cback.conf.11", "cback.conf.12",
              "cback.conf.13", "cback.conf.14", "cback.conf.15", "cback.conf.16",
              "cback.conf.17", "cback.conf.18", "cback.conf.19", "cback.conf.20",
              "cback.conf.21", "cback.conf.22", "cback.conf.23", ]


#######################################################################
# Test Case Classes
#######################################################################

##########################
# TestByteQuantity class
##########################

class TestByteQuantity(unittest.TestCase):

   """Tests for the ByteQuantity class."""

   ##################
   # Utility methods
   ##################

   def failUnlessAssignRaises(self, exception, obj, prop, value):
      """Equivalent of L{failUnlessRaises}, but used for property
      assignments instead."""
      failUnlessAssignRaises(self, exception, obj, prop, value)


   ############################
   # Test __repr__ and __str__
   ############################

   def testStringFuncs_001(self):
      """
      Just make sure that the string functions don't have errors
      (i.e. bad variable names).
      """
      obj = ByteQuantity()
      obj.__repr__()
      obj.__str__()


   ##################################
   # Test constructor and attributes
   ##################################

   def testConstructor_001(self):
      """
      Test constructor with no values filled in.
      """
      quantity = ByteQuantity()
      self.assertEqual(None, quantity.quantity)
      self.assertEqual(UNIT_BYTES, quantity.units)
      self.assertEqual(0.0, quantity.bytes)

   def testConstructor_002a(self):
      """
      Test constructor with all values filled in, with valid string quantity.
      """
      quantity = ByteQuantity("6", UNIT_BYTES)
      self.assertEqual("6", quantity.quantity)
      self.assertEqual(UNIT_BYTES, quantity.units)
      self.assertEqual(6.0, quantity.bytes)
      quantity = ByteQuantity("2684354560", UNIT_BYTES)
      self.assertEqual("2684354560", quantity.quantity)
      self.assertEqual(UNIT_BYTES, quantity.units)
      self.assertEqual(2684354560.0, quantity.bytes)
      quantity = ByteQuantity("629145600", UNIT_BYTES)
      self.assertEqual("629145600", quantity.quantity)
      self.assertEqual(UNIT_BYTES, quantity.units)
      self.assertEqual(629145600.0, quantity.bytes)
      quantity = ByteQuantity("2.5", UNIT_GBYTES)
      self.assertEqual("2.5", quantity.quantity)
      self.assertEqual(UNIT_GBYTES, quantity.units)
      self.assertEqual(2684354560.0, quantity.bytes)
      quantity = ByteQuantity("600", UNIT_MBYTES)
      self.assertEqual("600", quantity.quantity)
      self.assertEqual(UNIT_MBYTES, quantity.units)
      self.assertEqual(629145600.0, quantity.bytes)

   def testConstructor_002b(self):
      """
      Test constructor with all values filled in, with valid integer quantity.
      """
      quantity = ByteQuantity(6, UNIT_BYTES)
      self.assertEqual("6", quantity.quantity)
      self.assertEqual(UNIT_BYTES, quantity.units)
      self.assertEqual(6.0, quantity.bytes)
      quantity = ByteQuantity(2684354560, UNIT_BYTES)
      self.assertEqual("2684354560", quantity.quantity)
      self.assertEqual(UNIT_BYTES, quantity.units)
      self.assertEqual(2684354560.0, quantity.bytes)
      quantity = ByteQuantity(629145600, UNIT_BYTES)
      self.assertEqual("629145600", quantity.quantity)
      self.assertEqual(UNIT_BYTES, quantity.units)
      self.assertEqual(629145600.0, quantity.bytes)
      quantity = ByteQuantity(600, UNIT_MBYTES)
      self.assertEqual("600", quantity.quantity)
      self.assertEqual(UNIT_MBYTES, quantity.units)
      self.assertEqual(629145600.0, quantity.bytes)

   def testConstructor_002c(self):
      """
      Test constructor with all values filled in, with valid float quantity.
      """
      quantity = ByteQuantity(6.0, UNIT_BYTES)
      self.assertEqual("6.0", quantity.quantity)
      self.assertEqual(UNIT_BYTES, quantity.units)
      self.assertEqual(6.0, quantity.bytes)
      quantity = ByteQuantity(2684354560.0, UNIT_BYTES)
      self.assertEqual("2684354560.0", quantity.quantity)
      self.assertEqual(UNIT_BYTES, quantity.units)
      self.assertEqual(2684354560.0, quantity.bytes)
      quantity = ByteQuantity(629145600.0, UNIT_BYTES)
      self.assertEqual("629145600.0", quantity.quantity)
      self.assertEqual(UNIT_BYTES, quantity.units)
      self.assertEqual(629145600.0, quantity.bytes)
      quantity = ByteQuantity(2.5, UNIT_GBYTES)
      self.assertEqual("2.5", quantity.quantity)
      self.assertEqual(UNIT_GBYTES, quantity.units)
      self.assertEqual(2684354560.0, quantity.bytes)
      quantity = ByteQuantity(600.0, UNIT_MBYTES)
      self.assertEqual("600.0", quantity.quantity)
      self.assertEqual(UNIT_MBYTES, quantity.units)
      self.assertEqual(629145600.0, quantity.bytes)

   def testConstructor_003(self):
      """
      Test assignment of quantity attribute, None value.
""" quantity = ByteQuantity(quantity="1.0") self.assertEqual("1.0", quantity.quantity) self.assertEqual(1.0, quantity.bytes) quantity.quantity = None self.assertEqual(None, quantity.quantity) self.assertEqual(0.0, quantity.bytes) def testConstructor_004a(self): """ Test assignment of quantity attribute, valid string values. """ quantity = ByteQuantity() quantity.units = UNIT_BYTES # so we can test the bytes attribute self.assertEqual(None, quantity.quantity) self.assertEqual(0.0, quantity.bytes) quantity.quantity = "1.0" self.assertEqual("1.0", quantity.quantity) self.assertEqual(1.0, quantity.bytes) quantity.quantity = ".1" self.assertEqual(".1", quantity.quantity) self.assertEqual(0.1, quantity.bytes) quantity.quantity = "12" self.assertEqual("12", quantity.quantity) self.assertEqual(12.0, quantity.bytes) quantity.quantity = "0.5" self.assertEqual("0.5", quantity.quantity) self.assertEqual(0.5, quantity.bytes) quantity.quantity = "181281" self.assertEqual("181281", quantity.quantity) self.assertEqual(181281.0, quantity.bytes) quantity.quantity = "1E6" self.assertEqual("1E6", quantity.quantity) self.assertEqual(1.0e6, quantity.bytes) quantity.quantity = "0.25E2" self.assertEqual("0.25E2", quantity.quantity) self.assertEqual(0.25e2, quantity.bytes) def testConstructor_004b(self): """ Test assignment of quantity attribute, valid integer values. """ quantity = ByteQuantity() quantity.units = UNIT_BYTES # so we can test the bytes attribute quantity.quantity = 1 self.assertEqual("1", quantity.quantity) self.assertEqual(1.0, quantity.bytes) quantity.quantity = 12 self.assertEqual("12", quantity.quantity) self.assertEqual(12.0, quantity.bytes) quantity.quantity = 181281 self.assertEqual("181281", quantity.quantity) self.assertEqual(181281.0, quantity.bytes) #pylint: disable=R0204 def testConstructor_004c(self): """ Test assignment of quantity attribute, valid float values. 
""" quantity = ByteQuantity() quantity.units = UNIT_BYTES # so we can test the bytes attribute quantity.quantity = 1.0 self.assertEqual("1.0", quantity.quantity) self.assertEqual(1.0, quantity.bytes) quantity.quantity = 0.1 self.assertEqual("0.1", quantity.quantity) self.assertEqual(0.1, quantity.bytes) quantity.quantity = "12.0" self.assertEqual("12.0", quantity.quantity) self.assertEqual(12.0, quantity.bytes) quantity.quantity = 0.5 self.assertEqual("0.5", quantity.quantity) self.assertEqual(0.5, quantity.bytes) quantity.quantity = "181281.0" self.assertEqual("181281.0", quantity.quantity) self.assertEqual(181281.0, quantity.bytes) quantity.quantity = 1E6 self.assertEqual("1000000.0", quantity.quantity) self.assertEqual(1.0e6, quantity.bytes) quantity.quantity = 0.25E2 self.assertEqual("25.0", quantity.quantity) self.assertEqual(0.25e2, quantity.bytes) def testConstructor_005(self): """ Test assignment of quantity attribute, invalid value (empty). """ quantity = ByteQuantity() self.assertEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "") self.assertEqual(None, quantity.quantity) def testConstructor_006(self): """ Test assignment of quantity attribute, invalid value (not interpretable as a float). """ quantity = ByteQuantity() self.assertEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "blech") self.assertEqual(None, quantity.quantity) def testConstructor_007(self): """ Test assignment of quantity attribute, invalid value (negative number). 
""" quantity = ByteQuantity() self.assertEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "-3") self.assertEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "-6.8") self.assertEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "-0.2") self.assertEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "-.1") self.assertEqual(None, quantity.quantity) def testConstructor_008(self): """ Test assignment of units attribute, None value. """ quantity = ByteQuantity(units=UNIT_MBYTES) self.assertEqual(UNIT_MBYTES, quantity.units) quantity.units = None self.assertEqual(UNIT_BYTES, quantity.units) def testConstructor_009(self): """ Test assignment of units attribute, valid values. """ quantity = ByteQuantity() self.assertEqual(UNIT_BYTES, quantity.units) quantity.units = UNIT_KBYTES self.assertEqual(UNIT_KBYTES, quantity.units) quantity.units = UNIT_MBYTES self.assertEqual(UNIT_MBYTES, quantity.units) quantity.units = UNIT_GBYTES self.assertEqual(UNIT_GBYTES, quantity.units) quantity.units = UNIT_BYTES self.assertEqual(UNIT_BYTES, quantity.units) def testConstructor_010(self): """ Test assignment of units attribute, invalid value (empty). """ quantity = ByteQuantity() self.assertEqual(UNIT_BYTES, quantity.units) self.failUnlessAssignRaises(ValueError, quantity, "units", "") self.assertEqual(UNIT_BYTES, quantity.units) def testConstructor_011(self): """ Test assignment of units attribute, invalid value (not a valid unit). 
""" quantity = ByteQuantity() self.assertEqual(UNIT_BYTES, quantity.units) self.failUnlessAssignRaises(ValueError, quantity, "units", 16) self.assertEqual(UNIT_BYTES, quantity.units) self.failUnlessAssignRaises(ValueError, quantity, "units", -2) self.assertEqual(UNIT_BYTES, quantity.units) self.failUnlessAssignRaises(ValueError, quantity, "units", "bytes") self.assertEqual(UNIT_BYTES, quantity.units) self.failUnlessAssignRaises(ValueError, quantity, "units", "B") self.assertEqual(UNIT_BYTES, quantity.units) self.failUnlessAssignRaises(ValueError, quantity, "units", "KB") self.assertEqual(UNIT_BYTES, quantity.units) self.failUnlessAssignRaises(ValueError, quantity, "units", "MB") self.assertEqual(UNIT_BYTES, quantity.units) self.failUnlessAssignRaises(ValueError, quantity, "units", "GB") self.assertEqual(UNIT_BYTES, quantity.units) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ quantity1 = ByteQuantity() quantity2 = ByteQuantity() self.assertEqual(quantity1, quantity2) self.assertTrue(quantity1 == quantity2) self.assertTrue(not quantity1 < quantity2) self.assertTrue(quantity1 <= quantity2) self.assertTrue(not quantity1 > quantity2) self.assertTrue(quantity1 >= quantity2) self.assertTrue(not quantity1 != quantity2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ quantity1 = ByteQuantity("12", UNIT_BYTES) quantity2 = ByteQuantity("12", UNIT_BYTES) self.assertEqual(quantity1, quantity2) self.assertTrue(quantity1 == quantity2) self.assertTrue(not quantity1 < quantity2) self.assertTrue(quantity1 <= quantity2) self.assertTrue(not quantity1 > quantity2) self.assertTrue(quantity1 >= quantity2) self.assertTrue(not quantity1 != quantity2) def testComparison_003(self): """ Test comparison of two differing objects, quantity differs (one None). 
""" quantity1 = ByteQuantity() quantity2 = ByteQuantity(quantity="12") self.assertNotEqual(quantity1, quantity2) self.assertTrue(not quantity1 == quantity2) self.assertTrue(quantity1 < quantity2) self.assertTrue(quantity1 <= quantity2) self.assertTrue(not quantity1 > quantity2) self.assertTrue(not quantity1 >= quantity2) self.assertTrue(quantity1 != quantity2) def testComparison_004a(self): """ Test comparison of two differing objects, quantity differs (same units). """ quantity1 = ByteQuantity("10", UNIT_BYTES) quantity2 = ByteQuantity("12", UNIT_BYTES) self.assertNotEqual(quantity1, quantity2) self.assertTrue(not quantity1 == quantity2) self.assertTrue(quantity1 < quantity2) self.assertTrue(quantity1 <= quantity2) self.assertTrue(not quantity1 > quantity2) self.assertTrue(not quantity1 >= quantity2) self.assertTrue(quantity1 != quantity2) def testComparison_004b(self): """ Test comparison of two differing objects, quantity differs (different units). """ quantity1 = ByteQuantity("10", UNIT_BYTES) quantity2 = ByteQuantity("12", UNIT_KBYTES) self.assertNotEqual(quantity1, quantity2) self.assertTrue(not quantity1 == quantity2) self.assertTrue(quantity1 < quantity2) self.assertTrue(quantity1 <= quantity2) self.assertTrue(not quantity1 > quantity2) self.assertTrue(not quantity1 >= quantity2) self.assertTrue(quantity1 != quantity2) def testComparison_004c(self): """ Test comparison of two differing objects, quantity differs (implied UNIT_BYTES). """ quantity1 = ByteQuantity("10") quantity2 = ByteQuantity("12", UNIT_BYTES) self.assertNotEqual(quantity1, quantity2) self.assertTrue(not quantity1 == quantity2) self.assertTrue(quantity1 < quantity2) self.assertTrue(quantity1 <= quantity2) self.assertTrue(not quantity1 > quantity2) self.assertTrue(not quantity1 >= quantity2) self.assertTrue(quantity1 != quantity2) def testComparison_004d(self): """ Test comparison of two differing objects, quantity differs (implied UNIT_BYTES). 
""" quantity1 = ByteQuantity("10", UNIT_BYTES) quantity2 = ByteQuantity("12") self.assertNotEqual(quantity1, quantity2) self.assertTrue(not quantity1 == quantity2) self.assertTrue(quantity1 < quantity2) self.assertTrue(quantity1 <= quantity2) self.assertTrue(not quantity1 > quantity2) self.assertTrue(not quantity1 >= quantity2) self.assertTrue(quantity1 != quantity2) def testComparison_004e(self): """ Test comparison of two differing objects, quantity differs (implied UNIT_BYTES). """ quantity1 = ByteQuantity("10") quantity2 = ByteQuantity("12") self.assertNotEqual(quantity1, quantity2) self.assertTrue(not quantity1 == quantity2) self.assertTrue(quantity1 < quantity2) self.assertTrue(quantity1 <= quantity2) self.assertTrue(not quantity1 > quantity2) self.assertTrue(not quantity1 >= quantity2) self.assertTrue(quantity1 != quantity2) def testComparison_005(self): """ Test comparison of two differing objects, units differs (one None). """ quantity1 = ByteQuantity() quantity2 = ByteQuantity(units=UNIT_MBYTES) self.assertEqual(quantity1, quantity2) self.assertTrue(quantity1 == quantity2) self.assertTrue(not quantity1 < quantity2) self.assertTrue(quantity1 <= quantity2) self.assertTrue(not quantity1 > quantity2) self.assertTrue(quantity1 >= quantity2) self.assertTrue(not quantity1 != quantity2) def testComparison_006(self): """ Test comparison of two differing objects, units differs. 
""" quantity1 = ByteQuantity("12", UNIT_BYTES) quantity2 = ByteQuantity("12", UNIT_KBYTES) self.assertNotEqual(quantity1, quantity2) self.assertTrue(not quantity1 == quantity2) self.assertTrue(quantity1 < quantity2) self.assertTrue(quantity1 <= quantity2) self.assertTrue(not quantity1 > quantity2) self.assertTrue(not quantity1 >= quantity2) self.assertTrue(quantity1 != quantity2) def testComparison_007a(self): """ Test comparison of byte quantity to integer bytes, equivalent """ quantity1 = 12 quantity2 = ByteQuantity(quantity="12", units=UNIT_BYTES) self.assertEqual(quantity1, quantity2) self.assertTrue(quantity1 == quantity2) self.assertTrue(not quantity1 < quantity2) self.assertTrue(quantity1 <= quantity2) self.assertTrue(not quantity1 > quantity2) self.assertTrue(quantity1 >= quantity2) self.assertTrue(not quantity1 != quantity2) def testComparison_007b(self): """ Test comparison of byte quantity to integer bytes, equivalent """ quantity1 = 629145600 quantity2 = ByteQuantity(quantity="600", units=UNIT_MBYTES) self.assertEqual(quantity1, quantity2) self.assertTrue(quantity1 == quantity2) self.assertTrue(not quantity1 < quantity2) self.assertTrue(quantity1 <= quantity2) self.assertTrue(not quantity1 > quantity2) self.assertTrue(quantity1 >= quantity2) self.assertTrue(not quantity1 != quantity2) def testComparison_007c(self): """ Test comparison of byte quantity to integer bytes, equivalent """ quantity1 = ByteQuantity(quantity="600", units=UNIT_MBYTES) quantity2 = 629145600 self.assertEqual(quantity1, quantity2) self.assertTrue(quantity1 == quantity2) self.assertTrue(not quantity1 < quantity2) self.assertTrue(quantity1 <= quantity2) self.assertTrue(not quantity1 > quantity2) self.assertTrue(quantity1 >= quantity2) self.assertTrue(not quantity1 != quantity2) def testComparison_008a(self): """ Test comparison of byte quantity to integer bytes, integer smaller """ quantity1 = 11 quantity2 = ByteQuantity(quantity="12", units=UNIT_BYTES) self.assertNotEqual(quantity1, 
quantity2) self.assertTrue(not quantity1 == quantity2) self.assertTrue(quantity1 < quantity2) self.assertTrue(quantity1 <= quantity2) self.assertTrue(not quantity1 > quantity2) self.assertTrue(not quantity1 >= quantity2) self.assertTrue(quantity1 != quantity2) def testComparison_008b(self): """ Test comparison of byte quantity to integer bytes, integer smaller """ quantity1 = 130390425 quantity2 = ByteQuantity(quantity="600", units=UNIT_MBYTES) self.assertNotEqual(quantity1, quantity2) self.assertTrue(not quantity1 == quantity2) self.assertTrue(quantity1 < quantity2) self.assertTrue(quantity1 <= quantity2) self.assertTrue(not quantity1 > quantity2) self.assertTrue(not quantity1 >= quantity2) self.assertTrue(quantity1 != quantity2) def testComparison_009a(self): """ Test comparison of byte quantity to integer bytes, integer larger """ quantity1 = 13 quantity2 = ByteQuantity(quantity="12", units=UNIT_BYTES) self.assertNotEqual(quantity1, quantity2) self.assertTrue(not quantity1 == quantity2) self.assertTrue(not quantity1 < quantity2) self.assertTrue(not quantity1 <= quantity2) self.assertTrue(quantity1 > quantity2) self.assertTrue(quantity1 >= quantity2) self.assertTrue(quantity1 != quantity2) def testComparison_009b(self): """ Test comparison of byte quantity to integer bytes, integer larger """ quantity1 = ByteQuantity(quantity="600", units=UNIT_MBYTES) quantity2 = 629145610 self.assertNotEqual(quantity1, quantity2) self.assertTrue(not quantity1 == quantity2) self.assertTrue(quantity1 < quantity2) self.assertTrue(quantity1 <= quantity2) self.assertTrue(not quantity1 > quantity2) self.assertTrue(not quantity1 >= quantity2) self.assertTrue(quantity1 != quantity2) def testComparison_010a(self): """ Test comparison of byte quantity to float bytes, equivalent """ quantity1 = 12.0 quantity2 = ByteQuantity(quantity="12.0", units=UNIT_BYTES) self.assertEqual(quantity1, quantity2) self.assertTrue(quantity1 == quantity2) self.assertTrue(not quantity1 < quantity2) 
self.assertTrue(quantity1 <= quantity2) self.assertTrue(not quantity1 > quantity2) self.assertTrue(quantity1 >= quantity2) self.assertTrue(not quantity1 != quantity2) def testComparison_010b(self): """ Test comparison of byte quantity to float bytes, equivalent """ quantity1 = 629145600.0 quantity2 = ByteQuantity(quantity="600", units=UNIT_MBYTES) self.assertEqual(quantity1, quantity2) self.assertTrue(quantity1 == quantity2) self.assertTrue(not quantity1 < quantity2) self.assertTrue(quantity1 <= quantity2) self.assertTrue(not quantity1 > quantity2) self.assertTrue(quantity1 >= quantity2) self.assertTrue(not quantity1 != quantity2) def testComparison_011a(self): """ Test comparison of byte quantity to float bytes, float smaller """ quantity1 = 11.0 quantity2 = ByteQuantity(quantity="12.0", units=UNIT_BYTES) self.assertNotEqual(quantity1, quantity2) self.assertTrue(not quantity1 == quantity2) self.assertTrue(quantity1 < quantity2) self.assertTrue(quantity1 <= quantity2) self.assertTrue(not quantity1 > quantity2) self.assertTrue(not quantity1 >= quantity2) self.assertTrue(quantity1 != quantity2) def testComparison_011b(self): """ Test comparison of byte quantity to float bytes, float smaller """ quantity1 = 130390425.0 quantity2 = ByteQuantity(quantity="600", units=UNIT_MBYTES) self.assertNotEqual(quantity1, quantity2) self.assertTrue(not quantity1 == quantity2) self.assertTrue(quantity1 < quantity2) self.assertTrue(quantity1 <= quantity2) self.assertTrue(not quantity1 > quantity2) self.assertTrue(not quantity1 >= quantity2) self.assertTrue(quantity1 != quantity2) def testComparison_012a(self): """ Test comparison of byte quantity to float bytes, float larger """ quantity1 = 13.0 quantity2 = ByteQuantity(quantity="12.0", units=UNIT_BYTES) self.assertNotEqual(quantity1, quantity2) self.assertTrue(not quantity1 == quantity2) self.assertTrue(not quantity1 < quantity2) self.assertTrue(not quantity1 <= quantity2) self.assertTrue(quantity1 > quantity2) 
self.assertTrue(quantity1 >= quantity2) self.assertTrue(quantity1 != quantity2) def testComparison_012b(self): """ Test comparison of byte quantity to float bytes, float larger """ quantity1 = ByteQuantity(quantity="600", units=UNIT_MBYTES) quantity2 = 629145610.0 self.assertNotEqual(quantity1, quantity2) self.assertTrue(not quantity1 == quantity2) self.assertTrue(quantity1 < quantity2) self.assertTrue(quantity1 <= quantity2) self.assertTrue(not quantity1 > quantity2) self.assertTrue(not quantity1 >= quantity2) self.assertTrue(quantity1 != quantity2) ############################### # TestActionDependencies class ############################### class TestActionDependencies(unittest.TestCase): """Tests for the ActionDependencies class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = ActionDependencies() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ dependencies = ActionDependencies() self.assertEqual(None, dependencies.beforeList) self.assertEqual(None, dependencies.afterList) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ dependencies = ActionDependencies(["b", ], ["a", ]) self.assertEqual(["b", ], dependencies.beforeList) self.assertEqual(["a", ], dependencies.afterList) def testConstructor_003(self): """ Test assignment of beforeList attribute, None value. 
""" dependencies = ActionDependencies(beforeList=[]) self.assertEqual([], dependencies.beforeList) dependencies.beforeList = None self.assertEqual(None, dependencies.beforeList) def testConstructor_004(self): """ Test assignment of beforeList attribute, empty list. """ dependencies = ActionDependencies() self.assertEqual(None, dependencies.beforeList) dependencies.beforeList = [] self.assertEqual([], dependencies.beforeList) def testConstructor_005(self): """ Test assignment of beforeList attribute, non-empty list, valid values. """ dependencies = ActionDependencies() self.assertEqual(None, dependencies.beforeList) dependencies.beforeList = ['a', 'b', ] self.assertEqual(['a', 'b'], dependencies.beforeList) def testConstructor_006(self): """ Test assignment of beforeList attribute, non-empty list, invalid value. """ dependencies = ActionDependencies() self.assertEqual(None, dependencies.beforeList) self.failUnlessAssignRaises(ValueError, dependencies, "beforeList", ["KEN", ]) self.assertEqual(None, dependencies.beforeList) self.failUnlessAssignRaises(ValueError, dependencies, "beforeList", ["hello, world" ]) self.assertEqual(None, dependencies.beforeList) self.failUnlessAssignRaises(ValueError, dependencies, "beforeList", ["dash-word", ]) self.assertEqual(None, dependencies.beforeList) self.failUnlessAssignRaises(ValueError, dependencies, "beforeList", ["", ]) self.assertEqual(None, dependencies.beforeList) self.failUnlessAssignRaises(ValueError, dependencies, "beforeList", [None, ]) self.assertEqual(None, dependencies.beforeList) def testConstructor_007(self): """ Test assignment of beforeList attribute, non-empty list, mixed values. """ dependencies = ActionDependencies() self.assertEqual(None, dependencies.beforeList) self.failUnlessAssignRaises(ValueError, dependencies, "beforeList", ["ken", "dash-word", ]) def testConstructor_008(self): """ Test assignment of afterList attribute, None value. 
""" dependencies = ActionDependencies(afterList=[]) self.assertEqual([], dependencies.afterList) dependencies.afterList = None self.assertEqual(None, dependencies.afterList) def testConstructor_009(self): """ Test assignment of afterList attribute, non-empty list, valid values. """ dependencies = ActionDependencies() self.assertEqual(None, dependencies.afterList) dependencies.afterList = ['a', 'b', ] self.assertEqual(['a', 'b'], dependencies.afterList) def testConstructor_010(self): """ Test assignment of afterList attribute, non-empty list, invalid values. """ dependencies = ActionDependencies() self.assertEqual(None, dependencies.afterList) def testConstructor_011(self): """ Test assignment of afterList attribute, non-empty list, mixed values. """ dependencies = ActionDependencies() self.assertEqual(None, dependencies.afterList) self.failUnlessAssignRaises(ValueError, dependencies, "afterList", ["KEN", ]) self.assertEqual(None, dependencies.afterList) self.failUnlessAssignRaises(ValueError, dependencies, "afterList", ["hello, world" ]) self.assertEqual(None, dependencies.afterList) self.failUnlessAssignRaises(ValueError, dependencies, "afterList", ["dash-word", ]) self.assertEqual(None, dependencies.afterList) self.failUnlessAssignRaises(ValueError, dependencies, "afterList", ["", ]) self.assertEqual(None, dependencies.afterList) self.failUnlessAssignRaises(ValueError, dependencies, "afterList", [None, ]) self.assertEqual(None, dependencies.afterList) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. 
""" dependencies1 = ActionDependencies() dependencies2 = ActionDependencies() self.assertEqual(dependencies1, dependencies2) self.assertTrue(dependencies1 == dependencies2) self.assertTrue(not dependencies1 < dependencies2) self.assertTrue(dependencies1 <= dependencies2) self.assertTrue(not dependencies1 > dependencies2) self.assertTrue(dependencies1 >= dependencies2) self.assertTrue(not dependencies1 != dependencies2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ dependencies1 = ActionDependencies(beforeList=["a", ], afterList=["b", ]) dependencies2 = ActionDependencies(beforeList=["a", ], afterList=["b", ]) self.assertEqual(dependencies1, dependencies2) self.assertTrue(dependencies1 == dependencies2) self.assertTrue(not dependencies1 < dependencies2) self.assertTrue(dependencies1 <= dependencies2) self.assertTrue(not dependencies1 > dependencies2) self.assertTrue(dependencies1 >= dependencies2) self.assertTrue(not dependencies1 != dependencies2) def testComparison_003(self): """ Test comparison of two differing objects, beforeList differs (one None). """ dependencies1 = ActionDependencies(beforeList=None, afterList=["b", ]) dependencies2 = ActionDependencies(beforeList=["a", ], afterList=["b", ]) self.assertTrue(not dependencies1 == dependencies2) self.assertTrue(dependencies1 < dependencies2) self.assertTrue(dependencies1 <= dependencies2) self.assertTrue(not dependencies1 > dependencies2) self.assertTrue(not dependencies1 >= dependencies2) self.assertTrue(dependencies1 != dependencies2) def testComparison_004(self): """ Test comparison of two differing objects, beforeList differs (one empty). 
""" dependencies1 = ActionDependencies(beforeList=[], afterList=["b", ]) dependencies2 = ActionDependencies(beforeList=["a", ], afterList=["b", ]) self.assertTrue(not dependencies1 == dependencies2) self.assertTrue(dependencies1 < dependencies2) self.assertTrue(dependencies1 <= dependencies2) self.assertTrue(not dependencies1 > dependencies2) self.assertTrue(not dependencies1 >= dependencies2) self.assertTrue(dependencies1 != dependencies2) def testComparison_005(self): """ Test comparison of two differing objects, beforeList differs. """ dependencies1 = ActionDependencies(beforeList=["a", ], afterList=["b", ]) dependencies2 = ActionDependencies(beforeList=["b", ], afterList=["b", ]) self.assertTrue(not dependencies1 == dependencies2) self.assertTrue(dependencies1 < dependencies2) self.assertTrue(dependencies1 <= dependencies2) self.assertTrue(not dependencies1 > dependencies2) self.assertTrue(not dependencies1 >= dependencies2) self.assertTrue(dependencies1 != dependencies2) def testComparison_006(self): """ Test comparison of two differing objects, afterList differs (one None). """ dependencies1 = ActionDependencies(beforeList=["a", ], afterList=["b", ]) dependencies2 = ActionDependencies(beforeList=["a", ], afterList=None) self.assertNotEqual(dependencies1, dependencies2) self.assertTrue(not dependencies1 == dependencies2) self.assertTrue(not dependencies1 < dependencies2) self.assertTrue(not dependencies1 <= dependencies2) self.assertTrue(dependencies1 > dependencies2) self.assertTrue(dependencies1 >= dependencies2) self.assertTrue(dependencies1 != dependencies2) def testComparison_007(self): """ Test comparison of two differing objects, afterList differs (one empty). 
""" dependencies1 = ActionDependencies(beforeList=["a", ], afterList=["b", ]) dependencies2 = ActionDependencies(beforeList=["a", ], afterList=[]) self.assertNotEqual(dependencies1, dependencies2) self.assertTrue(not dependencies1 == dependencies2) self.assertTrue(not dependencies1 < dependencies2) self.assertTrue(not dependencies1 <= dependencies2) self.assertTrue(dependencies1 > dependencies2) self.assertTrue(dependencies1 >= dependencies2) self.assertTrue(dependencies1 != dependencies2) def testComparison_008(self): """ Test comparison of two differing objects, afterList differs. """ dependencies1 = ActionDependencies(beforeList=["a", ], afterList=["b", ]) dependencies2 = ActionDependencies(beforeList=["a", ], afterList=["a", ]) self.assertNotEqual(dependencies1, dependencies2) self.assertTrue(not dependencies1 == dependencies2) self.assertTrue(not dependencies1 < dependencies2) self.assertTrue(not dependencies1 <= dependencies2) self.assertTrue(dependencies1 > dependencies2) self.assertTrue(dependencies1 >= dependencies2) self.assertTrue(dependencies1 != dependencies2) ####################### # TestActionHook class ####################### class TestActionHook(unittest.TestCase): """Tests for the ActionHook class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = ActionHook() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. 
""" hook = ActionHook() self.assertEqual(False, hook._before) self.assertEqual(False, hook._after) self.assertEqual(None, hook.action) self.assertEqual(None, hook.command) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ hook = ActionHook(action="action", command="command") self.assertEqual(False, hook._before) self.assertEqual(False, hook._after) self.assertEqual("action", hook.action) self.assertEqual("command", hook.command) def testConstructor_003(self): """ Test assignment of action attribute, None value. """ hook = ActionHook(action="action") self.assertEqual("action", hook.action) hook.action = None self.assertEqual(None, hook.action) def testConstructor_004(self): """ Test assignment of action attribute, valid value. """ hook = ActionHook() self.assertEqual(None, hook.action) hook.action = "action" self.assertEqual("action", hook.action) def testConstructor_005(self): """ Test assignment of action attribute, invalid value. """ hook = ActionHook() self.assertEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "KEN") self.assertEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "dash-word") self.assertEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "hello, world") self.assertEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "") self.assertEqual(None, hook.action) def testConstructor_006(self): """ Test assignment of command attribute, None value. """ hook = ActionHook(command="command") self.assertEqual("command", hook.command) hook.command = None self.assertEqual(None, hook.command) def testConstructor_007(self): """ Test assignment of command attribute, valid valid. """ hook = ActionHook() self.assertEqual(None, hook.command) hook.command = "command" self.assertEqual("command", hook.command) def testConstructor_008(self): """ Test assignment of command attribute, invalid valid. 
""" hook = ActionHook() self.assertEqual(None, hook.command) self.failUnlessAssignRaises(ValueError, hook, "command", "") self.assertEqual(None, hook.command) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ hook1 = ActionHook() hook2 = ActionHook() self.assertEqual(hook1, hook2) self.assertTrue(hook1 == hook2) self.assertTrue(not hook1 < hook2) self.assertTrue(hook1 <= hook2) self.assertTrue(not hook1 > hook2) self.assertTrue(hook1 >= hook2) self.assertTrue(not hook1 != hook2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ hook1 = ActionHook(action="action", command="command") hook2 = ActionHook(action="action", command="command") self.assertEqual(hook1, hook2) self.assertTrue(hook1 == hook2) self.assertTrue(not hook1 < hook2) self.assertTrue(hook1 <= hook2) self.assertTrue(not hook1 > hook2) self.assertTrue(hook1 >= hook2) self.assertTrue(not hook1 != hook2) def testComparison_003(self): """ Test comparison of two different objects, action differs (one None). """ hook1 = ActionHook(action="action", command="command") hook2 = ActionHook(action=None, command="command") self.assertTrue(not hook1 == hook2) self.assertTrue(not hook1 < hook2) self.assertTrue(not hook1 <= hook2) self.assertTrue(hook1 > hook2) self.assertTrue(hook1 >= hook2) self.assertTrue(hook1 != hook2) def testComparison_004(self): """ Test comparison of two different objects, action differs. """ hook1 = ActionHook(action="action2", command="command") hook2 = ActionHook(action="action1", command="command") self.assertTrue(not hook1 == hook2) self.assertTrue(not hook1 < hook2) self.assertTrue(not hook1 <= hook2) self.assertTrue(hook1 > hook2) self.assertTrue(hook1 >= hook2) self.assertTrue(hook1 != hook2) def testComparison_005(self): """ Test comparison of two different objects, command differs (one None). 
""" hook1 = ActionHook(action="action", command=None) hook2 = ActionHook(action="action", command="command") self.assertTrue(not hook1 == hook2) self.assertTrue(hook1 < hook2) self.assertTrue(hook1 <= hook2) self.assertTrue(not hook1 > hook2) self.assertTrue(not hook1 >= hook2) self.assertTrue(hook1 != hook2) def testComparison_006(self): """ Test comparison of two different objects, command differs. """ hook1 = ActionHook(action="action", command="command1") hook2 = ActionHook(action="action", command="command2") self.assertTrue(not hook1 == hook2) self.assertTrue(hook1 < hook2) self.assertTrue(hook1 <= hook2) self.assertTrue(not hook1 > hook2) self.assertTrue(not hook1 >= hook2) self.assertTrue(hook1 != hook2) ########################## # TestPreActionHook class ########################## class TestPreActionHook(unittest.TestCase): """Tests for the PreActionHook class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = PreActionHook() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ hook = PreActionHook() self.assertEqual(True, hook._before) self.assertEqual(False, hook._after) self.assertEqual(None, hook.action) self.assertEqual(None, hook.command) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. 
""" hook = PreActionHook(action="action", command="command") self.assertEqual(True, hook._before) self.assertEqual(False, hook._after) self.assertEqual("action", hook.action) self.assertEqual("command", hook.command) def testConstructor_003(self): """ Test assignment of action attribute, None value. """ hook = PreActionHook(action="action") self.assertEqual("action", hook.action) hook.action = None self.assertEqual(None, hook.action) def testConstructor_004(self): """ Test assignment of action attribute, valid value. """ hook = PreActionHook() self.assertEqual(None, hook.action) hook.action = "action" self.assertEqual("action", hook.action) def testConstructor_005(self): """ Test assignment of action attribute, invalid value. """ hook = PreActionHook() self.assertEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "KEN") self.assertEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "dash-word") self.assertEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "hello, world") self.assertEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "") self.assertEqual(None, hook.action) def testConstructor_006(self): """ Test assignment of command attribute, None value. """ hook = PreActionHook(command="command") self.assertEqual("command", hook.command) hook.command = None self.assertEqual(None, hook.command) def testConstructor_007(self): """ Test assignment of command attribute, valid valid. """ hook = PreActionHook() self.assertEqual(None, hook.command) hook.command = "command" self.assertEqual("command", hook.command) def testConstructor_008(self): """ Test assignment of command attribute, invalid valid. 
""" hook = PreActionHook() self.assertEqual(None, hook.command) self.failUnlessAssignRaises(ValueError, hook, "command", "") self.assertEqual(None, hook.command) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ hook1 = PreActionHook() hook2 = PreActionHook() self.assertEqual(hook1, hook2) self.assertTrue(hook1 == hook2) self.assertTrue(not hook1 < hook2) self.assertTrue(hook1 <= hook2) self.assertTrue(not hook1 > hook2) self.assertTrue(hook1 >= hook2) self.assertTrue(not hook1 != hook2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ hook1 = PreActionHook(action="action", command="command") hook2 = PreActionHook(action="action", command="command") self.assertEqual(hook1, hook2) self.assertTrue(hook1 == hook2) self.assertTrue(not hook1 < hook2) self.assertTrue(hook1 <= hook2) self.assertTrue(not hook1 > hook2) self.assertTrue(hook1 >= hook2) self.assertTrue(not hook1 != hook2) def testComparison_003(self): """ Test comparison of two different objects, action differs (one None). """ hook1 = PreActionHook(action="action", command="command") hook2 = PreActionHook(action=None, command="command") self.assertTrue(not hook1 == hook2) self.assertTrue(not hook1 < hook2) self.assertTrue(not hook1 <= hook2) self.assertTrue(hook1 > hook2) self.assertTrue(hook1 >= hook2) self.assertTrue(hook1 != hook2) def testComparison_004(self): """ Test comparison of two different objects, action differs. 
""" hook1 = PreActionHook(action="action2", command="command") hook2 = PreActionHook(action="action1", command="command") self.assertTrue(not hook1 == hook2) self.assertTrue(not hook1 < hook2) self.assertTrue(not hook1 <= hook2) self.assertTrue(hook1 > hook2) self.assertTrue(hook1 >= hook2) self.assertTrue(hook1 != hook2) def testComparison_005(self): """ Test comparison of two different objects, command differs (one None). """ hook1 = PreActionHook(action="action", command=None) hook2 = PreActionHook(action="action", command="command") self.assertTrue(not hook1 == hook2) self.assertTrue(hook1 < hook2) self.assertTrue(hook1 <= hook2) self.assertTrue(not hook1 > hook2) self.assertTrue(not hook1 >= hook2) self.assertTrue(hook1 != hook2) def testComparison_006(self): """ Test comparison of two different objects, command differs. """ hook1 = PreActionHook(action="action", command="command1") hook2 = PreActionHook(action="action", command="command2") self.assertTrue(not hook1 == hook2) self.assertTrue(hook1 < hook2) self.assertTrue(hook1 <= hook2) self.assertTrue(not hook1 > hook2) self.assertTrue(not hook1 >= hook2) self.assertTrue(hook1 != hook2) ########################### # TestPostActionHook class ########################### class TestPostActionHook(unittest.TestCase): """Tests for the PostActionHook class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). 
""" obj = PostActionHook() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ hook = PostActionHook() self.assertEqual(False, hook._before) self.assertEqual(True, hook._after) self.assertEqual(None, hook.action) self.assertEqual(None, hook.command) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ hook = PostActionHook(action="action", command="command") self.assertEqual(False, hook._before) self.assertEqual(True, hook._after) self.assertEqual("action", hook.action) self.assertEqual("command", hook.command) def testConstructor_003(self): """ Test assignment of action attribute, None value. """ hook = PostActionHook(action="action") self.assertEqual("action", hook.action) hook.action = None self.assertEqual(None, hook.action) def testConstructor_004(self): """ Test assignment of action attribute, valid value. """ hook = PostActionHook() self.assertEqual(None, hook.action) hook.action = "action" self.assertEqual("action", hook.action) def testConstructor_005(self): """ Test assignment of action attribute, invalid value. """ hook = PostActionHook() self.assertEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "KEN") self.assertEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "dash-word") self.assertEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "hello, world") self.assertEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "") self.assertEqual(None, hook.action) def testConstructor_006(self): """ Test assignment of command attribute, None value. 
""" hook = PostActionHook(command="command") self.assertEqual("command", hook.command) hook.command = None self.assertEqual(None, hook.command) def testConstructor_007(self): """ Test assignment of command attribute, valid valid. """ hook = PostActionHook() self.assertEqual(None, hook.command) hook.command = "command" self.assertEqual("command", hook.command) def testConstructor_008(self): """ Test assignment of command attribute, invalid valid. """ hook = PostActionHook() self.assertEqual(None, hook.command) self.failUnlessAssignRaises(ValueError, hook, "command", "") self.assertEqual(None, hook.command) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ hook1 = PostActionHook() hook2 = PostActionHook() self.assertEqual(hook1, hook2) self.assertTrue(hook1 == hook2) self.assertTrue(not hook1 < hook2) self.assertTrue(hook1 <= hook2) self.assertTrue(not hook1 > hook2) self.assertTrue(hook1 >= hook2) self.assertTrue(not hook1 != hook2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ hook1 = PostActionHook(action="action", command="command") hook2 = PostActionHook(action="action", command="command") self.assertEqual(hook1, hook2) self.assertTrue(hook1 == hook2) self.assertTrue(not hook1 < hook2) self.assertTrue(hook1 <= hook2) self.assertTrue(not hook1 > hook2) self.assertTrue(hook1 >= hook2) self.assertTrue(not hook1 != hook2) def testComparison_003(self): """ Test comparison of two different objects, action differs (one None). 
""" hook1 = PostActionHook(action="action", command="command") hook2 = PostActionHook(action=None, command="command") self.assertTrue(not hook1 == hook2) self.assertTrue(not hook1 < hook2) self.assertTrue(not hook1 <= hook2) self.assertTrue(hook1 > hook2) self.assertTrue(hook1 >= hook2) self.assertTrue(hook1 != hook2) def testComparison_004(self): """ Test comparison of two different objects, action differs. """ hook1 = PostActionHook(action="action2", command="command") hook2 = PostActionHook(action="action1", command="command") self.assertTrue(not hook1 == hook2) self.assertTrue(not hook1 < hook2) self.assertTrue(not hook1 <= hook2) self.assertTrue(hook1 > hook2) self.assertTrue(hook1 >= hook2) self.assertTrue(hook1 != hook2) def testComparison_005(self): """ Test comparison of two different objects, command differs (one None). """ hook1 = PostActionHook(action="action", command=None) hook2 = PostActionHook(action="action", command="command") self.assertTrue(not hook1 == hook2) self.assertTrue(hook1 < hook2) self.assertTrue(hook1 <= hook2) self.assertTrue(not hook1 > hook2) self.assertTrue(not hook1 >= hook2) self.assertTrue(hook1 != hook2) def testComparison_006(self): """ Test comparison of two different objects, command differs. 
""" hook1 = PostActionHook(action="action", command="command1") hook2 = PostActionHook(action="action", command="command2") self.assertTrue(not hook1 == hook2) self.assertTrue(hook1 < hook2) self.assertTrue(hook1 <= hook2) self.assertTrue(not hook1 > hook2) self.assertTrue(not hook1 >= hook2) self.assertTrue(hook1 != hook2) ########################## # TestBlankBehavior class ########################## class TestBlankBehavior(unittest.TestCase): """Tests for the BlankBehavior class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = BlankBehavior() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ behavior = BlankBehavior() self.assertEqual(None, behavior.blankMode) self.assertEqual(None, behavior.blankFactor) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ behavior = BlankBehavior(blankMode="daily", blankFactor="1.0") self.assertEqual("daily", behavior.blankMode) self.assertEqual("1.0", behavior.blankFactor) def testConstructor_003(self): """ Test assignment of blankMode, None value. """ behavior = BlankBehavior(blankMode="daily") self.assertEqual("daily", behavior.blankMode) behavior.blankMode = None self.assertEqual(None, behavior.blankMode) def testConstructor_004(self): """ Test assignment of blankMode attribute, valid value. 
""" behavior = BlankBehavior() self.assertEqual(None, behavior.blankMode) behavior.blankMode = "daily" self.assertEqual("daily", behavior.blankMode) behavior.blankMode = "weekly" self.assertEqual("weekly", behavior.blankMode) def testConstructor_005(self): """ Test assignment of blankFactor attribute, None value. """ behavior = BlankBehavior(blankFactor="1.3") self.assertEqual("1.3", behavior.blankFactor) behavior.blankFactor = None self.assertEqual(None, behavior.blankFactor) def testConstructor_006(self): """ Test assignment of blankFactor attribute, valid values. """ behavior = BlankBehavior() self.assertEqual(None, behavior.blankFactor) behavior.blankFactor = "1.0" self.assertEqual("1.0", behavior.blankFactor) behavior.blankFactor = ".1" self.assertEqual(".1", behavior.blankFactor) behavior.blankFactor = "12" self.assertEqual("12", behavior.blankFactor) behavior.blankFactor = "0.5" self.assertEqual("0.5", behavior.blankFactor) behavior.blankFactor = "181281" self.assertEqual("181281", behavior.blankFactor) behavior.blankFactor = "1E6" self.assertEqual("1E6", behavior.blankFactor) behavior.blankFactor = "0.25E2" self.assertEqual("0.25E2", behavior.blankFactor) def testConstructor_007(self): """ Test assignment of blankFactor attribute, invalid value (empty). """ behavior = BlankBehavior() self.assertEqual(None, behavior.blankFactor) self.failUnlessAssignRaises(ValueError, behavior, "blankFactor", "") self.assertEqual(None, behavior.blankFactor) def testConstructor_008(self): """ Test assignment of blankFactor attribute, invalid value (not a floating point number). """ behavior = BlankBehavior() self.assertEqual(None, behavior.blankFactor) self.failUnlessAssignRaises(ValueError, behavior, "blankFactor", "blech") self.assertEqual(None, behavior.blankFactor) def testConstructor_009(self): """ Test assignment of blankFactor store attribute, invalid value (negative number). 
""" behavior = BlankBehavior() self.assertEqual(None, behavior.blankFactor) self.failUnlessAssignRaises(ValueError, behavior, "blankFactor", "-3") self.assertEqual(None, behavior.blankFactor) self.failUnlessAssignRaises(ValueError, behavior, "blankFactor", "-6.8") self.assertEqual(None, behavior.blankFactor) self.failUnlessAssignRaises(ValueError, behavior, "blankFactor", "-0.2") self.assertEqual(None, behavior.blankFactor) self.failUnlessAssignRaises(ValueError, behavior, "blankFactor", "-.1") self.assertEqual(None, behavior.blankFactor) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ behavior1 = BlankBehavior() behavior2 = BlankBehavior() self.assertEqual(behavior1, behavior2) self.assertTrue(behavior1 == behavior2) self.assertTrue(not behavior1 < behavior2) self.assertTrue(behavior1 <= behavior2) self.assertTrue(not behavior1 > behavior2) self.assertTrue(behavior1 >= behavior2) self.assertTrue(not behavior1 != behavior2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ behavior1 = BlankBehavior(blankMode="weekly", blankFactor="1.0") behavior2 = BlankBehavior(blankMode="weekly", blankFactor="1.0") self.assertEqual(behavior1, behavior2) self.assertTrue(behavior1 == behavior2) self.assertTrue(not behavior1 < behavior2) self.assertTrue(behavior1 <= behavior2) self.assertTrue(not behavior1 > behavior2) self.assertTrue(behavior1 >= behavior2) self.assertTrue(not behavior1 != behavior2) def testComparison_003(self): """ Test comparison of two different objects, blankMode differs (one None). 
""" behavior1 = BlankBehavior(None, blankFactor="1.0") behavior2 = BlankBehavior(blankMode="weekly", blankFactor="1.0") self.assertTrue(not behavior1 == behavior2) self.assertTrue(behavior1 < behavior2) self.assertTrue(behavior1 <= behavior2) self.assertTrue(not behavior1 > behavior2) self.assertTrue(not behavior1 >= behavior2) self.assertTrue(behavior1 != behavior2) def testComparison_004(self): """ Test comparison of two different objects, blankMode differs. """ behavior1 = BlankBehavior(blankMode="daily", blankFactor="1.0") behavior2 = BlankBehavior(blankMode="weekly", blankFactor="1.0") self.assertTrue(not behavior1 == behavior2) self.assertTrue(behavior1 < behavior2) self.assertTrue(behavior1 <= behavior2) self.assertTrue(not behavior1 > behavior2) self.assertTrue(not behavior1 >= behavior2) self.assertTrue(behavior1 != behavior2) def testComparison_005(self): """ Test comparison of two different objects, blankFactor differs (one None). """ behavior1 = BlankBehavior(blankMode="weekly", blankFactor=None) behavior2 = BlankBehavior(blankMode="weekly", blankFactor="1.0") self.assertTrue(not behavior1 == behavior2) self.assertTrue(behavior1 < behavior2) self.assertTrue(behavior1 <= behavior2) self.assertTrue(not behavior1 > behavior2) self.assertTrue(not behavior1 >= behavior2) self.assertTrue(behavior1 != behavior2) def testComparison_006(self): """ Test comparison of two different objects, blankFactor differs. 
""" behavior1 = BlankBehavior(blankMode="weekly", blankFactor="0.0") behavior2 = BlankBehavior(blankMode="weekly", blankFactor="1.0") self.assertTrue(not behavior1 == behavior2) self.assertTrue(behavior1 < behavior2) self.assertTrue(behavior1 <= behavior2) self.assertTrue(not behavior1 > behavior2) self.assertTrue(not behavior1 >= behavior2) self.assertTrue(behavior1 != behavior2) ########################### # TestExtendedAction class ########################### class TestExtendedAction(unittest.TestCase): """Tests for the ExtendedAction class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = ExtendedAction() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ action = ExtendedAction() self.assertEqual(None, action.name) self.assertEqual(None, action.module) self.assertEqual(None, action.function) self.assertEqual(None, action.index) self.assertEqual(None, action.dependencies) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ action = ExtendedAction("one", "two", "three", 4, ActionDependencies()) self.assertEqual("one", action.name) self.assertEqual("two", action.module) self.assertEqual("three", action.function) self.assertEqual(4, action.index) self.assertEqual(ActionDependencies(), action.dependencies) def testConstructor_003(self): """ Test assignment of name attribute, None value. 
""" action = ExtendedAction(name="name") self.assertEqual("name", action.name) action.name = None self.assertEqual(None, action.name) def testConstructor_004(self): """ Test assignment of name attribute, valid value. """ action = ExtendedAction() self.assertEqual(None, action.name) action.name = "name" self.assertEqual("name", action.name) action.name = "9" self.assertEqual("9", action.name) action.name = "name99name" self.assertEqual("name99name", action.name) action.name = "12action" self.assertEqual("12action", action.name) def testConstructor_005(self): """ Test assignment of name attribute, invalid value (empty). """ action = ExtendedAction() self.assertEqual(None, action.name) self.failUnlessAssignRaises(ValueError, action, "name", "") self.assertEqual(None, action.name) def testConstructor_006(self): """ Test assignment of name attribute, invalid value (does not match valid pattern). """ action = ExtendedAction() self.assertEqual(None, action.name) self.failUnlessAssignRaises(ValueError, action, "name", "Something") self.assertEqual(None, action.name) self.failUnlessAssignRaises(ValueError, action, "name", "what_ever") self.assertEqual(None, action.name) self.failUnlessAssignRaises(ValueError, action, "name", "_BOGUS") self.assertEqual(None, action.name) self.failUnlessAssignRaises(ValueError, action, "name", "stuff-here") self.assertEqual(None, action.name) self.failUnlessAssignRaises(ValueError, action, "name", "/more/stuff") self.assertEqual(None, action.name) def testConstructor_007(self): """ Test assignment of module attribute, None value. """ action = ExtendedAction(module="module") self.assertEqual("module", action.module) action.module = None self.assertEqual(None, action.module) def testConstructor_008(self): """ Test assignment of module attribute, valid value. 
""" action = ExtendedAction() self.assertEqual(None, action.module) action.module = "module" self.assertEqual("module", action.module) action.module = "stuff" self.assertEqual("stuff", action.module) action.module = "stuff.something" self.assertEqual("stuff.something", action.module) action.module = "_identifier.__another.one_more__" self.assertEqual("_identifier.__another.one_more__", action.module) def testConstructor_009(self): """ Test assignment of module attribute, invalid value (empty). """ action = ExtendedAction() self.assertEqual(None, action.module) self.failUnlessAssignRaises(ValueError, action, "module", "") self.assertEqual(None, action.module) def testConstructor_010(self): """ Test assignment of module attribute, invalid value (does not match valid pattern). """ action = ExtendedAction() self.assertEqual(None, action.module) self.failUnlessAssignRaises(ValueError, action, "module", "9something") self.assertEqual(None, action.module) self.failUnlessAssignRaises(ValueError, action, "module", "_bogus.") self.assertEqual(None, action.module) self.failUnlessAssignRaises(ValueError, action, "module", "-bogus") self.assertEqual(None, action.module) self.failUnlessAssignRaises(ValueError, action, "module", "/BOGUS") self.assertEqual(None, action.module) self.failUnlessAssignRaises(ValueError, action, "module", "really._really__.___really.long.bad.path.") self.assertEqual(None, action.module) self.failUnlessAssignRaises(ValueError, action, "module", ".really._really__.___really.long.bad.path") self.assertEqual(None, action.module) def testConstructor_011(self): """ Test assignment of function attribute, None value. """ action = ExtendedAction(function="function") self.assertEqual("function", action.function) action.function = None self.assertEqual(None, action.function) def testConstructor_012(self): """ Test assignment of function attribute, valid value. 
""" action = ExtendedAction() self.assertEqual(None, action.function) action.function = "function" self.assertEqual("function", action.function) action.function = "_stuff" self.assertEqual("_stuff", action.function) action.function = "moreStuff9" self.assertEqual("moreStuff9", action.function) action.function = "__identifier__" self.assertEqual("__identifier__", action.function) def testConstructor_013(self): """ Test assignment of function attribute, invalid value (empty). """ action = ExtendedAction() self.assertEqual(None, action.function) self.failUnlessAssignRaises(ValueError, action, "function", "") self.assertEqual(None, action.function) def testConstructor_014(self): """ Test assignment of function attribute, invalid value (does not match valid pattern). """ action = ExtendedAction() self.assertEqual(None, action.function) self.failUnlessAssignRaises(ValueError, action, "function", "9something") self.assertEqual(None, action.function) self.failUnlessAssignRaises(ValueError, action, "function", "one.two") self.assertEqual(None, action.function) self.failUnlessAssignRaises(ValueError, action, "function", "-bogus") self.assertEqual(None, action.function) self.failUnlessAssignRaises(ValueError, action, "function", "/BOGUS") self.assertEqual(None, action.function) def testConstructor_015(self): """ Test assignment of index attribute, None value. """ action = ExtendedAction(index=1) self.assertEqual(1, action.index) action.index = None self.assertEqual(None, action.index) def testConstructor_016(self): """ Test assignment of index attribute, valid value. """ action = ExtendedAction() self.assertEqual(None, action.index) action.index = 1 self.assertEqual(1, action.index) def testConstructor_017(self): """ Test assignment of index attribute, invalid value. 
""" action = ExtendedAction() self.assertEqual(None, action.index) self.failUnlessAssignRaises(ValueError, action, "index", "ken") self.assertEqual(None, action.index) def testConstructor_018(self): """ Test assignment of dependencies attribute, None value. """ action = ExtendedAction(dependencies=ActionDependencies()) self.assertEqual(ActionDependencies(), action.dependencies) action.dependencies = None self.assertEqual(None, action.dependencies) def testConstructor_019(self): """ Test assignment of dependencies attribute, valid value. """ action = ExtendedAction() self.assertEqual(None, action.dependencies) action.dependencies = ActionDependencies() self.assertEqual(ActionDependencies(), action.dependencies) def testConstructor_020(self): """ Test assignment of dependencies attribute, invalid value. """ action = ExtendedAction() self.assertEqual(None, action.dependencies) self.failUnlessAssignRaises(ValueError, action, "dependencies", "ken") self.assertEqual(None, action.dependencies) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ action1 = ExtendedAction() action2 = ExtendedAction() self.assertEqual(action1, action2) self.assertTrue(action1 == action2) self.assertTrue(not action1 < action2) self.assertTrue(action1 <= action2) self.assertTrue(not action1 > action2) self.assertTrue(action1 >= action2) self.assertTrue(not action1 != action2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. 
""" action1 = ExtendedAction("one", "two", "three", 4, ActionDependencies()) action2 = ExtendedAction("one", "two", "three", 4, ActionDependencies()) self.assertTrue(action1 == action2) self.assertTrue(not action1 < action2) self.assertTrue(action1 <= action2) self.assertTrue(not action1 > action2) self.assertTrue(action1 >= action2) self.assertTrue(not action1 != action2) def testComparison_003(self): """ Test comparison of two differing objects, name differs (one None). """ action1 = ExtendedAction(name="name") action2 = ExtendedAction() self.assertNotEqual(action1, action2) self.assertTrue(not action1 == action2) self.assertTrue(not action1 < action2) self.assertTrue(not action1 <= action2) self.assertTrue(action1 > action2) self.assertTrue(action1 >= action2) self.assertTrue(action1 != action2) def testComparison_004(self): """ Test comparison of two differing objects, name differs. """ action1 = ExtendedAction("name2", "two", "three", 4) action2 = ExtendedAction("name1", "two", "three", 4) self.assertNotEqual(action1, action2) self.assertTrue(not action1 == action2) self.assertTrue(not action1 < action2) self.assertTrue(not action1 <= action2) self.assertTrue(action1 > action2) self.assertTrue(action1 >= action2) self.assertTrue(action1 != action2) def testComparison_005(self): """ Test comparison of two differing objects, module differs (one None). """ action1 = ExtendedAction(module="whatever") action2 = ExtendedAction() self.assertNotEqual(action1, action2) self.assertTrue(not action1 == action2) self.assertTrue(not action1 < action2) self.assertTrue(not action1 <= action2) self.assertTrue(action1 > action2) self.assertTrue(action1 >= action2) self.assertTrue(action1 != action2) def testComparison_006(self): """ Test comparison of two differing objects, module differs. 
""" action1 = ExtendedAction("one", "MODULE", "three", 4) action2 = ExtendedAction("one", "two", "three", 4) self.assertNotEqual(action1, action2) self.assertTrue(not action1 == action2) self.assertTrue(action1 < action2) self.assertTrue(action1 <= action2) self.assertTrue(not action1 > action2) self.assertTrue(not action1 >= action2) self.assertTrue(action1 != action2) def testComparison_007(self): """ Test comparison of two differing objects, function differs (one None). """ action1 = ExtendedAction(function="func1") action2 = ExtendedAction() self.assertNotEqual(action1, action2) self.assertTrue(not action1 == action2) self.assertTrue(not action1 < action2) self.assertTrue(not action1 <= action2) self.assertTrue(action1 > action2) self.assertTrue(action1 >= action2) self.assertTrue(action1 != action2) def testComparison_008(self): """ Test comparison of two differing objects, function differs. """ action1 = ExtendedAction("one", "two", "func1", 4) action2 = ExtendedAction("one", "two", "func2", 4) self.assertNotEqual(action1, action2) self.assertTrue(not action1 == action2) self.assertTrue(action1 < action2) self.assertTrue(action1 <= action2) self.assertTrue(not action1 > action2) self.assertTrue(not action1 >= action2) self.assertTrue(action1 != action2) def testComparison_009(self): """ Test comparison of two differing objects, index differs (one None). """ action1 = ExtendedAction() action2 = ExtendedAction(index=42) self.assertNotEqual(action1, action2) self.assertTrue(not action1 == action2) self.assertTrue(action1 < action2) self.assertTrue(action1 <= action2) self.assertTrue(not action1 > action2) self.assertTrue(not action1 >= action2) self.assertTrue(action1 != action2) def testComparison_010(self): """ Test comparison of two differing objects, index differs. 
""" action1 = ExtendedAction("one", "two", "three", 99) action2 = ExtendedAction("one", "two", "three", 12) self.assertNotEqual(action1, action2) self.assertTrue(not action1 == action2) self.assertTrue(not action1 < action2) self.assertTrue(not action1 <= action2) self.assertTrue(action1 > action2) self.assertTrue(action1 >= action2) self.assertTrue(action1 != action2) def testComparison_011(self): """ Test comparison of two differing objects, dependencies differs (one None). """ action1 = ExtendedAction() action2 = ExtendedAction(dependencies=ActionDependencies()) self.assertNotEqual(action1, action2) self.assertTrue(not action1 == action2) self.assertTrue(action1 < action2) self.assertTrue(action1 <= action2) self.assertTrue(not action1 > action2) self.assertTrue(not action1 >= action2) self.assertTrue(action1 != action2) def testComparison_012(self): """ Test comparison of two differing objects, dependencies differs. """ action1 = ExtendedAction("one", "two", "three", 99, ActionDependencies(beforeList=[])) action2 = ExtendedAction("one", "two", "three", 99, ActionDependencies(beforeList=["ken", ])) self.assertNotEqual(action1, action2) self.assertTrue(not action1 == action2) self.assertTrue(action1 < action2) self.assertTrue(action1 <= action2) self.assertTrue(not action1 > action2) self.assertTrue(not action1 >= action2) self.assertTrue(action1 != action2) ############################ # TestCommandOverride class ############################ class TestCommandOverride(unittest.TestCase): """Tests for the CommandOverride class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors 
        (i.e. bad variable names).
        """
        obj = CommandOverride()
        obj.__repr__()
        obj.__str__()

    ##################################
    # Test constructor and attributes
    ##################################

    def testConstructor_001(self):
        """
        Test constructor with no values filled in.
        """
        override = CommandOverride()
        self.assertEqual(None, override.command)
        self.assertEqual(None, override.absolutePath)

    def testConstructor_002(self):
        """
        Test constructor with all values filled in, with valid values.
        """
        override = CommandOverride(command="command", absolutePath="/path/to/something")
        self.assertEqual("command", override.command)
        self.assertEqual("/path/to/something", override.absolutePath)

    def testConstructor_003(self):
        """
        Test assignment of command attribute, None value.
        """
        override = CommandOverride(command="command")
        self.assertEqual("command", override.command)
        override.command = None
        self.assertEqual(None, override.command)

    def testConstructor_004(self):
        """
        Test assignment of command attribute, valid value.
        """
        override = CommandOverride()
        self.assertEqual(None, override.command)
        override.command = "command"
        self.assertEqual("command", override.command)

    def testConstructor_005(self):
        """
        Test assignment of command attribute, invalid value.
        """
        override = CommandOverride()
        self.assertEqual(None, override.command)
        self.failUnlessAssignRaises(ValueError, override, "command", "")
        self.assertEqual(None, override.command)

    def testConstructor_006(self):
        """
        Test assignment of absolutePath attribute, None value.
        """
        override = CommandOverride(absolutePath="/path/to/something")
        self.assertEqual("/path/to/something", override.absolutePath)
        override.absolutePath = None
        self.assertEqual(None, override.absolutePath)

    def testConstructor_007(self):
        """
        Test assignment of absolutePath attribute, valid value.
""" override = CommandOverride() self.assertEqual(None, override.absolutePath) override.absolutePath = "/path/to/something" self.assertEqual("/path/to/something", override.absolutePath) def testConstructor_008(self): """ Test assignment of absolutePath attribute, invalid value. """ override = CommandOverride() override.command = None self.failUnlessAssignRaises(ValueError, override, "absolutePath", "path/to/something/relative") override.command = None self.failUnlessAssignRaises(ValueError, override, "absolutePath", "") override.command = None ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ override1 = CommandOverride() override2 = CommandOverride() self.assertEqual(override1, override2) self.assertTrue(override1 == override2) self.assertTrue(not override1 < override2) self.assertTrue(override1 <= override2) self.assertTrue(not override1 > override2) self.assertTrue(override1 >= override2) self.assertTrue(not override1 != override2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ override1 = CommandOverride(command="command", absolutePath="/path/to/something") override2 = CommandOverride(command="command", absolutePath="/path/to/something") self.assertEqual(override1, override2) self.assertTrue(override1 == override2) self.assertTrue(not override1 < override2) self.assertTrue(override1 <= override2) self.assertTrue(not override1 > override2) self.assertTrue(override1 >= override2) self.assertTrue(not override1 != override2) def testComparison_003(self): """ Test comparison of differing objects, command differs (one None). 
""" override1 = CommandOverride(command=None, absolutePath="/path/to/something") override2 = CommandOverride(command="command", absolutePath="/path/to/something") self.assertTrue(not override1 == override2) self.assertTrue(override1 < override2) self.assertTrue(override1 <= override2) self.assertTrue(not override1 > override2) self.assertTrue(not override1 >= override2) self.assertTrue(override1 != override2) def testComparison_004(self): """ Test comparison of differing objects, command differs. """ override1 = CommandOverride(command="command2", absolutePath="/path/to/something") override2 = CommandOverride(command="command1", absolutePath="/path/to/something") self.assertTrue(not override1 == override2) self.assertTrue(not override1 < override2) self.assertTrue(not override1 <= override2) self.assertTrue(override1 > override2) self.assertTrue(override1 >= override2) self.assertTrue(override1 != override2) def testComparison_005(self): """ Test comparison of differing objects, absolutePath differs (one None). """ override1 = CommandOverride(command="command", absolutePath="/path/to/something") override2 = CommandOverride(command="command", absolutePath=None) self.assertTrue(not override1 == override2) self.assertTrue(not override1 < override2) self.assertTrue(not override1 <= override2) self.assertTrue(override1 > override2) self.assertTrue(override1 >= override2) self.assertTrue(override1 != override2) def testComparison_006(self): """ Test comparison of differing objects, absolutePath differs. 
""" override1 = CommandOverride(command="command", absolutePath="/path/to/something1") override2 = CommandOverride(command="command", absolutePath="/path/to/something2") self.assertTrue(not override1 == override2) self.assertTrue(override1 < override2) self.assertTrue(override1 <= override2) self.assertTrue(not override1 > override2) self.assertTrue(not override1 >= override2) self.assertTrue(override1 != override2) ######################## # TestCollectFile class ######################## class TestCollectFile(unittest.TestCase): """Tests for the CollectFile class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = CollectFile() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ collectFile = CollectFile() self.assertEqual(None, collectFile.absolutePath) self.assertEqual(None, collectFile.collectMode) self.assertEqual(None, collectFile.archiveMode) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ collectFile = CollectFile("/etc/whatever", "incr", "tar") self.assertEqual("/etc/whatever", collectFile.absolutePath) self.assertEqual("incr", collectFile.collectMode) self.assertEqual("tar", collectFile.archiveMode) def testConstructor_003(self): """ Test assignment of absolutePath attribute, None value. 
""" collectFile = CollectFile(absolutePath="/whatever") self.assertEqual("/whatever", collectFile.absolutePath) collectFile.absolutePath = None self.assertEqual(None, collectFile.absolutePath) def testConstructor_004(self): """ Test assignment of absolutePath attribute, valid value. """ collectFile = CollectFile() self.assertEqual(None, collectFile.absolutePath) collectFile.absolutePath = "/etc/whatever" self.assertEqual("/etc/whatever", collectFile.absolutePath) def testConstructor_005(self): """ Test assignment of absolutePath attribute, invalid value (empty). """ collectFile = CollectFile() self.assertEqual(None, collectFile.absolutePath) self.failUnlessAssignRaises(ValueError, collectFile, "absolutePath", "") self.assertEqual(None, collectFile.absolutePath) def testConstructor_006(self): """ Test assignment of absolutePath attribute, invalid value (non-absolute). """ collectFile = CollectFile() self.assertEqual(None, collectFile.absolutePath) self.failUnlessAssignRaises(ValueError, collectFile, "absolutePath", "whatever") self.assertEqual(None, collectFile.absolutePath) def testConstructor_007(self): """ Test assignment of collectMode attribute, None value. """ collectFile = CollectFile(collectMode="incr") self.assertEqual("incr", collectFile.collectMode) collectFile.collectMode = None self.assertEqual(None, collectFile.collectMode) def testConstructor_008(self): """ Test assignment of collectMode attribute, valid value. """ collectFile = CollectFile() self.assertEqual(None, collectFile.collectMode) collectFile.collectMode = "daily" self.assertEqual("daily", collectFile.collectMode) collectFile.collectMode = "weekly" self.assertEqual("weekly", collectFile.collectMode) collectFile.collectMode = "incr" self.assertEqual("incr", collectFile.collectMode) def testConstructor_009(self): """ Test assignment of collectMode attribute, invalid value (empty). 
""" collectFile = CollectFile() self.assertEqual(None, collectFile.collectMode) self.failUnlessAssignRaises(ValueError, collectFile, "collectMode", "") self.assertEqual(None, collectFile.collectMode) def testConstructor_010(self): """ Test assignment of collectMode attribute, invalid value (not in list). """ collectFile = CollectFile() self.assertEqual(None, collectFile.collectMode) self.failUnlessAssignRaises(ValueError, collectFile, "collectMode", "bogus") self.assertEqual(None, collectFile.collectMode) def testConstructor_011(self): """ Test assignment of archiveMode attribute, None value. """ collectFile = CollectFile(archiveMode="tar") self.assertEqual("tar", collectFile.archiveMode) collectFile.archiveMode = None self.assertEqual(None, collectFile.archiveMode) def testConstructor_012(self): """ Test assignment of archiveMode attribute, valid value. """ collectFile = CollectFile() self.assertEqual(None, collectFile.archiveMode) collectFile.archiveMode = "tar" self.assertEqual("tar", collectFile.archiveMode) collectFile.archiveMode = "targz" self.assertEqual("targz", collectFile.archiveMode) collectFile.archiveMode = "tarbz2" self.assertEqual("tarbz2", collectFile.archiveMode) def testConstructor_013(self): """ Test assignment of archiveMode attribute, invalid value (empty). """ collectFile = CollectFile() self.assertEqual(None, collectFile.archiveMode) self.failUnlessAssignRaises(ValueError, collectFile, "archiveMode", "") self.assertEqual(None, collectFile.archiveMode) def testConstructor_014(self): """ Test assignment of archiveMode attribute, invalid value (not in list). """ collectFile = CollectFile() self.assertEqual(None, collectFile.archiveMode) self.failUnlessAssignRaises(ValueError, collectFile, "archiveMode", "bogus") self.assertEqual(None, collectFile.archiveMode) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. 
""" collectFile1 = CollectFile() collectFile2 = CollectFile() self.assertEqual(collectFile1, collectFile2) self.assertTrue(collectFile1 == collectFile2) self.assertTrue(not collectFile1 < collectFile2) self.assertTrue(collectFile1 <= collectFile2) self.assertTrue(not collectFile1 > collectFile2) self.assertTrue(collectFile1 >= collectFile2) self.assertTrue(not collectFile1 != collectFile2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ collectFile1 = CollectFile("/etc/whatever", "incr", "tar") collectFile2 = CollectFile("/etc/whatever", "incr", "tar") self.assertTrue(collectFile1 == collectFile2) self.assertTrue(not collectFile1 < collectFile2) self.assertTrue(collectFile1 <= collectFile2) self.assertTrue(not collectFile1 > collectFile2) self.assertTrue(collectFile1 >= collectFile2) self.assertTrue(not collectFile1 != collectFile2) def testComparison_003(self): """ Test comparison of two differing objects, absolutePath differs (one None). """ collectFile1 = CollectFile() collectFile2 = CollectFile(absolutePath="/whatever") self.assertNotEqual(collectFile1, collectFile2) self.assertTrue(not collectFile1 == collectFile2) self.assertTrue(collectFile1 < collectFile2) self.assertTrue(collectFile1 <= collectFile2) self.assertTrue(not collectFile1 > collectFile2) self.assertTrue(not collectFile1 >= collectFile2) self.assertTrue(collectFile1 != collectFile2) def testComparison_004(self): """ Test comparison of two differing objects, absolutePath differs. 
""" collectFile1 = CollectFile("/etc/whatever", "incr", "tar") collectFile2 = CollectFile("/stuff", "incr", "tar") self.assertNotEqual(collectFile1, collectFile2) self.assertTrue(not collectFile1 == collectFile2) self.assertTrue(collectFile1 < collectFile2) self.assertTrue(collectFile1 <= collectFile2) self.assertTrue(not collectFile1 > collectFile2) self.assertTrue(not collectFile1 >= collectFile2) self.assertTrue(collectFile1 != collectFile2) def testComparison_005(self): """ Test comparison of two differing objects, collectMode differs (one None). """ collectFile1 = CollectFile() collectFile2 = CollectFile(collectMode="incr") self.assertNotEqual(collectFile1, collectFile2) self.assertTrue(not collectFile1 == collectFile2) self.assertTrue(collectFile1 < collectFile2) self.assertTrue(collectFile1 <= collectFile2) self.assertTrue(not collectFile1 > collectFile2) self.assertTrue(not collectFile1 >= collectFile2) self.assertTrue(collectFile1 != collectFile2) def testComparison_006(self): """ Test comparison of two differing objects, collectMode differs. """ collectFile1 = CollectFile("/etc/whatever", "incr", "tar") collectFile2 = CollectFile("/etc/whatever", "daily", "tar") self.assertNotEqual(collectFile1, collectFile2) self.assertTrue(not collectFile1 == collectFile2) self.assertTrue(not collectFile1 < collectFile2) self.assertTrue(not collectFile1 <= collectFile2) self.assertTrue(collectFile1 > collectFile2) self.assertTrue(collectFile1 >= collectFile2) self.assertTrue(collectFile1 != collectFile2) def testComparison_007(self): """ Test comparison of two differing objects, archiveMode differs (one None). 
""" collectFile1 = CollectFile() collectFile2 = CollectFile(archiveMode="tar") self.assertNotEqual(collectFile1, collectFile2) self.assertTrue(not collectFile1 == collectFile2) self.assertTrue(collectFile1 < collectFile2) self.assertTrue(collectFile1 <= collectFile2) self.assertTrue(not collectFile1 > collectFile2) self.assertTrue(not collectFile1 >= collectFile2) self.assertTrue(collectFile1 != collectFile2) def testComparison_008(self): """ Test comparison of two differing objects, archiveMode differs. """ collectFile1 = CollectFile("/etc/whatever", "incr", "targz") collectFile2 = CollectFile("/etc/whatever", "incr", "tar") self.assertNotEqual(collectFile1, collectFile2) self.assertTrue(not collectFile1 == collectFile2) self.assertTrue(not collectFile1 < collectFile2) self.assertTrue(not collectFile1 <= collectFile2) self.assertTrue(collectFile1 > collectFile2) self.assertTrue(collectFile1 >= collectFile2) self.assertTrue(collectFile1 != collectFile2) ####################### # TestCollectDir class ####################### class TestCollectDir(unittest.TestCase): """Tests for the CollectDir class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = CollectDir() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. 
""" collectDir = CollectDir() self.assertEqual(None, collectDir.absolutePath) self.assertEqual(None, collectDir.collectMode) self.assertEqual(None, collectDir.archiveMode) self.assertEqual(None, collectDir.ignoreFile) self.assertEqual(None, collectDir.linkDepth) self.assertEqual(False, collectDir.dereference) self.assertEqual(None, collectDir.recursionLevel) self.assertEqual(None, collectDir.absoluteExcludePaths) self.assertEqual(None, collectDir.relativeExcludePaths) self.assertEqual(None, collectDir.excludePatterns) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ collectDir = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], [], 2, True, 6) self.assertEqual("/etc/whatever", collectDir.absolutePath) self.assertEqual("incr", collectDir.collectMode) self.assertEqual("tar", collectDir.archiveMode) self.assertEqual(".ignore", collectDir.ignoreFile) self.assertEqual(2, collectDir.linkDepth) self.assertEqual(True, collectDir.dereference) self.assertEqual(6, collectDir.recursionLevel) self.assertEqual([], collectDir.absoluteExcludePaths) self.assertEqual([], collectDir.relativeExcludePaths) self.assertEqual([], collectDir.excludePatterns) def testConstructor_003(self): """ Test assignment of absolutePath attribute, None value. """ collectDir = CollectDir(absolutePath="/whatever") self.assertEqual("/whatever", collectDir.absolutePath) collectDir.absolutePath = None self.assertEqual(None, collectDir.absolutePath) def testConstructor_004(self): """ Test assignment of absolutePath attribute, valid value. """ collectDir = CollectDir() self.assertEqual(None, collectDir.absolutePath) collectDir.absolutePath = "/etc/whatever" self.assertEqual("/etc/whatever", collectDir.absolutePath) def testConstructor_005(self): """ Test assignment of absolutePath attribute, invalid value (empty). 
""" collectDir = CollectDir() self.assertEqual(None, collectDir.absolutePath) self.failUnlessAssignRaises(ValueError, collectDir, "absolutePath", "") self.assertEqual(None, collectDir.absolutePath) def testConstructor_006(self): """ Test assignment of absolutePath attribute, invalid value (non-absolute). """ collectDir = CollectDir() self.assertEqual(None, collectDir.absolutePath) self.failUnlessAssignRaises(ValueError, collectDir, "absolutePath", "whatever") self.assertEqual(None, collectDir.absolutePath) def testConstructor_007(self): """ Test assignment of collectMode attribute, None value. """ collectDir = CollectDir(collectMode="incr") self.assertEqual("incr", collectDir.collectMode) collectDir.collectMode = None self.assertEqual(None, collectDir.collectMode) def testConstructor_008(self): """ Test assignment of collectMode attribute, valid value. """ collectDir = CollectDir() self.assertEqual(None, collectDir.collectMode) collectDir.collectMode = "daily" self.assertEqual("daily", collectDir.collectMode) collectDir.collectMode = "weekly" self.assertEqual("weekly", collectDir.collectMode) collectDir.collectMode = "incr" self.assertEqual("incr", collectDir.collectMode) def testConstructor_009(self): """ Test assignment of collectMode attribute, invalid value (empty). """ collectDir = CollectDir() self.assertEqual(None, collectDir.collectMode) self.failUnlessAssignRaises(ValueError, collectDir, "collectMode", "") self.assertEqual(None, collectDir.collectMode) def testConstructor_010(self): """ Test assignment of collectMode attribute, invalid value (not in list). """ collectDir = CollectDir() self.assertEqual(None, collectDir.collectMode) self.failUnlessAssignRaises(ValueError, collectDir, "collectMode", "bogus") self.assertEqual(None, collectDir.collectMode) def testConstructor_011(self): """ Test assignment of archiveMode attribute, None value. 
""" collectDir = CollectDir(archiveMode="tar") self.assertEqual("tar", collectDir.archiveMode) collectDir.archiveMode = None self.assertEqual(None, collectDir.archiveMode) def testConstructor_012(self): """ Test assignment of archiveMode attribute, valid value. """ collectDir = CollectDir() self.assertEqual(None, collectDir.archiveMode) collectDir.archiveMode = "tar" self.assertEqual("tar", collectDir.archiveMode) collectDir.archiveMode = "targz" self.assertEqual("targz", collectDir.archiveMode) collectDir.archiveMode = "tarbz2" self.assertEqual("tarbz2", collectDir.archiveMode) def testConstructor_013(self): """ Test assignment of archiveMode attribute, invalid value (empty). """ collectDir = CollectDir() self.assertEqual(None, collectDir.archiveMode) self.failUnlessAssignRaises(ValueError, collectDir, "archiveMode", "") self.assertEqual(None, collectDir.archiveMode) def testConstructor_014(self): """ Test assignment of archiveMode attribute, invalid value (not in list). """ collectDir = CollectDir() self.assertEqual(None, collectDir.archiveMode) self.failUnlessAssignRaises(ValueError, collectDir, "archiveMode", "bogus") self.assertEqual(None, collectDir.archiveMode) def testConstructor_015(self): """ Test assignment of ignoreFile attribute, None value. """ collectDir = CollectDir(ignoreFile="ignore") self.assertEqual("ignore", collectDir.ignoreFile) collectDir.ignoreFile = None self.assertEqual(None, collectDir.ignoreFile) def testConstructor_016(self): """ Test assignment of ignoreFile attribute, valid value. """ collectDir = CollectDir() self.assertEqual(None, collectDir.ignoreFile) collectDir.ignoreFile = "ignorefile" self.assertEqual("ignorefile", collectDir.ignoreFile) def testConstructor_017(self): """ Test assignment of ignoreFile attribute, invalid value (empty). 
""" collectDir = CollectDir() self.assertEqual(None, collectDir.ignoreFile) self.failUnlessAssignRaises(ValueError, collectDir, "ignoreFile", "") self.assertEqual(None, collectDir.ignoreFile) def testConstructor_018(self): """ Test assignment of absoluteExcludePaths attribute, None value. """ collectDir = CollectDir(absoluteExcludePaths=[]) self.assertEqual([], collectDir.absoluteExcludePaths) collectDir.absoluteExcludePaths = None self.assertEqual(None, collectDir.absoluteExcludePaths) def testConstructor_019(self): """ Test assignment of absoluteExcludePaths attribute, [] value. """ collectDir = CollectDir() self.assertEqual(None, collectDir.absoluteExcludePaths) collectDir.absoluteExcludePaths = [] self.assertEqual([], collectDir.absoluteExcludePaths) def testConstructor_020(self): """ Test assignment of absoluteExcludePaths attribute, single valid entry. """ collectDir = CollectDir() self.assertEqual(None, collectDir.absoluteExcludePaths) collectDir.absoluteExcludePaths = ["/whatever", ] self.assertEqual(["/whatever", ], collectDir.absoluteExcludePaths) collectDir.absoluteExcludePaths.append("/stuff") self.assertEqual(["/whatever", "/stuff", ], collectDir.absoluteExcludePaths) def testConstructor_021(self): """ Test assignment of absoluteExcludePaths attribute, multiple valid entries. """ collectDir = CollectDir() self.assertEqual(None, collectDir.absoluteExcludePaths) collectDir.absoluteExcludePaths = ["/whatever", "/stuff", ] self.assertEqual(["/whatever", "/stuff", ], collectDir.absoluteExcludePaths) collectDir.absoluteExcludePaths.append("/etc/X11") self.assertEqual(["/whatever", "/stuff", "/etc/X11", ], collectDir.absoluteExcludePaths) def testConstructor_022(self): """ Test assignment of absoluteExcludePaths attribute, single invalid entry (empty). 
""" collectDir = CollectDir() self.assertEqual(None, collectDir.absoluteExcludePaths) self.failUnlessAssignRaises(ValueError, collectDir, "absoluteExcludePaths", ["", ]) self.assertEqual(None, collectDir.absoluteExcludePaths) def testConstructor_023(self): """ Test assignment of absoluteExcludePaths attribute, single invalid entry (not absolute). """ collectDir = CollectDir() self.assertEqual(None, collectDir.absoluteExcludePaths) self.failUnlessAssignRaises(ValueError, collectDir, "absoluteExcludePaths", ["notabsolute", ]) self.assertEqual(None, collectDir.absoluteExcludePaths) def testConstructor_024(self): """ Test assignment of absoluteExcludePaths attribute, mixed valid and invalid entries. """ collectDir = CollectDir() self.assertEqual(None, collectDir.absoluteExcludePaths) self.failUnlessAssignRaises(ValueError, collectDir, "absoluteExcludePaths", ["/good", "bad", "/alsogood", ]) self.assertEqual(None, collectDir.absoluteExcludePaths) def testConstructor_025(self): """ Test assignment of relativeExcludePaths attribute, None value. """ collectDir = CollectDir(relativeExcludePaths=[]) self.assertEqual([], collectDir.relativeExcludePaths) collectDir.relativeExcludePaths = None self.assertEqual(None, collectDir.relativeExcludePaths) def testConstructor_026(self): """ Test assignment of relativeExcludePaths attribute, [] value. """ collectDir = CollectDir() self.assertEqual(None, collectDir.relativeExcludePaths) collectDir.relativeExcludePaths = [] self.assertEqual([], collectDir.relativeExcludePaths) def testConstructor_027(self): """ Test assignment of relativeExcludePaths attribute, single valid entry. 
""" collectDir = CollectDir() self.assertEqual(None, collectDir.relativeExcludePaths) collectDir.relativeExcludePaths = ["stuff", ] self.assertEqual(["stuff", ], collectDir.relativeExcludePaths) collectDir.relativeExcludePaths.insert(0, "bogus") self.assertEqual(["bogus", "stuff", ], collectDir.relativeExcludePaths) def testConstructor_028(self): """ Test assignment of relativeExcludePaths attribute, multiple valid entries. """ collectDir = CollectDir() self.assertEqual(None, collectDir.relativeExcludePaths) collectDir.relativeExcludePaths = ["bogus", "stuff", ] self.assertEqual(["bogus", "stuff", ], collectDir.relativeExcludePaths) collectDir.relativeExcludePaths.append("more") self.assertEqual(["bogus", "stuff", "more", ], collectDir.relativeExcludePaths) def testConstructor_029(self): """ Test assignment of excludePatterns attribute, None value. """ collectDir = CollectDir(excludePatterns=[]) self.assertEqual([], collectDir.excludePatterns) collectDir.excludePatterns = None self.assertEqual(None, collectDir.excludePatterns) def testConstructor_030(self): """ Test assignment of excludePatterns attribute, [] value. """ collectDir = CollectDir() self.assertEqual(None, collectDir.excludePatterns) collectDir.excludePatterns = [] self.assertEqual([], collectDir.excludePatterns) def testConstructor_031(self): """ Test assignment of excludePatterns attribute, single valid entry. """ collectDir = CollectDir() self.assertEqual(None, collectDir.excludePatterns) collectDir.excludePatterns = ["valid", ] self.assertEqual(["valid", ], collectDir.excludePatterns) collectDir.excludePatterns.append("more") self.assertEqual(["valid", "more", ], collectDir.excludePatterns) def testConstructor_032(self): """ Test assignment of excludePatterns attribute, multiple valid entries. 
""" collectDir = CollectDir() self.assertEqual(None, collectDir.excludePatterns) collectDir.excludePatterns = ["valid", "more", ] self.assertEqual(["valid", "more", ], collectDir.excludePatterns) collectDir.excludePatterns.insert(1, "bogus") self.assertEqual(["valid", "bogus", "more", ], collectDir.excludePatterns) def testConstructor_033(self): """ Test assignment of excludePatterns attribute, single invalid entry. """ collectDir = CollectDir() self.assertEqual(None, collectDir.excludePatterns) self.failUnlessAssignRaises(ValueError, collectDir, "excludePatterns", ["*.jpg", ]) self.assertEqual(None, collectDir.excludePatterns) def testConstructor_034(self): """ Test assignment of excludePatterns attribute, multiple invalid entries. """ collectDir = CollectDir() self.assertEqual(None, collectDir.excludePatterns) self.failUnlessAssignRaises(ValueError, collectDir, "excludePatterns", ["*.jpg", "*", ]) self.assertEqual(None, collectDir.excludePatterns) def testConstructor_035(self): """ Test assignment of excludePatterns attribute, mixed valid and invalid entries. """ collectDir = CollectDir() self.assertEqual(None, collectDir.excludePatterns) self.failUnlessAssignRaises(ValueError, collectDir, "excludePatterns", ["*.jpg", "valid", ]) self.assertEqual(None, collectDir.excludePatterns) def testConstructor_036(self): """ Test assignment of linkDepth attribute, None value. """ collectDir = CollectDir(linkDepth=1) self.assertEqual(1, collectDir.linkDepth) collectDir.linkDepth = None self.assertEqual(None, collectDir.linkDepth) def testConstructor_037(self): """ Test assignment of linkDepth attribute, valid value. """ collectDir = CollectDir() self.assertEqual(None, collectDir.linkDepth) collectDir.linkDepth = 1 self.assertEqual(1, collectDir.linkDepth) def testConstructor_038(self): """ Test assignment of linkDepth attribute, invalid value. 
""" collectDir = CollectDir() self.assertEqual(None, collectDir.linkDepth) self.failUnlessAssignRaises(ValueError, collectDir, "linkDepth", "ken") self.assertEqual(None, collectDir.linkDepth) def testConstructor_039(self): """ Test assignment of dereference attribute, None value. """ collectDir = CollectDir(dereference=True) self.assertEqual(True, collectDir.dereference) collectDir.dereference = None self.assertEqual(False, collectDir.dereference) def testConstructor_040(self): """ Test assignment of dereference attribute, valid value (real boolean). """ collectDir = CollectDir() self.assertEqual(False, collectDir.dereference) collectDir.dereference = True self.assertEqual(True, collectDir.dereference) collectDir.dereference = False self.assertEqual(False, collectDir.dereference) #pylint: disable=R0204 def testConstructor_041(self): """ Test assignment of dereference attribute, valid value (expression). """ collectDir = CollectDir() self.assertEqual(False, collectDir.dereference) collectDir.dereference = 0 self.assertEqual(False, collectDir.dereference) collectDir.dereference = [] self.assertEqual(False, collectDir.dereference) collectDir.dereference = None self.assertEqual(False, collectDir.dereference) collectDir.dereference = ['a'] self.assertEqual(True, collectDir.dereference) collectDir.dereference = 3 self.assertEqual(True, collectDir.dereference) def testConstructor_042(self): """ Test assignment of recursionLevel attribute, None value. """ collectDir = CollectDir(recursionLevel=1) self.assertEqual(1, collectDir.recursionLevel) collectDir.recursionLevel = None self.assertEqual(None, collectDir.recursionLevel) def testConstructor_043(self): """ Test assignment of recursionLevel attribute, valid value. """ collectDir = CollectDir() self.assertEqual(None, collectDir.recursionLevel) collectDir.recursionLevel = 1 self.assertEqual(1, collectDir.recursionLevel) def testConstructor_044(self): """ Test assignment of recursionLevel attribute, invalid value. 
""" collectDir = CollectDir() self.assertEqual(None, collectDir.recursionLevel) self.failUnlessAssignRaises(ValueError, collectDir, "recursionLevel", "ken") self.assertEqual(None, collectDir.recursionLevel) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ collectDir1 = CollectDir() collectDir2 = CollectDir() self.assertEqual(collectDir1, collectDir2) self.assertTrue(collectDir1 == collectDir2) self.assertTrue(not collectDir1 < collectDir2) self.assertTrue(collectDir1 <= collectDir2) self.assertTrue(not collectDir1 > collectDir2) self.assertTrue(collectDir1 >= collectDir2) self.assertTrue(not collectDir1 != collectDir2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None (empty lists). """ collectDir1 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], [], 1, True, 6) collectDir2 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], [], 1, True, 6) self.assertTrue(collectDir1 == collectDir2) self.assertTrue(not collectDir1 < collectDir2) self.assertTrue(collectDir1 <= collectDir2) self.assertTrue(not collectDir1 > collectDir2) self.assertTrue(collectDir1 >= collectDir2) self.assertTrue(not collectDir1 != collectDir2) def testComparison_003(self): """ Test comparison of two identical objects, all attributes non-None (non-empty lists). 
""" collectDir1 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", ["/one", ], ["two", ], ["three", ], 1, True, 6) collectDir2 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", ["/one", ], ["two", ], ["three", ], 1, True, 6) self.assertTrue(collectDir1 == collectDir2) self.assertTrue(not collectDir1 < collectDir2) self.assertTrue(collectDir1 <= collectDir2) self.assertTrue(not collectDir1 > collectDir2) self.assertTrue(collectDir1 >= collectDir2) self.assertTrue(not collectDir1 != collectDir2) def testComparison_004(self): """ Test comparison of two differing objects, absolutePath differs (one None). """ collectDir1 = CollectDir() collectDir2 = CollectDir(absolutePath="/whatever") self.assertNotEqual(collectDir1, collectDir2) self.assertTrue(not collectDir1 == collectDir2) self.assertTrue(collectDir1 < collectDir2) self.assertTrue(collectDir1 <= collectDir2) self.assertTrue(not collectDir1 > collectDir2) self.assertTrue(not collectDir1 >= collectDir2) self.assertTrue(collectDir1 != collectDir2) def testComparison_005(self): """ Test comparison of two differing objects, absolutePath differs. """ collectDir1 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], [], 1, True, 6) collectDir2 = CollectDir("/stuff", "incr", "tar", ".ignore", [], [], [], 1, True, 6) self.assertNotEqual(collectDir1, collectDir2) self.assertTrue(not collectDir1 == collectDir2) self.assertTrue(collectDir1 < collectDir2) self.assertTrue(collectDir1 <= collectDir2) self.assertTrue(not collectDir1 > collectDir2) self.assertTrue(not collectDir1 >= collectDir2) self.assertTrue(collectDir1 != collectDir2) def testComparison_006(self): """ Test comparison of two differing objects, collectMode differs (one None). 
""" collectDir1 = CollectDir() collectDir2 = CollectDir(collectMode="incr") self.assertNotEqual(collectDir1, collectDir2) self.assertTrue(not collectDir1 == collectDir2) self.assertTrue(collectDir1 < collectDir2) self.assertTrue(collectDir1 <= collectDir2) self.assertTrue(not collectDir1 > collectDir2) self.assertTrue(not collectDir1 >= collectDir2) self.assertTrue(collectDir1 != collectDir2) def testComparison_007(self): """ Test comparison of two differing objects, collectMode differs. """ collectDir1 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], [], 1, True, 6) collectDir2 = CollectDir("/etc/whatever", "daily", "tar", ".ignore", [], [], [], 1, True, 6) self.assertNotEqual(collectDir1, collectDir2) self.assertTrue(not collectDir1 == collectDir2) self.assertTrue(not collectDir1 < collectDir2) self.assertTrue(not collectDir1 <= collectDir2) self.assertTrue(collectDir1 > collectDir2) self.assertTrue(collectDir1 >= collectDir2) self.assertTrue(collectDir1 != collectDir2) def testComparison_008(self): """ Test comparison of two differing objects, archiveMode differs (one None). """ collectDir1 = CollectDir() collectDir2 = CollectDir(archiveMode="tar") self.assertNotEqual(collectDir1, collectDir2) self.assertTrue(not collectDir1 == collectDir2) self.assertTrue(collectDir1 < collectDir2) self.assertTrue(collectDir1 <= collectDir2) self.assertTrue(not collectDir1 > collectDir2) self.assertTrue(not collectDir1 >= collectDir2) self.assertTrue(collectDir1 != collectDir2) def testComparison_009(self): """ Test comparison of two differing objects, archiveMode differs. 
        """
        collectDir1 = CollectDir("/etc/whatever", "incr", "targz", ".ignore", [], [], [], 1, True, 6)
        collectDir2 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], [], 1, True, 6)
        self.assertNotEqual(collectDir1, collectDir2)
        self.assertTrue(not collectDir1 == collectDir2)
        self.assertTrue(not collectDir1 < collectDir2)
        self.assertTrue(not collectDir1 <= collectDir2)
        self.assertTrue(collectDir1 > collectDir2)
        self.assertTrue(collectDir1 >= collectDir2)
        self.assertTrue(collectDir1 != collectDir2)

    def testComparison_010(self):
        """
        Test comparison of two differing objects, ignoreFile differs (one None).
        """
        collectDir1 = CollectDir()
        collectDir2 = CollectDir(ignoreFile="ignore")
        self.assertNotEqual(collectDir1, collectDir2)
        self.assertTrue(not collectDir1 == collectDir2)
        self.assertTrue(collectDir1 < collectDir2)
        self.assertTrue(collectDir1 <= collectDir2)
        self.assertTrue(not collectDir1 > collectDir2)
        self.assertTrue(not collectDir1 >= collectDir2)
        self.assertTrue(collectDir1 != collectDir2)

    def testComparison_011(self):
        """
        Test comparison of two differing objects, ignoreFile differs.
        """
        collectDir1 = CollectDir("/etc/whatever", "incr", "tar", "ignore", [], [], [], 1, True, 6)
        collectDir2 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], [], 1, True, 6)
        self.assertNotEqual(collectDir1, collectDir2)
        self.assertTrue(not collectDir1 == collectDir2)
        self.assertTrue(not collectDir1 < collectDir2)
        self.assertTrue(not collectDir1 <= collectDir2)
        self.assertTrue(collectDir1 > collectDir2)
        self.assertTrue(collectDir1 >= collectDir2)
        self.assertTrue(collectDir1 != collectDir2)

    def testComparison_012(self):
        """
        Test comparison of two differing objects, absoluteExcludePaths differs (one None, one empty).
        """
        collectDir1 = CollectDir()
        collectDir2 = CollectDir(absoluteExcludePaths=[])
        self.assertNotEqual(collectDir1, collectDir2)
        self.assertTrue(not collectDir1 == collectDir2)
        self.assertTrue(collectDir1 < collectDir2)
        self.assertTrue(collectDir1 <= collectDir2)
        self.assertTrue(not collectDir1 > collectDir2)
        self.assertTrue(not collectDir1 >= collectDir2)
        self.assertTrue(collectDir1 != collectDir2)

    def testComparison_013(self):
        """
        Test comparison of two differing objects, absoluteExcludePaths differs (one None, one not empty).
        """
        collectDir1 = CollectDir()
        collectDir2 = CollectDir(absoluteExcludePaths=["/whatever", ])
        self.assertNotEqual(collectDir1, collectDir2)
        self.assertTrue(not collectDir1 == collectDir2)
        self.assertTrue(collectDir1 < collectDir2)
        self.assertTrue(collectDir1 <= collectDir2)
        self.assertTrue(not collectDir1 > collectDir2)
        self.assertTrue(not collectDir1 >= collectDir2)
        self.assertTrue(collectDir1 != collectDir2)

    def testComparison_014(self):
        """
        Test comparison of two differing objects, absoluteExcludePaths differs (one empty, one not empty).
        """
        collectDir1 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], [], 1, True, 6)
        collectDir2 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", ["/whatever", ], [], [], 1, True, 6)
        self.assertNotEqual(collectDir1, collectDir2)
        self.assertTrue(not collectDir1 == collectDir2)
        self.assertTrue(collectDir1 < collectDir2)
        self.assertTrue(collectDir1 <= collectDir2)
        self.assertTrue(not collectDir1 > collectDir2)
        self.assertTrue(not collectDir1 >= collectDir2)
        self.assertTrue(collectDir1 != collectDir2)

    def testComparison_015(self):
        """
        Test comparison of two differing objects, absoluteExcludePaths differs (both not empty).
        """
        collectDir1 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", ["/stuff", ], [], [], 1, True, 6)
        collectDir2 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", ["/stuff", "/something", ], [], [], 1, True, 6)
        self.assertNotEqual(collectDir1, collectDir2)
        self.assertTrue(not collectDir1 == collectDir2)
        self.assertTrue(not collectDir1 < collectDir2)   # note: different than standard due to unsorted list
        self.assertTrue(not collectDir1 <= collectDir2)  # note: different than standard due to unsorted list
        self.assertTrue(collectDir1 > collectDir2)       # note: different than standard due to unsorted list
        self.assertTrue(collectDir1 >= collectDir2)      # note: different than standard due to unsorted list
        self.assertTrue(collectDir1 != collectDir2)

    def testComparison_016(self):
        """
        Test comparison of two differing objects, relativeExcludePaths differs (one None, one empty).
        """
        collectDir1 = CollectDir()
        collectDir2 = CollectDir(relativeExcludePaths=[])
        self.assertNotEqual(collectDir1, collectDir2)
        self.assertTrue(not collectDir1 == collectDir2)
        self.assertTrue(collectDir1 < collectDir2)
        self.assertTrue(collectDir1 <= collectDir2)
        self.assertTrue(not collectDir1 > collectDir2)
        self.assertTrue(not collectDir1 >= collectDir2)
        self.assertTrue(collectDir1 != collectDir2)

    def testComparison_017(self):
        """
        Test comparison of two differing objects, relativeExcludePaths differs (one None, one not empty).
        """
        collectDir1 = CollectDir()
        collectDir2 = CollectDir(relativeExcludePaths=["stuff", "other", ])
        self.assertNotEqual(collectDir1, collectDir2)
        self.assertTrue(not collectDir1 == collectDir2)
        self.assertTrue(collectDir1 < collectDir2)
        self.assertTrue(collectDir1 <= collectDir2)
        self.assertTrue(not collectDir1 > collectDir2)
        self.assertTrue(not collectDir1 >= collectDir2)
        self.assertTrue(collectDir1 != collectDir2)

    def testComparison_018(self):
        """
        Test comparison of two differing objects, relativeExcludePaths differs (one empty, one not empty).
        """
        collectDir1 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], ["one", ], [], 1, True, 6)
        collectDir2 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], [], 1, True, 6)
        self.assertNotEqual(collectDir1, collectDir2)
        self.assertTrue(not collectDir1 == collectDir2)
        self.assertTrue(not collectDir1 < collectDir2)
        self.assertTrue(not collectDir1 <= collectDir2)
        self.assertTrue(collectDir1 > collectDir2)
        self.assertTrue(collectDir1 >= collectDir2)
        self.assertTrue(collectDir1 != collectDir2)

    def testComparison_019(self):
        """
        Test comparison of two differing objects, relativeExcludePaths differs (both not empty).
        """
        collectDir1 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], ["one", ], [], 1, True, 6)
        collectDir2 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], ["two", ], [], 1, True, 6)
        self.assertNotEqual(collectDir1, collectDir2)
        self.assertTrue(not collectDir1 == collectDir2)
        self.assertTrue(collectDir1 < collectDir2)
        self.assertTrue(collectDir1 <= collectDir2)
        self.assertTrue(not collectDir1 > collectDir2)
        self.assertTrue(not collectDir1 >= collectDir2)
        self.assertTrue(collectDir1 != collectDir2)

    def testComparison_020(self):
        """
        Test comparison of two differing objects, excludePatterns differs (one None, one empty).
        """
        collectDir1 = CollectDir()
        collectDir2 = CollectDir(excludePatterns=[])
        self.assertNotEqual(collectDir1, collectDir2)
        self.assertTrue(not collectDir1 == collectDir2)
        self.assertTrue(collectDir1 < collectDir2)
        self.assertTrue(collectDir1 <= collectDir2)
        self.assertTrue(not collectDir1 > collectDir2)
        self.assertTrue(not collectDir1 >= collectDir2)
        self.assertTrue(collectDir1 != collectDir2)

    def testComparison_021(self):
        """
        Test comparison of two differing objects, excludePatterns differs (one None, one not empty).
        """
        collectDir1 = CollectDir()
        collectDir2 = CollectDir(excludePatterns=["one", "two", "three", ])
        self.assertNotEqual(collectDir1, collectDir2)
        self.assertTrue(not collectDir1 == collectDir2)
        self.assertTrue(collectDir1 < collectDir2)
        self.assertTrue(collectDir1 <= collectDir2)
        self.assertTrue(not collectDir1 > collectDir2)
        self.assertTrue(not collectDir1 >= collectDir2)
        self.assertTrue(collectDir1 != collectDir2)

    def testComparison_022(self):
        """
        Test comparison of two differing objects, excludePatterns differs (one empty, one not empty).
        """
        collectDir1 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], [], 1, True, 6)
        collectDir2 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], ["pattern", ], 1, True, 6)
        self.assertNotEqual(collectDir1, collectDir2)
        self.assertTrue(not collectDir1 == collectDir2)
        self.assertTrue(collectDir1 < collectDir2)
        self.assertTrue(collectDir1 <= collectDir2)
        self.assertTrue(not collectDir1 > collectDir2)
        self.assertTrue(not collectDir1 >= collectDir2)
        self.assertTrue(collectDir1 != collectDir2)

    def testComparison_023(self):
        """
        Test comparison of two differing objects, excludePatterns differs (both not empty).
        """
        collectDir1 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], ["p1", ], 1, True, 6)
        collectDir2 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], ["p2", ], 1, True, 6)
        self.assertNotEqual(collectDir1, collectDir2)
        self.assertTrue(not collectDir1 == collectDir2)
        self.assertTrue(collectDir1 < collectDir2)
        self.assertTrue(collectDir1 <= collectDir2)
        self.assertTrue(not collectDir1 > collectDir2)
        self.assertTrue(not collectDir1 >= collectDir2)
        self.assertTrue(collectDir1 != collectDir2)

    def testComparison_024(self):
        """
        Test comparison of two differing objects, linkDepth differs (one None).
        """
        collectDir1 = CollectDir()
        collectDir2 = CollectDir(linkDepth=1)
        self.assertNotEqual(collectDir1, collectDir2)
        self.assertTrue(not collectDir1 == collectDir2)
        self.assertTrue(collectDir1 < collectDir2)
        self.assertTrue(collectDir1 <= collectDir2)
        self.assertTrue(not collectDir1 > collectDir2)
        self.assertTrue(not collectDir1 >= collectDir2)
        self.assertTrue(collectDir1 != collectDir2)

    def testComparison_025(self):
        """
        Test comparison of two differing objects, linkDepth differs.
        """
        collectDir1 = CollectDir("/etc/whatever", "incr", "tar", "ignore", [], [], [], 2, True, 6)
        collectDir2 = CollectDir("/etc/whatever", "incr", "tar", "ignore", [], [], [], 1, True, 6)
        self.assertNotEqual(collectDir1, collectDir2)
        self.assertTrue(not collectDir1 == collectDir2)
        self.assertTrue(not collectDir1 < collectDir2)
        self.assertTrue(not collectDir1 <= collectDir2)
        self.assertTrue(collectDir1 > collectDir2)
        self.assertTrue(collectDir1 >= collectDir2)
        self.assertTrue(collectDir1 != collectDir2)

    def testComparison_026(self):
        """
        Test comparison of two differing objects, dereference differs (one None).
        """
        collectDir1 = CollectDir()
        collectDir2 = CollectDir(dereference=True)
        self.assertNotEqual(collectDir1, collectDir2)
        self.assertTrue(not collectDir1 == collectDir2)
        self.assertTrue(collectDir1 < collectDir2)
        self.assertTrue(collectDir1 <= collectDir2)
        self.assertTrue(not collectDir1 > collectDir2)
        self.assertTrue(not collectDir1 >= collectDir2)
        self.assertTrue(collectDir1 != collectDir2)

    def testComparison_027(self):
        """
        Test comparison of two differing objects, dereference differs.
        """
        collectDir1 = CollectDir("/etc/whatever", "incr", "tar", "ignore", [], [], [], 1, True, 6)
        collectDir2 = CollectDir("/etc/whatever", "incr", "tar", "ignore", [], [], [], 1, False, 6)
        self.assertNotEqual(collectDir1, collectDir2)
        self.assertTrue(not collectDir1 == collectDir2)
        self.assertTrue(not collectDir1 < collectDir2)
        self.assertTrue(not collectDir1 <= collectDir2)
        self.assertTrue(collectDir1 > collectDir2)
        self.assertTrue(collectDir1 >= collectDir2)
        self.assertTrue(collectDir1 != collectDir2)

    def testComparison_028(self):
        """
        Test comparison of two differing objects, recursionLevel differs (one None).
        """
        collectDir1 = CollectDir()
        collectDir2 = CollectDir(recursionLevel=1)
        self.assertNotEqual(collectDir1, collectDir2)
        self.assertTrue(not collectDir1 == collectDir2)
        self.assertTrue(collectDir1 < collectDir2)
        self.assertTrue(collectDir1 <= collectDir2)
        self.assertTrue(not collectDir1 > collectDir2)
        self.assertTrue(not collectDir1 >= collectDir2)
        self.assertTrue(collectDir1 != collectDir2)

    def testComparison_029(self):
        """
        Test comparison of two differing objects, recursionLevel differs.
        """
        collectDir1 = CollectDir("/etc/whatever", "incr", "tar", "ignore", [], [], [], 1, True, 6)
        collectDir2 = CollectDir("/etc/whatever", "incr", "tar", "ignore", [], [], [], 1, True, 5)
        self.assertNotEqual(collectDir1, collectDir2)
        self.assertTrue(not collectDir1 == collectDir2)
        self.assertTrue(not collectDir1 < collectDir2)
        self.assertTrue(not collectDir1 <= collectDir2)
        self.assertTrue(collectDir1 > collectDir2)
        self.assertTrue(collectDir1 >= collectDir2)
        self.assertTrue(collectDir1 != collectDir2)


#####################
# TestPurgeDir class
#####################

class TestPurgeDir(unittest.TestCase):

    """Tests for the PurgeDir class."""

    ##################
    # Utility methods
    ##################

    def failUnlessAssignRaises(self, exception, obj, prop, value):
        """Equivalent of L{failUnlessRaises}, but used for property assignments instead."""
        failUnlessAssignRaises(self, exception, obj, prop, value)

    ############################
    # Test __repr__ and __str__
    ############################

    def testStringFuncs_001(self):
        """
        Just make sure that the string functions don't have errors (i.e. bad variable names).
        """
        obj = PurgeDir()
        obj.__repr__()
        obj.__str__()

    ##################################
    # Test constructor and attributes
    ##################################

    def testConstructor_001(self):
        """
        Test constructor with no values filled in.
        """
        purgeDir = PurgeDir()
        self.assertEqual(None, purgeDir.absolutePath)
        self.assertEqual(None, purgeDir.retainDays)

    def testConstructor_002(self):
        """
        Test constructor with all values filled in, with valid values.
        """
        purgeDir = PurgeDir("/whatever", 0)
        self.assertEqual("/whatever", purgeDir.absolutePath)
        self.assertEqual(0, purgeDir.retainDays)

    def testConstructor_003(self):
        """
        Test assignment of absolutePath attribute, None value.
        """
        purgeDir = PurgeDir(absolutePath="/whatever")
        self.assertEqual("/whatever", purgeDir.absolutePath)
        purgeDir.absolutePath = None
        self.assertEqual(None, purgeDir.absolutePath)

    def testConstructor_004(self):
        """
        Test assignment of absolutePath attribute, valid value.
        """
        purgeDir = PurgeDir()
        self.assertEqual(None, purgeDir.absolutePath)
        purgeDir.absolutePath = "/etc/whatever"
        self.assertEqual("/etc/whatever", purgeDir.absolutePath)

    def testConstructor_005(self):
        """
        Test assignment of absolutePath attribute, invalid value (empty).
        """
        purgeDir = PurgeDir()
        self.assertEqual(None, purgeDir.absolutePath)
        self.failUnlessAssignRaises(ValueError, purgeDir, "absolutePath", "")
        self.assertEqual(None, purgeDir.absolutePath)

    def testConstructor_006(self):
        """
        Test assignment of absolutePath attribute, invalid value (non-absolute).
        """
        purgeDir = PurgeDir()
        self.assertEqual(None, purgeDir.absolutePath)
        self.failUnlessAssignRaises(ValueError, purgeDir, "absolutePath", "bogus")
        self.assertEqual(None, purgeDir.absolutePath)

    def testConstructor_007(self):
        """
        Test assignment of retainDays attribute, None value.
        """
        purgeDir = PurgeDir(retainDays=12)
        self.assertEqual(12, purgeDir.retainDays)
        purgeDir.retainDays = None
        self.assertEqual(None, purgeDir.retainDays)

    def testConstructor_008(self):
        """
        Test assignment of retainDays attribute, valid value (integer).
        """
        purgeDir = PurgeDir()
        self.assertEqual(None, purgeDir.retainDays)
        purgeDir.retainDays = 12
        self.assertEqual(12, purgeDir.retainDays)

    def testConstructor_009(self):
        """
        Test assignment of retainDays attribute, valid value (string representing integer).
        """
        purgeDir = PurgeDir()
        self.assertEqual(None, purgeDir.retainDays)
        purgeDir.retainDays = "12"
        self.assertEqual(12, purgeDir.retainDays)

    def testConstructor_010(self):
        """
        Test assignment of retainDays attribute, invalid value (empty string).
        """
        purgeDir = PurgeDir()
        self.assertEqual(None, purgeDir.retainDays)
        self.failUnlessAssignRaises(ValueError, purgeDir, "retainDays", "")
        self.assertEqual(None, purgeDir.retainDays)

    def testConstructor_011(self):
        """
        Test assignment of retainDays attribute, invalid value (non-integer, like a list).
        """
        purgeDir = PurgeDir()
        self.assertEqual(None, purgeDir.retainDays)
        self.failUnlessAssignRaises(ValueError, purgeDir, "retainDays", [])
        self.assertEqual(None, purgeDir.retainDays)

    def testConstructor_012(self):
        """
        Test assignment of retainDays attribute, invalid value (string representing non-integer).
        """
        purgeDir = PurgeDir()
        self.assertEqual(None, purgeDir.retainDays)
        self.failUnlessAssignRaises(ValueError, purgeDir, "retainDays", "blech")
        self.assertEqual(None, purgeDir.retainDays)

    ############################
    # Test comparison operators
    ############################

    def testComparison_001(self):
        """
        Test comparison of two identical objects, all attributes None.
        """
        purgeDir1 = PurgeDir()
        purgeDir2 = PurgeDir()
        self.assertEqual(purgeDir1, purgeDir2)
        self.assertTrue(purgeDir1 == purgeDir2)
        self.assertTrue(not purgeDir1 < purgeDir2)
        self.assertTrue(purgeDir1 <= purgeDir2)
        self.assertTrue(not purgeDir1 > purgeDir2)
        self.assertTrue(purgeDir1 >= purgeDir2)
        self.assertTrue(not purgeDir1 != purgeDir2)

    def testComparison_002(self):
        """
        Test comparison of two identical objects, all attributes non-None.
        """
        purgeDir1 = PurgeDir("/etc/whatever", 12)
        purgeDir2 = PurgeDir("/etc/whatever", 12)
        self.assertTrue(purgeDir1 == purgeDir2)
        self.assertTrue(not purgeDir1 < purgeDir2)
        self.assertTrue(purgeDir1 <= purgeDir2)
        self.assertTrue(not purgeDir1 > purgeDir2)
        self.assertTrue(purgeDir1 >= purgeDir2)
        self.assertTrue(not purgeDir1 != purgeDir2)

    def testComparison_003(self):
        """
        Test comparison of two differing objects, absolutePath differs (one None).
        """
        purgeDir1 = PurgeDir()
        purgeDir2 = PurgeDir(absolutePath="/whatever")
        self.assertNotEqual(purgeDir1, purgeDir2)
        self.assertTrue(not purgeDir1 == purgeDir2)
        self.assertTrue(purgeDir1 < purgeDir2)
        self.assertTrue(purgeDir1 <= purgeDir2)
        self.assertTrue(not purgeDir1 > purgeDir2)
        self.assertTrue(not purgeDir1 >= purgeDir2)
        self.assertTrue(purgeDir1 != purgeDir2)

    def testComparison_004(self):
        """
        Test comparison of two differing objects, absolutePath differs.
        """
        purgeDir1 = PurgeDir("/etc/blech", 12)
        purgeDir2 = PurgeDir("/etc/whatever", 12)
        self.assertNotEqual(purgeDir1, purgeDir2)
        self.assertTrue(not purgeDir1 == purgeDir2)
        self.assertTrue(purgeDir1 < purgeDir2)
        self.assertTrue(purgeDir1 <= purgeDir2)
        self.assertTrue(not purgeDir1 > purgeDir2)
        self.assertTrue(not purgeDir1 >= purgeDir2)
        self.assertTrue(purgeDir1 != purgeDir2)

    def testComparison_005(self):
        """
        Test comparison of two differing objects, retainDays differs (one None).
        """
        purgeDir1 = PurgeDir()
        purgeDir2 = PurgeDir(retainDays=365)
        self.assertNotEqual(purgeDir1, purgeDir2)
        self.assertTrue(not purgeDir1 == purgeDir2)
        self.assertTrue(purgeDir1 < purgeDir2)
        self.assertTrue(purgeDir1 <= purgeDir2)
        self.assertTrue(not purgeDir1 > purgeDir2)
        self.assertTrue(not purgeDir1 >= purgeDir2)
        self.assertTrue(purgeDir1 != purgeDir2)

    def testComparison_006(self):
        """
        Test comparison of two differing objects, retainDays differs.
        """
        purgeDir1 = PurgeDir("/etc/whatever", 365)
        purgeDir2 = PurgeDir("/etc/whatever", 12)
        self.assertNotEqual(purgeDir1, purgeDir2)
        self.assertTrue(not purgeDir1 == purgeDir2)
        self.assertTrue(not purgeDir1 < purgeDir2)
        self.assertTrue(not purgeDir1 <= purgeDir2)
        self.assertTrue(purgeDir1 > purgeDir2)
        self.assertTrue(purgeDir1 >= purgeDir2)
        self.assertTrue(purgeDir1 != purgeDir2)


######################
# TestLocalPeer class
######################

class TestLocalPeer(unittest.TestCase):

    """Tests for the LocalPeer class."""

    ##################
    # Utility methods
    ##################

    def failUnlessAssignRaises(self, exception, obj, prop, value):
        """Equivalent of L{failUnlessRaises}, but used for property assignments instead."""
        failUnlessAssignRaises(self, exception, obj, prop, value)

    ############################
    # Test __repr__ and __str__
    ############################

    def testStringFuncs_001(self):
        """
        Just make sure that the string functions don't have errors (i.e. bad variable names).
        """
        obj = LocalPeer()
        obj.__repr__()
        obj.__str__()

    ##################################
    # Test constructor and attributes
    ##################################

    def testConstructor_001(self):
        """
        Test constructor with no values filled in.
        """
        localPeer = LocalPeer()
        self.assertEqual(None, localPeer.name)
        self.assertEqual(None, localPeer.collectDir)
        self.assertEqual(None, localPeer.ignoreFailureMode)

    def testConstructor_002(self):
        """
        Test constructor with all values filled in, with valid values.
        """
        localPeer = LocalPeer("myname", "/whatever", "all")
        self.assertEqual("myname", localPeer.name)
        self.assertEqual("/whatever", localPeer.collectDir)
        self.assertEqual("all", localPeer.ignoreFailureMode)

    def testConstructor_003(self):
        """
        Test assignment of name attribute, None value.
        """
        localPeer = LocalPeer(name="myname")
        self.assertEqual("myname", localPeer.name)
        localPeer.name = None
        self.assertEqual(None, localPeer.name)

    def testConstructor_004(self):
        """
        Test assignment of name attribute, valid value.
        """
        localPeer = LocalPeer()
        self.assertEqual(None, localPeer.name)
        localPeer.name = "myname"
        self.assertEqual("myname", localPeer.name)

    def testConstructor_005(self):
        """
        Test assignment of name attribute, invalid value (empty).
        """
        localPeer = LocalPeer()
        self.assertEqual(None, localPeer.name)
        self.failUnlessAssignRaises(ValueError, localPeer, "name", "")
        self.assertEqual(None, localPeer.name)

    def testConstructor_006(self):
        """
        Test assignment of collectDir attribute, None value.
        """
        localPeer = LocalPeer(collectDir="/whatever")
        self.assertEqual("/whatever", localPeer.collectDir)
        localPeer.collectDir = None
        self.assertEqual(None, localPeer.collectDir)

    def testConstructor_007(self):
        """
        Test assignment of collectDir attribute, valid value.
        """
        localPeer = LocalPeer()
        self.assertEqual(None, localPeer.collectDir)
        localPeer.collectDir = "/etc/stuff"
        self.assertEqual("/etc/stuff", localPeer.collectDir)

    def testConstructor_008(self):
        """
        Test assignment of collectDir attribute, invalid value (empty).
        """
        localPeer = LocalPeer()
        self.assertEqual(None, localPeer.collectDir)
        self.failUnlessAssignRaises(ValueError, localPeer, "collectDir", "")
        self.assertEqual(None, localPeer.collectDir)

    def testConstructor_009(self):
        """
        Test assignment of collectDir attribute, invalid value (non-absolute).
        """
        localPeer = LocalPeer()
        self.assertEqual(None, localPeer.collectDir)
        self.failUnlessAssignRaises(ValueError, localPeer, "collectDir", "bogus")
        self.assertEqual(None, localPeer.collectDir)

    def testConstructor_010(self):
        """
        Test assignment of ignoreFailureMode attribute, valid values.
        """
        localPeer = LocalPeer()
        self.assertEqual(None, localPeer.ignoreFailureMode)
        localPeer.ignoreFailureMode = "none"
        self.assertEqual("none", localPeer.ignoreFailureMode)
        localPeer.ignoreFailureMode = "all"
        self.assertEqual("all", localPeer.ignoreFailureMode)
        localPeer.ignoreFailureMode = "daily"
        self.assertEqual("daily", localPeer.ignoreFailureMode)
        localPeer.ignoreFailureMode = "weekly"
        self.assertEqual("weekly", localPeer.ignoreFailureMode)

    def testConstructor_011(self):
        """
        Test assignment of ignoreFailureMode attribute, invalid value.
        """
        localPeer = LocalPeer()
        self.assertEqual(None, localPeer.ignoreFailureMode)
        self.failUnlessAssignRaises(ValueError, localPeer, "ignoreFailureMode", "bogus")

    def testConstructor_012(self):
        """
        Test assignment of ignoreFailureMode attribute, None value.
        """
        localPeer = LocalPeer()
        self.assertEqual(None, localPeer.ignoreFailureMode)
        localPeer.ignoreFailureMode = None
        self.assertEqual(None, localPeer.ignoreFailureMode)

    ############################
    # Test comparison operators
    ############################

    def testComparison_001(self):
        """
        Test comparison of two identical objects, all attributes None.
        """
        localPeer1 = LocalPeer()
        localPeer2 = LocalPeer()
        self.assertEqual(localPeer1, localPeer2)
        self.assertTrue(localPeer1 == localPeer2)
        self.assertTrue(not localPeer1 < localPeer2)
        self.assertTrue(localPeer1 <= localPeer2)
        self.assertTrue(not localPeer1 > localPeer2)
        self.assertTrue(localPeer1 >= localPeer2)
        self.assertTrue(not localPeer1 != localPeer2)

    def testComparison_002(self):
        """
        Test comparison of two identical objects, all attributes non-None.
        """
        localPeer1 = LocalPeer("myname", "/etc/stuff", "all")
        localPeer2 = LocalPeer("myname", "/etc/stuff", "all")
        self.assertTrue(localPeer1 == localPeer2)
        self.assertTrue(not localPeer1 < localPeer2)
        self.assertTrue(localPeer1 <= localPeer2)
        self.assertTrue(not localPeer1 > localPeer2)
        self.assertTrue(localPeer1 >= localPeer2)
        self.assertTrue(not localPeer1 != localPeer2)

    def testComparison_003(self):
        """
        Test comparison of two differing objects, name differs (one None).
        """
        localPeer1 = LocalPeer()
        localPeer2 = LocalPeer(name="blech")
        self.assertNotEqual(localPeer1, localPeer2)
        self.assertTrue(not localPeer1 == localPeer2)
        self.assertTrue(localPeer1 < localPeer2)
        self.assertTrue(localPeer1 <= localPeer2)
        self.assertTrue(not localPeer1 > localPeer2)
        self.assertTrue(not localPeer1 >= localPeer2)
        self.assertTrue(localPeer1 != localPeer2)

    def testComparison_004(self):
        """
        Test comparison of two differing objects, collectDir differs.
        """
        localPeer1 = LocalPeer("name", "/etc/stuff", "all")
        localPeer2 = LocalPeer("name", "/etc/whatever", "all")
        self.assertNotEqual(localPeer1, localPeer2)
        self.assertTrue(not localPeer1 == localPeer2)
        self.assertTrue(localPeer1 < localPeer2)
        self.assertTrue(localPeer1 <= localPeer2)
        self.assertTrue(not localPeer1 > localPeer2)
        self.assertTrue(not localPeer1 >= localPeer2)
        self.assertTrue(localPeer1 != localPeer2)

    def testComparison_005(self):
        """
        Test comparison of two differing objects, collectDir differs (one None).
        """
        localPeer1 = LocalPeer()
        localPeer2 = LocalPeer(collectDir="/etc/whatever")
        self.assertNotEqual(localPeer1, localPeer2)
        self.assertTrue(not localPeer1 == localPeer2)
        self.assertTrue(localPeer1 < localPeer2)
        self.assertTrue(localPeer1 <= localPeer2)
        self.assertTrue(not localPeer1 > localPeer2)
        self.assertTrue(not localPeer1 >= localPeer2)
        self.assertTrue(localPeer1 != localPeer2)

    def testComparison_006(self):
        """
        Test comparison of two differing objects, name differs.
        """
        localPeer1 = LocalPeer("name2", "/etc/stuff", "all")
        localPeer2 = LocalPeer("name1", "/etc/stuff", "all")
        self.assertNotEqual(localPeer1, localPeer2)
        self.assertTrue(not localPeer1 == localPeer2)
        self.assertTrue(not localPeer1 < localPeer2)
        self.assertTrue(not localPeer1 <= localPeer2)
        self.assertTrue(localPeer1 > localPeer2)
        self.assertTrue(localPeer1 >= localPeer2)
        self.assertTrue(localPeer1 != localPeer2)

    def testComparison_008(self):
        """
        Test comparison of two differing objects, ignoreFailureMode differs (one None).
        """
        localPeer1 = LocalPeer()
        localPeer2 = LocalPeer(ignoreFailureMode="all")
        self.assertNotEqual(localPeer1, localPeer2)
        self.assertTrue(not localPeer1 == localPeer2)
        self.assertTrue(localPeer1 < localPeer2)
        self.assertTrue(localPeer1 <= localPeer2)
        self.assertTrue(not localPeer1 > localPeer2)
        self.assertTrue(not localPeer1 >= localPeer2)
        self.assertTrue(localPeer1 != localPeer2)

    def testComparison_009(self):
        """
        Test comparison of two differing objects, ignoreFailureMode differs.
        """
        localPeer1 = LocalPeer("name1", "/etc/stuff", "none")
        localPeer2 = LocalPeer("name1", "/etc/stuff", "all")
        self.assertNotEqual(localPeer1, localPeer2)
        self.assertTrue(not localPeer1 == localPeer2)
        self.assertTrue(not localPeer1 < localPeer2)
        self.assertTrue(not localPeer1 <= localPeer2)
        self.assertTrue(localPeer1 > localPeer2)
        self.assertTrue(localPeer1 >= localPeer2)
        self.assertTrue(localPeer1 != localPeer2)


#######################
# TestRemotePeer class
#######################

class TestRemotePeer(unittest.TestCase):

    """Tests for the RemotePeer class."""

    ##################
    # Utility methods
    ##################

    def failUnlessAssignRaises(self, exception, obj, prop, value):
        """Equivalent of L{failUnlessRaises}, but used for property assignments instead."""
        failUnlessAssignRaises(self, exception, obj, prop, value)

    ############################
    # Test __repr__ and __str__
    ############################

    def testStringFuncs_001(self):
        """
        Just make sure that the string functions don't have errors (i.e. bad variable names).
        """
        obj = RemotePeer()
        obj.__repr__()
        obj.__str__()

    ##################################
    # Test constructor and attributes
    ##################################

    def testConstructor_001(self):
        """
        Test constructor with no values filled in.
        """
        remotePeer = RemotePeer()
        self.assertEqual(None, remotePeer.name)
        self.assertEqual(None, remotePeer.collectDir)
        self.assertEqual(None, remotePeer.remoteUser)
        self.assertEqual(None, remotePeer.rcpCommand)
        self.assertEqual(None, remotePeer.rshCommand)
        self.assertEqual(None, remotePeer.cbackCommand)
        self.assertEqual(False, remotePeer.managed)
        self.assertEqual(None, remotePeer.managedActions)
        self.assertEqual(None, remotePeer.ignoreFailureMode)

    def testConstructor_002(self):
        """
        Test constructor with all values filled in, with valid values.
        """
        remotePeer = RemotePeer("myname", "/stuff", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all")
        self.assertEqual("myname", remotePeer.name)
        self.assertEqual("/stuff", remotePeer.collectDir)
        self.assertEqual("backup", remotePeer.remoteUser)
        self.assertEqual("scp -1 -B", remotePeer.rcpCommand)
        self.assertEqual("ssh", remotePeer.rshCommand)
        self.assertEqual("cback", remotePeer.cbackCommand)
        self.assertEqual(True, remotePeer.managed)
        self.assertEqual(["collect", ], remotePeer.managedActions)
        self.assertEqual("all", remotePeer.ignoreFailureMode)

    def testConstructor_003(self):
        """
        Test assignment of name attribute, None value.
        """
        remotePeer = RemotePeer(name="myname")
        self.assertEqual("myname", remotePeer.name)
        remotePeer.name = None
        self.assertEqual(None, remotePeer.name)

    def testConstructor_004(self):
        """
        Test assignment of name attribute, valid value.
        """
        remotePeer = RemotePeer()
        self.assertEqual(None, remotePeer.name)
        remotePeer.name = "namename"
        self.assertEqual("namename", remotePeer.name)

    def testConstructor_005(self):
        """
        Test assignment of name attribute, invalid value (empty).
        """
        remotePeer = RemotePeer()
        self.assertEqual(None, remotePeer.name)
        self.failUnlessAssignRaises(ValueError, remotePeer, "name", "")
        self.assertEqual(None, remotePeer.name)

    def testConstructor_006(self):
        """
        Test assignment of collectDir attribute, None value.
        """
        remotePeer = RemotePeer(collectDir="/etc/stuff")
        self.assertEqual("/etc/stuff", remotePeer.collectDir)
        remotePeer.collectDir = None
        self.assertEqual(None, remotePeer.collectDir)

    def testConstructor_007(self):
        """
        Test assignment of collectDir attribute, valid value.
        """
        remotePeer = RemotePeer()
        self.assertEqual(None, remotePeer.collectDir)
        remotePeer.collectDir = "/tmp"
        self.assertEqual("/tmp", remotePeer.collectDir)

    def testConstructor_008(self):
        """
        Test assignment of collectDir attribute, invalid value (empty).
        """
        remotePeer = RemotePeer()
        self.assertEqual(None, remotePeer.collectDir)
        self.failUnlessAssignRaises(ValueError, remotePeer, "collectDir", "")
        self.assertEqual(None, remotePeer.collectDir)

    def testConstructor_009(self):
        """
        Test assignment of collectDir attribute, invalid value (non-absolute).
        """
        remotePeer = RemotePeer()
        self.assertEqual(None, remotePeer.collectDir)
        self.failUnlessAssignRaises(ValueError, remotePeer, "collectDir", "bogus/stuff/there")
        self.assertEqual(None, remotePeer.collectDir)

    def testConstructor_010(self):
        """
        Test assignment of remoteUser attribute, None value.
        """
        remotePeer = RemotePeer(remoteUser="spot")
        self.assertEqual("spot", remotePeer.remoteUser)
        remotePeer.remoteUser = None
        self.assertEqual(None, remotePeer.remoteUser)

    def testConstructor_011(self):
        """
        Test assignment of remoteUser attribute, valid value.
        """
        remotePeer = RemotePeer()
        self.assertEqual(None, remotePeer.remoteUser)
        remotePeer.remoteUser = "spot"
        self.assertEqual("spot", remotePeer.remoteUser)

    def testConstructor_012(self):
        """
        Test assignment of remoteUser attribute, invalid value (empty).
        """
        remotePeer = RemotePeer()
        self.assertEqual(None, remotePeer.remoteUser)
        self.failUnlessAssignRaises(ValueError, remotePeer, "remoteUser", "")
        self.assertEqual(None, remotePeer.remoteUser)

    def testConstructor_013(self):
        """
        Test assignment of rcpCommand attribute, None value.
        """
        remotePeer = RemotePeer(rcpCommand="scp")
        self.assertEqual("scp", remotePeer.rcpCommand)
        remotePeer.rcpCommand = None
        self.assertEqual(None, remotePeer.rcpCommand)

    def testConstructor_014(self):
        """
        Test assignment of rcpCommand attribute, valid value.
        """
        remotePeer = RemotePeer()
        self.assertEqual(None, remotePeer.rcpCommand)
        remotePeer.rcpCommand = "scp"
        self.assertEqual("scp", remotePeer.rcpCommand)

    def testConstructor_015(self):
        """
        Test assignment of rcpCommand attribute, invalid value (empty).
""" remotePeer = RemotePeer() self.assertEqual(None, remotePeer.rcpCommand) self.failUnlessAssignRaises(ValueError, remotePeer, "rcpCommand", "") self.assertEqual(None, remotePeer.rcpCommand) def testConstructor_016(self): """ Test assignment of rshCommand attribute, valid value. """ remotePeer = RemotePeer() self.assertEqual(None, remotePeer.rshCommand) remotePeer.rshCommand = "scp" self.assertEqual("scp", remotePeer.rshCommand) def testConstructor_017(self): """ Test assignment of rshCommand attribute, invalid value (empty). """ remotePeer = RemotePeer() self.assertEqual(None, remotePeer.rshCommand) self.failUnlessAssignRaises(ValueError, remotePeer, "rshCommand", "") self.assertEqual(None, remotePeer.rshCommand) def testConstructor_018(self): """ Test assignment of cbackCommand attribute, valid value. """ remotePeer = RemotePeer() self.assertEqual(None, remotePeer.cbackCommand) remotePeer.cbackCommand = "scp" self.assertEqual("scp", remotePeer.cbackCommand) def testConstructor_019(self): """ Test assignment of cbackCommand attribute, invalid value (empty). """ remotePeer = RemotePeer() self.assertEqual(None, remotePeer.cbackCommand) self.failUnlessAssignRaises(ValueError, remotePeer, "cbackCommand", "") self.assertEqual(None, remotePeer.cbackCommand) def testConstructor_021(self): """ Test assignment of managed attribute, None value. """ remotePeer = RemotePeer(managed=True) self.assertEqual(True, remotePeer.managed) remotePeer.managed = None self.assertEqual(False, remotePeer.managed) def testConstructor_022(self): """ Test assignment of managed attribute, valid value (real boolean). """ remotePeer = RemotePeer() self.assertEqual(False, remotePeer.managed) remotePeer.managed = True self.assertEqual(True, remotePeer.managed) remotePeer.managed = False self.assertEqual(False, remotePeer.managed) #pylint: disable=R0204 def testConstructor_023(self): """ Test assignment of managed attribute, valid value (expression). 
""" remotePeer = RemotePeer() self.assertEqual(False, remotePeer.managed) remotePeer.managed = 0 self.assertEqual(False, remotePeer.managed) remotePeer.managed = [] self.assertEqual(False, remotePeer.managed) remotePeer.managed = None self.assertEqual(False, remotePeer.managed) remotePeer.managed = ['a'] self.assertEqual(True, remotePeer.managed) remotePeer.managed = 3 self.assertEqual(True, remotePeer.managed) def testConstructor_024(self): """ Test assignment of managedActions attribute, None value. """ remotePeer = RemotePeer() self.assertEqual(None, remotePeer.managedActions) remotePeer.managedActions = None self.assertEqual(None, remotePeer.managedActions) def testConstructor_025(self): """ Test assignment of managedActions attribute, empty list. """ remotePeer = RemotePeer() self.assertEqual(None, remotePeer.managedActions) remotePeer.managedActions = [] self.assertEqual([], remotePeer.managedActions) def testConstructor_026(self): """ Test assignment of managedActions attribute, non-empty list, valid values. """ remotePeer = RemotePeer() self.assertEqual(None, remotePeer.managedActions) remotePeer.managedActions = ['a', 'b', ] self.assertEqual(['a', 'b'], remotePeer.managedActions) def testConstructor_027(self): """ Test assignment of managedActions attribute, non-empty list, invalid value. 
""" remotePeer = RemotePeer() self.assertEqual(None, remotePeer.managedActions) self.failUnlessAssignRaises(ValueError, remotePeer, "managedActions", ["KEN", ]) self.assertEqual(None, remotePeer.managedActions) self.failUnlessAssignRaises(ValueError, remotePeer, "managedActions", ["hello, world" ]) self.assertEqual(None, remotePeer.managedActions) self.failUnlessAssignRaises(ValueError, remotePeer, "managedActions", ["dash-word", ]) self.assertEqual(None, remotePeer.managedActions) self.failUnlessAssignRaises(ValueError, remotePeer, "managedActions", ["", ]) self.assertEqual(None, remotePeer.managedActions) self.failUnlessAssignRaises(ValueError, remotePeer, "managedActions", [None, ]) self.assertEqual(None, remotePeer.managedActions) def testConstructor_028(self): """ Test assignment of managedActions attribute, non-empty list, mixed values. """ remotePeer = RemotePeer() self.assertEqual(None, remotePeer.managedActions) self.failUnlessAssignRaises(ValueError, remotePeer, "managedActions", ["ken", "dash-word", ]) def testConstructor_029(self): """ Test assignment of ignoreFailureMode attribute, valid values. """ remotePeer = RemotePeer() self.assertEqual(None, remotePeer.ignoreFailureMode) remotePeer.ignoreFailureMode = "none" self.assertEqual("none", remotePeer.ignoreFailureMode) remotePeer.ignoreFailureMode = "all" self.assertEqual("all", remotePeer.ignoreFailureMode) remotePeer.ignoreFailureMode = "daily" self.assertEqual("daily", remotePeer.ignoreFailureMode) remotePeer.ignoreFailureMode = "weekly" self.assertEqual("weekly", remotePeer.ignoreFailureMode) def testConstructor_030(self): """ Test assignment of ignoreFailureMode attribute, invalid value. """ remotePeer = RemotePeer() self.assertEqual(None, remotePeer.ignoreFailureMode) self.failUnlessAssignRaises(ValueError, remotePeer, "ignoreFailureMode", "bogus") def testConstructor_031(self): """ Test assignment of ignoreFailureMode attribute, None value. 
""" remotePeer = RemotePeer() self.assertEqual(None, remotePeer.ignoreFailureMode) remotePeer.ignoreFailureMode = None self.assertEqual(None, remotePeer.ignoreFailureMode) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ remotePeer1 = RemotePeer() remotePeer2 = RemotePeer() self.assertEqual(remotePeer1, remotePeer2) self.assertTrue(remotePeer1 == remotePeer2) self.assertTrue(not remotePeer1 < remotePeer2) self.assertTrue(remotePeer1 <= remotePeer2) self.assertTrue(not remotePeer1 > remotePeer2) self.assertTrue(remotePeer1 >= remotePeer2) self.assertTrue(not remotePeer1 != remotePeer2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ remotePeer1 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") self.assertTrue(remotePeer1 == remotePeer2) self.assertTrue(not remotePeer1 < remotePeer2) self.assertTrue(remotePeer1 <= remotePeer2) self.assertTrue(not remotePeer1 > remotePeer2) self.assertTrue(remotePeer1 >= remotePeer2) self.assertTrue(not remotePeer1 != remotePeer2) def testComparison_003(self): """ Test comparison of two differing objects, name differs (one None). """ remotePeer1 = RemotePeer() remotePeer2 = RemotePeer(name="name") self.assertNotEqual(remotePeer1, remotePeer2) self.assertTrue(not remotePeer1 == remotePeer2) self.assertTrue(remotePeer1 < remotePeer2) self.assertTrue(remotePeer1 <= remotePeer2) self.assertTrue(not remotePeer1 > remotePeer2) self.assertTrue(not remotePeer1 >= remotePeer2) self.assertTrue(remotePeer1 != remotePeer2) def testComparison_004(self): """ Test comparison of two differing objects, name differs. 
""" remotePeer1 = RemotePeer("name1", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") remotePeer2 = RemotePeer("name2", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") self.assertNotEqual(remotePeer1, remotePeer2) self.assertTrue(not remotePeer1 == remotePeer2) self.assertTrue(remotePeer1 < remotePeer2) self.assertTrue(remotePeer1 <= remotePeer2) self.assertTrue(not remotePeer1 > remotePeer2) self.assertTrue(not remotePeer1 >= remotePeer2) self.assertTrue(remotePeer1 != remotePeer2) def testComparison_005(self): """ Test comparison of two differing objects, collectDir differs (one None). """ remotePeer1 = RemotePeer() remotePeer2 = RemotePeer(collectDir="/tmp") self.assertNotEqual(remotePeer1, remotePeer2) self.assertTrue(not remotePeer1 == remotePeer2) self.assertTrue(remotePeer1 < remotePeer2) self.assertTrue(remotePeer1 <= remotePeer2) self.assertTrue(not remotePeer1 > remotePeer2) self.assertTrue(not remotePeer1 >= remotePeer2) self.assertTrue(remotePeer1 != remotePeer2) def testComparison_006(self): """ Test comparison of two differing objects, collectDir differs. """ remotePeer1 = RemotePeer("name", "/etc", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") self.assertNotEqual(remotePeer1, remotePeer2) self.assertTrue(not remotePeer1 == remotePeer2) self.assertTrue(remotePeer1 < remotePeer2) self.assertTrue(remotePeer1 <= remotePeer2) self.assertTrue(not remotePeer1 > remotePeer2) self.assertTrue(not remotePeer1 >= remotePeer2) self.assertTrue(remotePeer1 != remotePeer2) def testComparison_007(self): """ Test comparison of two differing objects, remoteUser differs (one None). 
""" remotePeer1 = RemotePeer() remotePeer2 = RemotePeer(remoteUser="spot") self.assertNotEqual(remotePeer1, remotePeer2) self.assertTrue(not remotePeer1 == remotePeer2) self.assertTrue(remotePeer1 < remotePeer2) self.assertTrue(remotePeer1 <= remotePeer2) self.assertTrue(not remotePeer1 > remotePeer2) self.assertTrue(not remotePeer1 >= remotePeer2) self.assertTrue(remotePeer1 != remotePeer2) def testComparison_008(self): """ Test comparison of two differing objects, remoteUser differs. """ remotePeer1 = RemotePeer("name", "/etc/stuff/tmp/X11", "spot", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") self.assertNotEqual(remotePeer1, remotePeer2) self.assertTrue(not remotePeer1 == remotePeer2) self.assertTrue(not remotePeer1 < remotePeer2) self.assertTrue(not remotePeer1 <= remotePeer2) self.assertTrue(remotePeer1 > remotePeer2) self.assertTrue(remotePeer1 >= remotePeer2) self.assertTrue(remotePeer1 != remotePeer2) def testComparison_009(self): """ Test comparison of two differing objects, rcpCommand differs (one None). """ remotePeer1 = RemotePeer() remotePeer2 = RemotePeer(rcpCommand="scp") self.assertNotEqual(remotePeer1, remotePeer2) self.assertTrue(not remotePeer1 == remotePeer2) self.assertTrue(remotePeer1 < remotePeer2) self.assertTrue(remotePeer1 <= remotePeer2) self.assertTrue(not remotePeer1 > remotePeer2) self.assertTrue(not remotePeer1 >= remotePeer2) self.assertTrue(remotePeer1 != remotePeer2) def testComparison_010(self): """ Test comparison of two differing objects, rcpCommand differs. 
""" remotePeer1 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -2 -B", "ssh", "cback", True, [ "collect", ], "all") remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") self.assertNotEqual(remotePeer1, remotePeer2) self.assertTrue(not remotePeer1 == remotePeer2) self.assertTrue(not remotePeer1 < remotePeer2) self.assertTrue(not remotePeer1 <= remotePeer2) self.assertTrue(remotePeer1 > remotePeer2) self.assertTrue(remotePeer1 >= remotePeer2) self.assertTrue(remotePeer1 != remotePeer2) def testComparison_011(self): """ Test comparison of two differing objects, rshCommand differs (one None). """ remotePeer1 = RemotePeer() remotePeer2 = RemotePeer(rshCommand="ssh") self.assertNotEqual(remotePeer1, remotePeer2) self.assertTrue(not remotePeer1 == remotePeer2) self.assertTrue(remotePeer1 < remotePeer2) self.assertTrue(remotePeer1 <= remotePeer2) self.assertTrue(not remotePeer1 > remotePeer2) self.assertTrue(not remotePeer1 >= remotePeer2) self.assertTrue(remotePeer1 != remotePeer2) def testComparison_012(self): """ Test comparison of two differing objects, rshCommand differs. """ remotePeer1 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh2", "cback", True, [ "collect", ], "all") remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh1", "cback", True, [ "collect", ], "all") self.assertNotEqual(remotePeer1, remotePeer2) self.assertTrue(not remotePeer1 == remotePeer2) self.assertTrue(not remotePeer1 < remotePeer2) self.assertTrue(not remotePeer1 <= remotePeer2) self.assertTrue(remotePeer1 > remotePeer2) self.assertTrue(remotePeer1 >= remotePeer2) self.assertTrue(remotePeer1 != remotePeer2) def testComparison_013(self): """ Test comparison of two differing objects, cbackCommand differs (one None). 
""" remotePeer1 = RemotePeer() remotePeer2 = RemotePeer(cbackCommand="cback") self.assertNotEqual(remotePeer1, remotePeer2) self.assertTrue(not remotePeer1 == remotePeer2) self.assertTrue(remotePeer1 < remotePeer2) self.assertTrue(remotePeer1 <= remotePeer2) self.assertTrue(not remotePeer1 > remotePeer2) self.assertTrue(not remotePeer1 >= remotePeer2) self.assertTrue(remotePeer1 != remotePeer2) def testComparison_014(self): """ Test comparison of two differing objects, cbackCommand differs. """ remotePeer1 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback2", True, [ "collect", ], "all") remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback1", True, [ "collect", ], "all") self.assertNotEqual(remotePeer1, remotePeer2) self.assertTrue(not remotePeer1 == remotePeer2) self.assertTrue(not remotePeer1 < remotePeer2) self.assertTrue(not remotePeer1 <= remotePeer2) self.assertTrue(remotePeer1 > remotePeer2) self.assertTrue(remotePeer1 >= remotePeer2) self.assertTrue(remotePeer1 != remotePeer2) def testComparison_015(self): """ Test comparison of two differing objects, managed differs (one None). """ remotePeer1 = RemotePeer() remotePeer2 = RemotePeer(managed=True) self.assertNotEqual(remotePeer1, remotePeer2) self.assertTrue(not remotePeer1 == remotePeer2) self.assertTrue(remotePeer1 < remotePeer2) self.assertTrue(remotePeer1 <= remotePeer2) self.assertTrue(not remotePeer1 > remotePeer2) self.assertTrue(not remotePeer1 >= remotePeer2) self.assertTrue(remotePeer1 != remotePeer2) def testComparison_016(self): """ Test comparison of two differing objects, managed differs. 
""" remotePeer1 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", False, [ "collect", ], "all") remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") self.assertNotEqual(remotePeer1, remotePeer2) self.assertTrue(not remotePeer1 == remotePeer2) self.assertTrue(remotePeer1 < remotePeer2) self.assertTrue(remotePeer1 <= remotePeer2) self.assertTrue(not remotePeer1 > remotePeer2) self.assertTrue(not remotePeer1 >= remotePeer2) self.assertTrue(remotePeer1 != remotePeer2) def testComparison_017(self): """ Test comparison of two differing objects, managedActions differs (one None, one empty). """ remotePeer1 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, None, "all") remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [], "all") self.assertNotEqual(remotePeer1, remotePeer2) self.assertTrue(not remotePeer1 == remotePeer2) self.assertTrue(remotePeer1 < remotePeer2) self.assertTrue(remotePeer1 <= remotePeer2) self.assertTrue(not remotePeer1 > remotePeer2) self.assertTrue(not remotePeer1 >= remotePeer2) self.assertTrue(remotePeer1 != remotePeer2) def testComparison_018(self): """ Test comparison of two differing objects, managedActions differs (one None, one not empty). 
""" remotePeer1 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, None, "all") remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") self.assertNotEqual(remotePeer1, remotePeer2) self.assertTrue(not remotePeer1 == remotePeer2) self.assertTrue(remotePeer1 < remotePeer2) self.assertTrue(remotePeer1 <= remotePeer2) self.assertTrue(not remotePeer1 > remotePeer2) self.assertTrue(not remotePeer1 >= remotePeer2) self.assertTrue(remotePeer1 != remotePeer2) def testComparison_019(self): """ Test comparison of two differing objects, managedActions differs (one empty, one not empty). """ remotePeer1 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [], "all" ) remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") self.assertNotEqual(remotePeer1, remotePeer2) self.assertTrue(not remotePeer1 == remotePeer2) self.assertTrue(remotePeer1 < remotePeer2) self.assertTrue(remotePeer1 <= remotePeer2) self.assertTrue(not remotePeer1 > remotePeer2) self.assertTrue(not remotePeer1 >= remotePeer2) self.assertTrue(remotePeer1 != remotePeer2) def testComparison_020(self): """ Test comparison of two differing objects, managedActions differs (both not empty). 
""" remotePeer1 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "purge", ], "all") remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") self.assertNotEqual(remotePeer1, remotePeer2) self.assertTrue(not remotePeer1 == remotePeer2) self.assertTrue(not remotePeer1 < remotePeer2) self.assertTrue(not remotePeer1 <= remotePeer2) self.assertTrue(remotePeer1 > remotePeer2) self.assertTrue(remotePeer1 >= remotePeer2) self.assertTrue(remotePeer1 != remotePeer2) def testComparison_021(self): """ Test comparison of two differing objects, ignoreFailureMode differs (one None). """ remotePeer1 = RemotePeer() remotePeer2 = RemotePeer(ignoreFailureMode="all") self.assertNotEqual(remotePeer1, remotePeer2) self.assertTrue(not remotePeer1 == remotePeer2) self.assertTrue(remotePeer1 < remotePeer2) self.assertTrue(remotePeer1 <= remotePeer2) self.assertTrue(not remotePeer1 > remotePeer2) self.assertTrue(not remotePeer1 >= remotePeer2) self.assertTrue(remotePeer1 != remotePeer2) def testComparison_022(self): """ Test comparison of two differing objects, ignoreFailureMode differs. 
""" remotePeer1 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "none") self.assertNotEqual(remotePeer1, remotePeer2) self.assertTrue(not remotePeer1 == remotePeer2) self.assertTrue(remotePeer1 < remotePeer2) self.assertTrue(remotePeer1 <= remotePeer2) self.assertTrue(not remotePeer1 > remotePeer2) self.assertTrue(not remotePeer1 >= remotePeer2) self.assertTrue(remotePeer1 != remotePeer2) ############################ # TestReferenceConfig class ############################ class TestReferenceConfig(unittest.TestCase): """Tests for the ReferenceConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = ReferenceConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ reference = ReferenceConfig() self.assertEqual(None, reference.author) self.assertEqual(None, reference.revision) self.assertEqual(None, reference.description) self.assertEqual(None, reference.generator) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. 
""" reference = ReferenceConfig("one", "two", "three", "four") self.assertEqual("one", reference.author) self.assertEqual("two", reference.revision) self.assertEqual("three", reference.description) self.assertEqual("four", reference.generator) def testConstructor_003(self): """ Test assignment of author attribute, None value. """ reference = ReferenceConfig(author="one") self.assertEqual("one", reference.author) reference.author = None self.assertEqual(None, reference.author) def testConstructor_004(self): """ Test assignment of author attribute, valid value. """ reference = ReferenceConfig() self.assertEqual(None, reference.author) reference.author = "one" self.assertEqual("one", reference.author) def testConstructor_005(self): """ Test assignment of author attribute, valid value (empty). """ reference = ReferenceConfig() self.assertEqual(None, reference.author) reference.author = "" self.assertEqual("", reference.author) def testConstructor_006(self): """ Test assignment of revision attribute, None value. """ reference = ReferenceConfig(revision="one") self.assertEqual("one", reference.revision) reference.revision = None self.assertEqual(None, reference.revision) def testConstructor_007(self): """ Test assignment of revision attribute, valid value. """ reference = ReferenceConfig() self.assertEqual(None, reference.revision) reference.revision = "one" self.assertEqual("one", reference.revision) def testConstructor_008(self): """ Test assignment of revision attribute, valid value (empty). """ reference = ReferenceConfig() self.assertEqual(None, reference.revision) reference.revision = "" self.assertEqual("", reference.revision) def testConstructor_009(self): """ Test assignment of description attribute, None value. 
""" reference = ReferenceConfig(description="one") self.assertEqual("one", reference.description) reference.description = None self.assertEqual(None, reference.description) def testConstructor_010(self): """ Test assignment of description attribute, valid value. """ reference = ReferenceConfig() self.assertEqual(None, reference.description) reference.description = "one" self.assertEqual("one", reference.description) def testConstructor_011(self): """ Test assignment of description attribute, valid value (empty). """ reference = ReferenceConfig() self.assertEqual(None, reference.description) reference.description = "" self.assertEqual("", reference.description) def testConstructor_012(self): """ Test assignment of generator attribute, None value. """ reference = ReferenceConfig(generator="one") self.assertEqual("one", reference.generator) reference.generator = None self.assertEqual(None, reference.generator) def testConstructor_013(self): """ Test assignment of generator attribute, valid value. """ reference = ReferenceConfig() self.assertEqual(None, reference.generator) reference.generator = "one" self.assertEqual("one", reference.generator) def testConstructor_014(self): """ Test assignment of generator attribute, valid value (empty). """ reference = ReferenceConfig() self.assertEqual(None, reference.generator) reference.generator = "" self.assertEqual("", reference.generator) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. 
""" reference1 = ReferenceConfig() reference2 = ReferenceConfig() self.assertEqual(reference1, reference2) self.assertTrue(reference1 == reference2) self.assertTrue(not reference1 < reference2) self.assertTrue(reference1 <= reference2) self.assertTrue(not reference1 > reference2) self.assertTrue(reference1 >= reference2) self.assertTrue(not reference1 != reference2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ reference1 = ReferenceConfig("one", "two", "three", "four") reference2 = ReferenceConfig("one", "two", "three", "four") self.assertTrue(reference1 == reference2) self.assertTrue(not reference1 < reference2) self.assertTrue(reference1 <= reference2) self.assertTrue(not reference1 > reference2) self.assertTrue(reference1 >= reference2) self.assertTrue(not reference1 != reference2) def testComparison_003(self): """ Test comparison of two differing objects, author differs (one None). """ reference1 = ReferenceConfig() reference2 = ReferenceConfig(author="one") self.assertNotEqual(reference1, reference2) self.assertTrue(not reference1 == reference2) self.assertTrue(reference1 < reference2) self.assertTrue(reference1 <= reference2) self.assertTrue(not reference1 > reference2) self.assertTrue(not reference1 >= reference2) self.assertTrue(reference1 != reference2) def testComparison_004(self): """ Test comparison of two differing objects, author differs (one empty). """ reference1 = ReferenceConfig("", "two", "three", "four") reference2 = ReferenceConfig("one", "two", "three", "four") self.assertNotEqual(reference1, reference2) self.assertTrue(not reference1 == reference2) self.assertTrue(reference1 < reference2) self.assertTrue(reference1 <= reference2) self.assertTrue(not reference1 > reference2) self.assertTrue(not reference1 >= reference2) self.assertTrue(reference1 != reference2) def testComparison_005(self): """ Test comparison of two differing objects, author differs. 
""" reference1 = ReferenceConfig("one", "two", "three", "four") reference2 = ReferenceConfig("author", "two", "three", "four") self.assertNotEqual(reference1, reference2) self.assertTrue(not reference1 == reference2) self.assertTrue(not reference1 < reference2) self.assertTrue(not reference1 <= reference2) self.assertTrue(reference1 > reference2) self.assertTrue(reference1 >= reference2) self.assertTrue(reference1 != reference2) def testComparison_006(self): """ Test comparison of two differing objects, revision differs (one None). """ reference1 = ReferenceConfig() reference2 = ReferenceConfig(revision="one") self.assertNotEqual(reference1, reference2) self.assertTrue(not reference1 == reference2) self.assertTrue(reference1 < reference2) self.assertTrue(reference1 <= reference2) self.assertTrue(not reference1 > reference2) self.assertTrue(not reference1 >= reference2) self.assertTrue(reference1 != reference2) def testComparison_007(self): """ Test comparison of two differing objects, revision differs (one empty). """ reference1 = ReferenceConfig("one", "two", "three", "four") reference2 = ReferenceConfig("one", "", "three", "four") self.assertNotEqual(reference1, reference2) self.assertTrue(not reference1 == reference2) self.assertTrue(not reference1 < reference2) self.assertTrue(not reference1 <= reference2) self.assertTrue(reference1 > reference2) self.assertTrue(reference1 >= reference2) self.assertTrue(reference1 != reference2) def testComparison_008(self): """ Test comparison of two differing objects, revision differs. 
""" reference1 = ReferenceConfig("one", "two", "three", "four") reference2 = ReferenceConfig("one", "revision", "three", "four") self.assertNotEqual(reference1, reference2) self.assertTrue(not reference1 == reference2) self.assertTrue(not reference1 < reference2) self.assertTrue(not reference1 <= reference2) self.assertTrue(reference1 > reference2) self.assertTrue(reference1 >= reference2) self.assertTrue(reference1 != reference2) def testComparison_009(self): """ Test comparison of two differing objects, description differs (one None). """ reference1 = ReferenceConfig() reference2 = ReferenceConfig(description="one") self.assertNotEqual(reference1, reference2) self.assertTrue(not reference1 == reference2) self.assertTrue(reference1 < reference2) self.assertTrue(reference1 <= reference2) self.assertTrue(not reference1 > reference2) self.assertTrue(not reference1 >= reference2) self.assertTrue(reference1 != reference2) def testComparison_010(self): """ Test comparison of two differing objects, description differs (one empty). """ reference1 = ReferenceConfig("one", "two", "three", "four") reference2 = ReferenceConfig("one", "two", "", "four") self.assertNotEqual(reference1, reference2) self.assertTrue(not reference1 == reference2) self.assertTrue(not reference1 < reference2) self.assertTrue(not reference1 <= reference2) self.assertTrue(reference1 > reference2) self.assertTrue(reference1 >= reference2) self.assertTrue(reference1 != reference2) def testComparison_011(self): """ Test comparison of two differing objects, description differs. 
""" reference1 = ReferenceConfig("one", "two", "description", "four") reference2 = ReferenceConfig("one", "two", "three", "four") self.assertNotEqual(reference1, reference2) self.assertTrue(not reference1 == reference2) self.assertTrue(reference1 < reference2) self.assertTrue(reference1 <= reference2) self.assertTrue(not reference1 > reference2) self.assertTrue(not reference1 >= reference2) self.assertTrue(reference1 != reference2) def testComparison_012(self): """ Test comparison of two differing objects, generator differs (one None). """ reference1 = ReferenceConfig() reference2 = ReferenceConfig(generator="one") self.assertNotEqual(reference1, reference2) self.assertTrue(not reference1 == reference2) self.assertTrue(reference1 < reference2) self.assertTrue(reference1 <= reference2) self.assertTrue(not reference1 > reference2) self.assertTrue(not reference1 >= reference2) self.assertTrue(reference1 != reference2) def testComparison_013(self): """ Test comparison of two differing objects, generator differs (one empty). """ reference1 = ReferenceConfig("one", "two", "three", "") reference2 = ReferenceConfig("one", "two", "three", "four") self.assertNotEqual(reference1, reference2) self.assertTrue(not reference1 == reference2) self.assertTrue(reference1 < reference2) self.assertTrue(reference1 <= reference2) self.assertTrue(not reference1 > reference2) self.assertTrue(not reference1 >= reference2) self.assertTrue(reference1 != reference2) def testComparison_014(self): """ Test comparison of two differing objects, generator differs. 
""" reference1 = ReferenceConfig("one", "two", "three", "four") reference2 = ReferenceConfig("one", "two", "three", "generator") self.assertNotEqual(reference1, reference2) self.assertTrue(not reference1 == reference2) self.assertTrue(reference1 < reference2) self.assertTrue(reference1 <= reference2) self.assertTrue(not reference1 > reference2) self.assertTrue(not reference1 >= reference2) self.assertTrue(reference1 != reference2) ############################# # TestExtensionsConfig class ############################# class TestExtensionsConfig(unittest.TestCase): """Tests for the ExtensionsConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = ExtensionsConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ extensions = ExtensionsConfig() self.assertEqual(None, extensions.orderMode) self.assertEqual(None, extensions.actions) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values (empty list), positional arguments. 
""" extensions = ExtensionsConfig([], None) self.assertEqual(None, extensions.orderMode) self.assertEqual([], extensions.actions) extensions = ExtensionsConfig([], "index") self.assertEqual("index", extensions.orderMode) self.assertEqual([], extensions.actions) extensions = ExtensionsConfig([], "dependency") self.assertEqual("dependency", extensions.orderMode) self.assertEqual([], extensions.actions) def testConstructor_003(self): """ Test constructor with all values filled in, with valid values (non-empty list), named arguments. """ extensions = ExtensionsConfig(orderMode=None, actions=[ExtendedAction(), ]) self.assertEqual(None, extensions.orderMode) self.assertEqual([ExtendedAction(), ], extensions.actions) extensions = ExtensionsConfig(orderMode="index", actions=[ExtendedAction(), ]) self.assertEqual("index", extensions.orderMode) self.assertEqual([ExtendedAction(), ], extensions.actions) extensions = ExtensionsConfig(orderMode="dependency", actions=[ExtendedAction(), ]) self.assertEqual("dependency", extensions.orderMode) self.assertEqual([ExtendedAction(), ], extensions.actions) def testConstructor_004(self): """ Test assignment of actions attribute, None value. """ extensions = ExtensionsConfig([]) self.assertEqual(None, extensions.orderMode) self.assertEqual([], extensions.actions) extensions.actions = None self.assertEqual(None, extensions.actions) def testConstructor_005(self): """ Test assignment of actions attribute, [] value. """ extensions = ExtensionsConfig() self.assertEqual(None, extensions.orderMode) self.assertEqual(None, extensions.actions) extensions.actions = [] self.assertEqual([], extensions.actions) def testConstructor_006(self): """ Test assignment of actions attribute, single valid entry. 
""" extensions = ExtensionsConfig() self.assertEqual(None, extensions.orderMode) self.assertEqual(None, extensions.actions) extensions.actions = [ExtendedAction(), ] self.assertEqual([ExtendedAction(), ], extensions.actions) def testConstructor_007(self): """ Test assignment of actions attribute, multiple valid entries. """ extensions = ExtensionsConfig() self.assertEqual(None, extensions.orderMode) self.assertEqual(None, extensions.actions) extensions.actions = [ExtendedAction("a", "b", "c", 1), ExtendedAction("d", "e", "f", 2), ] self.assertEqual([ExtendedAction("a", "b", "c", 1), ExtendedAction("d", "e", "f", 2), ], extensions.actions) def testConstructor_009(self): """ Test assignment of actions attribute, single invalid entry (not an ExtendedAction). """ extensions = ExtensionsConfig() self.assertEqual(None, extensions.orderMode) self.assertEqual(None, extensions.actions) self.failUnlessAssignRaises(ValueError, extensions, "actions", [ RemotePeer(), ]) self.assertEqual(None, extensions.actions) def testConstructor_010(self): """ Test assignment of actions attribute, mixed valid and invalid entries. """ extensions = ExtensionsConfig() self.assertEqual(None, extensions.orderMode) self.assertEqual(None, extensions.actions) self.failUnlessAssignRaises(ValueError, extensions, "actions", [ ExtendedAction(), RemotePeer(), ]) self.assertEqual(None, extensions.actions) def testConstructor_011(self): """ Test assignment of orderMode attribute, None value. """ extensions = ExtensionsConfig(orderMode="index") self.assertEqual("index", extensions.orderMode) self.assertEqual(None, extensions.actions) extensions.orderMode = None self.assertEqual(None, extensions.orderMode) def testConstructor_012(self): """ Test assignment of orderMode attribute, valid values. 
""" extensions = ExtensionsConfig() self.assertEqual(None, extensions.orderMode) self.assertEqual(None, extensions.actions) extensions.orderMode = "index" self.assertEqual("index", extensions.orderMode) extensions.orderMode = "dependency" self.assertEqual("dependency", extensions.orderMode) def testConstructor_013(self): """ Test assignment of orderMode attribute, invalid values. """ extensions = ExtensionsConfig() self.assertEqual(None, extensions.orderMode) self.assertEqual(None, extensions.actions) self.failUnlessAssignRaises(ValueError, extensions, "orderMode", "") self.failUnlessAssignRaises(ValueError, extensions, "orderMode", "bogus") self.failUnlessAssignRaises(ValueError, extensions, "orderMode", "indexes") self.failUnlessAssignRaises(ValueError, extensions, "orderMode", "indices") self.failUnlessAssignRaises(ValueError, extensions, "orderMode", "dependencies") ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ extensions1 = ExtensionsConfig() extensions2 = ExtensionsConfig() self.assertEqual(extensions1, extensions2) self.assertTrue(extensions1 == extensions2) self.assertTrue(not extensions1 < extensions2) self.assertTrue(extensions1 <= extensions2) self.assertTrue(not extensions1 > extensions2) self.assertTrue(extensions1 >= extensions2) self.assertTrue(not extensions1 != extensions2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None (empty lists). 
""" extensions1 = ExtensionsConfig([], "index") extensions2 = ExtensionsConfig([], "index") self.assertEqual(extensions1, extensions2) self.assertTrue(extensions1 == extensions2) self.assertTrue(not extensions1 < extensions2) self.assertTrue(extensions1 <= extensions2) self.assertTrue(not extensions1 > extensions2) self.assertTrue(extensions1 >= extensions2) self.assertTrue(not extensions1 != extensions2) def testComparison_003(self): """ Test comparison of two identical objects, all attributes non-None (non-empty lists). """ extensions1 = ExtensionsConfig([ExtendedAction(), ], "index") extensions2 = ExtensionsConfig([ExtendedAction(), ], "index") self.assertEqual(extensions1, extensions2) self.assertTrue(extensions1 == extensions2) self.assertTrue(not extensions1 < extensions2) self.assertTrue(extensions1 <= extensions2) self.assertTrue(not extensions1 > extensions2) self.assertTrue(extensions1 >= extensions2) self.assertTrue(not extensions1 != extensions2) def testComparison_004(self): """ Test comparison of two differing objects, actions differs (one None, one empty). """ extensions1 = ExtensionsConfig(None) extensions2 = ExtensionsConfig([]) self.assertNotEqual(extensions1, extensions2) self.assertTrue(not extensions1 == extensions2) self.assertTrue(extensions1 < extensions2) self.assertTrue(extensions1 <= extensions2) self.assertTrue(not extensions1 > extensions2) self.assertTrue(not extensions1 >= extensions2) self.assertTrue(extensions1 != extensions2) def testComparison_005(self): """ Test comparison of two differing objects, actions differs (one None, one not empty). 
""" extensions1 = ExtensionsConfig(None) extensions2 = ExtensionsConfig([ExtendedAction(), ]) self.assertNotEqual(extensions1, extensions2) self.assertTrue(not extensions1 == extensions2) self.assertTrue(extensions1 < extensions2) self.assertTrue(extensions1 <= extensions2) self.assertTrue(not extensions1 > extensions2) self.assertTrue(not extensions1 >= extensions2) self.assertTrue(extensions1 != extensions2) def testComparison_006(self): """ Test comparison of two differing objects, actions differs (one empty, one not empty). """ extensions1 = ExtensionsConfig([]) extensions2 = ExtensionsConfig([ExtendedAction(), ]) self.assertNotEqual(extensions1, extensions2) self.assertTrue(not extensions1 == extensions2) self.assertTrue(extensions1 < extensions2) self.assertTrue(extensions1 <= extensions2) self.assertTrue(not extensions1 > extensions2) self.assertTrue(not extensions1 >= extensions2) self.assertTrue(extensions1 != extensions2) def testComparison_007(self): """ Test comparison of two differing objects, actions differs (both not empty). """ extensions1 = ExtensionsConfig([ExtendedAction(name="one"), ]) extensions2 = ExtensionsConfig([ExtendedAction(name="two"), ]) self.assertNotEqual(extensions1, extensions2) self.assertTrue(not extensions1 == extensions2) self.assertTrue(extensions1 < extensions2) self.assertTrue(extensions1 <= extensions2) self.assertTrue(not extensions1 > extensions2) self.assertTrue(not extensions1 >= extensions2) self.assertTrue(extensions1 != extensions2) def testComparison_008(self): """ Test comparison of differing objects, orderMode differs (one None). 
""" extensions1 = ExtensionsConfig([], None) extensions2 = ExtensionsConfig([], "index") self.assertNotEqual(extensions1, extensions2) self.assertTrue(not extensions1 == extensions2) self.assertTrue(extensions1 < extensions2) self.assertTrue(extensions1 <= extensions2) self.assertTrue(not extensions1 > extensions2) self.assertTrue(not extensions1 >= extensions2) self.assertTrue(extensions1 != extensions2) def testComparison_009(self): """ Test comparison of differing objects, orderMode differs. """ extensions1 = ExtensionsConfig([], "dependency") extensions2 = ExtensionsConfig([], "index") self.assertNotEqual(extensions1, extensions2) self.assertTrue(not extensions1 == extensions2) self.assertTrue(extensions1 < extensions2) self.assertTrue(extensions1 <= extensions2) self.assertTrue(not extensions1 > extensions2) self.assertTrue(not extensions1 >= extensions2) self.assertTrue(extensions1 != extensions2) ########################## # TestOptionsConfig class ########################## class TestOptionsConfig(unittest.TestCase): """Tests for the OptionsConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = OptionsConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. 
""" options = OptionsConfig() self.assertEqual(None, options.startingDay) self.assertEqual(None, options.workingDir) self.assertEqual(None, options.backupUser) self.assertEqual(None, options.backupGroup) self.assertEqual(None, options.rcpCommand) self.assertEqual(None, options.rshCommand) self.assertEqual(None, options.cbackCommand) self.assertEqual(None, options.overrides) self.assertEqual(None, options.hooks) self.assertEqual(None, options.managedActions) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values (lists empty). """ options = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", [], [], "ssh", "cback", []) self.assertEqual("monday", options.startingDay) self.assertEqual("/tmp", options.workingDir) self.assertEqual("user", options.backupUser) self.assertEqual("group", options.backupGroup) self.assertEqual("scp -1 -B", options.rcpCommand) self.assertEqual("ssh", options.rshCommand) self.assertEqual("cback", options.cbackCommand) self.assertEqual([], options.overrides) self.assertEqual([], options.hooks) self.assertEqual([], options.managedActions) def testConstructor_003(self): """ Test assignment of startingDay attribute, None value. """ options = OptionsConfig(startingDay="monday") self.assertEqual("monday", options.startingDay) options.startingDay = None self.assertEqual(None, options.startingDay) def testConstructor_004(self): """ Test assignment of startingDay attribute, valid value. 
""" options = OptionsConfig() self.assertEqual(None, options.startingDay) options.startingDay = "monday" self.assertEqual("monday", options.startingDay) options.startingDay = "tuesday" self.assertEqual("tuesday", options.startingDay) options.startingDay = "wednesday" self.assertEqual("wednesday", options.startingDay) options.startingDay = "thursday" self.assertEqual("thursday", options.startingDay) options.startingDay = "friday" self.assertEqual("friday", options.startingDay) options.startingDay = "saturday" self.assertEqual("saturday", options.startingDay) options.startingDay = "sunday" self.assertEqual("sunday", options.startingDay) def testConstructor_005(self): """ Test assignment of startingDay attribute, invalid value (empty). """ options = OptionsConfig() self.assertEqual(None, options.startingDay) self.failUnlessAssignRaises(ValueError, options, "startingDay", "") self.assertEqual(None, options.startingDay) def testConstructor_006(self): """ Test assignment of startingDay attribute, invalid value (not in list). """ options = OptionsConfig() self.assertEqual(None, options.startingDay) self.failUnlessAssignRaises(ValueError, options, "startingDay", "dienstag") # ha, ha, pretend I'm German self.assertEqual(None, options.startingDay) def testConstructor_007(self): """ Test assignment of workingDir attribute, None value. """ options = OptionsConfig(workingDir="/tmp") self.assertEqual("/tmp", options.workingDir) options.workingDir = None self.assertEqual(None, options.workingDir) def testConstructor_008(self): """ Test assignment of workingDir attribute, valid value. """ options = OptionsConfig() self.assertEqual(None, options.workingDir) options.workingDir = "/tmp" self.assertEqual("/tmp", options.workingDir) def testConstructor_009(self): """ Test assignment of workingDir attribute, invalid value (empty). 
""" options = OptionsConfig() self.assertEqual(None, options.workingDir) self.failUnlessAssignRaises(ValueError, options, "workingDir", "") self.assertEqual(None, options.workingDir) def testConstructor_010(self): """ Test assignment of workingDir attribute, invalid value (non-absolute). """ options = OptionsConfig() self.assertEqual(None, options.workingDir) self.failUnlessAssignRaises(ValueError, options, "workingDir", "stuff") self.assertEqual(None, options.workingDir) def testConstructor_011(self): """ Test assignment of backupUser attribute, None value. """ options = OptionsConfig(backupUser="user") self.assertEqual("user", options.backupUser) options.backupUser = None self.assertEqual(None, options.backupUser) def testConstructor_012(self): """ Test assignment of backupUser attribute, valid value. """ options = OptionsConfig() self.assertEqual(None, options.backupUser) options.backupUser = "user" self.assertEqual("user", options.backupUser) def testConstructor_013(self): """ Test assignment of backupUser attribute, invalid value (empty). """ options = OptionsConfig() self.assertEqual(None, options.backupUser) self.failUnlessAssignRaises(ValueError, options, "backupUser", "") self.assertEqual(None, options.backupUser) def testConstructor_014(self): """ Test assignment of backupGroup attribute, None value. """ options = OptionsConfig(backupGroup="group") self.assertEqual("group", options.backupGroup) options.backupGroup = None self.assertEqual(None, options.backupGroup) def testConstructor_015(self): """ Test assignment of backupGroup attribute, valid value. """ options = OptionsConfig() self.assertEqual(None, options.backupGroup) options.backupGroup = "group" self.assertEqual("group", options.backupGroup) def testConstructor_016(self): """ Test assignment of backupGroup attribute, invalid value (empty). 
""" options = OptionsConfig() self.assertEqual(None, options.backupGroup) self.failUnlessAssignRaises(ValueError, options, "backupGroup", "") self.assertEqual(None, options.backupGroup) def testConstructor_017(self): """ Test assignment of rcpCommand attribute, None value. """ options = OptionsConfig(rcpCommand="command") self.assertEqual("command", options.rcpCommand) options.rcpCommand = None self.assertEqual(None, options.rcpCommand) def testConstructor_018(self): """ Test assignment of rcpCommand attribute, valid value. """ options = OptionsConfig() self.assertEqual(None, options.rcpCommand) options.rcpCommand = "command" self.assertEqual("command", options.rcpCommand) def testConstructor_019(self): """ Test assignment of rcpCommand attribute, invalid value (empty). """ options = OptionsConfig() self.assertEqual(None, options.rcpCommand) self.failUnlessAssignRaises(ValueError, options, "rcpCommand", "") self.assertEqual(None, options.rcpCommand) def testConstructor_020(self): """ Test constructor with all values filled in, with valid values (lists not empty). """ overrides = [ CommandOverride("mkisofs", "/usr/bin/mkisofs"), ] hooks = [ PreActionHook("collect", "ls -l"), ] managedActions = [ "collect", "purge", ] options = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) self.assertEqual("monday", options.startingDay) self.assertEqual("/tmp", options.workingDir) self.assertEqual("user", options.backupUser) self.assertEqual("group", options.backupGroup) self.assertEqual("scp -1 -B", options.rcpCommand) self.assertEqual("ssh", options.rshCommand) self.assertEqual("cback", options.cbackCommand) self.assertEqual(overrides, options.overrides) self.assertEqual(hooks, options.hooks) self.assertEqual(managedActions, options.managedActions) def testConstructor_021(self): """ Test assignment of overrides attribute, None value. 
""" collect = OptionsConfig(overrides=[]) self.assertEqual([], collect.overrides) collect.overrides = None self.assertEqual(None, collect.overrides) def testConstructor_022(self): """ Test assignment of overrides attribute, [] value. """ collect = OptionsConfig() self.assertEqual(None, collect.overrides) collect.overrides = [] self.assertEqual([], collect.overrides) def testConstructor_023(self): """ Test assignment of overrides attribute, single valid entry. """ collect = OptionsConfig() self.assertEqual(None, collect.overrides) collect.overrides = [CommandOverride("one", "/one"), ] self.assertEqual([CommandOverride("one", "/one"), ], collect.overrides) def testConstructor_024(self): """ Test assignment of overrides attribute, multiple valid entries. """ collect = OptionsConfig() self.assertEqual(None, collect.overrides) collect.overrides = [CommandOverride("one", "/one"), CommandOverride("two", "/two"), ] self.assertEqual([CommandOverride("one", "/one"), CommandOverride("two", "/two"), ], collect.overrides) def testConstructor_025(self): """ Test assignment of overrides attribute, single invalid entry (None). """ collect = OptionsConfig() self.assertEqual(None, collect.overrides) self.failUnlessAssignRaises(ValueError, collect, "overrides", [ None, ]) self.assertEqual(None, collect.overrides) def testConstructor_026(self): """ Test assignment of overrides attribute, single invalid entry (not a CommandOverride). """ collect = OptionsConfig() self.assertEqual(None, collect.overrides) self.failUnlessAssignRaises(ValueError, collect, "overrides", [ "hello", ]) self.assertEqual(None, collect.overrides) def testConstructor_027(self): """ Test assignment of overrides attribute, mixed valid and invalid entries. 
""" collect = OptionsConfig() self.assertEqual(None, collect.overrides) self.failUnlessAssignRaises(ValueError, collect, "overrides", [ "hello", CommandOverride("one", "/one"), ]) self.assertEqual(None, collect.overrides) def testConstructor_028(self): """ Test assignment of hooks attribute, None value. """ collect = OptionsConfig(hooks=[]) self.assertEqual([], collect.hooks) collect.hooks = None self.assertEqual(None, collect.hooks) def testConstructor_029(self): """ Test assignment of hooks attribute, [] value. """ collect = OptionsConfig() self.assertEqual(None, collect.hooks) collect.hooks = [] self.assertEqual([], collect.hooks) def testConstructor_030(self): """ Test assignment of hooks attribute, single valid entry. """ collect = OptionsConfig() self.assertEqual(None, collect.hooks) collect.hooks = [PreActionHook("stage", "df -k"), ] self.assertEqual([PreActionHook("stage", "df -k"), ], collect.hooks) def testConstructor_031(self): """ Test assignment of hooks attribute, multiple valid entries. """ collect = OptionsConfig() self.assertEqual(None, collect.hooks) collect.hooks = [ PreActionHook("stage", "df -k"), PostActionHook("collect", "ls -l"), ] self.assertEqual([PreActionHook("stage", "df -k"), PostActionHook("collect", "ls -l"), ], collect.hooks) def testConstructor_032(self): """ Test assignment of hooks attribute, single invalid entry (None). """ collect = OptionsConfig() self.assertEqual(None, collect.hooks) self.failUnlessAssignRaises(ValueError, collect, "hooks", [ None, ]) self.assertEqual(None, collect.hooks) def testConstructor_033(self): """ Test assignment of hooks attribute, single invalid entry (not a ActionHook). """ collect = OptionsConfig() self.assertEqual(None, collect.hooks) self.failUnlessAssignRaises(ValueError, collect, "hooks", [ "hello", ]) self.assertEqual(None, collect.hooks) def testConstructor_034(self): """ Test assignment of hooks attribute, mixed valid and invalid entries. 
""" collect = OptionsConfig() self.assertEqual(None, collect.hooks) self.failUnlessAssignRaises(ValueError, collect, "hooks", [ "hello", PreActionHook("stage", "df -k"), ]) self.assertEqual(None, collect.hooks) def testConstructor_035(self): """ Test assignment of rshCommand attribute, None value. """ options = OptionsConfig(rshCommand="command") self.assertEqual("command", options.rshCommand) options.rshCommand = None self.assertEqual(None, options.rshCommand) def testConstructor_036(self): """ Test assignment of rshCommand attribute, valid value. """ options = OptionsConfig() self.assertEqual(None, options.rshCommand) options.rshCommand = "command" self.assertEqual("command", options.rshCommand) def testConstructor_037(self): """ Test assignment of rshCommand attribute, invalid value (empty). """ options = OptionsConfig() self.assertEqual(None, options.rshCommand) self.failUnlessAssignRaises(ValueError, options, "rshCommand", "") self.assertEqual(None, options.rshCommand) def testConstructor_038(self): """ Test assignment of cbackCommand attribute, None value. """ options = OptionsConfig(cbackCommand="command") self.assertEqual("command", options.cbackCommand) options.cbackCommand = None self.assertEqual(None, options.cbackCommand) def testConstructor_039(self): """ Test assignment of cbackCommand attribute, valid value. """ options = OptionsConfig() self.assertEqual(None, options.cbackCommand) options.cbackCommand = "command" self.assertEqual("command", options.cbackCommand) def testConstructor_040(self): """ Test assignment of cbackCommand attribute, invalid value (empty). """ options = OptionsConfig() self.assertEqual(None, options.cbackCommand) self.failUnlessAssignRaises(ValueError, options, "cbackCommand", "") self.assertEqual(None, options.cbackCommand) def testConstructor_041(self): """ Test assignment of managedActions attribute, None value. 
""" options = OptionsConfig() self.assertEqual(None, options.managedActions) options.managedActions = None self.assertEqual(None, options.managedActions) def testConstructor_042(self): """ Test assignment of managedActions attribute, empty list. """ options = OptionsConfig() self.assertEqual(None, options.managedActions) options.managedActions = [] self.assertEqual([], options.managedActions) def testConstructor_043(self): """ Test assignment of managedActions attribute, non-empty list, valid values. """ options = OptionsConfig() self.assertEqual(None, options.managedActions) options.managedActions = ['a', 'b', ] self.assertEqual(['a', 'b'], options.managedActions) def testConstructor_044(self): """ Test assignment of managedActions attribute, non-empty list, invalid value. """ options = OptionsConfig() self.assertEqual(None, options.managedActions) self.failUnlessAssignRaises(ValueError, options, "managedActions", ["KEN", ]) self.assertEqual(None, options.managedActions) self.failUnlessAssignRaises(ValueError, options, "managedActions", ["hello, world" ]) self.assertEqual(None, options.managedActions) self.failUnlessAssignRaises(ValueError, options, "managedActions", ["dash-word", ]) self.assertEqual(None, options.managedActions) self.failUnlessAssignRaises(ValueError, options, "managedActions", ["", ]) self.assertEqual(None, options.managedActions) self.failUnlessAssignRaises(ValueError, options, "managedActions", [None, ]) self.assertEqual(None, options.managedActions) def testConstructor_045(self): """ Test assignment of managedActions attribute, non-empty list, mixed values. """ options = OptionsConfig() self.assertEqual(None, options.managedActions) self.failUnlessAssignRaises(ValueError, options, "managedActions", ["ken", "dash-word", ]) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. 
""" options1 = OptionsConfig() options2 = OptionsConfig() self.assertEqual(options1, options2) self.assertTrue(options1 == options2) self.assertTrue(not options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(options1 >= options2) self.assertTrue(not options1 != options2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) self.assertEqual(options1, options2) self.assertTrue(options1 == options2) self.assertTrue(not options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(options1 >= options2) self.assertTrue(not options1 != options2) def testComparison_003(self): """ Test comparison of two differing objects, startingDay differs (one None). """ options1 = OptionsConfig() options2 = OptionsConfig(startingDay="monday") self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_004(self): """ Test comparison of two differing objects, startingDay differs. 
""" overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) options2 = OptionsConfig("tuesday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_005(self): """ Test comparison of two differing objects, workingDir differs (one None). """ options1 = OptionsConfig() options2 = OptionsConfig(workingDir="/tmp") self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_006(self): """ Test comparison of two differing objects, workingDir differs. 
""" overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp/whatever", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(not options1 < options2) self.assertTrue(not options1 <= options2) self.assertTrue(options1 > options2) self.assertTrue(options1 >= options2) self.assertTrue(options1 != options2) def testComparison_007(self): """ Test comparison of two differing objects, backupUser differs (one None). """ options1 = OptionsConfig() options2 = OptionsConfig(backupUser="user") self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_008(self): """ Test comparison of two differing objects, backupUser differs. 
""" overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user2", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user1", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(not options1 < options2) self.assertTrue(not options1 <= options2) self.assertTrue(options1 > options2) self.assertTrue(options1 >= options2) self.assertTrue(options1 != options2) def testComparison_009(self): """ Test comparison of two differing objects, backupGroup differs (one None). """ options1 = OptionsConfig() options2 = OptionsConfig(backupGroup="group") self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_010(self): """ Test comparison of two differing objects, backupGroup differs. """ overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group1", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group2", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_011(self): """ Test comparison of two differing objects, rcpCommand differs (one None). 
""" options1 = OptionsConfig() options2 = OptionsConfig(rcpCommand="command") self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_012(self): """ Test comparison of two differing objects, rcpCommand differs. """ overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -2 -B", overrides, hooks, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(not options1 < options2) self.assertTrue(not options1 <= options2) self.assertTrue(options1 > options2) self.assertTrue(options1 >= options2) self.assertTrue(options1 != options2) def testComparison_013(self): """ Test comparison of two differing objects, overrides differs (one None, one empty). 
""" overrides1 = None overrides2 = [] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides1, hooks, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides2, hooks, "ssh", "cback", managedActions) self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_014(self): """ Test comparison of two differing objects, overrides differs (one None, one not empty). """ overrides1 = None overrides2 = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides1, hooks, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides2, hooks, "ssh", "cback", managedActions) self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2, "ssh") self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_015(self): """ Test comparison of two differing objects, overrides differs (one empty, one not empty). 
""" overrides1 = [ CommandOverride("one", "/one"), ] overrides2 = [] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides1, hooks, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides2, hooks, "ssh", "cback", managedActions) self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(not options1 < options2) self.assertTrue(not options1 <= options2) self.assertTrue(options1 > options2) self.assertTrue(options1 >= options2) self.assertTrue(options1 != options2) def testComparison_016(self): """ Test comparison of two differing objects, overrides differs (both not empty). """ overrides1 = [ CommandOverride("one", "/one"), ] overrides2 = [ CommandOverride(), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides1, hooks, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides2, hooks, "ssh", "cback", managedActions) self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(not options1 < options2) self.assertTrue(not options1 <= options2) self.assertTrue(options1 > options2) self.assertTrue(options1 >= options2) self.assertTrue(options1 != options2) def testComparison_017(self): """ Test comparison of two differing objects, hooks differs (one None, one empty). 
""" overrides = [ CommandOverride("one", "/one"), ] hooks1 = None hooks2 = [] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks1, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks2, "ssh", "cback", managedActions) self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_018(self): """ Test comparison of two differing objects, hooks differs (one None, one not empty). """ overrides = [ CommandOverride("one", "/one"), ] hooks1 = [ PreActionHook("collect", "ls -l ") ] hooks2 = [ PostActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks1, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks2, "ssh", "cback", managedActions) self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 > options2) self.assertTrue(options1 >= options2) self.assertTrue(not options1 < options2) self.assertTrue(not options1 <= options2) self.assertTrue(options1 != options2) def testComparison_019(self): """ Test comparison of two differing objects, hooks differs (one empty, one not empty). 
""" overrides = [ CommandOverride("one", "/one"), ] hooks1 = [ PreActionHook("collect", "ls -l ") ] hooks2 = [ PreActionHook("stage", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks1, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks2, "ssh", "cback", managedActions) self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(options1 != options2) def testComparison_020(self): """ Test comparison of two differing objects, hooks differs (both not empty). """ overrides = [ CommandOverride("one", "/one"), ] hooks1 = [ PreActionHook("collect", "ls -l ") ] hooks2 = [ PostActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks1, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks2, "ssh", "cback", managedActions) self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(not options1 < options2) self.assertTrue(not options1 <= options2) self.assertTrue(options1 > options2) self.assertTrue(options1 >= options2) self.assertTrue(options1 != options2) def testComparison_021(self): """ Test comparison of two differing objects, rshCommand differs (one None). 
""" options1 = OptionsConfig() options2 = OptionsConfig(rshCommand="command") self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_022(self): """ Test comparison of two differing objects, rshCommand differs. """ overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh2", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh1", "cback", managedActions) self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(not options1 < options2) self.assertTrue(not options1 <= options2) self.assertTrue(options1 > options2) self.assertTrue(options1 >= options2) self.assertTrue(options1 != options2) def testComparison_023(self): """ Test comparison of two differing objects, cbackCommand differs (one None). """ options1 = OptionsConfig() options2 = OptionsConfig(rshCommand="command") self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_024(self): """ Test comparison of two differing objects, cbackCommand differs. 
""" overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback1", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback2", managedActions) self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_025(self): """ Test comparison of two differing objects, managedActions differs (one None, one empty). """ overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions1 = None managedActions2 = [] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions1) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions2) self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_026(self): """ Test comparison of two differing objects, managedActions differs (one None, one not empty). 
""" overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions1 = None managedActions2 = [ "collect", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions1) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions2) self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(options1 != options2) def testComparison_027(self): """ Test comparison of two differing objects, managedActions differs (one empty, one not empty). """ overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions1 = [] managedActions2 = [ "collect", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions1) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions2) self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(options1 != options2) def testComparison_028(self): """ Test comparison of two differing objects, managedActions differs (both not empty). 
""" overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions1 = [ "collect", ] managedActions2 = [ "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions1) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions2) self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) #################################### # Test add and replace of overrides #################################### def testOverrides_001(self): """ Test addOverride() with no existing overrides. """ options = OptionsConfig() options.addOverride("cdrecord", "/usr/bin/wodim") self.assertEqual([ CommandOverride("cdrecord", "/usr/bin/wodim"), ], options.overrides) def testOverrides_002(self): """ Test addOverride() with no existing override that matches. """ options = OptionsConfig() options.overrides = [ CommandOverride("one", "/one"), ] options.addOverride("cdrecord", "/usr/bin/wodim") self.assertEqual([ CommandOverride("one", "/one"), CommandOverride("cdrecord", "/usr/bin/wodim"), ], options.overrides) def testOverrides_003(self): """ Test addOverride(), with existing override that matches. """ options = OptionsConfig() options.overrides = [ CommandOverride("cdrecord", "/one"), ] options.addOverride("cdrecord", "/usr/bin/wodim") self.assertEqual([ CommandOverride("cdrecord", "/one"), ], options.overrides) def testOverrides_004(self): """ Test replaceOverride() with no existing overrides. 
""" options = OptionsConfig() options.replaceOverride("cdrecord", "/usr/bin/wodim") self.assertEqual([ CommandOverride("cdrecord", "/usr/bin/wodim"), ], options.overrides) def testOverrides_005(self): """ Test replaceOverride() with no existing override that matches. """ options = OptionsConfig() options.overrides = [ CommandOverride("one", "/one"), ] options.replaceOverride("cdrecord", "/usr/bin/wodim") self.assertEqual([ CommandOverride("one", "/one"), CommandOverride("cdrecord", "/usr/bin/wodim"), ], options.overrides) def testOverrides_006(self): """ Test replaceOverride(), with existing override that matches. """ options = OptionsConfig() options.overrides = [ CommandOverride("cdrecord", "/one"), ] options.replaceOverride("cdrecord", "/usr/bin/wodim") self.assertEqual([ CommandOverride("cdrecord", "/usr/bin/wodim"), ], options.overrides) ######################## # TestPeersConfig class ######################## class TestPeersConfig(unittest.TestCase): """Tests for the PeersConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = PeersConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ peers = PeersConfig() self.assertEqual(None, peers.localPeers) self.assertEqual(None, peers.remotePeers) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values (empty lists). 
""" peers = PeersConfig([], []) self.assertEqual([], peers.localPeers) self.assertEqual([], peers.remotePeers) def testConstructor_003(self): """ Test constructor with all values filled in, with valid values (non-empty lists). """ peers = PeersConfig([LocalPeer(), ], [RemotePeer(), ]) self.assertEqual([LocalPeer(), ], peers.localPeers) self.assertEqual([RemotePeer(), ], peers.remotePeers) def testConstructor_004(self): """ Test assignment of localPeers attribute, None value. """ peers = PeersConfig(localPeers=[]) self.assertEqual([], peers.localPeers) peers.localPeers = None self.assertEqual(None, peers.localPeers) def testConstructor_005(self): """ Test assignment of localPeers attribute, empty list. """ peers = PeersConfig() self.assertEqual(None, peers.localPeers) peers.localPeers = [] self.assertEqual([], peers.localPeers) def testConstructor_006(self): """ Test assignment of localPeers attribute, single valid entry. """ peers = PeersConfig() self.assertEqual(None, peers.localPeers) peers.localPeers = [LocalPeer(), ] self.assertEqual([LocalPeer(), ], peers.localPeers) def testConstructor_007(self): """ Test assignment of localPeers attribute, multiple valid entries. """ peers = PeersConfig() self.assertEqual(None, peers.localPeers) peers.localPeers = [LocalPeer(name="one"), LocalPeer(name="two"), ] self.assertEqual([LocalPeer(name="one"), LocalPeer(name="two"), ], peers.localPeers) def testConstructor_008(self): """ Test assignment of localPeers attribute, single invalid entry (None). """ peers = PeersConfig() self.assertEqual(None, peers.localPeers) self.failUnlessAssignRaises(ValueError, peers, "localPeers", [None, ]) self.assertEqual(None, peers.localPeers) def testConstructor_009(self): """ Test assignment of localPeers attribute, single invalid entry (not a LocalPeer). 
""" peers = PeersConfig() self.assertEqual(None, peers.localPeers) self.failUnlessAssignRaises(ValueError, peers, "localPeers", [RemotePeer(), ]) self.assertEqual(None, peers.localPeers) def testConstructor_010(self): """ Test assignment of localPeers attribute, mixed valid and invalid entries. """ peers = PeersConfig() self.assertEqual(None, peers.localPeers) self.failUnlessAssignRaises(ValueError, peers, "localPeers", [LocalPeer(), RemotePeer(), ]) self.assertEqual(None, peers.localPeers) def testConstructor_011(self): """ Test assignment of remotePeers attribute, None value. """ peers = PeersConfig(remotePeers=[]) self.assertEqual([], peers.remotePeers) peers.remotePeers = None self.assertEqual(None, peers.remotePeers) def testConstructor_012(self): """ Test assignment of remotePeers attribute, empty list. """ peers = PeersConfig() self.assertEqual(None, peers.remotePeers) peers.remotePeers = [] self.assertEqual([], peers.remotePeers) def testConstructor_013(self): """ Test assignment of remotePeers attribute, single valid entry. """ peers = PeersConfig() self.assertEqual(None, peers.remotePeers) peers.remotePeers = [RemotePeer(name="one"), ] self.assertEqual([RemotePeer(name="one"), ], peers.remotePeers) def testConstructor_014(self): """ Test assignment of remotePeers attribute, multiple valid entries. """ peers = PeersConfig() self.assertEqual(None, peers.remotePeers) peers.remotePeers = [RemotePeer(name="one"), RemotePeer(name="two"), ] self.assertEqual([RemotePeer(name="one"), RemotePeer(name="two"), ], peers.remotePeers) def testConstructor_015(self): """ Test assignment of remotePeers attribute, single invalid entry (None). """ peers = PeersConfig() self.assertEqual(None, peers.remotePeers) self.failUnlessAssignRaises(ValueError, peers, "remotePeers", [None, ]) self.assertEqual(None, peers.remotePeers) def testConstructor_016(self): """ Test assignment of remotePeers attribute, single invalid entry (not a RemotePeer). 
""" peers = PeersConfig() self.assertEqual(None, peers.remotePeers) self.failUnlessAssignRaises(ValueError, peers, "remotePeers", [LocalPeer(), ]) self.assertEqual(None, peers.remotePeers) def testConstructor_017(self): """ Test assignment of remotePeers attribute, mixed valid and invalid entries. """ peers = PeersConfig() self.assertEqual(None, peers.remotePeers) self.failUnlessAssignRaises(ValueError, peers, "remotePeers", [LocalPeer(), RemotePeer(), ]) self.assertEqual(None, peers.remotePeers) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ peers1 = PeersConfig() peers2 = PeersConfig() self.assertEqual(peers1, peers2) self.assertTrue(peers1 == peers2) self.assertTrue(not peers1 < peers2) self.assertTrue(peers1 <= peers2) self.assertTrue(not peers1 > peers2) self.assertTrue(peers1 >= peers2) self.assertTrue(not peers1 != peers2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None (empty lists). """ peers1 = PeersConfig([], []) peers2 = PeersConfig([], []) self.assertEqual(peers1, peers2) self.assertTrue(peers1 == peers2) self.assertTrue(not peers1 < peers2) self.assertTrue(peers1 <= peers2) self.assertTrue(not peers1 > peers2) self.assertTrue(peers1 >= peers2) self.assertTrue(not peers1 != peers2) def testComparison_003(self): """ Test comparison of two identical objects, all attributes non-None (non-empty lists). 
""" peers1 = PeersConfig([LocalPeer(), ], [RemotePeer(), ]) peers2 = PeersConfig([LocalPeer(), ], [RemotePeer(), ]) self.assertEqual(peers1, peers2) self.assertTrue(peers1 == peers2) self.assertTrue(not peers1 < peers2) self.assertTrue(peers1 <= peers2) self.assertTrue(not peers1 > peers2) self.assertTrue(peers1 >= peers2) self.assertTrue(not peers1 != peers2) def testComparison_004(self): """ Test comparison of two differing objects, localPeers differs (one None, one empty). """ peers1 = PeersConfig(None, [RemotePeer(), ]) peers2 = PeersConfig([], [RemotePeer(), ]) self.assertNotEqual(peers1, peers2) self.assertTrue(not peers1 == peers2) self.assertTrue(peers1 < peers2) self.assertTrue(peers1 <= peers2) self.assertTrue(not peers1 > peers2) self.assertTrue(not peers1 >= peers2) self.assertTrue(peers1 != peers2) def testComparison_005(self): """ Test comparison of two differing objects, localPeers differs (one None, one not empty). """ peers1 = PeersConfig(None, [RemotePeer(), ]) peers2 = PeersConfig([LocalPeer(), ], [RemotePeer(), ]) self.assertNotEqual(peers1, peers2) self.assertTrue(not peers1 == peers2) self.assertTrue(peers1 < peers2) self.assertTrue(peers1 <= peers2) self.assertTrue(not peers1 > peers2) self.assertTrue(not peers1 >= peers2) self.assertTrue(peers1 != peers2) def testComparison_006(self): """ Test comparison of two differing objects, localPeers differs (one empty, one not empty). """ peers1 = PeersConfig([], [RemotePeer(), ]) peers2 = PeersConfig([LocalPeer(), ], [RemotePeer(), ]) self.assertNotEqual(peers1, peers2) self.assertTrue(not peers1 == peers2) self.assertTrue(peers1 < peers2) self.assertTrue(peers1 <= peers2) self.assertTrue(not peers1 > peers2) self.assertTrue(not peers1 >= peers2) self.assertTrue(peers1 != peers2) def testComparison_007(self): """ Test comparison of two differing objects, localPeers differs (both not empty). 
""" peers1 = PeersConfig([LocalPeer(name="one"), ], [RemotePeer(), ]) peers2 = PeersConfig([LocalPeer(name="two"), ], [RemotePeer(), ]) self.assertNotEqual(peers1, peers2) self.assertTrue(not peers1 == peers2) self.assertTrue(peers1 < peers2) self.assertTrue(peers1 <= peers2) self.assertTrue(not peers1 > peers2) self.assertTrue(not peers1 >= peers2) self.assertTrue(peers1 != peers2) def testComparison_008(self): """ Test comparison of two differing objects, remotePeers differs (one None, one empty). """ peers1 = PeersConfig([LocalPeer(), ], None) peers2 = PeersConfig([LocalPeer(), ], []) self.assertNotEqual(peers1, peers2) self.assertTrue(not peers1 == peers2) self.assertTrue(peers1 < peers2) self.assertTrue(peers1 <= peers2) self.assertTrue(not peers1 > peers2) self.assertTrue(not peers1 >= peers2) self.assertTrue(peers1 != peers2) def testComparison_009(self): """ Test comparison of two differing objects, remotePeers differs (one None, one not empty). """ peers1 = PeersConfig([LocalPeer(), ], None) peers2 = PeersConfig([LocalPeer(), ], [RemotePeer(), ]) self.assertNotEqual(peers1, peers2) self.assertTrue(not peers1 == peers2) self.assertTrue(peers1 < peers2) self.assertTrue(peers1 <= peers2) self.assertTrue(not peers1 > peers2) self.assertTrue(not peers1 >= peers2) self.assertTrue(peers1 != peers2) def testComparison_010(self): """ Test comparison of two differing objects, remotePeers differs (one empty, one not empty). """ peers1 = PeersConfig([LocalPeer(), ], []) peers2 = PeersConfig([LocalPeer(), ], [RemotePeer(), ]) self.assertNotEqual(peers1, peers2) self.assertTrue(not peers1 == peers2) self.assertTrue(peers1 < peers2) self.assertTrue(peers1 <= peers2) self.assertTrue(not peers1 > peers2) self.assertTrue(not peers1 >= peers2) self.assertTrue(peers1 != peers2) def testComparison_011(self): """ Test comparison of two differing objects, remotePeers differs (both not empty). 
""" peers1 = PeersConfig([LocalPeer(), ], [RemotePeer(name="two"), ]) peers2 = PeersConfig([LocalPeer(), ], [RemotePeer(name="one"), ]) self.assertNotEqual(peers1, peers2) self.assertTrue(not peers1 == peers2) self.assertTrue(not peers1 < peers2) self.assertTrue(not peers1 <= peers2) self.assertTrue(peers1 > peers2) self.assertTrue(peers1 >= peers2) self.assertTrue(peers1 != peers2) ########################## # TestCollectConfig class ########################## class TestCollectConfig(unittest.TestCase): """Tests for the CollectConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = CollectConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ collect = CollectConfig() self.assertEqual(None, collect.targetDir) self.assertEqual(None, collect.collectMode) self.assertEqual(None, collect.archiveMode) self.assertEqual(None, collect.ignoreFile) self.assertEqual(None, collect.absoluteExcludePaths) self.assertEqual(None, collect.excludePatterns) self.assertEqual(None, collect.collectDirs) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values (lists empty). 
""" collect = CollectConfig("/target", "incr", "tar", "ignore", [], [], [], []) self.assertEqual("/target", collect.targetDir) self.assertEqual("incr", collect.collectMode) self.assertEqual("tar", collect.archiveMode) self.assertEqual("ignore", collect.ignoreFile) self.assertEqual([], collect.absoluteExcludePaths) self.assertEqual([], collect.excludePatterns) self.assertEqual([], collect.collectDirs) def testConstructor_003(self): """ Test constructor with all values filled in, with valid values (lists not empty). """ collect = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.assertEqual("/target", collect.targetDir) self.assertEqual("incr", collect.collectMode) self.assertEqual("tar", collect.archiveMode) self.assertEqual("ignore", collect.ignoreFile) self.assertEqual(["/path", ], collect.absoluteExcludePaths) self.assertEqual(["pattern", ], collect.excludePatterns) self.assertEqual([CollectFile(), ], collect.collectFiles) self.assertEqual([CollectDir(), ], collect.collectDirs) def testConstructor_004(self): """ Test assignment of targetDir attribute, None value. """ collect = CollectConfig(targetDir="/whatever") self.assertEqual("/whatever", collect.targetDir) collect.targetDir = None self.assertEqual(None, collect.targetDir) def testConstructor_005(self): """ Test assignment of targetDir attribute, valid value. """ collect = CollectConfig() self.assertEqual(None, collect.targetDir) collect.targetDir = "/whatever" self.assertEqual("/whatever", collect.targetDir) def testConstructor_006(self): """ Test assignment of targetDir attribute, invalid value (empty). """ collect = CollectConfig() self.assertEqual(None, collect.targetDir) self.failUnlessAssignRaises(ValueError, collect, "targetDir", "") self.assertEqual(None, collect.targetDir) def testConstructor_007(self): """ Test assignment of targetDir attribute, invalid value (non-absolute). 
""" collect = CollectConfig() self.assertEqual(None, collect.targetDir) self.failUnlessAssignRaises(ValueError, collect, "targetDir", "bogus") self.assertEqual(None, collect.targetDir) def testConstructor_008(self): """ Test assignment of collectMode attribute, None value. """ collect = CollectConfig(collectMode="incr") self.assertEqual("incr", collect.collectMode) collect.collectMode = None self.assertEqual(None, collect.collectMode) def testConstructor_009(self): """ Test assignment of collectMode attribute, valid value. """ collect = CollectConfig() self.assertEqual(None, collect.collectMode) collect.collectMode = "daily" self.assertEqual("daily", collect.collectMode) collect.collectMode = "weekly" self.assertEqual("weekly", collect.collectMode) collect.collectMode = "incr" self.assertEqual("incr", collect.collectMode) def testConstructor_010(self): """ Test assignment of collectMode attribute, invalid value (empty). """ collect = CollectConfig() self.assertEqual(None, collect.collectMode) self.failUnlessAssignRaises(ValueError, collect, "collectMode", "") self.assertEqual(None, collect.collectMode) def testConstructor_011(self): """ Test assignment of collectMode attribute, invalid value (not in list). """ collect = CollectConfig() self.assertEqual(None, collect.collectMode) self.failUnlessAssignRaises(ValueError, collect, "collectMode", "periodic") self.assertEqual(None, collect.collectMode) def testConstructor_012(self): """ Test assignment of archiveMode attribute, None value. """ collect = CollectConfig(archiveMode="tar") self.assertEqual("tar", collect.archiveMode) collect.archiveMode = None self.assertEqual(None, collect.archiveMode) def testConstructor_013(self): """ Test assignment of archiveMode attribute, valid value. 
""" collect = CollectConfig() self.assertEqual(None, collect.archiveMode) collect.archiveMode = "tar" self.assertEqual("tar", collect.archiveMode) collect.archiveMode = "targz" self.assertEqual("targz", collect.archiveMode) collect.archiveMode = "tarbz2" self.assertEqual("tarbz2", collect.archiveMode) def testConstructor_014(self): """ Test assignment of archiveMode attribute, invalid value (empty). """ collect = CollectConfig() self.assertEqual(None, collect.archiveMode) self.failUnlessAssignRaises(ValueError, collect, "archiveMode", "") self.assertEqual(None, collect.archiveMode) def testConstructor_015(self): """ Test assignment of archiveMode attribute, invalid value (not in list). """ collect = CollectConfig() self.assertEqual(None, collect.archiveMode) self.failUnlessAssignRaises(ValueError, collect, "archiveMode", "tarz") self.assertEqual(None, collect.archiveMode) def testConstructor_016(self): """ Test assignment of ignoreFile attribute, None value. """ collect = CollectConfig(ignoreFile="ignore") self.assertEqual("ignore", collect.ignoreFile) collect.ignoreFile = None self.assertEqual(None, collect.ignoreFile) def testConstructor_017(self): """ Test assignment of ignoreFile attribute, valid value. """ collect = CollectConfig() self.assertEqual(None, collect.ignoreFile) collect.ignoreFile = "ignore" self.assertEqual("ignore", collect.ignoreFile) def testConstructor_018(self): """ Test assignment of ignoreFile attribute, invalid value (empty). """ collect = CollectConfig() self.assertEqual(None, collect.ignoreFile) self.failUnlessAssignRaises(ValueError, collect, "ignoreFile", "") self.assertEqual(None, collect.ignoreFile) def testConstructor_019(self): """ Test assignment of absoluteExcludePaths attribute, None value. 
""" collect = CollectConfig(absoluteExcludePaths=[]) self.assertEqual([], collect.absoluteExcludePaths) collect.absoluteExcludePaths = None self.assertEqual(None, collect.absoluteExcludePaths) def testConstructor_020(self): """ Test assignment of absoluteExcludePaths attribute, [] value. """ collect = CollectConfig() self.assertEqual(None, collect.absoluteExcludePaths) collect.absoluteExcludePaths = [] self.assertEqual([], collect.absoluteExcludePaths) def testConstructor_021(self): """ Test assignment of absoluteExcludePaths attribute, single valid entry. """ collect = CollectConfig() self.assertEqual(None, collect.absoluteExcludePaths) collect.absoluteExcludePaths = ["/whatever", ] self.assertEqual(["/whatever", ], collect.absoluteExcludePaths) def testConstructor_022(self): """ Test assignment of absoluteExcludePaths attribute, multiple valid entries. """ collect = CollectConfig() self.assertEqual(None, collect.absoluteExcludePaths) collect.absoluteExcludePaths = ["/one", "/two", "/three", ] self.assertEqual(["/one", "/two", "/three", ], collect.absoluteExcludePaths) def testConstructor_023(self): """ Test assignment of absoluteExcludePaths attribute, single invalid entry (empty). """ collect = CollectConfig() self.assertEqual(None, collect.absoluteExcludePaths) self.failUnlessAssignRaises(ValueError, collect, "absoluteExcludePaths", [ "", ]) self.assertEqual(None, collect.absoluteExcludePaths) def testConstructor_024(self): """ Test assignment of absoluteExcludePaths attribute, single invalid entry (not absolute). """ collect = CollectConfig() self.assertEqual(None, collect.absoluteExcludePaths) self.failUnlessAssignRaises(ValueError, collect, "absoluteExcludePaths", [ "one", ]) self.assertEqual(None, collect.absoluteExcludePaths) def testConstructor_025(self): """ Test assignment of absoluteExcludePaths attribute, mixed valid and invalid entries. 
""" collect = CollectConfig() self.assertEqual(None, collect.absoluteExcludePaths) self.failUnlessAssignRaises(ValueError, collect, "absoluteExcludePaths", [ "one", "/two", ]) self.assertEqual(None, collect.absoluteExcludePaths) def testConstructor_026(self): """ Test assignment of excludePatterns attribute, None value. """ collect = CollectConfig(excludePatterns=[]) self.assertEqual([], collect.excludePatterns) collect.excludePatterns = None self.assertEqual(None, collect.excludePatterns) def testConstructor_027(self): """ Test assignment of excludePatterns attribute, [] value. """ collect = CollectConfig() self.assertEqual(None, collect.excludePatterns) collect.excludePatterns = [] self.assertEqual([], collect.excludePatterns) def testConstructor_028(self): """ Test assignment of excludePatterns attribute, single valid entry. """ collect = CollectConfig() self.assertEqual(None, collect.excludePatterns) collect.excludePatterns = ["pattern", ] self.assertEqual(["pattern", ], collect.excludePatterns) def testConstructor_029(self): """ Test assignment of excludePatterns attribute, multiple valid entries. """ collect = CollectConfig() self.assertEqual(None, collect.excludePatterns) collect.excludePatterns = ["pattern1", "pattern2", ] self.assertEqual(["pattern1", "pattern2", ], collect.excludePatterns) def testConstructor_029a(self): """ Test assignment of excludePatterns attribute, single invalid entry. """ collect = CollectConfig() self.assertEqual(None, collect.excludePatterns) self.failUnlessAssignRaises(ValueError, collect, "excludePatterns", ["*.jpg", ]) self.assertEqual(None, collect.excludePatterns) def testConstructor_029b(self): """ Test assignment of excludePatterns attribute, multiple invalid entries. 
""" collect = CollectConfig() self.assertEqual(None, collect.excludePatterns) self.failUnlessAssignRaises(ValueError, collect, "excludePatterns", ["*.jpg", "*", ]) self.assertEqual(None, collect.excludePatterns) def testConstructor_029c(self): """ Test assignment of excludePatterns attribute, mixed valid and invalid entries. """ collect = CollectConfig() self.assertEqual(None, collect.excludePatterns) self.failUnlessAssignRaises(ValueError, collect, "excludePatterns", ["*.jpg", "valid", ]) self.assertEqual(None, collect.excludePatterns) def testConstructor_030(self): """ Test assignment of collectDirs attribute, None value. """ collect = CollectConfig(collectDirs=[]) self.assertEqual([], collect.collectDirs) collect.collectDirs = None self.assertEqual(None, collect.collectDirs) def testConstructor_031(self): """ Test assignment of collectDirs attribute, [] value. """ collect = CollectConfig() self.assertEqual(None, collect.collectDirs) collect.collectDirs = [] self.assertEqual([], collect.collectDirs) def testConstructor_032(self): """ Test assignment of collectDirs attribute, single valid entry. """ collect = CollectConfig() self.assertEqual(None, collect.collectDirs) collect.collectDirs = [CollectDir(absolutePath="/one"), ] self.assertEqual([CollectDir(absolutePath="/one"), ], collect.collectDirs) def testConstructor_033(self): """ Test assignment of collectDirs attribute, multiple valid entries. """ collect = CollectConfig() self.assertEqual(None, collect.collectDirs) collect.collectDirs = [CollectDir(absolutePath="/one"), CollectDir(absolutePath="/two"), ] self.assertEqual([CollectDir(absolutePath="/one"), CollectDir(absolutePath="/two"), ], collect.collectDirs) def testConstructor_034(self): """ Test assignment of collectDirs attribute, single invalid entry (None). 
""" collect = CollectConfig() self.assertEqual(None, collect.collectDirs) self.failUnlessAssignRaises(ValueError, collect, "collectDirs", [ None, ]) self.assertEqual(None, collect.collectDirs) def testConstructor_035(self): """ Test assignment of collectDirs attribute, single invalid entry (not a CollectDir). """ collect = CollectConfig() self.assertEqual(None, collect.collectDirs) self.failUnlessAssignRaises(ValueError, collect, "collectDirs", [ "hello", ]) self.assertEqual(None, collect.collectDirs) def testConstructor_036(self): """ Test assignment of collectDirs attribute, mixed valid and invalid entries. """ collect = CollectConfig() self.assertEqual(None, collect.collectDirs) self.failUnlessAssignRaises(ValueError, collect, "collectDirs", [ "hello", CollectDir(), ]) self.assertEqual(None, collect.collectDirs) def testConstructor_037(self): """ Test assignment of collectFiles attribute, None value. """ collect = CollectConfig(collectFiles=[]) self.assertEqual([], collect.collectFiles) collect.collectFiles = None self.assertEqual(None, collect.collectFiles) def testConstructor_038(self): """ Test assignment of collectFiles attribute, [] value. """ collect = CollectConfig() self.assertEqual(None, collect.collectFiles) collect.collectFiles = [] self.assertEqual([], collect.collectFiles) def testConstructor_039(self): """ Test assignment of collectFiles attribute, single valid entry. """ collect = CollectConfig() self.assertEqual(None, collect.collectFiles) collect.collectFiles = [CollectFile(absolutePath="/one"), ] self.assertEqual([CollectFile(absolutePath="/one"), ], collect.collectFiles) def testConstructor_040(self): """ Test assignment of collectFiles attribute, multiple valid entries. 
""" collect = CollectConfig() self.assertEqual(None, collect.collectFiles) collect.collectFiles = [CollectFile(absolutePath="/one"), CollectFile(absolutePath="/two"), ] self.assertEqual([CollectFile(absolutePath="/one"), CollectFile(absolutePath="/two"), ], collect.collectFiles) def testConstructor_041(self): """ Test assignment of collectFiles attribute, single invalid entry (None). """ collect = CollectConfig() self.assertEqual(None, collect.collectFiles) self.failUnlessAssignRaises(ValueError, collect, "collectFiles", [ None, ]) self.assertEqual(None, collect.collectFiles) def testConstructor_042(self): """ Test assignment of collectFiles attribute, single invalid entry (not a CollectFile). """ collect = CollectConfig() self.assertEqual(None, collect.collectFiles) self.failUnlessAssignRaises(ValueError, collect, "collectFiles", [ "hello", ]) self.assertEqual(None, collect.collectFiles) def testConstructor_043(self): """ Test assignment of collectFiles attribute, mixed valid and invalid entries. """ collect = CollectConfig() self.assertEqual(None, collect.collectFiles) self.failUnlessAssignRaises(ValueError, collect, "collectFiles", [ "hello", CollectFile(), ]) self.assertEqual(None, collect.collectFiles) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ collect1 = CollectConfig() collect2 = CollectConfig() self.assertEqual(collect1, collect2) self.assertTrue(collect1 == collect2) self.assertTrue(not collect1 < collect2) self.assertTrue(collect1 <= collect2) self.assertTrue(not collect1 > collect2) self.assertTrue(collect1 >= collect2) self.assertTrue(not collect1 != collect2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. 
""" collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.assertEqual(collect1, collect2) self.assertTrue(collect1 == collect2) self.assertTrue(not collect1 < collect2) self.assertTrue(collect1 <= collect2) self.assertTrue(not collect1 > collect2) self.assertTrue(collect1 >= collect2) self.assertTrue(not collect1 != collect2) def testComparison_003(self): """ Test comparison of two differing objects, targetDir differs (one None). """ collect1 = CollectConfig(None, "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target2", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.assertNotEqual(collect1, collect2) self.assertTrue(not collect1 == collect2) self.assertTrue(collect1 < collect2) self.assertTrue(collect1 <= collect2) self.assertTrue(not collect1 > collect2) self.assertTrue(not collect1 >= collect2) self.assertTrue(collect1 != collect2) def testComparison_004(self): """ Test comparison of two differing objects, targetDir differs. """ collect1 = CollectConfig("/target1", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target2", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.assertNotEqual(collect1, collect2) self.assertTrue(not collect1 == collect2) self.assertTrue(collect1 < collect2) self.assertTrue(collect1 <= collect2) self.assertTrue(not collect1 > collect2) self.assertTrue(not collect1 >= collect2) self.assertTrue(collect1 != collect2) def testComparison_005(self): """ Test comparison of two differing objects, collectMode differs (one None). 
""" collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", None, "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.assertNotEqual(collect1, collect2) self.assertTrue(not collect1 == collect2) self.assertTrue(not collect1 < collect2) self.assertTrue(not collect1 <= collect2) self.assertTrue(collect1 > collect2) self.assertTrue(collect1 >= collect2) self.assertTrue(collect1 != collect2) def testComparison_006(self): """ Test comparison of two differing objects, collectMode differs. """ collect1 = CollectConfig("/target", "daily", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.assertNotEqual(collect1, collect2) self.assertTrue(not collect1 == collect2) self.assertTrue(collect1 < collect2) self.assertTrue(collect1 <= collect2) self.assertTrue(not collect1 > collect2) self.assertTrue(not collect1 >= collect2) self.assertTrue(collect1 != collect2) def testComparison_007(self): """ Test comparison of two differing objects, archiveMode differs (one None). """ collect1 = CollectConfig("/target", "incr", None, "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.assertNotEqual(collect1, collect2) self.assertTrue(not collect1 == collect2) self.assertTrue(collect1 < collect2) self.assertTrue(collect1 <= collect2) self.assertTrue(not collect1 > collect2) self.assertTrue(not collect1 >= collect2) self.assertTrue(collect1 != collect2) def testComparison_008(self): """ Test comparison of two differing objects, archiveMode differs. 
""" collect1 = CollectConfig("/target", "incr", "targz", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tarbz2", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.assertNotEqual(collect1, collect2) self.assertTrue(not collect1 == collect2) self.assertTrue(not collect1 < collect2) self.assertTrue(not collect1 <= collect2) self.assertTrue(collect1 > collect2) self.assertTrue(collect1 >= collect2) self.assertTrue(collect1 != collect2) def testComparison_009(self): """ Test comparison of two differing objects, ignoreFile differs (one None). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", None, ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.assertNotEqual(collect1, collect2) self.assertTrue(not collect1 == collect2) self.assertTrue(not collect1 < collect2) self.assertTrue(not collect1 <= collect2) self.assertTrue(collect1 > collect2) self.assertTrue(collect1 >= collect2) self.assertTrue(collect1 != collect2) def testComparison_010(self): """ Test comparison of two differing objects, ignoreFile differs. """ collect1 = CollectConfig("/target", "incr", "tar", "ignore1", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore2", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.assertNotEqual(collect1, collect2) self.assertTrue(not collect1 == collect2) self.assertTrue(collect1 < collect2) self.assertTrue(collect1 <= collect2) self.assertTrue(not collect1 > collect2) self.assertTrue(not collect1 >= collect2) self.assertTrue(collect1 != collect2) def testComparison_011(self): """ Test comparison of two differing objects, absoluteExcludePaths differs (one None, one empty). 
""" collect1 = CollectConfig("/target", "incr", "tar", "ignore", None, ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", [], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.assertNotEqual(collect1, collect2) self.assertTrue(not collect1 == collect2) self.assertTrue(collect1 < collect2) self.assertTrue(collect1 <= collect2) self.assertTrue(not collect1 > collect2) self.assertTrue(not collect1 >= collect2) self.assertTrue(collect1 != collect2) def testComparison_012(self): """ Test comparison of two differing objects, absoluteExcludePaths differs (one None, one not empty). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", None, ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.assertNotEqual(collect1, collect2) self.assertTrue(not collect1 == collect2) self.assertTrue(not collect1 < collect2) self.assertTrue(not collect1 <= collect2) self.assertTrue(collect1 > collect2) self.assertTrue(collect1 >= collect2) self.assertTrue(collect1 != collect2) def testComparison_013(self): """ Test comparison of two differing objects, absoluteExcludePaths differs (one empty, one not empty). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", [], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.assertNotEqual(collect1, collect2) self.assertTrue(not collect1 == collect2) self.assertTrue(collect1 < collect2) self.assertTrue(collect1 <= collect2) self.assertTrue(not collect1 > collect2) self.assertTrue(not collect1 >= collect2) self.assertTrue(collect1 != collect2) def testComparison_014(self): """ Test comparison of two differing objects, absoluteExcludePaths differs (both not empty). 
""" collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", "/path2", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.assertNotEqual(collect1, collect2) self.assertTrue(not collect1 == collect2) self.assertTrue(not collect1 < collect2) self.assertTrue(not collect1 <= collect2) self.assertTrue(collect1 > collect2) self.assertTrue(collect1 >= collect2) self.assertTrue(collect1 != collect2) def testComparison_015(self): """ Test comparison of two differing objects, excludePatterns differs (one None, one empty). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], None, [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], [], [CollectFile(), ], [CollectDir(), ]) self.assertNotEqual(collect1, collect2) self.assertTrue(not collect1 == collect2) self.assertTrue(collect1 < collect2) self.assertTrue(collect1 <= collect2) self.assertTrue(not collect1 > collect2) self.assertTrue(not collect1 >= collect2) self.assertTrue(collect1 != collect2) def testComparison_016(self): """ Test comparison of two differing objects, excludePatterns differs (one None, one not empty). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], None, [CollectFile(), ], [CollectDir(), ]) self.assertNotEqual(collect1, collect2) self.assertTrue(not collect1 == collect2) self.assertTrue(not collect1 < collect2) self.assertTrue(not collect1 <= collect2) self.assertTrue(collect1 > collect2) self.assertTrue(collect1 >= collect2) self.assertTrue(collect1 != collect2) def testComparison_017(self): """ Test comparison of two differing objects, excludePatterns differs (one empty, one not empty). 
""" collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], [], [CollectFile(), ], [CollectDir(), ]) self.assertNotEqual(collect1, collect2) self.assertTrue(not collect1 == collect2) self.assertTrue(not collect1 < collect2) self.assertTrue(not collect1 <= collect2) self.assertTrue(collect1 > collect2) self.assertTrue(collect1 >= collect2) self.assertTrue(collect1 != collect2) def testComparison_018(self): """ Test comparison of two differing objects, excludePatterns differs (both not empty). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", "bogus", ], [CollectFile(), ], [CollectDir(), ]) self.assertNotEqual(collect1, collect2) self.assertTrue(not collect1 == collect2) self.assertTrue(not collect1 < collect2) self.assertTrue(not collect1 <= collect2) self.assertTrue(collect1 > collect2) self.assertTrue(collect1 >= collect2) self.assertTrue(collect1 != collect2) def testComparison_019(self): """ Test comparison of two differing objects, collectDirs differs (one None, one empty). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], None) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], []) self.assertNotEqual(collect1, collect2) self.assertTrue(not collect1 == collect2) self.assertTrue(collect1 < collect2) self.assertTrue(collect1 <= collect2) self.assertTrue(not collect1 > collect2) self.assertTrue(not collect1 >= collect2) self.assertTrue(collect1 != collect2) def testComparison_020(self): """ Test comparison of two differing objects, collectDirs differs (one None, one not empty). 
""" collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], None) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.assertNotEqual(collect1, collect2) self.assertTrue(not collect1 == collect2) self.assertTrue(collect1 < collect2) self.assertTrue(collect1 <= collect2) self.assertTrue(not collect1 > collect2) self.assertTrue(not collect1 >= collect2) self.assertTrue(collect1 != collect2) def testComparison_021(self): """ Test comparison of two differing objects, collectDirs differs (one empty, one not empty). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], []) self.assertNotEqual(collect1, collect2) self.assertTrue(not collect1 == collect2) self.assertTrue(not collect1 < collect2) self.assertTrue(not collect1 <= collect2) self.assertTrue(collect1 > collect2) self.assertTrue(collect1 >= collect2) self.assertTrue(collect1 != collect2) def testComparison_022(self): """ Test comparison of two differing objects, collectDirs differs (both not empty). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.assertNotEqual(collect1, collect2) self.assertTrue(not collect1 == collect2) self.assertTrue(not collect1 < collect2) self.assertTrue(not collect1 <= collect2) self.assertTrue(collect1 > collect2) self.assertTrue(collect1 >= collect2) self.assertTrue(collect1 != collect2) def testComparison_023(self): """ Test comparison of two differing objects, collectFiles differs (one None, one empty). 
""" collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], None, [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [], [CollectDir(), ]) self.assertNotEqual(collect1, collect2) self.assertTrue(not collect1 == collect2) self.assertTrue(collect1 < collect2) self.assertTrue(collect1 <= collect2) self.assertTrue(not collect1 > collect2) self.assertTrue(not collect1 >= collect2) self.assertTrue(collect1 != collect2) def testComparison_024(self): """ Test comparison of two differing objects, collectFiles differs (one None, one not empty). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], None, [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.assertNotEqual(collect1, collect2) self.assertTrue(not collect1 == collect2) self.assertTrue(collect1 < collect2) self.assertTrue(collect1 <= collect2) self.assertTrue(not collect1 > collect2) self.assertTrue(not collect1 >= collect2) self.assertTrue(collect1 != collect2) def testComparison_025(self): """ Test comparison of two differing objects, collectFiles differs (one empty, one not empty). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [], [CollectDir(), ]) self.assertNotEqual(collect1, collect2) self.assertTrue(not collect1 == collect2) self.assertTrue(not collect1 < collect2) self.assertTrue(not collect1 <= collect2) self.assertTrue(collect1 > collect2) self.assertTrue(collect1 >= collect2) self.assertTrue(collect1 != collect2) def testComparison_026(self): """ Test comparison of two differing objects, collectFiles differs (both not empty). 
""" collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), CollectFile(), ], [CollectDir() ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.assertNotEqual(collect1, collect2) self.assertTrue(not collect1 == collect2) self.assertTrue(not collect1 < collect2) self.assertTrue(not collect1 <= collect2) self.assertTrue(collect1 > collect2) self.assertTrue(collect1 >= collect2) self.assertTrue(collect1 != collect2) ######################## # TestStageConfig class ######################## class TestStageConfig(unittest.TestCase): """Tests for the StageConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = StageConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ stage = StageConfig() self.assertEqual(None, stage.targetDir) self.assertEqual(None, stage.localPeers) self.assertEqual(None, stage.remotePeers) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values (empty lists). """ stage = StageConfig("/whatever", [], []) self.assertEqual("/whatever", stage.targetDir) self.assertEqual([], stage.localPeers) self.assertEqual([], stage.remotePeers) def testConstructor_003(self): """ Test constructor with all values filled in, with valid values (non-empty lists). 
""" stage = StageConfig("/whatever", [LocalPeer(), ], [RemotePeer(), ]) self.assertEqual("/whatever", stage.targetDir) self.assertEqual([LocalPeer(), ], stage.localPeers) self.assertEqual([RemotePeer(), ], stage.remotePeers) def testConstructor_004(self): """ Test assignment of targetDir attribute, None value. """ stage = StageConfig(targetDir="/whatever") self.assertEqual("/whatever", stage.targetDir) stage.targetDir = None self.assertEqual(None, stage.targetDir) def testConstructor_005(self): """ Test assignment of targetDir attribute, valid value. """ stage = StageConfig() self.assertEqual(None, stage.targetDir) stage.targetDir = "/whatever" self.assertEqual("/whatever", stage.targetDir) def testConstructor_006(self): """ Test assignment of targetDir attribute, invalid value (empty). """ stage = StageConfig() self.assertEqual(None, stage.targetDir) self.failUnlessAssignRaises(ValueError, stage, "targetDir", "") self.assertEqual(None, stage.targetDir) def testConstructor_007(self): """ Test assignment of targetDir attribute, invalid value (non-absolute). """ stage = StageConfig() self.assertEqual(None, stage.targetDir) self.failUnlessAssignRaises(ValueError, stage, "targetDir", "stuff") self.assertEqual(None, stage.targetDir) def testConstructor_008(self): """ Test assignment of localPeers attribute, None value. """ stage = StageConfig(localPeers=[]) self.assertEqual([], stage.localPeers) stage.localPeers = None self.assertEqual(None, stage.localPeers) def testConstructor_009(self): """ Test assignment of localPeers attribute, empty list. """ stage = StageConfig() self.assertEqual(None, stage.localPeers) stage.localPeers = [] self.assertEqual([], stage.localPeers) def testConstructor_010(self): """ Test assignment of localPeers attribute, single valid entry. 
""" stage = StageConfig() self.assertEqual(None, stage.localPeers) stage.localPeers = [LocalPeer(), ] self.assertEqual([LocalPeer(), ], stage.localPeers) def testConstructor_011(self): """ Test assignment of localPeers attribute, multiple valid entries. """ stage = StageConfig() self.assertEqual(None, stage.localPeers) stage.localPeers = [LocalPeer(name="one"), LocalPeer(name="two"), ] self.assertEqual([LocalPeer(name="one"), LocalPeer(name="two"), ], stage.localPeers) def testConstructor_012(self): """ Test assignment of localPeers attribute, single invalid entry (None). """ stage = StageConfig() self.assertEqual(None, stage.localPeers) self.failUnlessAssignRaises(ValueError, stage, "localPeers", [None, ]) self.assertEqual(None, stage.localPeers) def testConstructor_013(self): """ Test assignment of localPeers attribute, single invalid entry (not a LocalPeer). """ stage = StageConfig() self.assertEqual(None, stage.localPeers) self.failUnlessAssignRaises(ValueError, stage, "localPeers", [RemotePeer(), ]) self.assertEqual(None, stage.localPeers) def testConstructor_014(self): """ Test assignment of localPeers attribute, mixed valid and invalid entries. """ stage = StageConfig() self.assertEqual(None, stage.localPeers) self.failUnlessAssignRaises(ValueError, stage, "localPeers", [LocalPeer(), RemotePeer(), ]) self.assertEqual(None, stage.localPeers) def testConstructor_015(self): """ Test assignment of remotePeers attribute, None value. """ stage = StageConfig(remotePeers=[]) self.assertEqual([], stage.remotePeers) stage.remotePeers = None self.assertEqual(None, stage.remotePeers) def testConstructor_016(self): """ Test assignment of remotePeers attribute, empty list. """ stage = StageConfig() self.assertEqual(None, stage.remotePeers) stage.remotePeers = [] self.assertEqual([], stage.remotePeers) def testConstructor_017(self): """ Test assignment of remotePeers attribute, single valid entry. 
""" stage = StageConfig() self.assertEqual(None, stage.remotePeers) stage.remotePeers = [RemotePeer(name="one"), ] self.assertEqual([RemotePeer(name="one"), ], stage.remotePeers) def testConstructor_018(self): """ Test assignment of remotePeers attribute, multiple valid entries. """ stage = StageConfig() self.assertEqual(None, stage.remotePeers) stage.remotePeers = [RemotePeer(name="one"), RemotePeer(name="two"), ] self.assertEqual([RemotePeer(name="one"), RemotePeer(name="two"), ], stage.remotePeers) def testConstructor_019(self): """ Test assignment of remotePeers attribute, single invalid entry (None). """ stage = StageConfig() self.assertEqual(None, stage.remotePeers) self.failUnlessAssignRaises(ValueError, stage, "remotePeers", [None, ]) self.assertEqual(None, stage.remotePeers) def testConstructor_020(self): """ Test assignment of remotePeers attribute, single invalid entry (not a RemotePeer). """ stage = StageConfig() self.assertEqual(None, stage.remotePeers) self.failUnlessAssignRaises(ValueError, stage, "remotePeers", [LocalPeer(), ]) self.assertEqual(None, stage.remotePeers) def testConstructor_021(self): """ Test assignment of remotePeers attribute, mixed valid and invalid entries. """ stage = StageConfig() self.assertEqual(None, stage.remotePeers) self.failUnlessAssignRaises(ValueError, stage, "remotePeers", [LocalPeer(), RemotePeer(), ]) self.assertEqual(None, stage.remotePeers) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. 
""" stage1 = StageConfig() stage2 = StageConfig() self.assertEqual(stage1, stage2) self.assertTrue(stage1 == stage2) self.assertTrue(not stage1 < stage2) self.assertTrue(stage1 <= stage2) self.assertTrue(not stage1 > stage2) self.assertTrue(stage1 >= stage2) self.assertTrue(not stage1 != stage2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None (empty lists). """ stage1 = StageConfig("/target", [], []) stage2 = StageConfig("/target", [], []) self.assertEqual(stage1, stage2) self.assertTrue(stage1 == stage2) self.assertTrue(not stage1 < stage2) self.assertTrue(stage1 <= stage2) self.assertTrue(not stage1 > stage2) self.assertTrue(stage1 >= stage2) self.assertTrue(not stage1 != stage2) def testComparison_003(self): """ Test comparison of two identical objects, all attributes non-None (non-empty lists). """ stage1 = StageConfig("/target", [LocalPeer(), ], [RemotePeer(), ]) stage2 = StageConfig("/target", [LocalPeer(), ], [RemotePeer(), ]) self.assertEqual(stage1, stage2) self.assertTrue(stage1 == stage2) self.assertTrue(not stage1 < stage2) self.assertTrue(stage1 <= stage2) self.assertTrue(not stage1 > stage2) self.assertTrue(stage1 >= stage2) self.assertTrue(not stage1 != stage2) def testComparison_004(self): """ Test comparison of two differing objects, targetDir differs (one None). """ stage1 = StageConfig() stage2 = StageConfig(targetDir="/whatever") self.assertNotEqual(stage1, stage2) self.assertTrue(not stage1 == stage2) self.assertTrue(stage1 < stage2) self.assertTrue(stage1 <= stage2) self.assertTrue(not stage1 > stage2) self.assertTrue(not stage1 >= stage2) self.assertTrue(stage1 != stage2) def testComparison_005(self): """ Test comparison of two differing objects, targetDir differs. 
""" stage1 = StageConfig("/target1", [LocalPeer(), ], [RemotePeer(), ]) stage2 = StageConfig("/target2", [LocalPeer(), ], [RemotePeer(), ]) self.assertNotEqual(stage1, stage2) self.assertTrue(not stage1 == stage2) self.assertTrue(stage1 < stage2) self.assertTrue(stage1 <= stage2) self.assertTrue(not stage1 > stage2) self.assertTrue(not stage1 >= stage2) self.assertTrue(stage1 != stage2) def testComparison_006(self): """ Test comparison of two differing objects, localPeers differs (one None, one empty). """ stage1 = StageConfig("/target", None, [RemotePeer(), ]) stage2 = StageConfig("/target", [], [RemotePeer(), ]) self.assertNotEqual(stage1, stage2) self.assertTrue(not stage1 == stage2) self.assertTrue(stage1 < stage2) self.assertTrue(stage1 <= stage2) self.assertTrue(not stage1 > stage2) self.assertTrue(not stage1 >= stage2) self.assertTrue(stage1 != stage2) def testComparison_007(self): """ Test comparison of two differing objects, localPeers differs (one None, one not empty). """ stage1 = StageConfig("/target", None, [RemotePeer(), ]) stage2 = StageConfig("/target", [LocalPeer(), ], [RemotePeer(), ]) self.assertNotEqual(stage1, stage2) self.assertTrue(not stage1 == stage2) self.assertTrue(stage1 < stage2) self.assertTrue(stage1 <= stage2) self.assertTrue(not stage1 > stage2) self.assertTrue(not stage1 >= stage2) self.assertTrue(stage1 != stage2) def testComparison_008(self): """ Test comparison of two differing objects, localPeers differs (one empty, one not empty). """ stage1 = StageConfig("/target", [], [RemotePeer(), ]) stage2 = StageConfig("/target", [LocalPeer(), ], [RemotePeer(), ]) self.assertNotEqual(stage1, stage2) self.assertTrue(not stage1 == stage2) self.assertTrue(stage1 < stage2) self.assertTrue(stage1 <= stage2) self.assertTrue(not stage1 > stage2) self.assertTrue(not stage1 >= stage2) self.assertTrue(stage1 != stage2) def testComparison_009(self): """ Test comparison of two differing objects, localPeers differs (both not empty). 
""" stage1 = StageConfig("/target", [LocalPeer(name="one"), ], [RemotePeer(), ]) stage2 = StageConfig("/target", [LocalPeer(name="two"), ], [RemotePeer(), ]) self.assertNotEqual(stage1, stage2) self.assertTrue(not stage1 == stage2) self.assertTrue(stage1 < stage2) self.assertTrue(stage1 <= stage2) self.assertTrue(not stage1 > stage2) self.assertTrue(not stage1 >= stage2) self.assertTrue(stage1 != stage2) def testComparison_010(self): """ Test comparison of two differing objects, remotePeers differs (one None, one empty). """ stage1 = StageConfig("/target", [LocalPeer(), ], None) stage2 = StageConfig("/target", [LocalPeer(), ], []) self.assertNotEqual(stage1, stage2) self.assertTrue(not stage1 == stage2) self.assertTrue(stage1 < stage2) self.assertTrue(stage1 <= stage2) self.assertTrue(not stage1 > stage2) self.assertTrue(not stage1 >= stage2) self.assertTrue(stage1 != stage2) def testComparison_011(self): """ Test comparison of two differing objects, remotePeers differs (one None, one not empty). """ stage1 = StageConfig("/target", [LocalPeer(), ], None) stage2 = StageConfig("/target", [LocalPeer(), ], [RemotePeer(), ]) self.assertNotEqual(stage1, stage2) self.assertTrue(not stage1 == stage2) self.assertTrue(stage1 < stage2) self.assertTrue(stage1 <= stage2) self.assertTrue(not stage1 > stage2) self.assertTrue(not stage1 >= stage2) self.assertTrue(stage1 != stage2) def testComparison_012(self): """ Test comparison of two differing objects, remotePeers differs (one empty, one not empty). """ stage1 = StageConfig("/target", [LocalPeer(), ], []) stage2 = StageConfig("/target", [LocalPeer(), ], [RemotePeer(), ]) self.assertNotEqual(stage1, stage2) self.assertTrue(not stage1 == stage2) self.assertTrue(stage1 < stage2) self.assertTrue(stage1 <= stage2) self.assertTrue(not stage1 > stage2) self.assertTrue(not stage1 >= stage2) self.assertTrue(stage1 != stage2) def testComparison_013(self): """ Test comparison of two differing objects, remotePeers differs (both not empty). 
""" stage1 = StageConfig("/target", [LocalPeer(), ], [RemotePeer(name="two"), ]) stage2 = StageConfig("/target", [LocalPeer(), ], [RemotePeer(name="one"), ]) self.assertNotEqual(stage1, stage2) self.assertTrue(not stage1 == stage2) self.assertTrue(not stage1 < stage2) self.assertTrue(not stage1 <= stage2) self.assertTrue(stage1 > stage2) self.assertTrue(stage1 >= stage2) self.assertTrue(stage1 != stage2) ######################## # TestStoreConfig class ######################## class TestStoreConfig(unittest.TestCase): """Tests for the StoreConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = StoreConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ store = StoreConfig() self.assertEqual(None, store.sourceDir) self.assertEqual(None, store.mediaType) self.assertEqual(None, store.deviceType) self.assertEqual(None, store.devicePath) self.assertEqual(None, store.deviceScsiId) self.assertEqual(None, store.driveSpeed) self.assertEqual(False, store.checkData) self.assertEqual(False, store.checkMedia) self.assertEqual(False, store.warnMidnite) self.assertEqual(False, store.noEject) self.assertEqual(None, store.blankBehavior) self.assertEqual(None, store.refreshMediaDelay) self.assertEqual(None, store.ejectDelay) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. 
""" behavior = BlankBehavior("weekly", "1.3") store = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior, 12, 13) self.assertEqual("/source", store.sourceDir) self.assertEqual("cdr-74", store.mediaType) self.assertEqual("cdwriter", store.deviceType) self.assertEqual("/dev/cdrw", store.devicePath) self.assertEqual("0,0,0", store.deviceScsiId) self.assertEqual(4, store.driveSpeed) self.assertEqual(True, store.checkData) self.assertEqual(True, store.checkMedia) self.assertEqual(True, store.warnMidnite) self.assertEqual(True, store.noEject) self.assertEqual(behavior, store.blankBehavior) self.assertEqual(12, store.refreshMediaDelay) self.assertEqual(13, store.ejectDelay) def testConstructor_003(self): """ Test assignment of sourceDir attribute, None value. """ store = StoreConfig(sourceDir="/whatever") self.assertEqual("/whatever", store.sourceDir) store.sourceDir = None self.assertEqual(None, store.sourceDir) def testConstructor_004(self): """ Test assignment of sourceDir attribute, valid value. """ store = StoreConfig() self.assertEqual(None, store.sourceDir) store.sourceDir = "/whatever" self.assertEqual("/whatever", store.sourceDir) def testConstructor_005(self): """ Test assignment of sourceDir attribute, invalid value (empty). """ store = StoreConfig() self.assertEqual(None, store.sourceDir) self.failUnlessAssignRaises(ValueError, store, "sourceDir", "") self.assertEqual(None, store.sourceDir) def testConstructor_006(self): """ Test assignment of sourceDir attribute, invalid value (non-absolute). """ store = StoreConfig() self.assertEqual(None, store.sourceDir) self.failUnlessAssignRaises(ValueError, store, "sourceDir", "bogus") self.assertEqual(None, store.sourceDir) def testConstructor_007(self): """ Test assignment of mediaType attribute, None value. 
""" store = StoreConfig(mediaType="cdr-74") self.assertEqual("cdr-74", store.mediaType) store.mediaType = None self.assertEqual(None, store.mediaType) def testConstructor_008(self): """ Test assignment of mediaType attribute, valid value. """ store = StoreConfig() self.assertEqual(None, store.mediaType) store.mediaType = "cdr-74" self.assertEqual("cdr-74", store.mediaType) store.mediaType = "cdrw-74" self.assertEqual("cdrw-74", store.mediaType) store.mediaType = "cdr-80" self.assertEqual("cdr-80", store.mediaType) store.mediaType = "cdrw-80" self.assertEqual("cdrw-80", store.mediaType) store.mediaType = "dvd+r" self.assertEqual("dvd+r", store.mediaType) store.mediaType = "dvd+rw" self.assertEqual("dvd+rw", store.mediaType) def testConstructor_009(self): """ Test assignment of mediaType attribute, invalid value (empty). """ store = StoreConfig() self.assertEqual(None, store.mediaType) self.failUnlessAssignRaises(ValueError, store, "mediaType", "") self.assertEqual(None, store.mediaType) def testConstructor_010(self): """ Test assignment of mediaType attribute, invalid value (not in list). """ store = StoreConfig() self.assertEqual(None, store.mediaType) self.failUnlessAssignRaises(ValueError, store, "mediaType", "floppy") self.assertEqual(None, store.mediaType) def testConstructor_011(self): """ Test assignment of deviceType attribute, None value. """ store = StoreConfig(deviceType="cdwriter") self.assertEqual("cdwriter", store.deviceType) store.deviceType = None self.assertEqual(None, store.deviceType) def testConstructor_012(self): """ Test assignment of deviceType attribute, valid value. """ store = StoreConfig() self.assertEqual(None, store.deviceType) store.deviceType = "cdwriter" self.assertEqual("cdwriter", store.deviceType) store.deviceType = "dvdwriter" self.assertEqual("dvdwriter", store.deviceType) def testConstructor_013(self): """ Test assignment of deviceType attribute, invalid value (empty). 
""" store = StoreConfig() self.assertEqual(None, store.deviceType) self.failUnlessAssignRaises(ValueError, store, "deviceType", "") self.assertEqual(None, store.deviceType) def testConstructor_014(self): """ Test assignment of deviceType attribute, invalid value (not in list). """ store = StoreConfig() self.assertEqual(None, store.deviceType) self.failUnlessAssignRaises(ValueError, store, "deviceType", "ftape") self.assertEqual(None, store.deviceType) def testConstructor_015(self): """ Test assignment of devicePath attribute, None value. """ store = StoreConfig(devicePath="/dev/cdrw") self.assertEqual("/dev/cdrw", store.devicePath) store.devicePath = None self.assertEqual(None, store.devicePath) def testConstructor_016(self): """ Test assignment of devicePath attribute, valid value. """ store = StoreConfig() self.assertEqual(None, store.devicePath) store.devicePath = "/dev/cdrw" self.assertEqual("/dev/cdrw", store.devicePath) def testConstructor_017(self): """ Test assignment of devicePath attribute, invalid value (empty). """ store = StoreConfig() self.assertEqual(None, store.devicePath) self.failUnlessAssignRaises(ValueError, store, "devicePath", "") self.assertEqual(None, store.devicePath) def testConstructor_018(self): """ Test assignment of devicePath attribute, invalid value (non-absolute). """ store = StoreConfig() self.assertEqual(None, store.devicePath) self.failUnlessAssignRaises(ValueError, store, "devicePath", "dev/cdrw") self.assertEqual(None, store.devicePath) def testConstructor_019(self): """ Test assignment of deviceScsiId attribute, None value. """ store = StoreConfig(deviceScsiId="0,0,0") self.assertEqual("0,0,0", store.deviceScsiId) store.deviceScsiId = None self.assertEqual(None, store.deviceScsiId) def testConstructor_020(self): """ Test assignment of deviceScsiId attribute, valid value. 
""" store = StoreConfig() self.assertEqual(None, store.deviceScsiId) store.deviceScsiId = "0,0,0" self.assertEqual("0,0,0", store.deviceScsiId) store.deviceScsiId = "ATA:0,0,0" self.assertEqual("ATA:0,0,0", store.deviceScsiId) def testConstructor_021(self): """ Test assignment of deviceScsiId attribute, invalid value (empty). """ store = StoreConfig() self.assertEqual(None, store.deviceScsiId) self.failUnlessAssignRaises(ValueError, store, "deviceScsiId", "") self.assertEqual(None, store.deviceScsiId) def testConstructor_022(self): """ Test assignment of deviceScsiId attribute, invalid value (invalid id). """ store = StoreConfig() self.assertEqual(None, store.deviceScsiId) self.failUnlessAssignRaises(ValueError, store, "deviceScsiId", "ATA;0,0,0") self.assertEqual(None, store.deviceScsiId) self.failUnlessAssignRaises(ValueError, store, "deviceScsiId", "ATAPI-0,0,0") self.assertEqual(None, store.deviceScsiId) self.failUnlessAssignRaises(ValueError, store, "deviceScsiId", "1:2:3") self.assertEqual(None, store.deviceScsiId) def testConstructor_023(self): """ Test assignment of driveSpeed attribute, None value. """ store = StoreConfig(driveSpeed=4) self.assertEqual(4, store.driveSpeed) store.driveSpeed = None self.assertEqual(None, store.driveSpeed) #pylint: disable=R0204 def testConstructor_024(self): """ Test assignment of driveSpeed attribute, valid value. """ store = StoreConfig() self.assertEqual(None, store.driveSpeed) store.driveSpeed = 4 self.assertEqual(4, store.driveSpeed) store.driveSpeed = "12" self.assertEqual(12, store.driveSpeed) def testConstructor_025(self): """ Test assignment of driveSpeed attribute, invalid value (not an integer). 
""" store = StoreConfig() self.assertEqual(None, store.driveSpeed) self.failUnlessAssignRaises(ValueError, store, "driveSpeed", "blech") self.assertEqual(None, store.driveSpeed) self.failUnlessAssignRaises(ValueError, store, "driveSpeed", CollectDir()) self.assertEqual(None, store.driveSpeed) def testConstructor_026(self): """ Test assignment of checkData attribute, None value. """ store = StoreConfig(checkData=True) self.assertEqual(True, store.checkData) store.checkData = None self.assertEqual(False, store.checkData) def testConstructor_027(self): """ Test assignment of checkData attribute, valid value (real boolean). """ store = StoreConfig() self.assertEqual(False, store.checkData) store.checkData = True self.assertEqual(True, store.checkData) store.checkData = False self.assertEqual(False, store.checkData) #pylint: disable=R0204 def testConstructor_028(self): """ Test assignment of checkData attribute, valid value (expression). """ store = StoreConfig() self.assertEqual(False, store.checkData) store.checkData = 0 self.assertEqual(False, store.checkData) store.checkData = [] self.assertEqual(False, store.checkData) store.checkData = None self.assertEqual(False, store.checkData) store.checkData = ['a'] self.assertEqual(True, store.checkData) store.checkData = 3 self.assertEqual(True, store.checkData) def testConstructor_029(self): """ Test assignment of warnMidnite attribute, None value. """ store = StoreConfig(warnMidnite=True) self.assertEqual(True, store.warnMidnite) store.warnMidnite = None self.assertEqual(False, store.warnMidnite) def testConstructor_030(self): """ Test assignment of warnMidnite attribute, valid value (real boolean). 
""" store = StoreConfig() self.assertEqual(False, store.warnMidnite) store.warnMidnite = True self.assertEqual(True, store.warnMidnite) store.warnMidnite = False self.assertEqual(False, store.warnMidnite) #pylint: disable=R0204 def testConstructor_031(self): """ Test assignment of warnMidnite attribute, valid value (expression). """ store = StoreConfig() self.assertEqual(False, store.warnMidnite) store.warnMidnite = 0 self.assertEqual(False, store.warnMidnite) store.warnMidnite = [] self.assertEqual(False, store.warnMidnite) store.warnMidnite = None self.assertEqual(False, store.warnMidnite) store.warnMidnite = ['a'] self.assertEqual(True, store.warnMidnite) store.warnMidnite = 3 self.assertEqual(True, store.warnMidnite) def testConstructor_032(self): """ Test assignment of noEject attribute, None value. """ store = StoreConfig(noEject=True) self.assertEqual(True, store.noEject) store.noEject = None self.assertEqual(False, store.noEject) def testConstructor_033(self): """ Test assignment of noEject attribute, valid value (real boolean). """ store = StoreConfig() self.assertEqual(False, store.noEject) store.noEject = True self.assertEqual(True, store.noEject) store.noEject = False self.assertEqual(False, store.noEject) #pylint: disable=R0204 def testConstructor_034(self): """ Test assignment of noEject attribute, valid value (expression). """ store = StoreConfig() self.assertEqual(False, store.noEject) store.noEject = 0 self.assertEqual(False, store.noEject) store.noEject = [] self.assertEqual(False, store.noEject) store.noEject = None self.assertEqual(False, store.noEject) store.noEject = ['a'] self.assertEqual(True, store.noEject) store.noEject = 3 self.assertEqual(True, store.noEject) def testConstructor_035(self): """ Test assignment of checkMedia attribute, None value. 
""" store = StoreConfig(checkMedia=True) self.assertEqual(True, store.checkMedia) store.checkMedia = None self.assertEqual(False, store.checkMedia) def testConstructor_036(self): """ Test assignment of checkMedia attribute, valid value (real boolean). """ store = StoreConfig() self.assertEqual(False, store.checkMedia) store.checkMedia = True self.assertEqual(True, store.checkMedia) store.checkMedia = False self.assertEqual(False, store.checkMedia) #pylint: disable=R0204 def testConstructor_037(self): """ Test assignment of checkMedia attribute, valid value (expression). """ store = StoreConfig() self.assertEqual(False, store.checkMedia) store.checkMedia = 0 self.assertEqual(False, store.checkMedia) store.checkMedia = [] self.assertEqual(False, store.checkMedia) store.checkMedia = None self.assertEqual(False, store.checkMedia) store.checkMedia = ['a'] self.assertEqual(True, store.checkMedia) store.checkMedia = 3 self.assertEqual(True, store.checkMedia) def testConstructor_038(self): """ Test assignment of blankBehavior attribute, None value. """ store = StoreConfig() store.blankBehavior = None self.assertEqual(None, store.blankBehavior) def testConstructor_039(self): """ Test assignment of blankBehavior store attribute, valid value. """ store = StoreConfig() store.blankBehavior = BlankBehavior() self.assertEqual(BlankBehavior(), store.blankBehavior) def testConstructor_040(self): """ Test assignment of blankBehavior store attribute, invalid value (not BlankBehavior). """ store = StoreConfig() self.failUnlessAssignRaises(ValueError, store, "blankBehavior", CollectDir()) def testConstructor_041(self): """ Test assignment of refreshMediaDelay attribute, None value. """ store = StoreConfig(refreshMediaDelay=4) self.assertEqual(4, store.refreshMediaDelay) store.refreshMediaDelay = None self.assertEqual(None, store.refreshMediaDelay) #pylint: disable=R0204 def testConstructor_042(self): """ Test assignment of refreshMediaDelay attribute, valid value. 
""" store = StoreConfig() self.assertEqual(None, store.refreshMediaDelay) store.refreshMediaDelay = 4 self.assertEqual(4, store.refreshMediaDelay) store.refreshMediaDelay = "12" self.assertEqual(12, store.refreshMediaDelay) store.refreshMediaDelay = "0" self.assertEqual(None, store.refreshMediaDelay) store.refreshMediaDelay = 0 self.assertEqual(None, store.refreshMediaDelay) def testConstructor_043(self): """ Test assignment of refreshMediaDelay attribute, invalid value (not an integer). """ store = StoreConfig() self.assertEqual(None, store.refreshMediaDelay) self.failUnlessAssignRaises(ValueError, store, "refreshMediaDelay", "blech") self.assertEqual(None, store.refreshMediaDelay) self.failUnlessAssignRaises(ValueError, store, "refreshMediaDelay", CollectDir()) self.assertEqual(None, store.refreshMediaDelay) def testConstructor_044(self): """ Test assignment of ejectDelay attribute, None value. """ store = StoreConfig(ejectDelay=4) self.assertEqual(4, store.ejectDelay) store.ejectDelay = None self.assertEqual(None, store.ejectDelay) #pylint: disable=R0204 def testConstructor_045(self): """ Test assignment of ejectDelay attribute, valid value. """ store = StoreConfig() self.assertEqual(None, store.ejectDelay) store.ejectDelay = 4 self.assertEqual(4, store.ejectDelay) store.ejectDelay = "12" self.assertEqual(12, store.ejectDelay) store.ejectDelay = "0" self.assertEqual(None, store.ejectDelay) store.ejectDelay = 0 self.assertEqual(None, store.ejectDelay) def testConstructor_046(self): """ Test assignment of ejectDelay attribute, invalid value (not an integer). 
""" store = StoreConfig() self.assertEqual(None, store.ejectDelay) self.failUnlessAssignRaises(ValueError, store, "ejectDelay", "blech") self.assertEqual(None, store.ejectDelay) self.failUnlessAssignRaises(ValueError, store, "ejectDelay", CollectDir()) self.assertEqual(None, store.ejectDelay) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ store1 = StoreConfig() store2 = StoreConfig() self.assertEqual(store1, store2) self.assertTrue(store1 == store2) self.assertTrue(not store1 < store2) self.assertTrue(store1 <= store2) self.assertTrue(not store1 > store2) self.assertTrue(store1 >= store2) self.assertTrue(not store1 != store2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior1, 4, 5) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior2, 4, 5) self.assertEqual(store1, store2) self.assertTrue(store1 == store2) self.assertTrue(not store1 < store2) self.assertTrue(store1 <= store2) self.assertTrue(not store1 > store2) self.assertTrue(store1 >= store2) self.assertTrue(not store1 != store2) def testComparison_003(self): """ Test comparison of two differing objects, sourceDir differs (one None). """ store1 = StoreConfig() store2 = StoreConfig(sourceDir="/whatever") self.assertNotEqual(store1, store2) self.assertTrue(not store1 == store2) self.assertTrue(store1 < store2) self.assertTrue(store1 <= store2) self.assertTrue(not store1 > store2) self.assertTrue(not store1 >= store2) self.assertTrue(store1 != store2) def testComparison_004(self): """ Test comparison of two differing objects, sourceDir differs. 
""" behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source1", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior1, 4, 5) store2 = StoreConfig("/source2", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior2, 4, 5) self.assertNotEqual(store1, store2) self.assertTrue(not store1 == store2) self.assertTrue(store1 < store2) self.assertTrue(store1 <= store2) self.assertTrue(not store1 > store2) self.assertTrue(not store1 >= store2) self.assertTrue(store1 != store2) def testComparison_005(self): """ Test comparison of two differing objects, mediaType differs (one None). """ store1 = StoreConfig() store2 = StoreConfig(mediaType="cdr-74") self.assertNotEqual(store1, store2) self.assertTrue(not store1 == store2) self.assertTrue(store1 < store2) self.assertTrue(store1 <= store2) self.assertTrue(not store1 > store2) self.assertTrue(not store1 >= store2) self.assertTrue(store1 != store2) def testComparison_006(self): """ Test comparison of two differing objects, mediaType differs. """ behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdrw-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior1, 4, 5) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior2, 4, 5) self.assertNotEqual(store1, store2) self.assertTrue(not store1 == store2) self.assertTrue(not store1 < store2) self.assertTrue(not store1 <= store2) self.assertTrue(store1 > store2) self.assertTrue(store1 >= store2) self.assertTrue(store1 != store2) def testComparison_007(self): """ Test comparison of two differing objects, deviceType differs (one None). 
""" store1 = StoreConfig() store2 = StoreConfig(deviceType="cdwriter") self.assertNotEqual(store1, store2) self.assertTrue(not store1 == store2) self.assertTrue(store1 < store2) self.assertTrue(store1 <= store2) self.assertTrue(not store1 > store2) self.assertTrue(not store1 >= store2) self.assertTrue(store1 != store2) def testComparison_008(self): """ Test comparison of two differing objects, devicePath differs (one None). """ store1 = StoreConfig() store2 = StoreConfig(devicePath="/dev/cdrw") self.assertNotEqual(store1, store2) self.assertTrue(not store1 == store2) self.assertTrue(store1 < store2) self.assertTrue(store1 <= store2) self.assertTrue(not store1 > store2) self.assertTrue(not store1 >= store2) self.assertTrue(store1 != store2) def testComparison_009(self): """ Test comparison of two differing objects, devicePath differs. """ behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior1, 4, 5) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/hdd", "0,0,0", 4, True, True, True, True, behavior2, 4, 5) self.assertNotEqual(store1, store2) self.assertTrue(not store1 == store2) self.assertTrue(store1 < store2) self.assertTrue(store1 <= store2) self.assertTrue(not store1 > store2) self.assertTrue(not store1 >= store2) self.assertTrue(store1 != store2) def testComparison_010(self): """ Test comparison of two differing objects, deviceScsiId differs (one None). """ store1 = StoreConfig() store2 = StoreConfig(deviceScsiId="0,0,0") self.assertNotEqual(store1, store2) self.assertTrue(not store1 == store2) self.assertTrue(store1 < store2) self.assertTrue(store1 <= store2) self.assertTrue(not store1 > store2) self.assertTrue(not store1 >= store2) self.assertTrue(store1 != store2) def testComparison_011(self): """ Test comparison of two differing objects, deviceScsiId differs. 
""" behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior1, 4, 5) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "ATA:0,0,0", 4, True, True, True, True, behavior2, 4, 5) self.assertNotEqual(store1, store2) self.assertTrue(not store1 == store2) self.assertTrue(store1 < store2) self.assertTrue(store1 <= store2) self.assertTrue(not store1 > store2) self.assertTrue(not store1 >= store2) self.assertTrue(store1 != store2) def testComparison_012(self): """ Test comparison of two differing objects, driveSpeed differs (one None). """ store1 = StoreConfig() store2 = StoreConfig(driveSpeed=3) self.assertNotEqual(store1, store2) self.assertTrue(not store1 == store2) self.assertTrue(store1 < store2) self.assertTrue(store1 <= store2) self.assertTrue(not store1 > store2) self.assertTrue(not store1 >= store2) self.assertTrue(store1 != store2) def testComparison_013(self): """ Test comparison of two differing objects, driveSpeed differs. """ behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 1, True, True, True, True, behavior1, 4, 5) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior2, 4, 5) self.assertNotEqual(store1, store2) self.assertTrue(not store1 == store2) self.assertTrue(store1 < store2) self.assertTrue(store1 <= store2) self.assertTrue(not store1 > store2) self.assertTrue(not store1 >= store2) self.assertTrue(store1 != store2) def testComparison_014(self): """ Test comparison of two differing objects, checkData differs. 
""" behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, False, True, True, True, behavior1, 4, 5) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior2, 4, 5) self.assertNotEqual(store1, store2) self.assertTrue(not store1 == store2) self.assertTrue(store1 < store2) self.assertTrue(store1 <= store2) self.assertTrue(not store1 > store2) self.assertTrue(not store1 >= store2) self.assertTrue(store1 != store2) def testComparison_015(self): """ Test comparison of two differing objects, warnMidnite differs. """ behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, False, True, True, behavior1, 4, 5) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior2, 4, 5) self.assertNotEqual(store1, store2) self.assertTrue(not store1 == store2) self.assertTrue(store1 < store2) self.assertTrue(store1 <= store2) self.assertTrue(not store1 > store2) self.assertTrue(not store1 >= store2) self.assertTrue(store1 != store2) def testComparison_016(self): """ Test comparison of two differing objects, noEject differs. 
""" behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, False, True, behavior1, 4, 5) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior2, 4, 5) self.assertNotEqual(store1, store2) self.assertTrue(not store1 == store2) self.assertTrue(store1 < store2) self.assertTrue(store1 <= store2) self.assertTrue(not store1 > store2) self.assertTrue(not store1 >= store2) self.assertTrue(store1 != store2) def testComparison_017(self): """ Test comparison of two differing objects, checkMedia differs. """ behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, False, behavior1, 4, 5) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior2, 4, 5) self.assertNotEqual(store1, store2) self.assertTrue(not store1 == store2) self.assertTrue(store1 < store2) self.assertTrue(store1 <= store2) self.assertTrue(not store1 > store2) self.assertTrue(not store1 >= store2) self.assertTrue(store1 != store2) def testComparison_018(self): """ Test comparison of two differing objects, blankBehavior differs (one None). """ behavior = BlankBehavior() store1 = StoreConfig() store2 = StoreConfig(blankBehavior=behavior) self.assertNotEqual(store1, store2) self.assertTrue(not store1 == store2) self.assertTrue(store1 < store2) self.assertTrue(store1 <= store2) self.assertTrue(not store1 > store2) self.assertTrue(not store1 >= store2) self.assertTrue(store1 != store2) def testComparison_019(self): """ Test comparison of two differing objects, blankBehavior differs. 
""" behavior1 = BlankBehavior("daily", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior1, 4, 5) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior2, 4, 5) self.assertNotEqual(store1, store2) self.assertTrue(not store1 == store2) self.assertTrue(store1 < store2) self.assertTrue(store1 <= store2) self.assertTrue(not store1 > store2) self.assertTrue(not store1 >= store2) self.assertTrue(store1 != store2) def testComparison_020(self): """ Test comparison of two differing objects, refreshMediaDelay differs (one None). """ store1 = StoreConfig() store2 = StoreConfig(refreshMediaDelay=3) self.assertNotEqual(store1, store2) self.assertTrue(not store1 == store2) self.assertTrue(store1 < store2) self.assertTrue(store1 <= store2) self.assertTrue(not store1 > store2) self.assertTrue(not store1 >= store2) self.assertTrue(store1 != store2) def testComparison_021(self): """ Test comparison of two differing objects, refreshMediaDelay differs. """ behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 1, True, True, True, True, behavior1, 1, 5) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 1, True, True, True, True, behavior2, 4, 5) self.assertNotEqual(store1, store2) self.assertTrue(not store1 == store2) self.assertTrue(store1 < store2) self.assertTrue(store1 <= store2) self.assertTrue(not store1 > store2) self.assertTrue(not store1 >= store2) self.assertTrue(store1 != store2) def testComparison_022(self): """ Test comparison of two differing objects, ejectDelay differs (one None). 
""" store1 = StoreConfig() store2 = StoreConfig(ejectDelay=3) self.assertNotEqual(store1, store2) self.assertTrue(not store1 == store2) self.assertTrue(store1 < store2) self.assertTrue(store1 <= store2) self.assertTrue(not store1 > store2) self.assertTrue(not store1 >= store2) self.assertTrue(store1 != store2) def testComparison_023(self): """ Test comparison of two differing objects, ejectDelay differs. """ behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 1, True, True, True, True, behavior1, 4, 1) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 1, True, True, True, True, behavior2, 4, 5) self.assertNotEqual(store1, store2) self.assertTrue(not store1 == store2) self.assertTrue(store1 < store2) self.assertTrue(store1 <= store2) self.assertTrue(not store1 > store2) self.assertTrue(not store1 >= store2) self.assertTrue(store1 != store2) ######################## # TestPurgeConfig class ######################## class TestPurgeConfig(unittest.TestCase): """Tests for the PurgeConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = PurgeConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. 
""" purge = PurgeConfig() self.assertEqual(None, purge.purgeDirs) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values (empty list). """ purge = PurgeConfig([]) self.assertEqual([], purge.purgeDirs) def testConstructor_003(self): """ Test constructor with all values filled in, with valid values (non-empty list). """ purge = PurgeConfig([PurgeDir(), ]) self.assertEqual([PurgeDir(), ], purge.purgeDirs) def testConstructor_004(self): """ Test assignment of purgeDirs attribute, None value. """ purge = PurgeConfig([]) self.assertEqual([], purge.purgeDirs) purge.purgeDirs = None self.assertEqual(None, purge.purgeDirs) def testConstructor_005(self): """ Test assignment of purgeDirs attribute, [] value. """ purge = PurgeConfig() self.assertEqual(None, purge.purgeDirs) purge.purgeDirs = [] self.assertEqual([], purge.purgeDirs) def testConstructor_006(self): """ Test assignment of purgeDirs attribute, single valid entry. """ purge = PurgeConfig() self.assertEqual(None, purge.purgeDirs) purge.purgeDirs = [PurgeDir(), ] self.assertEqual([PurgeDir(), ], purge.purgeDirs) def testConstructor_007(self): """ Test assignment of purgeDirs attribute, multiple valid entries. """ purge = PurgeConfig() self.assertEqual(None, purge.purgeDirs) purge.purgeDirs = [PurgeDir("/one"), PurgeDir("/two"), ] self.assertEqual([PurgeDir("/one"), PurgeDir("/two"), ], purge.purgeDirs) def testConstructor_009(self): """ Test assignment of purgeDirs attribute, single invalid entry (not a PurgeDir). """ purge = PurgeConfig() self.assertEqual(None, purge.purgeDirs) self.failUnlessAssignRaises(ValueError, purge, "purgeDirs", [ RemotePeer(), ]) self.assertEqual(None, purge.purgeDirs) def testConstructor_010(self): """ Test assignment of purgeDirs attribute, mixed valid and invalid entries. 
""" purge = PurgeConfig() self.assertEqual(None, purge.purgeDirs) self.failUnlessAssignRaises(ValueError, purge, "purgeDirs", [ PurgeDir(), RemotePeer(), ]) self.assertEqual(None, purge.purgeDirs) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ purge1 = PurgeConfig() purge2 = PurgeConfig() self.assertEqual(purge1, purge2) self.assertTrue(purge1 == purge2) self.assertTrue(not purge1 < purge2) self.assertTrue(purge1 <= purge2) self.assertTrue(not purge1 > purge2) self.assertTrue(purge1 >= purge2) self.assertTrue(not purge1 != purge2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None (empty lists). """ purge1 = PurgeConfig([]) purge2 = PurgeConfig([]) self.assertEqual(purge1, purge2) self.assertTrue(purge1 == purge2) self.assertTrue(not purge1 < purge2) self.assertTrue(purge1 <= purge2) self.assertTrue(not purge1 > purge2) self.assertTrue(purge1 >= purge2) self.assertTrue(not purge1 != purge2) def testComparison_003(self): """ Test comparison of two identical objects, all attributes non-None (non-empty lists). """ purge1 = PurgeConfig([PurgeDir(), ]) purge2 = PurgeConfig([PurgeDir(), ]) self.assertEqual(purge1, purge2) self.assertTrue(purge1 == purge2) self.assertTrue(not purge1 < purge2) self.assertTrue(purge1 <= purge2) self.assertTrue(not purge1 > purge2) self.assertTrue(purge1 >= purge2) self.assertTrue(not purge1 != purge2) def testComparison_004(self): """ Test comparison of two differing objects, purgeDirs differs (one None, one empty). 
""" purge1 = PurgeConfig(None) purge2 = PurgeConfig([]) self.assertNotEqual(purge1, purge2) self.assertTrue(not purge1 == purge2) self.assertTrue(purge1 < purge2) self.assertTrue(purge1 <= purge2) self.assertTrue(not purge1 > purge2) self.assertTrue(not purge1 >= purge2) self.assertTrue(purge1 != purge2) def testComparison_005(self): """ Test comparison of two differing objects, purgeDirs differs (one None, one not empty). """ purge1 = PurgeConfig(None) purge2 = PurgeConfig([PurgeDir(), ]) self.assertNotEqual(purge1, purge2) self.assertTrue(not purge1 == purge2) self.assertTrue(purge1 < purge2) self.assertTrue(purge1 <= purge2) self.assertTrue(not purge1 > purge2) self.assertTrue(not purge1 >= purge2) self.assertTrue(purge1 != purge2) def testComparison_006(self): """ Test comparison of two differing objects, purgeDirs differs (one empty, one not empty). """ purge1 = PurgeConfig([]) purge2 = PurgeConfig([PurgeDir(), ]) self.assertNotEqual(purge1, purge2) self.assertTrue(not purge1 == purge2) self.assertTrue(purge1 < purge2) self.assertTrue(purge1 <= purge2) self.assertTrue(not purge1 > purge2) self.assertTrue(not purge1 >= purge2) self.assertTrue(purge1 != purge2) def testComparison_007(self): """ Test comparison of two differing objects, purgeDirs differs (both not empty). 
""" purge1 = PurgeConfig([PurgeDir("/two"), ]) purge2 = PurgeConfig([PurgeDir("/one"), ]) self.assertNotEqual(purge1, purge2) self.assertTrue(not purge1 == purge2) self.assertTrue(not purge1 < purge2) self.assertTrue(not purge1 <= purge2) self.assertTrue(purge1 > purge2) self.assertTrue(purge1 >= purge2) self.assertTrue(purge1 != purge2) ################### # TestConfig class ################### class TestConfig(unittest.TestCase): """Tests for the Config class.""" ################ # Setup methods ################ def setUp(self): try: self.resources = findResources(RESOURCES, DATA_DIRS) except Exception as e: self.fail(e) def tearDown(self): pass ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = Config() obj.__repr__() obj.__str__() ##################################################### # Test basic constructor and attribute functionality ##################################################### def testConstructor_001(self): """ Test empty constructor, validate=False. """ config = Config(validate=False) self.assertEqual(None, config.reference) self.assertEqual(None, config.extensions) self.assertEqual(None, config.options) self.assertEqual(None, config.peers) self.assertEqual(None, config.collect) self.assertEqual(None, config.stage) self.assertEqual(None, config.store) self.assertEqual(None, config.purge) def testConstructor_002(self): """ Test empty constructor, validate=True. 
""" config = Config(validate=True) self.assertEqual(None, config.reference) self.assertEqual(None, config.extensions) self.assertEqual(None, config.options) self.assertEqual(None, config.peers) self.assertEqual(None, config.collect) self.assertEqual(None, config.stage) self.assertEqual(None, config.store) self.assertEqual(None, config.purge) def testConstructor_003(self): """ Test with empty config document as both data and file, validate=False. """ path = self.resources["cback.conf.2"] with open(path) as f: contents = f.read() self.assertRaises(ValueError, Config, xmlData=contents, xmlPath=path, validate=False) def testConstructor_004(self): """ Test with empty config document as data, validate=False. """ path = self.resources["cback.conf.2"] with open(path) as f: contents = f.read() config = Config(xmlData=contents, validate=False) self.assertEqual(None, config.reference) self.assertEqual(None, config.extensions) self.assertEqual(None, config.options) self.assertEqual(None, config.peers) self.assertEqual(None, config.collect) self.assertEqual(None, config.stage) self.assertEqual(None, config.store) self.assertEqual(None, config.purge) def testConstructor_005(self): """ Test with empty config document in a file, validate=False. """ path = self.resources["cback.conf.2"] config = Config(xmlPath=path, validate=False) self.assertEqual(None, config.reference) self.assertEqual(None, config.extensions) self.assertEqual(None, config.options) self.assertEqual(None, config.peers) self.assertEqual(None, config.collect) self.assertEqual(None, config.stage) self.assertEqual(None, config.store) self.assertEqual(None, config.purge) def testConstructor_006(self): """ Test assignment of reference attribute, None value. """ config = Config() config.reference = None self.assertEqual(None, config.reference) def testConstructor_007(self): """ Test assignment of reference attribute, valid value. 
""" config = Config() config.reference = ReferenceConfig() self.assertEqual(ReferenceConfig(), config.reference) def testConstructor_008(self): """ Test assignment of reference attribute, invalid value (not ReferenceConfig). """ config = Config() self.failUnlessAssignRaises(ValueError, config, "reference", CollectDir()) def testConstructor_009(self): """ Test assignment of extensions attribute, None value. """ config = Config() config.extensions = None self.assertEqual(None, config.extensions) def testConstructor_010(self): """ Test assignment of extensions attribute, valid value. """ config = Config() config.extensions = ExtensionsConfig() self.assertEqual(ExtensionsConfig(), config.extensions) def testConstructor_011(self): """ Test assignment of extensions attribute, invalid value (not ExtensionsConfig). """ config = Config() self.failUnlessAssignRaises(ValueError, config, "extensions", CollectDir()) def testConstructor_012(self): """ Test assignment of options attribute, None value. """ config = Config() config.options = None self.assertEqual(None, config.options) def testConstructor_013(self): """ Test assignment of options attribute, valid value. """ config = Config() config.options = OptionsConfig() self.assertEqual(OptionsConfig(), config.options) def testConstructor_014(self): """ Test assignment of options attribute, invalid value (not OptionsConfig). """ config = Config() self.failUnlessAssignRaises(ValueError, config, "options", CollectDir()) def testConstructor_015(self): """ Test assignment of collect attribute, None value. """ config = Config() config.collect = None self.assertEqual(None, config.collect) def testConstructor_016(self): """ Test assignment of collect attribute, valid value. """ config = Config() config.collect = CollectConfig() self.assertEqual(CollectConfig(), config.collect) def testConstructor_017(self): """ Test assignment of collect attribute, invalid value (not CollectConfig). 
""" config = Config() self.failUnlessAssignRaises(ValueError, config, "collect", CollectDir()) def testConstructor_018(self): """ Test assignment of stage attribute, None value. """ config = Config() config.stage = None self.assertEqual(None, config.stage) def testConstructor_019(self): """ Test assignment of stage attribute, valid value. """ config = Config() config.stage = StageConfig() self.assertEqual(StageConfig(), config.stage) def testConstructor_020(self): """ Test assignment of stage attribute, invalid value (not StageConfig). """ config = Config() self.failUnlessAssignRaises(ValueError, config, "stage", CollectDir()) def testConstructor_021(self): """ Test assignment of store attribute, None value. """ config = Config() config.store = None self.assertEqual(None, config.store) def testConstructor_022(self): """ Test assignment of store attribute, valid value. """ config = Config() config.store = StoreConfig() self.assertEqual(StoreConfig(), config.store) def testConstructor_023(self): """ Test assignment of store attribute, invalid value (not StoreConfig). """ config = Config() self.failUnlessAssignRaises(ValueError, config, "store", CollectDir()) def testConstructor_024(self): """ Test assignment of purge attribute, None value. """ config = Config() config.purge = None self.assertEqual(None, config.purge) def testConstructor_025(self): """ Test assignment of purge attribute, valid value. """ config = Config() config.purge = PurgeConfig() self.assertEqual(PurgeConfig(), config.purge) def testConstructor_026(self): """ Test assignment of purge attribute, invalid value (not PurgeConfig). """ config = Config() self.failUnlessAssignRaises(ValueError, config, "purge", CollectDir()) def testConstructor_027(self): """ Test assignment of peers attribute, None value. """ config = Config() config.peers = None self.assertEqual(None, config.peers) def testConstructor_028(self): """ Test assignment of peers attribute, valid value. 
""" config = Config() config.peers = PeersConfig() self.assertEqual(PeersConfig(), config.peers) def testConstructor_029(self): """ Test assignment of peers attribute, invalid value (not PeersConfig). """ config = Config() self.failUnlessAssignRaises(ValueError, config, "peers", CollectDir()) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ config1 = Config() config2 = Config() self.assertEqual(config1, config2) self.assertTrue(config1 == config2) self.assertTrue(not config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(config1 >= config2) self.assertTrue(not config1 != config2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ config1 = Config() config1.reference = ReferenceConfig() config1.extensions = ExtensionsConfig() config1.options = OptionsConfig() config1.peers = PeersConfig() config1.collect = CollectConfig() config1.stage = StageConfig() config1.store = StoreConfig() config1.purge = PurgeConfig() config2 = Config() config2.reference = ReferenceConfig() config2.extensions = ExtensionsConfig() config2.options = OptionsConfig() config2.peers = PeersConfig() config2.collect = CollectConfig() config2.stage = StageConfig() config2.store = StoreConfig() config2.purge = PurgeConfig() self.assertEqual(config1, config2) self.assertTrue(config1 == config2) self.assertTrue(not config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(config1 >= config2) self.assertTrue(not config1 != config2) def testComparison_003(self): """ Test comparison of two differing objects, reference differs (one None). 
""" config1 = Config() config2 = Config() config2.reference = ReferenceConfig() self.assertNotEqual(config1, config2) self.assertTrue(not config1 == config2) self.assertTrue(config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(not config1 >= config2) self.assertTrue(config1 != config2) def testComparison_004(self): """ Test comparison of two differing objects, reference differs. """ config1 = Config() config1.reference = ReferenceConfig(author="one") config1.options = OptionsConfig() config1.peers = PeersConfig() config1.collect = CollectConfig() config1.stage = StageConfig() config1.store = StoreConfig() config1.purge = PurgeConfig() config2 = Config() config2.reference = ReferenceConfig(author="two") config2.options = OptionsConfig() config2.peers = PeersConfig() config2.collect = CollectConfig() config2.stage = StageConfig() config2.store = StoreConfig() config2.purge = PurgeConfig() self.assertNotEqual(config1, config2) self.assertTrue(not config1 == config2) self.assertTrue(config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(not config1 >= config2) self.assertTrue(config1 != config2) def testComparison_005(self): """ Test comparison of two differing objects, extensions differs (one None). """ config1 = Config() config2 = Config() config2.extensions = ExtensionsConfig() self.assertNotEqual(config1, config2) self.assertTrue(not config1 == config2) self.assertTrue(config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(not config1 >= config2) self.assertTrue(config1 != config2) def testComparison_006(self): """ Test comparison of two differing objects, extensions differs (one list empty, one None). 
""" config1 = Config() config1.reference = ReferenceConfig() config1.extensions = ExtensionsConfig(None) config1.options = OptionsConfig() config1.peers = PeersConfig() config1.collect = CollectConfig() config1.stage = StageConfig() config1.store = StoreConfig() config1.purge = PurgeConfig() config2 = Config() config2.reference = ReferenceConfig() config2.extensions = ExtensionsConfig([]) config2.options = OptionsConfig() config2.peers = PeersConfig() config2.collect = CollectConfig() config2.stage = StageConfig() config2.store = StoreConfig() config2.purge = PurgeConfig() self.assertNotEqual(config1, config2) self.assertTrue(not config1 == config2) self.assertTrue(config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(not config1 >= config2) self.assertTrue(config1 != config2) def testComparison_007(self): """ Test comparison of two differing objects, extensions differs (one list empty, one not empty). """ config1 = Config() config1.reference = ReferenceConfig() config1.extensions = ExtensionsConfig([]) config1.options = OptionsConfig() config1.peers = PeersConfig() config1.collect = CollectConfig() config1.stage = StageConfig() config1.store = StoreConfig() config1.purge = PurgeConfig() config2 = Config() config2.reference = ReferenceConfig() config2.extensions = ExtensionsConfig([ExtendedAction("one", "two", "three"), ]) config2.options = OptionsConfig() config2.peers = PeersConfig() config2.collect = CollectConfig() config2.stage = StageConfig() config2.store = StoreConfig() config2.purge = PurgeConfig() self.assertNotEqual(config1, config2) self.assertTrue(not config1 == config2) self.assertTrue(config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(not config1 >= config2) self.assertTrue(config1 != config2) def testComparison_008(self): """ Test comparison of two differing objects, extensions differs (both lists not empty). 
""" config1 = Config() config1.reference = ReferenceConfig() config1.extensions = ExtensionsConfig([ExtendedAction("one", "two", "three"), ]) config1.options = OptionsConfig() config1.peers = PeersConfig() config1.collect = CollectConfig() config1.stage = StageConfig() config1.store = StoreConfig() config1.purge = PurgeConfig() config2 = Config() config2.reference = ReferenceConfig() config2.extensions = ExtensionsConfig([ExtendedAction("one", "two", "four"), ]) config2.options = OptionsConfig() config2.peers = PeersConfig() config2.collect = CollectConfig() config2.stage = StageConfig() config2.store = StoreConfig() config2.purge = PurgeConfig() self.assertNotEqual(config1, config2) self.assertTrue(not config1 == config2) self.assertTrue(not config1 < config2) self.assertTrue(not config1 <= config2) self.assertTrue(config1 > config2) self.assertTrue(config1 >= config2) self.assertTrue(config1 != config2) def testComparison_009(self): """ Test comparison of two differing objects, options differs (one None). """ config1 = Config() config2 = Config() config2.options = OptionsConfig() self.assertNotEqual(config1, config2) self.assertTrue(not config1 == config2) self.assertTrue(config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(not config1 >= config2) self.assertTrue(config1 != config2) def testComparison_010(self): """ Test comparison of two differing objects, options differs. 
""" config1 = Config() config1.reference = ReferenceConfig() config1.options = OptionsConfig(startingDay="tuesday") config1.peers = PeersConfig() config1.collect = CollectConfig() config1.stage = StageConfig() config1.store = StoreConfig() config1.purge = PurgeConfig() config2 = Config() config2.reference = ReferenceConfig() config2.options = OptionsConfig(startingDay="monday") config2.peers = PeersConfig() config2.collect = CollectConfig() config2.stage = StageConfig() config2.store = StoreConfig() config2.purge = PurgeConfig() self.assertNotEqual(config1, config2) self.assertTrue(not config1 == config2) self.assertTrue(not config1 < config2) self.assertTrue(not config1 <= config2) self.assertTrue(config1 > config2) self.assertTrue(config1 >= config2) self.assertTrue(config1 != config2) def testComparison_011(self): """ Test comparison of two differing objects, collect differs (one None). """ config1 = Config() config2 = Config() config2.collect = CollectConfig() self.assertNotEqual(config1, config2) self.assertTrue(not config1 == config2) self.assertTrue(config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(not config1 >= config2) self.assertTrue(config1 != config2) def testComparison_012(self): """ Test comparison of two differing objects, collect differs. 
""" config1 = Config() config1.reference = ReferenceConfig() config1.options = OptionsConfig() config1.peers = PeersConfig() config1.collect = CollectConfig(collectMode="daily") config1.stage = StageConfig() config1.store = StoreConfig() config1.purge = PurgeConfig() config2 = Config() config2.reference = ReferenceConfig() config2.options = OptionsConfig() config2.peers = PeersConfig() config2.collect = CollectConfig(collectMode="incr") config2.stage = StageConfig() config2.store = StoreConfig() config2.purge = PurgeConfig() self.assertNotEqual(config1, config2) self.assertTrue(not config1 == config2) self.assertTrue(config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(not config1 >= config2) self.assertTrue(config1 != config2) def testComparison_013(self): """ Test comparison of two differing objects, stage differs (one None). """ config1 = Config() config2 = Config() config2.stage = StageConfig() self.assertNotEqual(config1, config2) self.assertTrue(not config1 == config2) self.assertTrue(config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(not config1 >= config2) self.assertTrue(config1 != config2) def testComparison_014(self): """ Test comparison of two differing objects, stage differs. 
""" config1 = Config() config1.reference = ReferenceConfig() config1.options = OptionsConfig() config1.peers = PeersConfig() config1.collect = CollectConfig() config1.stage = StageConfig(targetDir="/something") config1.store = StoreConfig() config1.purge = PurgeConfig() config2 = Config() config2.reference = ReferenceConfig() config2.options = OptionsConfig() config2.peers = PeersConfig() config2.collect = CollectConfig() config2.stage = StageConfig(targetDir="/whatever") config2.store = StoreConfig() config2.purge = PurgeConfig() self.assertNotEqual(config1, config2) self.assertTrue(not config1 == config2) self.assertTrue(config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(not config1 >= config2) self.assertTrue(config1 != config2) def testComparison_015(self): """ Test comparison of two differing objects, store differs (one None). """ config1 = Config() config2 = Config() config2.store = StoreConfig() self.assertNotEqual(config1, config2) self.assertTrue(not config1 == config2) self.assertTrue(config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(not config1 >= config2) self.assertTrue(config1 != config2) def testComparison_016(self): """ Test comparison of two differing objects, store differs. 
""" config1 = Config() config1.reference = ReferenceConfig() config1.options = OptionsConfig() config1.peers = PeersConfig() config1.collect = CollectConfig() config1.stage = StageConfig() config1.store = StoreConfig(deviceScsiId="ATA:0,0,0") config1.purge = PurgeConfig() config2 = Config() config2.reference = ReferenceConfig() config2.options = OptionsConfig() config2.peers = PeersConfig() config2.collect = CollectConfig() config2.stage = StageConfig() config2.store = StoreConfig(deviceScsiId="0,0,0") config2.purge = PurgeConfig() self.assertNotEqual(config1, config2) self.assertTrue(not config1 == config2) self.assertTrue(not config1 < config2) self.assertTrue(not config1 <= config2) self.assertTrue(config1 > config2) self.assertTrue(config1 >= config2) self.assertTrue(config1 != config2) def testComparison_017(self): """ Test comparison of two differing objects, purge differs (one None). """ config1 = Config() config2 = Config() config2.purge = PurgeConfig() self.assertNotEqual(config1, config2) self.assertTrue(not config1 == config2) self.assertTrue(config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(not config1 >= config2) self.assertTrue(config1 != config2) def testComparison_018(self): """ Test comparison of two differing objects, purge differs. 
""" config1 = Config() config1.reference = ReferenceConfig() config1.options = OptionsConfig() config1.peers = PeersConfig() config1.collect = CollectConfig() config1.stage = StageConfig() config1.store = StoreConfig() config1.purge = PurgeConfig(purgeDirs=None) config2 = Config() config2.reference = ReferenceConfig() config2.options = OptionsConfig() config2.peers = PeersConfig() config2.collect = CollectConfig() config2.stage = StageConfig() config2.store = StoreConfig() config2.purge = PurgeConfig(purgeDirs=[]) self.assertNotEqual(config1, config2) self.assertTrue(not config1 == config2) self.assertTrue(config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(not config1 >= config2) self.assertTrue(config1 != config2) def testComparison_019(self): """ Test comparison of two differing objects, peers differs (one None). """ config1 = Config() config2 = Config() config2.peers = PeersConfig() self.assertNotEqual(config1, config2) self.assertTrue(not config1 == config2) self.assertTrue(config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(not config1 >= config2) self.assertTrue(config1 != config2) def testComparison_020(self): """ Test comparison of two identical objects, peers differs. 
""" config1 = Config() config1.reference = ReferenceConfig() config1.extensions = ExtensionsConfig() config1.options = OptionsConfig() config1.peers = PeersConfig() config1.collect = CollectConfig() config1.stage = StageConfig() config1.store = StoreConfig() config1.purge = PurgeConfig() config2 = Config() config2.reference = ReferenceConfig() config2.extensions = ExtensionsConfig() config2.options = OptionsConfig() config2.peers = PeersConfig(localPeers=[LocalPeer(), ]) config2.collect = CollectConfig() config2.stage = StageConfig() config2.store = StoreConfig() config2.purge = PurgeConfig() self.assertNotEqual(config1, config2) self.assertTrue(not config1 == config2) self.assertTrue(config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(not config1 >= config2) self.assertTrue(config1 != config2) ###################### # Test validate logic ###################### def testValidate_001(self): """ Test validate on an empty reference section. """ config = Config() config.reference = ReferenceConfig() config._validateReference() def testValidate_002(self): """ Test validate on a non-empty reference section, with everything filled in. """ config = Config() config.reference = ReferenceConfig("author", "revision", "description", "generator") config._validateReference() def testValidate_003(self): """ Test validate on an empty extensions section, with a None list. """ config = Config() config.extensions = ExtensionsConfig() config.extensions.orderMode = None config.extensions.actions = None config._validateExtensions() def testValidate_004(self): """ Test validate on an empty extensions section, with [] for the list. """ config = Config() config.extensions = ExtensionsConfig() config.extensions.orderMode = None config.extensions.actions = [] config._validateExtensions() def testValidate_005(self): """ Test validate on an a extensions section, with one empty extended action. 
""" config = Config() config.extensions = ExtensionsConfig() config.extensions.orderMode = None config.extensions.actions = [ExtendedAction(), ] self.assertRaises(ValueError, config._validateExtensions) def testValidate_006(self): """ Test validate on an a extensions section, with one extended action that has only a name. """ config = Config() config.extensions = ExtensionsConfig() config.extensions.orderMode = None config.extensions.actions = [ExtendedAction(name="name"), ] self.assertRaises(ValueError, config._validateExtensions) def testValidate_007(self): """ Test validate on an a extensions section, with one extended action that has only a module. """ config = Config() config.extensions = ExtensionsConfig() config.extensions.orderMode = None config.extensions.actions = [ExtendedAction(module="module"), ] self.assertRaises(ValueError, config._validateExtensions) def testValidate_008(self): """ Test validate on an a extensions section, with one extended action that has only a function. """ config = Config() config.extensions = ExtensionsConfig() config.extensions.orderMode = None config.extensions.actions = [ExtendedAction(function="function"), ] self.assertRaises(ValueError, config._validateExtensions) def testValidate_009(self): """ Test validate on an a extensions section, with one extended action that has only an index. """ config = Config() config.extensions = ExtensionsConfig() config.extensions.orderMode = None config.extensions.actions = [ExtendedAction(index=12), ] self.assertRaises(ValueError, config._validateExtensions) def testValidate_010(self): """ Test validate on an a extensions section, with one extended action that makes sense, index order mode. 
""" config = Config() config.extensions = ExtensionsConfig() config.extensions.orderMode = "index" config.extensions.actions = [ ExtendedAction("one", "two", "three", 100) ] config._validateExtensions() def testValidate_011(self): """ Test validate on an a extensions section, with one extended action that makes sense, dependency order mode. """ config = Config() config.extensions = ExtensionsConfig() config.extensions.orderMode = "dependency" config.extensions.actions = [ ExtendedAction("one", "two", "three", dependencies=ActionDependencies()) ] config._validateExtensions() def testValidate_012(self): """ Test validate on an a extensions section, with several extended actions that make sense for various kinds of order modes. """ config = Config() config.extensions = ExtensionsConfig() config.extensions.orderMode = None config.extensions.actions = [ ExtendedAction("a", "b", "c", 1), ExtendedAction("e", "f", "g", 10), ] config._validateExtensions() config.extensions = ExtensionsConfig() config.extensions.orderMode = "index" config.extensions.actions = [ ExtendedAction("a", "b", "c", 1), ExtendedAction("e", "f", "g", 10), ] config._validateExtensions() config.extensions = ExtensionsConfig() config.extensions.orderMode = "dependency" config.extensions.actions = [ ExtendedAction("a", "b", "c", dependencies=ActionDependencies()), ExtendedAction("e", "f", "g", dependencies=ActionDependencies()), ] config._validateExtensions() def testValidate_012a(self): """ Test validate on an a extensions section, with several extended actions that don't have the proper ordering modes. 
""" config = Config() config.extensions = ExtensionsConfig() config.extensions.orderMode = None config.extensions.actions = [ ExtendedAction("a", "b", "c", dependencies=ActionDependencies()), ExtendedAction("e", "f", "g", dependencies=ActionDependencies()), ] self.assertRaises(ValueError, config._validateExtensions) config.extensions = ExtensionsConfig() config.extensions.orderMode = "index" config.extensions.actions = [ ExtendedAction("a", "b", "c", dependencies=ActionDependencies()), ExtendedAction("e", "f", "g", dependencies=ActionDependencies()), ] self.assertRaises(ValueError, config._validateExtensions) config.extensions = ExtensionsConfig() config.extensions.orderMode = "dependency" config.extensions.actions = [ ExtendedAction("a", "b", "c", 100), ExtendedAction("e", "f", "g", 12), ] self.assertRaises(ValueError, config._validateExtensions) config.extensions = ExtensionsConfig() config.extensions.orderMode = "index" config.extensions.actions = [ ExtendedAction("a", "b", "c", 12), ExtendedAction("e", "f", "g", dependencies=ActionDependencies()), ] self.assertRaises(ValueError, config._validateExtensions) config.extensions = ExtensionsConfig() config.extensions.orderMode = "dependency" config.extensions.actions = [ ExtendedAction("a", "b", "c", dependencies=ActionDependencies()), ExtendedAction("e", "f", "g", 12), ] self.assertRaises(ValueError, config._validateExtensions) def testValidate_013(self): """ Test validate on an empty options section. """ config = Config() config.options = OptionsConfig() self.assertRaises(ValueError, config._validateOptions) def testValidate_014(self): """ Test validate on a non-empty options section, with everything filled in. """ config = Config() config.options = OptionsConfig("monday", "/whatever", "user", "group", "command") config._validateOptions() def testValidate_015(self): """ Test validate on a non-empty options section, with individual items missing. 
""" config = Config() config.options = OptionsConfig("monday", "/whatever", "user", "group", "command") config._validateOptions() config.options = OptionsConfig("monday", "/whatever", "user", "group", "command") config.options.startingDay = None self.assertRaises(ValueError, config._validateOptions) config.options = OptionsConfig("monday", "/whatever", "user", "group", "command") config.options.workingDir = None self.assertRaises(ValueError, config._validateOptions) config.options = OptionsConfig("monday", "/whatever", "user", "group", "command") config.options.backupUser = None self.assertRaises(ValueError, config._validateOptions) config.options = OptionsConfig("monday", "/whatever", "user", "group", "command") config.options.backupGroup = None self.assertRaises(ValueError, config._validateOptions) config.options = OptionsConfig("monday", "/whatever", "user", "group", "command") config.options.rcpCommand = None self.assertRaises(ValueError, config._validateOptions) def testValidate_016(self): """ Test validate on an empty collect section. """ config = Config() config.collect = CollectConfig() self.assertRaises(ValueError, config._validateCollect) def testValidate_017(self): """ Test validate on collect section containing only targetDir. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config._validateCollect() # we no longer validate that at least one file or dir is required here def testValidate_018(self): """ Test validate on collect section containing only targetDir and one collectDirs entry that is empty. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectDirs = [ CollectDir(), ] self.assertRaises(ValueError, config._validateCollect) def testValidate_018a(self): """ Test validate on collect section containing only targetDir and one collectFiles entry that is empty. 
""" config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectFiles = [ CollectFile(), ] self.assertRaises(ValueError, config._validateCollect) def testValidate_019(self): """ Test validate on collect section containing only targetDir and one collectDirs entry with only a path. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectDirs = [ CollectDir(absolutePath="/stuff"), ] self.assertRaises(ValueError, config._validateCollect) def testValidate_019a(self): """ Test validate on collect section containing only targetDir and one collectFiles entry with only a path. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectFiles = [ CollectFile(absolutePath="/stuff"), ] self.assertRaises(ValueError, config._validateCollect) def testValidate_020(self): """ Test validate on collect section containing only targetDir and one collectDirs entry with path, collect mode, archive mode and ignore file. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectDirs = [ CollectDir(absolutePath="/stuff", collectMode="incr", archiveMode="tar", ignoreFile="i"), ] config._validateCollect() def testValidate_020a(self): """ Test validate on collect section containing only targetDir and one collectFiles entry with path, collect mode and archive mode. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectFiles = [ CollectFile(absolutePath="/stuff", collectMode="incr", archiveMode="tar"), ] config._validateCollect() def testValidate_021(self): """ Test validate on collect section containing targetDir, collect mode, archive mode and ignore file, and one collectDirs entry with only a path. 
""" config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectMode = "incr" config.collect.archiveMode = "tar" config.collect.ignoreFile = "ignore" config.collect.collectDirs = [ CollectDir(absolutePath="/stuff"), ] config._validateCollect() def testValidate_021a(self): """ Test validate on collect section containing targetDir, collect mode, archive mode and ignore file, and one collectFiles entry with only a path. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectMode = "incr" config.collect.archiveMode = "tar" config.collect.ignoreFile = "ignore" config.collect.collectFiles = [ CollectFile(absolutePath="/stuff"), ] config._validateCollect() def testValidate_022(self): """ Test validate on collect section containing targetDir, but with collect mode, archive mode and ignore file mixed between main section and directories. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.archiveMode = "tar" config.collect.ignoreFile = "ignore" config.collect.collectDirs = [ CollectDir(absolutePath="/stuff", collectMode="incr", ignoreFile="i"), ] config._validateCollect() config.collect.collectDirs.append(CollectDir(absolutePath="/stuff2")) self.assertRaises(ValueError, config._validateCollect) config.collect.collectDirs[-1].collectMode = "daily" config._validateCollect() def testValidate_022a(self): """ Test validate on collect section containing targetDir, but with collect mode, and archive mode mixed between main section and directories. 
""" config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.archiveMode = "tar" config.collect.collectFiles = [ CollectFile(absolutePath="/stuff", collectMode="incr", archiveMode="targz"), ] config._validateCollect() config.collect.collectFiles.append(CollectFile(absolutePath="/stuff2")) self.assertRaises(ValueError, config._validateCollect) config.collect.collectFiles[-1].collectMode = "daily" config._validateCollect() def testValidate_023(self): """ Test validate on an empty stage section. """ config = Config() config.stage = StageConfig() self.assertRaises(ValueError, config._validateStage) def testValidate_024(self): """ Test validate on stage section containing only targetDir and None for the lists. """ config = Config() config.stage = StageConfig() config.stage.targetDir = "/whatever" config.stage.localPeers = None config.stage.remotePeers = None self.assertRaises(ValueError, config._validateStage) def testValidate_025(self): """ Test validate on stage section containing only targetDir and [] for the lists. """ config = Config() config.stage = StageConfig() config.stage.targetDir = "/whatever" config.stage.localPeers = [] config.stage.remotePeers = [] self.assertRaises(ValueError, config._validateStage) def testValidate_026(self): """ Test validate on stage section containing targetDir and one local peer that is empty. """ config = Config() config.stage = StageConfig() config.stage.targetDir = "/whatever" config.stage.localPeers = [LocalPeer(), ] self.assertRaises(ValueError, config._validateStage) def testValidate_027(self): """ Test validate on stage section containing targetDir and one local peer with only a name. 
""" config = Config() config.stage = StageConfig() config.stage.targetDir = "/whatever" config.stage.localPeers = [LocalPeer(name="name"), ] self.assertRaises(ValueError, config._validateStage) def testValidate_028(self): """ Test validate on stage section containing targetDir and one local peer with a name and path, None for remote list. """ config = Config() config.stage = StageConfig() config.stage.targetDir = "/whatever" config.stage.localPeers = [LocalPeer(name="name", collectDir="/somewhere"), ] config.stage.remotePeers = None config._validateStage() def testValidate_029(self): """ Test validate on stage section containing targetDir and one local peer with a name and path, [] for remote list. """ config = Config() config.stage = StageConfig() config.stage.targetDir = "/whatever" config.stage.localPeers = [LocalPeer(name="name", collectDir="/somewhere"), ] config.stage.remotePeers = [] config._validateStage() def testValidate_030(self): """ Test validate on stage section containing targetDir and one remote peer that is empty. """ config = Config() config.stage = StageConfig() config.stage.targetDir = "/whatever" config.stage.remotePeers = [RemotePeer(), ] self.assertRaises(ValueError, config._validateStage) def testValidate_031(self): """ Test validate on stage section containing targetDir and one remote peer with only a name. """ config = Config() config.stage = StageConfig() config.stage.targetDir = "/whatever" config.stage.remotePeers = [RemotePeer(name="blech"), ] self.assertRaises(ValueError, config._validateStage) def testValidate_032(self): """ Test validate on stage section containing targetDir and one remote peer with a name and path, None for local list. 
""" config = Config() config.stage = StageConfig() config.stage.targetDir = "/whatever" config.stage.localPeers = None config.stage.remotePeers = [RemotePeer(name="blech", collectDir="/some/path/to/data"), ] self.assertRaises(ValueError, config._validateStage) config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config._validateStage() config.options = None self.assertRaises(ValueError, config._validateStage) config.stage.remotePeers[-1].remoteUser = "remote" config.stage.remotePeers[-1].rcpCommand = "command" config._validateStage() def testValidate_033(self): """ Test validate on stage section containing targetDir and one remote peer with a name and path, [] for local list. """ config = Config() config.stage = StageConfig() config.stage.targetDir = "/whatever" config.stage.localPeers = [] config.stage.remotePeers = [RemotePeer(name="blech", collectDir="/some/path/to/data"), ] self.assertRaises(ValueError, config._validateStage) config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config._validateStage() config.options = None self.assertRaises(ValueError, config._validateStage) config.stage.remotePeers[-1].remoteUser = "remote" config.stage.remotePeers[-1].rcpCommand = "command" config._validateStage() def testValidate_034(self): """ Test validate on stage section containing targetDir and one remote and one local peer. 
""" config = Config() config.stage = StageConfig() config.stage.targetDir = "/whatever" config.stage.localPeers = [LocalPeer(name="metoo", collectDir="/nowhere"), ] config.stage.remotePeers = [RemotePeer(name="blech", collectDir="/some/path/to/data"), ] self.assertRaises(ValueError, config._validateStage) config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config._validateStage() config.options = None self.assertRaises(ValueError, config._validateStage) config.stage.remotePeers[-1].remoteUser = "remote" config.stage.remotePeers[-1].rcpCommand = "command" config._validateStage() def testValidate_035(self): """ Test validate on stage section containing targetDir multiple remote and local peers. """ config = Config() config.stage = StageConfig() config.stage.targetDir = "/whatever" config.stage.localPeers = [LocalPeer(name="metoo", collectDir="/nowhere"), LocalPeer("one", "/two"), LocalPeer("a", "/b"), ] config.stage.remotePeers = [RemotePeer(name="blech", collectDir="/some/path/to/data"), RemotePeer("c", "/d"), ] self.assertRaises(ValueError, config._validateStage) config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config._validateStage() config.options = None self.assertRaises(ValueError, config._validateStage) config.stage.remotePeers[-1].remoteUser = "remote" config.stage.remotePeers[-1].rcpCommand = "command" self.assertRaises(ValueError, config._validateStage) config.stage.remotePeers[0].remoteUser = "remote" config.stage.remotePeers[0].rcpCommand = "command" config._validateStage() def testValidate_036(self): """ Test validate on an empty store section. """ config = Config() config.store = StoreConfig() self.assertRaises(ValueError, config._validateStore) def testValidate_037(self): """ Test validate on store section with everything filled in. 
""" config = Config() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "cdr-74" config.store.deviceType = "cdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True config._validateStore() config = Config() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "cdrw-74" config.store.deviceType = "cdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True config._validateStore() config = Config() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "cdr-80" config.store.deviceType = "cdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True config._validateStore() config = Config() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "cdrw-80" config.store.deviceType = "cdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True config._validateStore() config = Config() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "dvd+r" config.store.deviceType = "dvdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True 
config._validateStore() config = Config() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "dvd+rw" config.store.deviceType = "dvdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True config._validateStore() def testValidate_038(self): """ Test validate on store section missing one each of required fields. """ config = Config() config.store = StoreConfig() config.store.mediaType = "cdr-74" config.store.deviceType = "cdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True self.assertRaises(ValueError, config._validateStore) config.store = StoreConfig() config.store.sourceDir = "/source" config.store.deviceType = "cdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True self.assertRaises(ValueError, config._validateStore) config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "cdr-74" config.store.deviceType = "cdwriter" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True self.assertRaises(ValueError, config._validateStore) def testValidate_039(self): """ Test validate on store section missing one each of device type, drive speed and capacity mode and the booleans. 
""" config = Config() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "cdr-74" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True config._validateStore() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "cdr-74" config.store.deviceType = "cdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True config._validateStore() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "cdr-74" config.store.deviceType = "cdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True config._validateStore() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "cdr-74" config.store.deviceType = "cdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True config._validateStore() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "cdr-74" config.store.deviceType = "cdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.warnMidnite = True config.store.noEject = True config._validateStore() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "cdr-74" config.store.deviceType = "cdwriter" config.store.devicePath = "/dev/cdrw" 
config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True config._validateStore() def testValidate_039a(self): """ Test validate on store section with everything filled in, but mismatch device/media. """ config = Config() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "cdr-74" config.store.deviceType = "dvdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True self.assertRaises(ValueError, config._validateStore) config = Config() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "cdrw-74" config.store.deviceType = "dvdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True self.assertRaises(ValueError, config._validateStore) config = Config() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "cdr-80" config.store.deviceType = "dvdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True self.assertRaises(ValueError, config._validateStore) config = Config() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "cdrw-80" config.store.deviceType = "dvdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True 
self.assertRaises(ValueError, config._validateStore) config = Config() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "dvd+rw" config.store.deviceType = "cdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True self.assertRaises(ValueError, config._validateStore) config = Config() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "dvd+r" config.store.deviceType = "cdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True self.assertRaises(ValueError, config._validateStore) def testValidate_040(self): """ Test validate on an empty purge section, with a None list. """ config = Config() config.purge = PurgeConfig() config.purge.purgeDirs = None config._validatePurge() def testValidate_041(self): """ Test validate on an empty purge section, with [] for the list. """ config = Config() config.purge = PurgeConfig() config.purge.purgeDirs = [] config._validatePurge() def testValidate_042(self): """ Test validate on an a purge section, with one empty purge dir. """ config = Config() config.purge = PurgeConfig() config.purge.purgeDirs = [PurgeDir(), ] self.assertRaises(ValueError, config._validatePurge) def testValidate_043(self): """ Test validate on an a purge section, with one purge dir that has only a path. """ config = Config() config.purge = PurgeConfig() config.purge.purgeDirs = [PurgeDir(absolutePath="/whatever"), ] self.assertRaises(ValueError, config._validatePurge) def testValidate_044(self): """ Test validate on an a purge section, with one purge dir that has only retain days. 
""" config = Config() config.purge = PurgeConfig() config.purge.purgeDirs = [PurgeDir(retainDays=3), ] self.assertRaises(ValueError, config._validatePurge) def testValidate_045(self): """ Test validate on an a purge section, with one purge dir that makes sense. """ config = Config() config.purge = PurgeConfig() config.purge.purgeDirs = [ PurgeDir(absolutePath="/whatever", retainDays=4), ] config._validatePurge() def testValidate_046(self): """ Test validate on an a purge section, with several purge dirs that make sense. """ config = Config() config.purge = PurgeConfig() config.purge.purgeDirs = [ PurgeDir("/whatever", 4), PurgeDir("/etc/different", 12), ] config._validatePurge() def testValidate_047(self): """ Test that we catch a duplicate extended action name. """ config = Config() config.extensions = ExtensionsConfig() config.extensions.orderMode = "dependency" config.extensions.actions = [ ExtendedAction("unique1", "b", "c", dependencies=ActionDependencies()), ExtendedAction("unique2", "f", "g", dependencies=ActionDependencies()), ] config._validateExtensions() config.extensions.actions = [ ExtendedAction("duplicate", "b", "c", dependencies=ActionDependencies()), ExtendedAction("duplicate", "f", "g", dependencies=ActionDependencies()), ] self.assertRaises(ValueError, config._validateExtensions) def testValidate_048(self): """ Test that we catch a duplicate local peer name in stage configuration. 
""" config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config.stage = StageConfig() config.stage.targetDir = "/whatever" config.stage.localPeers = [ LocalPeer(name="unique1", collectDir="/nowhere"), LocalPeer(name="unique2", collectDir="/nowhere"), ] config._validateStage() config.stage.localPeers = [ LocalPeer(name="duplicate", collectDir="/nowhere"), LocalPeer(name="duplicate", collectDir="/nowhere"), ] self.assertRaises(ValueError, config._validateStage) def testValidate_049(self): """ Test that we catch a duplicate remote peer name in stage configuration. """ config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config.stage = StageConfig() config.stage.targetDir = "/whatever" config.stage.remotePeers = [ RemotePeer(name="unique1", collectDir="/some/path/to/data"), RemotePeer(name="unique2", collectDir="/some/path/to/data"), ] config._validateStage() config.stage.remotePeers = [ RemotePeer(name="duplicate", collectDir="/some/path/to/data"), RemotePeer(name="duplicate", collectDir="/some/path/to/data"), ] self.assertRaises(ValueError, config._validateStage) def testValidate_050(self): """ Test that we catch a duplicate peer name duplicated between remote and local in stage configuration. """ config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config.stage = StageConfig() config.stage.targetDir = "/whatever" config.stage.localPeers = [ LocalPeer(name="unique1", collectDir="/nowhere"), ] config.stage.remotePeers = [ RemotePeer(name="unique2", collectDir="/some/path/to/data"), ] config._validateStage() config.stage.localPeers = [ LocalPeer(name="duplicate", collectDir="/nowhere"), ] config.stage.remotePeers = [ RemotePeer(name="duplicate", collectDir="/some/path/to/data"), ] self.assertRaises(ValueError, config._validateStage) def testValidate_051(self): """ Test validate on a None peers section. 
""" config = Config() config.peers = None config._validatePeers() def testValidate_052(self): """ Test validate on an empty peers section. """ config = Config() config.peers = PeersConfig() self.assertRaises(ValueError, config._validatePeers) def testValidate_053(self): """ Test validate on peers section containing None for the lists. """ config = Config() config.peers = PeersConfig() config.peers.localPeers = None config.peers.remotePeers = None self.assertRaises(ValueError, config._validatePeers) def testValidate_054(self): """ Test validate on peers section containing [] for the lists. """ config = Config() config.peers = PeersConfig() config.peers.localPeers = [] config.peers.remotePeers = [] self.assertRaises(ValueError, config._validatePeers) def testValidate_055(self): """ Test validate on peers section containing one local peer that is empty. """ config = Config() config.peers = PeersConfig() config.peers.localPeers = [LocalPeer(), ] self.assertRaises(ValueError, config._validatePeers) def testValidate_056(self): """ Test validate on peers section containing local peer with only a name. """ config = Config() config.peers = PeersConfig() config.peers.localPeers = [LocalPeer(name="name"), ] self.assertRaises(ValueError, config._validatePeers) def testValidate_057(self): """ Test validate on peers section containing one local peer with a name and path, None for remote list. """ config = Config() config.peers = PeersConfig() config.peers.localPeers = [LocalPeer(name="name", collectDir="/somewhere"), ] config.peers.remotePeers = None config._validatePeers() def testValidate_058(self): """ Test validate on peers section containing one local peer with a name and path, [] for remote list. 
""" config = Config() config.peers = PeersConfig() config.peers.localPeers = [LocalPeer(name="name", collectDir="/somewhere"), ] config.peers.remotePeers = [] config._validatePeers() def testValidate_059(self): """ Test validate on peers section containing one remote peer that is empty. """ config = Config() config.peers = PeersConfig() config.peers.remotePeers = [RemotePeer(), ] self.assertRaises(ValueError, config._validatePeers) def testValidate_060(self): """ Test validate on peers section containing one remote peer with only a name. """ config = Config() config.peers = PeersConfig() config.peers.remotePeers = [RemotePeer(name="blech"), ] self.assertRaises(ValueError, config._validatePeers) def testValidate_061(self): """ Test validate on peers section containing one remote peer with a name and path, None for local list. """ config = Config() config.peers = PeersConfig() config.peers.localPeers = None config.peers.remotePeers = [RemotePeer(name="blech", collectDir="/some/path/to/data"), ] self.assertRaises(ValueError, config._validatePeers) config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config._validatePeers() config.options = None self.assertRaises(ValueError, config._validatePeers) config.peers.remotePeers[-1].remoteUser = "remote" config.peers.remotePeers[-1].rcpCommand = "command" config._validatePeers() def testValidate_062(self): """ Test validate on peers section containing one remote peer with a name and path, [] for local list. 
""" config = Config() config.peers = PeersConfig() config.peers.localPeers = [] config.peers.remotePeers = [RemotePeer(name="blech", collectDir="/some/path/to/data"), ] self.assertRaises(ValueError, config._validatePeers) config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config._validatePeers() config.options = None self.assertRaises(ValueError, config._validatePeers) config.peers.remotePeers[-1].remoteUser = "remote" config.peers.remotePeers[-1].rcpCommand = "command" config._validatePeers() def testValidate_063(self): """ Test validate on peers section containing one remote and one local peer. """ config = Config() config.peers = PeersConfig() config.peers.localPeers = [LocalPeer(name="metoo", collectDir="/nowhere"), ] config.peers.remotePeers = [RemotePeer(name="blech", collectDir="/some/path/to/data"), ] self.assertRaises(ValueError, config._validatePeers) config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config._validatePeers() config.options = None self.assertRaises(ValueError, config._validatePeers) config.peers.remotePeers[-1].remoteUser = "remote" config.peers.remotePeers[-1].rcpCommand = "command" config._validatePeers() def testValidate_064(self): """ Test validate on peers section containing multiple remote and local peers. 
""" config = Config() config.peers = PeersConfig() config.peers.localPeers = [LocalPeer(name="metoo", collectDir="/nowhere"), LocalPeer("one", "/two"), LocalPeer("a", "/b"), ] config.peers.remotePeers = [RemotePeer(name="blech", collectDir="/some/path/to/data"), RemotePeer("c", "/d"), ] self.assertRaises(ValueError, config._validatePeers) config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config._validatePeers() config.options = None self.assertRaises(ValueError, config._validatePeers) config.peers.remotePeers[-1].remoteUser = "remote" config.peers.remotePeers[-1].rcpCommand = "command" self.assertRaises(ValueError, config._validatePeers) config.peers.remotePeers[0].remoteUser = "remote" config.peers.remotePeers[0].rcpCommand = "command" config._validatePeers() def testValidate_065(self): """ Test that we catch a duplicate local peer name in peers configuration. """ config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config.peers = PeersConfig() config.peers.localPeers = [ LocalPeer(name="unique1", collectDir="/nowhere"), LocalPeer(name="unique2", collectDir="/nowhere"), ] config._validatePeers() config.peers.localPeers = [ LocalPeer(name="duplicate", collectDir="/nowhere"), LocalPeer(name="duplicate", collectDir="/nowhere"), ] self.assertRaises(ValueError, config._validatePeers) def testValidate_066(self): """ Test that we catch a duplicate remote peer name in peers configuration. 
""" config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config.peers = PeersConfig() config.peers.remotePeers = [ RemotePeer(name="unique1", collectDir="/some/path/to/data"), RemotePeer(name="unique2", collectDir="/some/path/to/data"), ] config._validatePeers() config.peers.remotePeers = [ RemotePeer(name="duplicate", collectDir="/some/path/to/data"), RemotePeer(name="duplicate", collectDir="/some/path/to/data"), ] self.assertRaises(ValueError, config._validatePeers) def testValidate_067(self): """ Test that we catch a duplicate peer name duplicated between remote and local in peers configuration. """ config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config.peers = PeersConfig() config.peers.localPeers = [ LocalPeer(name="unique1", collectDir="/nowhere"), ] config.peers.remotePeers = [ RemotePeer(name="unique2", collectDir="/some/path/to/data"), ] config._validatePeers() config.peers.localPeers = [ LocalPeer(name="duplicate", collectDir="/nowhere"), ] config.peers.remotePeers = [ RemotePeer(name="duplicate", collectDir="/some/path/to/data"), ] self.assertRaises(ValueError, config._validatePeers) def testValidate_068(self): """ Test that stage peers can be None, if peers configuration is not None. """ config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config.peers = PeersConfig() config.stage = StageConfig() config.peers.localPeers = [ LocalPeer(name="unique1", collectDir="/nowhere"), ] config.peers.remotePeers = [ RemotePeer(name="unique2", collectDir="/some/path/to/data"), ] config.stage.targetDir = "/whatever" config.stage.localPeers = None config.stage.remotePeers = None config._validatePeers() config._validateStage() def testValidate_069(self): """ Test that stage peers can be empty lists, if peers configuration is not None. 
""" config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config.peers = PeersConfig() config.stage = StageConfig() config.peers.localPeers = [ LocalPeer(name="unique1", collectDir="/nowhere"), ] config.peers.remotePeers = [ RemotePeer(name="unique2", collectDir="/some/path/to/data"), ] config.stage.targetDir = "/whatever" config.stage.localPeers = [] config.stage.remotePeers = [] config._validatePeers() config._validateStage() def testValidate_070(self): """ Test that staging local peers must be valid if filled in, even if peers configuration is not None. """ config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config.peers = PeersConfig() config.stage = StageConfig() config.peers.localPeers = [ LocalPeer(name="unique1", collectDir="/nowhere"), ] config.peers.remotePeers = [ RemotePeer(name="unique2", collectDir="/some/path/to/data"), ] config.stage.targetDir = "/whatever" config.stage.localPeers = [LocalPeer(), ] # empty local peer is invalid, so validation should catch it config.stage.remotePeers = [] config._validatePeers() self.assertRaises(ValueError, config._validateStage) def testValidate_071(self): """ Test that staging remote peers must be valid if filled in, even if peers configuration is not None. 
""" config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config.peers = PeersConfig() config.stage = StageConfig() config.peers.localPeers = [ LocalPeer(name="unique1", collectDir="/nowhere"), ] config.peers.remotePeers = [ RemotePeer(name="unique2", collectDir="/some/path/to/data"), ] config.stage.targetDir = "/whatever" config.stage.localPeers = [] config.stage.remotePeers = [RemotePeer(), ] # empty remote peer is invalid, so validation should catch it config._validatePeers() self.assertRaises(ValueError, config._validateStage) def testValidate_072(self): """ Test that staging local and remote peers must be valid if filled in, even if peers configuration is not None. """ config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config.peers = PeersConfig() config.stage = StageConfig() config.peers.localPeers = [ LocalPeer(name="unique1", collectDir="/nowhere"), ] config.peers.remotePeers = [ RemotePeer(name="unique2", collectDir="/some/path/to/data"), ] config.stage.targetDir = "/whatever" config.stage.localPeers = [LocalPeer(), ] # empty local peer is invalid, so validation should catch it config.stage.remotePeers = [RemotePeer(), ] # empty remote peer is invalid, so validation should catch it config._validatePeers() self.assertRaises(ValueError, config._validateStage) def testValidate_073(self): """ Confirm that remote peer is required to have backup user if not set in options. 
""" config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="rcp", rshCommand="rsh", cbackCommand="cback", managedActions=["collect"], ) config.peers = PeersConfig() config.peers.localPeers = [] config.peers.remotePeers = [ RemotePeer(name="remote", collectDir="/path"), ] config._validatePeers() config.options.backupUser = None self.assertRaises(ValueError, config._validatePeers) config.peers.remotePeers[0].remoteUser = "ken" config._validatePeers() def testValidate_074(self): """ Confirm that remote peer is required to have rcp command if not set in options. """ config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="rcp", rshCommand="rsh", cbackCommand="cback", managedActions=["collect"], ) config.peers = PeersConfig() config.peers.localPeers = [] config.peers.remotePeers = [ RemotePeer(name="remote", collectDir="/path"), ] config._validatePeers() config.options.rcpCommand = None self.assertRaises(ValueError, config._validatePeers) config.peers.remotePeers[0].rcpCommand = "rcp" config._validatePeers() def testValidate_075(self): """ Confirm that remote managed peer is required to have rsh command if not set in options. """ config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="rcp", rshCommand="rsh", cbackCommand="cback", managedActions=["collect"], ) config.peers = PeersConfig() config.peers.localPeers = [] config.peers.remotePeers = [ RemotePeer(name="remote", collectDir="/path"), ] config._validatePeers() config.options.rshCommand = None config._validatePeers() config.peers.remotePeers[0].managed = True self.assertRaises(ValueError, config._validatePeers) config.peers.remotePeers[0].rshCommand = "rsh" config._validatePeers() def testValidate_076(self): """ Confirm that remote managed peer is required to have cback command if not set in options. 
""" config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="rcp", rshCommand="rsh", cbackCommand="cback", managedActions=["collect"], ) config.peers = PeersConfig() config.peers.localPeers = [] config.peers.remotePeers = [ RemotePeer(name="remote", collectDir="/path"), ] config._validatePeers() config.options.cbackCommand = None config._validatePeers() config.peers.remotePeers[0].managed = True self.assertRaises(ValueError, config._validatePeers) config.peers.remotePeers[0].cbackCommand = "cback" config._validatePeers() def testValidate_077(self): """ Confirm that remote managed peer is required to have managed actions list if not set in options. """ config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="rcp", rshCommand="rsh", cbackCommand="cback", managedActions=["collect"], ) config.peers = PeersConfig() config.peers.localPeers = [] config.peers.remotePeers = [ RemotePeer(name="remote", collectDir="/path"), ] config._validatePeers() config.options.managedActions = None config._validatePeers() config.peers.remotePeers[0].managed = True self.assertRaises(ValueError, config._validatePeers) config.options.managedActions = [] self.assertRaises(ValueError, config._validatePeers) config.peers.remotePeers[0].managedActions = ["collect", ] config._validatePeers() def testValidate_078(self): """ Test case where dereference is True but link depth is None. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectDirs = [ CollectDir(absolutePath="/stuff", collectMode="incr", archiveMode="tar", ignoreFile="i", linkDepth=None, dereference=True), ] self.assertRaises(ValueError, config._validateCollect) def testValidate_079(self): """ Test case where dereference is True but link depth is zero. 
""" config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectDirs = [ CollectDir(absolutePath="/stuff", collectMode="incr", archiveMode="tar", ignoreFile="i", linkDepth=0, dereference=True), ] self.assertRaises(ValueError, config._validateCollect) def testValidate_080(self): """ Test case where dereference is False and linkDepth is None. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectDirs = [ CollectDir(absolutePath="/stuff", collectMode="incr", archiveMode="tar", ignoreFile="i", linkDepth=None, dereference=False), ] config._validateCollect() def testValidate_081(self): """ Test case where dereference is None and linkDepth is None. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectDirs = [ CollectDir(absolutePath="/stuff", collectMode="incr", archiveMode="tar", ignoreFile="i", linkDepth=None, dereference=None), ] config._validateCollect() def testValidate_082(self): """ Test case where dereference is False and linkDepth is zero. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectDirs = [ CollectDir(absolutePath="/stuff", collectMode="incr", archiveMode="tar", ignoreFile="i", linkDepth=0, dereference=False), ] config._validateCollect() def testValidate_083(self): """ Test case where dereference is None and linkDepth is zero. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectDirs = [ CollectDir(absolutePath="/stuff", collectMode="incr", archiveMode="tar", ignoreFile="i", linkDepth=0, dereference=None), ] config._validateCollect() ############################ # Test parsing of documents ############################ def testParse_001(self): """ Parse empty config document, validate=False. 
""" path = self.resources["cback.conf.2"] config = Config(xmlPath=path, validate=False) expected = Config() self.assertEqual(expected, config) def testParse_002(self): """ Parse empty config document, validate=True. """ path = self.resources["cback.conf.2"] self.assertRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_003(self): """ Parse config document containing only a reference section, containing only required fields, validate=False. """ path = self.resources["cback.conf.3"] config = Config(xmlPath=path, validate=False) expected = Config() expected.reference = ReferenceConfig() self.assertEqual(expected, config) def testParse_004(self): """ Parse config document containing only a reference section, containing only required fields, validate=True. """ path = self.resources["cback.conf.3"] self.assertRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_005(self): """ Parse config document containing only a reference section, containing all required and optional fields, validate=False. """ path = self.resources["cback.conf.4"] config = Config(xmlPath=path, validate=False) expected = Config() expected.reference = ReferenceConfig("$Author: pronovic $", "1.3", "Sample configuration", "Generated by hand.") self.assertEqual(expected, config) def testParse_006(self): """ Parse config document containing only a reference section, containing all required and optional fields, validate=True. """ path = self.resources["cback.conf.4"] self.assertRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_007(self): """ Parse config document containing only a extensions section, containing only required fields, validate=False. 
""" path = self.resources["cback.conf.16"] config = Config(xmlPath=path, validate=False) expected = Config() expected.extensions = ExtensionsConfig() expected.extensions.actions = [] expected.extensions.actions.append(ExtendedAction("example", "something.whatever", "example", 1)) self.assertEqual(expected, config) def testParse_008(self): """ Parse config document containing only a extensions section, containing only required fields, validate=True. """ path = self.resources["cback.conf.16"] self.assertRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_009(self): """ Parse config document containing only a extensions section, containing all fields, order mode is "index", validate=False. """ path = self.resources["cback.conf.18"] config = Config(xmlPath=path, validate=False) expected = Config() expected.extensions = ExtensionsConfig() expected.extensions.orderMode = "index" expected.extensions.actions = [] expected.extensions.actions.append(ExtendedAction("example", "something.whatever", "example", 1)) self.assertEqual(expected, config) def testParse_009a(self): """ Parse config document containing only a extensions section, containing all fields, order mode is "dependency", validate=False. 
""" path = self.resources["cback.conf.19"] config = Config(xmlPath=path, validate=False) expected = Config() expected.extensions = ExtensionsConfig() expected.extensions.orderMode = "dependency" expected.extensions.actions = [] expected.extensions.actions.append(ExtendedAction("sysinfo", "CedarBackup3.extend.sysinfo", "executeAction", index=None, dependencies=ActionDependencies())) expected.extensions.actions.append(ExtendedAction("mysql", "CedarBackup3.extend.mysql", "executeAction", index=None, dependencies=ActionDependencies())) expected.extensions.actions.append(ExtendedAction("postgresql", "CedarBackup3.extend.postgresql", "executeAction", index=None, dependencies=ActionDependencies(beforeList=["one", ]))) expected.extensions.actions.append(ExtendedAction("subversion", "CedarBackup3.extend.subversion", "executeAction", index=None, dependencies=ActionDependencies(afterList=["one", ]))) expected.extensions.actions.append(ExtendedAction("mbox", "CedarBackup3.extend.mbox", "executeAction", index=None, dependencies=ActionDependencies(beforeList=["one", ], afterList=["one", ]))) expected.extensions.actions.append(ExtendedAction("encrypt", "CedarBackup3.extend.encrypt", "executeAction", index=None, dependencies=ActionDependencies(beforeList=["a", "b", "c", "d", ], afterList=["one", "two", "three", "four", "five", "six", "seven", "eight", ]))) expected.extensions.actions.append(ExtendedAction("amazons3", "CedarBackup3.extend.amazons3", "executeAction", index=None, dependencies=ActionDependencies())) self.assertEqual(expected, config) def testParse_010(self): """ Parse config document containing only a extensions section, containing all fields, order mode is "index", validate=True. """ path = self.resources["cback.conf.18"] self.assertRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_010a(self): """ Parse config document containing only a extensions section, containing all fields, order mode is "dependency", validate=True. 
""" path = self.resources["cback.conf.19"] self.assertRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_011(self): """ Parse config document containing only an options section, containing only required fields, validate=False. """ path = self.resources["cback.conf.5"] config = Config(xmlPath=path, validate=False) expected = Config() expected.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "group", "/usr/bin/scp -1 -B") self.assertEqual(expected, config) def testParse_012(self): """ Parse config document containing only an options section, containing only required fields, validate=True. """ path = self.resources["cback.conf.5"] self.assertRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_013(self): """ Parse config document containing only an options section, containing required and optional fields, validate=False. """ path = self.resources["cback.conf.6"] config = Config(xmlPath=path, validate=False) expected = Config() expected.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "group", "/usr/bin/scp -1 -B", [], [], "/usr/bin/ssh", "/usr/bin/cback", []) expected.options.overrides = [ CommandOverride("mkisofs", "/usr/bin/mkisofs"), CommandOverride("svnlook", "/svnlook"), ] expected.options.hooks = [ PreActionHook("collect", "ls -l"), PostActionHook("stage", "df -k"), ] expected.options.managedActions = [ "collect", "purge", ] self.assertEqual(expected, config) def testParse_014(self): """ Parse config document containing only an options section, containing required and optional fields, validate=True. """ path = self.resources["cback.conf.6"] self.assertRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_015(self): """ Parse config document containing only a collect section, containing only required fields, validate=False. (Case with single collect directory.) 
""" path = self.resources["cback.conf.7"] config = Config(xmlPath=path, validate=False) expected = Config() expected.collect = CollectConfig("/opt/backup/collect", "daily", "tar", ".ignore") expected.collect.collectDirs = [CollectDir(absolutePath="/etc"), ] self.assertEqual(expected, config) def testParse_015a(self): """ Parse config document containing only a collect section, containing only required fields, validate=False. (Case with single collect file.) """ path = self.resources["cback.conf.17"] config = Config(xmlPath=path, validate=False) expected = Config() expected.collect = CollectConfig("/opt/backup/collect", "daily", "tar", ".ignore") expected.collect.collectFiles = [CollectFile(absolutePath="/etc"), ] self.assertEqual(expected, config) def testParse_016(self): """ Parse config document containing only a collect section, containing only required fields, validate=True. (Case with single collect directory.) """ path = self.resources["cback.conf.7"] self.assertRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_016a(self): """ Parse config document containing only a collect section, containing only required fields, validate=True. (Case with single collect file.) """ path = self.resources["cback.conf.17"] self.assertRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_017(self): """ Parse config document containing only a collect section, containing required and optional fields, validate=False. 
""" path = self.resources["cback.conf.8"] config = Config(xmlPath=path, validate=False) expected = Config() expected.collect = CollectConfig("/opt/backup/collect", "daily", "targz", ".cbignore") expected.collect.absoluteExcludePaths = ["/etc/cback.conf", "/etc/X11", ] expected.collect.excludePatterns = [".*tmp.*", r".*\.netscape\/.*", ] expected.collect.collectFiles = [] expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.profile")) expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.kshrc", collectMode="weekly")) expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.aliases", collectMode="daily", archiveMode="tarbz2")) expected.collect.collectDirs = [] expected.collect.collectDirs.append(CollectDir(absolutePath="/root", recursionLevel=1)) expected.collect.collectDirs.append(CollectDir(absolutePath="/tmp", linkDepth=3)) expected.collect.collectDirs.append(CollectDir(absolutePath="/ken", linkDepth=1, dereference=True)) expected.collect.collectDirs.append(CollectDir(absolutePath="/var/log", collectMode="incr")) expected.collect.collectDirs.append(CollectDir(absolutePath="/etc", collectMode="incr", archiveMode="tar", ignoreFile=".ignore")) collectDir = CollectDir(absolutePath="/opt") collectDir.absoluteExcludePaths = [ "/opt/share", "/opt/tmp", ] collectDir.relativeExcludePaths = [ "large", "backup", ] collectDir.excludePatterns = [ r".*\.doc\.*", r".*\.xls\.*", ] expected.collect.collectDirs.append(collectDir) self.assertEqual(expected, config) def testParse_018(self): """ Parse config document containing only a collect section, containing required and optional fields, validate=True. """ path = self.resources["cback.conf.8"] self.assertRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_019(self): """ Parse config document containing only a stage section, containing only required fields, validate=False. 
""" path = self.resources["cback.conf.9"] config = Config(xmlPath=path, validate=False) expected = Config() expected.stage = StageConfig() expected.stage.targetDir = "/opt/backup/staging" expected.stage.localPeers = None expected.stage.remotePeers = [ RemotePeer("machine2", "/opt/backup/collect"), ] self.assertEqual(expected, config) def testParse_020(self): """ Parse config document containing only a stage section, containing only required fields, validate=True. """ path = self.resources["cback.conf.9"] self.assertRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_021(self): """ Parse config document containing only a stage section, containing all required and optional fields, validate=False. """ path = self.resources["cback.conf.10"] config = Config(xmlPath=path, validate=False) expected = Config() expected.stage = StageConfig() expected.stage.targetDir = "/opt/backup/staging" expected.stage.localPeers = [] expected.stage.remotePeers = [] expected.stage.localPeers.append(LocalPeer("machine1-1", "/opt/backup/collect")) expected.stage.localPeers.append(LocalPeer("machine1-2", "/var/backup")) expected.stage.remotePeers.append(RemotePeer("machine2", "/backup/collect", ignoreFailureMode="all")) expected.stage.remotePeers.append(RemotePeer("machine3", "/home/whatever/tmp", remoteUser="someone", rcpCommand="scp -B")) self.assertEqual(expected, config) def testParse_022(self): """ Parse config document containing only a stage section, containing all required and optional fields, validate=True. """ path = self.resources["cback.conf.10"] self.assertRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_023(self): """ Parse config document containing only a store section, containing only required fields, validate=False. 
""" path = self.resources["cback.conf.11"] config = Config(xmlPath=path, validate=False) expected = Config() expected.store = StoreConfig("/opt/backup/staging", mediaType="cdrw-74", devicePath="/dev/cdrw", deviceScsiId=None) self.assertEqual(expected, config) def testParse_024(self): """ Parse config document containing only a store section, containing only required fields, validate=True. """ path = self.resources["cback.conf.11"] self.assertRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_025(self): """ Parse config document containing only a store section, containing all required and optional fields, validate=False. """ path = self.resources["cback.conf.12"] config = Config(xmlPath=path, validate=False) expected = Config() expected.store = StoreConfig() expected.store.sourceDir = "/opt/backup/staging" expected.store.mediaType = "cdrw-74" expected.store.deviceType = "cdwriter" expected.store.devicePath = "/dev/cdrw" expected.store.deviceScsiId = "0,0,0" expected.store.driveSpeed = 4 expected.store.checkData = True expected.store.checkMedia = True expected.store.warnMidnite = True expected.store.noEject = True expected.store.refreshMediaDelay = 12 expected.store.ejectDelay = 13 expected.store.blankBehavior = BlankBehavior() expected.store.blankBehavior.blankMode = "weekly" expected.store.blankBehavior.blankFactor = "1.3" self.assertEqual(expected, config) def testParse_026(self): """ Parse config document containing only a store section, containing all required and optional fields, validate=True. """ path = self.resources["cback.conf.12"] self.assertRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_027(self): """ Parse config document containing only a purge section, containing only required fields, validate=False. 
""" path = self.resources["cback.conf.13"] config = Config(xmlPath=path, validate=False) expected = Config() expected.purge = PurgeConfig() expected.purge.purgeDirs = [PurgeDir("/opt/backup/stage", 5), ] self.assertEqual(expected, config) def testParse_028(self): """ Parse config document containing only a purge section, containing only required fields, validate=True. """ path = self.resources["cback.conf.13"] self.assertRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_029(self): """ Parse config document containing only a purge section, containing all required and optional fields, validate=False. """ path = self.resources["cback.conf.14"] config = Config(xmlPath=path, validate=False) expected = Config() expected.purge = PurgeConfig() expected.purge.purgeDirs = [] expected.purge.purgeDirs.append(PurgeDir("/opt/backup/stage", 5)) expected.purge.purgeDirs.append(PurgeDir("/opt/backup/collect", 0)) expected.purge.purgeDirs.append(PurgeDir("/home/backup/tmp", 12)) self.assertEqual(expected, config) def testParse_030(self): """ Parse config document containing only a purge section, containing all required and optional fields, validate=True. """ path = self.resources["cback.conf.14"] self.assertRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_031(self): """ Parse complete document containing all required and optional fields, "index" extensions, validate=False. 
""" path = self.resources["cback.conf.15"] config = Config(xmlPath=path, validate=False) expected = Config() expected.reference = ReferenceConfig("$Author: pronovic $", "1.3", "Sample configuration", "Generated by hand.") expected.extensions = ExtensionsConfig() expected.extensions.orderMode = "index" expected.extensions.actions = [] expected.extensions.actions.append(ExtendedAction("example", "something.whatever", "example", 102)) expected.extensions.actions.append(ExtendedAction("bogus", "module", "something", 350)) expected.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "group", "/usr/bin/scp -1 -B", [], [], "/usr/bin/ssh", "/usr/bin/cback", []) expected.options.overrides = [ CommandOverride("mkisofs", "/usr/bin/mkisofs"), CommandOverride("svnlook", "/svnlook"), ] expected.options.hooks = [ PreActionHook("collect", "ls -l"), PreActionHook("subversion", "mailx -S \"hello\""), PostActionHook("stage", "df -k"), ] expected.options.managedActions = [ "collect", "purge", ] expected.collect = CollectConfig("/opt/backup/collect", "daily", "targz", ".cbignore") expected.collect.absoluteExcludePaths = ["/etc/cback.conf", "/etc/X11", ] expected.collect.excludePatterns = [".*tmp.*", r".*\.netscape\/.*", ] expected.collect.collectFiles = [] expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.profile")) expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.kshrc", collectMode="weekly")) expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.aliases", collectMode="daily", archiveMode="tarbz2")) expected.collect.collectDirs = [] expected.collect.collectDirs.append(CollectDir(absolutePath="/root")) expected.collect.collectDirs.append(CollectDir(absolutePath="/tmp", linkDepth=3)) expected.collect.collectDirs.append(CollectDir(absolutePath="/ken", linkDepth=1, dereference=True)) expected.collect.collectDirs.append(CollectDir(absolutePath="/var/log", collectMode="incr")) 
expected.collect.collectDirs.append(CollectDir(absolutePath="/etc", collectMode="incr", archiveMode="tar", ignoreFile=".ignore")) collectDir = CollectDir(absolutePath="/opt") collectDir.absoluteExcludePaths = [ "/opt/share", "/opt/tmp", ] collectDir.relativeExcludePaths = [ "large", "backup", ] collectDir.excludePatterns = [ r".*\.doc\.*", r".*\.xls\.*", ] expected.collect.collectDirs.append(collectDir) expected.stage = StageConfig() expected.stage.targetDir = "/opt/backup/staging" expected.stage.localPeers = [] expected.stage.remotePeers = [] expected.stage.localPeers.append(LocalPeer("machine1-1", "/opt/backup/collect")) expected.stage.localPeers.append(LocalPeer("machine1-2", "/var/backup")) expected.stage.remotePeers.append(RemotePeer("machine2", "/backup/collect", ignoreFailureMode="all")) expected.stage.remotePeers.append(RemotePeer("machine3", "/home/whatever/tmp", remoteUser="someone", rcpCommand="scp -B")) expected.store = StoreConfig() expected.store.sourceDir = "/opt/backup/staging" expected.store.mediaType = "cdrw-74" expected.store.deviceType = "cdwriter" expected.store.devicePath = "/dev/cdrw" expected.store.deviceScsiId = None expected.store.driveSpeed = 4 expected.store.checkData = True expected.store.checkMedia = True expected.store.warnMidnite = True expected.store.noEject = True expected.store.blankBehavior = BlankBehavior() expected.store.blankBehavior.blankMode = "weekly" expected.store.blankBehavior.blankFactor = "1.3" expected.purge = PurgeConfig() expected.purge.purgeDirs = [] expected.purge.purgeDirs.append(PurgeDir("/opt/backup/stage", 5)) expected.purge.purgeDirs.append(PurgeDir("/opt/backup/collect", 0)) expected.purge.purgeDirs.append(PurgeDir("/home/backup/tmp", 12)) self.assertEqual(expected, config) def testParse_031a(self): """ Parse complete document containing all required and optional fields, "dependency" extensions, validate=False. 
""" path = self.resources["cback.conf.20"] config = Config(xmlPath=path, validate=False) expected = Config() expected.reference = ReferenceConfig("$Author: pronovic $", "1.3", "Sample configuration", "Generated by hand.") expected.extensions = ExtensionsConfig() expected.extensions.orderMode = "dependency" expected.extensions.actions = [] expected.extensions.actions.append(ExtendedAction("example", "something.whatever", "example", index=None, dependencies=ActionDependencies())) expected.extensions.actions.append(ExtendedAction("bogus", "module", "something", index=None, dependencies=ActionDependencies(beforeList=["a", "b", "c", ], afterList=["one", ]))) expected.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "group", "/usr/bin/scp -1 -B", [], [], "/usr/bin/ssh", "/usr/bin/cback", []) expected.options.overrides = [ CommandOverride("mkisofs", "/usr/bin/mkisofs"), CommandOverride("svnlook", "/svnlook"), ] expected.options.hooks = [ PreActionHook("collect", "ls -l"), PreActionHook("subversion", "mailx -S \"hello\""), PostActionHook("stage", "df -k"), ] expected.options.managedActions = [ "collect", "purge", ] expected.collect = CollectConfig("/opt/backup/collect", "daily", "targz", ".cbignore") expected.collect.absoluteExcludePaths = ["/etc/cback.conf", "/etc/X11", ] expected.collect.excludePatterns = [".*tmp.*", r".*\.netscape\/.*", ] expected.collect.collectFiles = [] expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.profile")) expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.kshrc", collectMode="weekly")) expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.aliases", collectMode="daily", archiveMode="tarbz2")) expected.collect.collectDirs = [] expected.collect.collectDirs.append(CollectDir(absolutePath="/root")) expected.collect.collectDirs.append(CollectDir(absolutePath="/tmp", linkDepth=3)) expected.collect.collectDirs.append(CollectDir(absolutePath="/ken", linkDepth=1, 
dereference=True)) expected.collect.collectDirs.append(CollectDir(absolutePath="/var/log", collectMode="incr")) expected.collect.collectDirs.append(CollectDir(absolutePath="/etc", collectMode="incr", archiveMode="tar", ignoreFile=".ignore")) collectDir = CollectDir(absolutePath="/opt") collectDir.absoluteExcludePaths = [ "/opt/share", "/opt/tmp", ] collectDir.relativeExcludePaths = [ "large", "backup", ] collectDir.excludePatterns = [ r".*\.doc\.*", r".*\.xls\.*", ] expected.collect.collectDirs.append(collectDir) expected.stage = StageConfig() expected.stage.targetDir = "/opt/backup/staging" expected.stage.localPeers = [] expected.stage.remotePeers = [] expected.stage.localPeers.append(LocalPeer("machine1-1", "/opt/backup/collect")) expected.stage.localPeers.append(LocalPeer("machine1-2", "/var/backup")) expected.stage.remotePeers.append(RemotePeer("machine2", "/backup/collect", ignoreFailureMode="all")) expected.stage.remotePeers.append(RemotePeer("machine3", "/home/whatever/tmp", remoteUser="someone", rcpCommand="scp -B")) expected.store = StoreConfig() expected.store.sourceDir = "/opt/backup/staging" expected.store.mediaType = "dvd+rw" expected.store.deviceType = "dvdwriter" expected.store.devicePath = "/dev/cdrw" expected.store.deviceScsiId = None expected.store.driveSpeed = 1 expected.store.checkData = True expected.store.checkMedia = True expected.store.warnMidnite = True expected.store.noEject = True expected.store.blankBehavior = BlankBehavior() expected.store.blankBehavior.blankMode = "weekly" expected.store.blankBehavior.blankFactor = "1.3" expected.purge = PurgeConfig() expected.purge.purgeDirs = [] expected.purge.purgeDirs.append(PurgeDir("/opt/backup/stage", 5)) expected.purge.purgeDirs.append(PurgeDir("/opt/backup/collect", 0)) expected.purge.purgeDirs.append(PurgeDir("/home/backup/tmp", 12)) self.assertEqual(expected, config) def testParse_032(self): """ Parse complete document containing all required and optional fields, "index" extensions, 
        validate=True.
        """
        path = self.resources["cback.conf.15"]
        config = Config(xmlPath=path, validate=True)
        expected = Config()
        expected.reference = ReferenceConfig("$Author: pronovic $", "1.3",
                                             "Sample configuration", "Generated by hand.")
        expected.extensions = ExtensionsConfig()
        expected.extensions.orderMode = "index"
        expected.extensions.actions = []
        expected.extensions.actions.append(ExtendedAction("example", "something.whatever", "example", 102))
        expected.extensions.actions.append(ExtendedAction("bogus", "module", "something", 350))
        expected.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "group", "/usr/bin/scp -1 -B",
                                         [], [], "/usr/bin/ssh", "/usr/bin/cback", [])
        expected.options.overrides = [
            CommandOverride("mkisofs", "/usr/bin/mkisofs"),
            CommandOverride("svnlook", "/svnlook"),
        ]
        expected.options.hooks = [
            PreActionHook("collect", "ls -l"),
            PreActionHook("subversion", "mailx -S \"hello\""),
            PostActionHook("stage", "df -k"),
        ]
        expected.options.managedActions = ["collect", "purge"]
        expected.collect = CollectConfig("/opt/backup/collect", "daily", "targz", ".cbignore")
        expected.collect.absoluteExcludePaths = ["/etc/cback.conf", "/etc/X11"]
        expected.collect.excludePatterns = [".*tmp.*", r".*\.netscape\/.*"]
        expected.collect.collectFiles = []
        expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.profile"))
        expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.kshrc", collectMode="weekly"))
        expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.aliases",
                                                         collectMode="daily", archiveMode="tarbz2"))
        expected.collect.collectDirs = []
        expected.collect.collectDirs.append(CollectDir(absolutePath="/root"))
        expected.collect.collectDirs.append(CollectDir(absolutePath="/tmp", linkDepth=3))
        expected.collect.collectDirs.append(CollectDir(absolutePath="/ken", linkDepth=1, dereference=True))
        expected.collect.collectDirs.append(CollectDir(absolutePath="/var/log", collectMode="incr"))
expected.collect.collectDirs.append(CollectDir(absolutePath="/etc", collectMode="incr", archiveMode="tar", ignoreFile=".ignore")) collectDir = CollectDir(absolutePath="/opt") collectDir.absoluteExcludePaths = [ "/opt/share", "/opt/tmp", ] collectDir.relativeExcludePaths = [ "large", "backup", ] collectDir.excludePatterns = [ r".*\.doc\.*", r".*\.xls\.*", ] expected.collect.collectDirs.append(collectDir) expected.stage = StageConfig() expected.stage.targetDir = "/opt/backup/staging" expected.stage.localPeers = [] expected.stage.remotePeers = [] expected.stage.localPeers.append(LocalPeer("machine1-1", "/opt/backup/collect")) expected.stage.localPeers.append(LocalPeer("machine1-2", "/var/backup")) expected.stage.remotePeers.append(RemotePeer("machine2", "/backup/collect", ignoreFailureMode="all")) expected.stage.remotePeers.append(RemotePeer("machine3", "/home/whatever/tmp", remoteUser="someone", rcpCommand="scp -B")) expected.store = StoreConfig() expected.store.sourceDir = "/opt/backup/staging" expected.store.mediaType = "cdrw-74" expected.store.deviceType = "cdwriter" expected.store.devicePath = "/dev/cdrw" expected.store.deviceScsiId = None expected.store.driveSpeed = 4 expected.store.checkData = True expected.store.checkMedia = True expected.store.warnMidnite = True expected.store.noEject = True expected.store.blankBehavior = BlankBehavior() expected.store.blankBehavior.blankMode = "weekly" expected.store.blankBehavior.blankFactor = "1.3" expected.purge = PurgeConfig() expected.purge.purgeDirs = [] expected.purge.purgeDirs.append(PurgeDir("/opt/backup/stage", 5)) expected.purge.purgeDirs.append(PurgeDir("/opt/backup/collect", 0)) expected.purge.purgeDirs.append(PurgeDir("/home/backup/tmp", 12)) self.assertEqual(expected, config) def testParse_032a(self): """ Parse complete document containing all required and optional fields, "dependency" extensions, validate=True. 
""" path = self.resources["cback.conf.20"] config = Config(xmlPath=path, validate=True) expected = Config() expected.reference = ReferenceConfig("$Author: pronovic $", "1.3", "Sample configuration", "Generated by hand.") expected.extensions = ExtensionsConfig() expected.extensions.orderMode = "dependency" expected.extensions.actions = [] expected.extensions.actions.append(ExtendedAction("example", "something.whatever", "example", index=None, dependencies=ActionDependencies())) expected.extensions.actions.append(ExtendedAction("bogus", "module", "something", index=None, dependencies=ActionDependencies(beforeList=["a", "b", "c", ], afterList=["one", ]))) expected.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "group", "/usr/bin/scp -1 -B", [], [], "/usr/bin/ssh", "/usr/bin/cback", []) expected.options.overrides = [ CommandOverride("mkisofs", "/usr/bin/mkisofs"), CommandOverride("svnlook", "/svnlook"), ] expected.options.hooks = [ PreActionHook("collect", "ls -l"), PreActionHook("subversion", "mailx -S \"hello\""), PostActionHook("stage", "df -k"), ] expected.options.managedActions = [ "collect", "purge", ] expected.collect = CollectConfig("/opt/backup/collect", "daily", "targz", ".cbignore") expected.collect.absoluteExcludePaths = ["/etc/cback.conf", "/etc/X11", ] expected.collect.excludePatterns = [".*tmp.*", r".*\.netscape\/.*", ] expected.collect.collectFiles = [] expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.profile")) expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.kshrc", collectMode="weekly")) expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.aliases", collectMode="daily", archiveMode="tarbz2")) expected.collect.collectDirs = [] expected.collect.collectDirs.append(CollectDir(absolutePath="/root")) expected.collect.collectDirs.append(CollectDir(absolutePath="/tmp", linkDepth=3)) expected.collect.collectDirs.append(CollectDir(absolutePath="/ken", linkDepth=1, 
dereference=True)) expected.collect.collectDirs.append(CollectDir(absolutePath="/var/log", collectMode="incr")) expected.collect.collectDirs.append(CollectDir(absolutePath="/etc", collectMode="incr", archiveMode="tar", ignoreFile=".ignore")) collectDir = CollectDir(absolutePath="/opt") collectDir.absoluteExcludePaths = [ "/opt/share", "/opt/tmp", ] collectDir.relativeExcludePaths = [ "large", "backup", ] collectDir.excludePatterns = [ r".*\.doc\.*", r".*\.xls\.*", ] expected.collect.collectDirs.append(collectDir) expected.stage = StageConfig() expected.stage.targetDir = "/opt/backup/staging" expected.stage.localPeers = [] expected.stage.remotePeers = [] expected.stage.localPeers.append(LocalPeer("machine1-1", "/opt/backup/collect")) expected.stage.localPeers.append(LocalPeer("machine1-2", "/var/backup")) expected.stage.remotePeers.append(RemotePeer("machine2", "/backup/collect", ignoreFailureMode="all")) expected.stage.remotePeers.append(RemotePeer("machine3", "/home/whatever/tmp", remoteUser="someone", rcpCommand="scp -B")) expected.store = StoreConfig() expected.store.sourceDir = "/opt/backup/staging" expected.store.mediaType = "dvd+rw" expected.store.deviceType = "dvdwriter" expected.store.devicePath = "/dev/cdrw" expected.store.deviceScsiId = None expected.store.driveSpeed = 1 expected.store.checkData = True expected.store.checkMedia = True expected.store.warnMidnite = True expected.store.noEject = True expected.store.blankBehavior = BlankBehavior() expected.store.blankBehavior.blankMode = "weekly" expected.store.blankBehavior.blankFactor = "1.3" expected.purge = PurgeConfig() expected.purge.purgeDirs = [] expected.purge.purgeDirs.append(PurgeDir("/opt/backup/stage", 5)) expected.purge.purgeDirs.append(PurgeDir("/opt/backup/collect", 0)) expected.purge.purgeDirs.append(PurgeDir("/home/backup/tmp", 12)) self.assertEqual(expected, config) def testParse_033(self): """ Parse a sample from Cedar Backup v1.x, which must still be valid, validate=False. 
""" path = self.resources["cback.conf.1"] config = Config(xmlPath=path, validate=False) expected = Config() expected.reference = ReferenceConfig("$Author: pronovic $", "1.3", "Sample configuration") expected.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "backup", "/usr/bin/scp -1 -B") expected.collect = CollectConfig() expected.collect.targetDir = "/opt/backup/collect" expected.collect.archiveMode = "targz" expected.collect.ignoreFile = ".cbignore" expected.collect.collectDirs = [] expected.collect.collectDirs.append(CollectDir("/etc", collectMode="daily")) expected.collect.collectDirs.append(CollectDir("/var/log", collectMode="incr")) collectDir = CollectDir("/opt", collectMode="weekly") collectDir.absoluteExcludePaths = ["/opt/large", "/opt/backup", "/opt/tmp", ] expected.collect.collectDirs.append(collectDir) expected.stage = StageConfig() expected.stage.targetDir = "/opt/backup/staging" expected.stage.localPeers = [LocalPeer("machine1", "/opt/backup/collect"), ] expected.stage.remotePeers = [RemotePeer("machine2", "/opt/backup/collect", remoteUser="backup"), ] expected.store = StoreConfig() expected.store.sourceDir = "/opt/backup/staging" expected.store.devicePath = "/dev/cdrw" expected.store.deviceScsiId = "0,0,0" expected.store.driveSpeed = 4 expected.store.mediaType = "cdrw-74" expected.store.checkData = True expected.store.checkMedia = False expected.store.warnMidnite = False expected.store.noEject = False expected.purge = PurgeConfig() expected.purge.purgeDirs = [] expected.purge.purgeDirs.append(PurgeDir("/opt/backup/stage", 5)) expected.purge.purgeDirs.append(PurgeDir("/opt/backup/collect", 0)) self.assertEqual(expected, config) def testParse_034(self): """ Parse a sample from Cedar Backup v1.x, which must still be valid, validate=True. 
""" path = self.resources["cback.conf.1"] config = Config(xmlPath=path, validate=True) expected = Config() expected.reference = ReferenceConfig("$Author: pronovic $", "1.3", "Sample configuration") expected.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "backup", "/usr/bin/scp -1 -B") expected.collect = CollectConfig() expected.collect.targetDir = "/opt/backup/collect" expected.collect.archiveMode = "targz" expected.collect.ignoreFile = ".cbignore" expected.collect.collectDirs = [] expected.collect.collectDirs.append(CollectDir("/etc", collectMode="daily")) expected.collect.collectDirs.append(CollectDir("/var/log", collectMode="incr")) collectDir = CollectDir("/opt", collectMode="weekly") collectDir.absoluteExcludePaths = ["/opt/large", "/opt/backup", "/opt/tmp", ] expected.collect.collectDirs.append(collectDir) expected.stage = StageConfig() expected.stage.targetDir = "/opt/backup/staging" expected.stage.localPeers = [LocalPeer("machine1", "/opt/backup/collect"), ] expected.stage.remotePeers = [RemotePeer("machine2", "/opt/backup/collect", remoteUser="backup"), ] expected.store = StoreConfig() expected.store.sourceDir = "/opt/backup/staging" expected.store.devicePath = "/dev/cdrw" expected.store.deviceScsiId = "0,0,0" expected.store.driveSpeed = 4 expected.store.mediaType = "cdrw-74" expected.store.checkData = True expected.store.checkMedia = False expected.store.warnMidnite = False expected.store.noEject = False expected.purge = PurgeConfig() expected.purge.purgeDirs = [] expected.purge.purgeDirs.append(PurgeDir("/opt/backup/stage", 5)) expected.purge.purgeDirs.append(PurgeDir("/opt/backup/collect", 0)) self.assertEqual(expected, config) def testParse_035(self): """ Document containing all required fields, peers in peer configuration and not staging, validate=False. 
""" path = self.resources["cback.conf.21"] config = Config(xmlPath=path, validate=False) expected = Config() expected.reference = ReferenceConfig("$Author: pronovic $", "1.3", "Sample configuration", "Generated by hand.") expected.extensions = ExtensionsConfig() expected.extensions.orderMode = "dependency" expected.extensions.actions = [] expected.extensions.actions.append(ExtendedAction("example", "something.whatever", "example", index=None, dependencies=ActionDependencies())) expected.extensions.actions.append(ExtendedAction("bogus", "module", "something", index=None, dependencies=ActionDependencies(beforeList=["a", "b", "c", ], afterList=["one", ]))) expected.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "group", "/usr/bin/scp -1 -B", [], [], "/usr/bin/ssh", "/usr/bin/cback", []) expected.options.overrides = [ CommandOverride("mkisofs", "/usr/bin/mkisofs"), CommandOverride("svnlook", "/svnlook"), ] expected.options.hooks = [ PreActionHook("collect", "ls -l"), PreActionHook("subversion", "mailx -S \"hello\""), PostActionHook("stage", "df -k"), ] expected.options.managedActions = [ "collect", "purge", ] expected.peers = PeersConfig() expected.peers.localPeers = [] expected.peers.remotePeers = [] expected.peers.localPeers.append(LocalPeer("machine1-1", "/opt/backup/collect")) expected.peers.localPeers.append(LocalPeer("machine1-2", "/var/backup")) expected.peers.remotePeers.append(RemotePeer("machine2", "/backup/collect", ignoreFailureMode="all")) expected.peers.remotePeers.append(RemotePeer("machine3", "/home/whatever/tmp", remoteUser="someone", rcpCommand="scp -B")) expected.peers.remotePeers.append(RemotePeer("machine4", "/aa", remoteUser="someone", rcpCommand="scp -B", rshCommand="ssh", cbackCommand="cback", managed=True, managedActions=None)) expected.peers.remotePeers.append(RemotePeer("machine5", "/bb", managed=False, managedActions=["collect", "purge", ])) expected.collect = CollectConfig("/opt/backup/collect", "daily", "targz", 
".cbignore") expected.collect.absoluteExcludePaths = ["/etc/cback.conf", "/etc/X11", ] expected.collect.excludePatterns = [".*tmp.*", r".*\.netscape\/.*", ] expected.collect.collectFiles = [] expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.profile")) expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.kshrc", collectMode="weekly")) expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.aliases", collectMode="daily", archiveMode="tarbz2")) expected.collect.collectDirs = [] expected.collect.collectDirs.append(CollectDir(absolutePath="/root")) expected.collect.collectDirs.append(CollectDir(absolutePath="/tmp", linkDepth=3)) expected.collect.collectDirs.append(CollectDir(absolutePath="/ken", linkDepth=1, dereference=True)) expected.collect.collectDirs.append(CollectDir(absolutePath="/var/log", collectMode="incr")) expected.collect.collectDirs.append(CollectDir(absolutePath="/etc", collectMode="incr", archiveMode="tar", ignoreFile=".ignore")) collectDir = CollectDir(absolutePath="/opt") collectDir.absoluteExcludePaths = [ "/opt/share", "/opt/tmp", ] collectDir.relativeExcludePaths = [ "large", "backup", ] collectDir.excludePatterns = [ r".*\.doc\.*", r".*\.xls\.*", ] expected.collect.collectDirs.append(collectDir) expected.stage = StageConfig() expected.stage.targetDir = "/opt/backup/staging" expected.stage.localPeers = None expected.stage.remotePeers = None expected.store = StoreConfig() expected.store.sourceDir = "/opt/backup/staging" expected.store.mediaType = "dvd+rw" expected.store.deviceType = "dvdwriter" expected.store.devicePath = "/dev/cdrw" expected.store.deviceScsiId = None expected.store.driveSpeed = 1 expected.store.checkData = True expected.store.checkMedia = True expected.store.warnMidnite = True expected.store.noEject = True expected.store.blankBehavior = BlankBehavior() expected.store.blankBehavior.blankMode = "weekly" expected.store.blankBehavior.blankFactor = "1.3" expected.purge = 
PurgeConfig() expected.purge.purgeDirs = [] expected.purge.purgeDirs.append(PurgeDir("/opt/backup/stage", 5)) expected.purge.purgeDirs.append(PurgeDir("/opt/backup/collect", 0)) expected.purge.purgeDirs.append(PurgeDir("/home/backup/tmp", 12)) self.assertEqual(expected, config) def testParse_036(self): """ Document containing all required fields, peers in peer configuration and not staging, validate=True. """ path = self.resources["cback.conf.21"] config = Config(xmlPath=path, validate=True) expected = Config() expected.reference = ReferenceConfig("$Author: pronovic $", "1.3", "Sample configuration", "Generated by hand.") expected.extensions = ExtensionsConfig() expected.extensions.orderMode = "dependency" expected.extensions.actions = [] expected.extensions.actions.append(ExtendedAction("example", "something.whatever", "example", index=None, dependencies=ActionDependencies())) expected.extensions.actions.append(ExtendedAction("bogus", "module", "something", index=None, dependencies=ActionDependencies(beforeList=["a", "b", "c", ], afterList=["one", ]))) expected.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "group", "/usr/bin/scp -1 -B", [], [], "/usr/bin/ssh", "/usr/bin/cback", []) expected.options.overrides = [ CommandOverride("mkisofs", "/usr/bin/mkisofs"), CommandOverride("svnlook", "/svnlook"), ] expected.options.hooks = [ PreActionHook("collect", "ls -l"), PreActionHook("subversion", "mailx -S \"hello\""), PostActionHook("stage", "df -k"), ] expected.options.managedActions = [ "collect", "purge", ] expected.peers = PeersConfig() expected.peers.localPeers = [] expected.peers.remotePeers = [] expected.peers.localPeers.append(LocalPeer("machine1-1", "/opt/backup/collect")) expected.peers.localPeers.append(LocalPeer("machine1-2", "/var/backup")) expected.peers.remotePeers.append(RemotePeer("machine2", "/backup/collect", ignoreFailureMode="all")) expected.peers.remotePeers.append(RemotePeer("machine3", "/home/whatever/tmp", remoteUser="someone", 
rcpCommand="scp -B")) expected.peers.remotePeers.append(RemotePeer("machine4", "/aa", remoteUser="someone", rcpCommand="scp -B", rshCommand="ssh", cbackCommand="cback", managed=True, managedActions=None)) expected.peers.remotePeers.append(RemotePeer("machine5", "/bb", managed=False, managedActions=["collect", "purge", ])) expected.collect = CollectConfig("/opt/backup/collect", "daily", "targz", ".cbignore") expected.collect.absoluteExcludePaths = ["/etc/cback.conf", "/etc/X11", ] expected.collect.excludePatterns = [".*tmp.*", r".*\.netscape\/.*", ] expected.collect.collectFiles = [] expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.profile")) expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.kshrc", collectMode="weekly")) expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.aliases", collectMode="daily", archiveMode="tarbz2")) expected.collect.collectDirs = [] expected.collect.collectDirs.append(CollectDir(absolutePath="/root")) expected.collect.collectDirs.append(CollectDir(absolutePath="/tmp", linkDepth=3)) expected.collect.collectDirs.append(CollectDir(absolutePath="/ken", linkDepth=1, dereference=True)) expected.collect.collectDirs.append(CollectDir(absolutePath="/var/log", collectMode="incr")) expected.collect.collectDirs.append(CollectDir(absolutePath="/etc", collectMode="incr", archiveMode="tar", ignoreFile=".ignore")) collectDir = CollectDir(absolutePath="/opt") collectDir.absoluteExcludePaths = [ "/opt/share", "/opt/tmp", ] collectDir.relativeExcludePaths = [ "large", "backup", ] collectDir.excludePatterns = [ r".*\.doc\.*", r".*\.xls\.*", ] expected.collect.collectDirs.append(collectDir) expected.stage = StageConfig() expected.stage.targetDir = "/opt/backup/staging" expected.stage.localPeers = None expected.stage.remotePeers = None expected.store = StoreConfig() expected.store.sourceDir = "/opt/backup/staging" expected.store.mediaType = "dvd+rw" expected.store.deviceType = "dvdwriter" 
expected.store.devicePath = "/dev/cdrw" expected.store.deviceScsiId = None expected.store.driveSpeed = 1 expected.store.checkData = True expected.store.checkMedia = True expected.store.warnMidnite = True expected.store.noEject = True expected.store.blankBehavior = BlankBehavior() expected.store.blankBehavior.blankMode = "weekly" expected.store.blankBehavior.blankFactor = "1.3" expected.purge = PurgeConfig() expected.purge.purgeDirs = [] expected.purge.purgeDirs.append(PurgeDir("/opt/backup/stage", 5)) expected.purge.purgeDirs.append(PurgeDir("/opt/backup/collect", 0)) expected.purge.purgeDirs.append(PurgeDir("/home/backup/tmp", 12)) self.assertEqual(expected, config) def testParse_037(self): """ Parse config document containing only a peers section, containing only required fields, validate=False. """ path = self.resources["cback.conf.22"] config = Config(xmlPath=path, validate=False) expected = Config() expected.peers = PeersConfig() expected.peers.localPeers = None expected.peers.remotePeers = [ RemotePeer("machine2", "/opt/backup/collect"), ] self.assertEqual(expected, config) def testParse_038(self): """ Parse config document containing only a peers section, containing only required fields, validate=True. """ path = self.resources["cback.conf.9"] self.assertRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_039(self): """ Parse config document containing only a peers section, containing all required and optional fields, validate=False. 
""" path = self.resources["cback.conf.23"] config = Config(xmlPath=path, validate=False) expected = Config() expected.peers = PeersConfig() expected.peers.localPeers = [] expected.peers.remotePeers = [] expected.peers.localPeers.append(LocalPeer("machine1-1", "/opt/backup/collect")) expected.peers.localPeers.append(LocalPeer("machine1-2", "/var/backup")) expected.peers.remotePeers.append(RemotePeer("machine2", "/backup/collect", ignoreFailureMode="all")) expected.peers.remotePeers.append(RemotePeer("machine3", "/home/whatever/tmp", remoteUser="someone", rcpCommand="scp -B")) expected.peers.remotePeers.append(RemotePeer("machine4", "/aa", remoteUser="someone", rcpCommand="scp -B", rshCommand="ssh", cbackCommand="cback", managed=True, managedActions=None)) expected.peers.remotePeers.append(RemotePeer("machine5", "/bb", managed=False, managedActions=["collect", "purge", ])) self.assertEqual(expected, config) def testParse_040(self): """ Parse config document containing only a peers section, containing all required and optional fields, validate=True. """ path = self.resources["cback.conf.23"] self.assertRaises(ValueError, Config, xmlPath=path, validate=True) ######################### # Test the extract logic ######################### def testExtractXml_001(self): """ Extract empty config document, validate=True. """ before = Config() self.assertRaises(ValueError, before.extractXml, validate=True) def testExtractXml_002(self): """ Extract empty config document, validate=False. """ before = Config() beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.assertEqual(before, after) def testExtractXml_003(self): """ Extract document containing only a valid reference section, validate=True. 
""" before = Config() before.reference = ReferenceConfig("$Author: pronovic $", "1.3", "Sample configuration") self.assertRaises(ValueError, before.extractXml, validate=True) def testExtractXml_004(self): """ Extract document containing only a valid reference section, validate=False. """ before = Config() before.reference = ReferenceConfig("$Author: pronovic $", "1.3", "Sample configuration") beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.assertEqual(before, after) def testExtractXml_005(self): """ Extract document containing only a valid extensions section, empty list, orderMode=None, validate=True. """ before = Config() before.extensions = ExtensionsConfig() before.extensions.orderMode = None before.extensions.actions = [] self.assertRaises(ValueError, before.extractXml, validate=True) def testExtractXml_006(self): """ Extract document containing only a valid extensions section, non-empty list and orderMode="index", validate=True. """ before = Config() before.extensions = ExtensionsConfig() before.extensions.orderMode = "index" before.extensions.actions = [] before.extensions.actions.append(ExtendedAction("name", "module", "function", 1)) self.assertRaises(ValueError, before.extractXml, validate=True) def testExtractXml_006a(self): """ Extract document containing only a valid extensions section, non-empty list and orderMode="dependency", validate=True. """ before = Config() before.extensions = ExtensionsConfig() before.extensions.orderMode = "dependency" before.extensions.actions = [] before.extensions.actions.append(ExtendedAction("name", "module", "function", dependencies=ActionDependencies(beforeList=["b", ], afterList=["a", ]))) self.assertRaises(ValueError, before.extractXml, validate=True) def testExtractXml_007(self): """ Extract document containing only a valid extensions section, empty list, orderMode=None, validate=False. 
""" before = Config() before.extensions = ExtensionsConfig() beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.assertEqual(before, after) def testExtractXml_008(self): """ Extract document containing only a valid extensions section, orderMode="index", validate=False. """ before = Config() before.extensions = ExtensionsConfig() before.extensions.orderMode = "index" before.extensions.actions = [] before.extensions.actions.append(ExtendedAction("name", "module", "function", 1)) beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.assertEqual(before, after) def testExtractXml_009(self): """ Extract document containing only an invalid extensions section, validate=True. """ before = Config() before.extensions = ExtensionsConfig() before.extensions.actions = [] before.extensions.actions.append(ExtendedAction("name", "module", None, None)) self.assertRaises(ValueError, before.extractXml, validate=True) def testExtractXml_010(self): """ Extract document containing only an invalid extensions section, validate=False. """ before = Config() before.extensions = ExtensionsConfig() before.extensions.actions = [] before.extensions.actions.append(ExtendedAction("name", "module", None, None)) beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.assertEqual(before, after) def testExtractXml_011(self): """ Extract document containing only a valid options section, validate=True. 
""" before = Config() before.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "backup", "/usr/bin/scp -1 -B", [], [], "/usr/bin/ssh") before.options.overrides = [ CommandOverride("mkisofs", "/usr/bin/mkisofs"), CommandOverride("svnlook", "/svnlook"), ] before.options.hooks = [ PostActionHook("collect", "ls -l"), ] self.assertRaises(ValueError, before.extractXml, validate=True) def testExtractXml_012(self): """ Extract document containing only a valid options section, validate=False. """ before = Config() before.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "backup", "/usr/bin/scp -1 -B", [], [], "/usr/bin/ssh") before.options.overrides = [ CommandOverride("mkisofs", "/usr/bin/mkisofs"), CommandOverride("svnlook", "/svnlook"), ] before.options.hooks = [ PostActionHook("collect", "ls -l"), ] beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.assertEqual(before, after) def testExtractXml_013(self): """ Extract document containing only an invalid options section, validate=True. """ before = Config() before.options = OptionsConfig() self.assertRaises(ValueError, before.extractXml, validate=True) def testExtractXml_014(self): """ Extract document containing only an invalid options section, validate=False. """ before = Config() before.options = OptionsConfig() beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.assertEqual(before, after) def testExtractXml_015(self): """ Extract document containing only a valid collect section, empty lists, validate=True. (Test a directory.) 
""" before = Config() before.collect = CollectConfig() before.collect.targetDir = "/opt/backup/collect" before.collect.archiveMode = "targz" before.collect.ignoreFile = ".cbignore" before.collect.collectDirs = [CollectDir("/etc", collectMode="daily"), ] self.assertRaises(ValueError, before.extractXml, validate=True) def testExtractXml_015a(self): """ Extract document containing only a valid collect section, empty lists, validate=True. (Test a file.) """ before = Config() before.collect = CollectConfig() before.collect.targetDir = "/opt/backup/collect" before.collect.archiveMode = "targz" before.collect.ignoreFile = ".cbignore" before.collect.collectFiles = [CollectFile("/etc", collectMode="daily"), ] self.assertRaises(ValueError, before.extractXml, validate=True) def testExtractXml_016(self): """ Extract document containing only a valid collect section, empty lists, validate=False. (Test a directory.) """ before = Config() before.collect = CollectConfig() before.collect.targetDir = "/opt/backup/collect" before.collect.archiveMode = "targz" before.collect.ignoreFile = ".cbignore" before.collect.collectDirs = [CollectDir("/etc", collectMode="daily"), ] beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.assertEqual(before, after) def testExtractXml_016a(self): """ Extract document containing only a valid collect section, empty lists, validate=False. (Test a file.) """ before = Config() before.collect = CollectConfig() before.collect.targetDir = "/opt/backup/collect" before.collect.archiveMode = "targz" before.collect.ignoreFile = ".cbignore" before.collect.collectFiles = [CollectFile("/etc", collectMode="daily"), ] beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.assertEqual(before, after) def testExtractXml_017(self): """ Extract document containing only a valid collect section, non-empty lists, validate=True. (Test a directory.) 
""" before = Config() before.collect = CollectConfig() before.collect.targetDir = "/opt/backup/collect" before.collect.archiveMode = "targz" before.collect.ignoreFile = ".cbignore" before.collect.absoluteExcludePaths = [ "/one", "/two", "/three", ] before.collect.excludePatterns = [ "pattern", ] before.collect.collectDirs = [CollectDir("/etc", collectMode="daily"), ] self.assertRaises(ValueError, before.extractXml, validate=True) def testExtractXml_017a(self): """ Extract document containing only a valid collect section, non-empty lists, validate=True. (Test a file.) """ before = Config() before.collect = CollectConfig() before.collect.targetDir = "/opt/backup/collect" before.collect.archiveMode = "targz" before.collect.ignoreFile = ".cbignore" before.collect.absoluteExcludePaths = [ "/one", "/two", "/three", ] before.collect.excludePatterns = [ "pattern", ] before.collect.collectFiles = [CollectFile("/etc", collectMode="daily"), ] self.assertRaises(ValueError, before.extractXml, validate=True) def testExtractXml_018(self): """ Extract document containing only a valid collect section, non-empty lists, validate=False. (Test a directory.) """ before = Config() before.collect = CollectConfig() before.collect.targetDir = "/opt/backup/collect" before.collect.archiveMode = "targz" before.collect.ignoreFile = ".cbignore" before.collect.absoluteExcludePaths = [ "/one", "/two", "/three", ] before.collect.excludePatterns = [ "pattern", ] before.collect.collectDirs = [CollectDir("/etc", collectMode="daily"), ] beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.assertEqual(before, after) def testExtractXml_018a(self): """ Extract document containing only a valid collect section, non-empty lists, validate=False. (Test a file.) 
""" before = Config() before.collect = CollectConfig() before.collect.targetDir = "/opt/backup/collect" before.collect.archiveMode = "targz" before.collect.ignoreFile = ".cbignore" before.collect.absoluteExcludePaths = [ "/one", "/two", "/three", ] before.collect.excludePatterns = [ "pattern", ] before.collect.collectFiles = [CollectFile("/etc", collectMode="daily"), ] beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.assertEqual(before, after) def testExtractXml_019(self): """ Extract document containing only an invalid collect section, validate=True. """ before = Config() before.collect = CollectConfig() self.assertRaises(ValueError, before.extractXml, validate=True) def testExtractXml_020(self): """ Extract document containing only an invalid collect section, validate=False. """ before = Config() before.collect = CollectConfig() beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.assertEqual(before, after) def testExtractXml_021(self): """ Extract document containing only a valid stage section, one empty list, validate=True. """ before = Config() before.stage = StageConfig() before.stage.targetDir = "/opt/backup/staging" before.stage.localPeers = [LocalPeer("machine1", "/opt/backup/collect"), ] before.stage.remotePeers = None self.assertRaises(ValueError, before.extractXml, validate=True) def testExtractXml_022(self): """ Extract document containing only a valid stage section, empty lists, validate=False. """ before = Config() before.stage = StageConfig() before.stage.targetDir = "/opt/backup/staging" before.stage.localPeers = [LocalPeer("machine1", "/opt/backup/collect"), ] before.stage.remotePeers = None beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.assertEqual(before, after) def testExtractXml_023(self): """ Extract document containing only a valid stage section, non-empty lists, validate=True. 
""" before = Config() before.stage = StageConfig() before.stage.targetDir = "/opt/backup/staging" before.stage.localPeers = [LocalPeer("machine1", "/opt/backup/collect"), ] before.stage.remotePeers = [RemotePeer("machine2", "/opt/backup/collect", remoteUser="backup"), ] self.assertRaises(ValueError, before.extractXml, validate=True) def testExtractXml_024(self): """ Extract document containing only a valid stage section, non-empty lists, validate=False. """ before = Config() before.stage = StageConfig() before.stage.targetDir = "/opt/backup/staging" before.stage.localPeers = [LocalPeer("machine1", "/opt/backup/collect"), ] before.stage.remotePeers = [RemotePeer("machine2", "/opt/backup/collect", remoteUser="backup"), ] beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.assertEqual(before, after) def testExtractXml_025(self): """ Extract document containing only an invalid stage section, validate=True. """ before = Config() before.stage = StageConfig() self.assertRaises(ValueError, before.extractXml, validate=True) def testExtractXml_026(self): """ Extract document containing only an invalid stage section, validate=False. """ before = Config() before.stage = StageConfig() beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.assertEqual(before, after) def testExtractXml_027(self): """ Extract document containing only a valid store section, validate=True. 
""" before = Config() before.store = StoreConfig() before.store.sourceDir = "/opt/backup/staging" before.store.devicePath = "/dev/cdrw" before.store.deviceScsiId = "0,0,0" before.store.driveSpeed = 4 before.store.mediaType = "cdrw-74" before.store.checkData = True before.store.checkMedia = True before.store.warnMidnite = True before.store.noEject = True before.store.refreshMediaDelay = 12 before.store.ejectDelay = 13 self.assertRaises(ValueError, before.extractXml, validate=True) def testExtractXml_028(self): """ Extract document containing only a valid store section, validate=False. """ before = Config() before.store = StoreConfig() before.store.sourceDir = "/opt/backup/staging" before.store.devicePath = "/dev/cdrw" before.store.deviceScsiId = "0,0,0" before.store.driveSpeed = 4 before.store.mediaType = "cdrw-74" before.store.checkData = True before.store.checkMedia = True before.store.warnMidnite = True before.store.noEject = True before.store.refreshMediaDelay = 12 before.store.ejectDelay = 13 beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.assertEqual(before, after) def testExtractXml_029(self): """ Extract document containing only an invalid store section, validate=True. """ before = Config() before.store = StoreConfig() self.assertRaises(ValueError, before.extractXml, validate=True) def testExtractXml_030(self): """ Extract document containing only an invalid store section, validate=False. """ before = Config() before.store = StoreConfig() beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.assertEqual(before, after) def testExtractXml_031(self): """ Extract document containing only a valid purge section, empty list, validate=True. 
""" before = Config() before.purge = PurgeConfig() self.assertRaises(ValueError, before.extractXml, validate=True) def testExtractXml_032(self): """ Extract document containing only a valid purge section, empty list, validate=False. """ before = Config() before.purge = PurgeConfig() beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.assertEqual(before, after) def testExtractXml_033(self): """ Extract document containing only a valid purge section, non-empty list, validate=True. """ before = Config() before.purge = PurgeConfig() before.purge.purgeDirs = [] before.purge.purgeDirs.append(PurgeDir(absolutePath="/whatever", retainDays=3)) self.assertRaises(ValueError, before.extractXml, validate=True) def testExtractXml_034(self): """ Extract document containing only a valid purge section, non-empty list, validate=False. """ before = Config() before.purge = PurgeConfig() before.purge.purgeDirs = [] before.purge.purgeDirs.append(PurgeDir(absolutePath="/whatever", retainDays=3)) beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.assertEqual(before, after) def testExtractXml_035(self): """ Extract document containing only an invalid purge section, validate=True. """ before = Config() before.purge = PurgeConfig() before.purge.purgeDirs = [] before.purge.purgeDirs.append(PurgeDir(absolutePath="/whatever")) self.assertRaises(ValueError, before.extractXml, validate=True) def testExtractXml_036(self): """ Extract document containing only an invalid purge section, validate=False. 
""" before = Config() before.purge = PurgeConfig() before.purge.purgeDirs = [] before.purge.purgeDirs.append(PurgeDir(absolutePath="/whatever")) beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.assertEqual(before, after) def testExtractXml_037(self): """ Extract complete document containing all required and optional fields, "index" extensions, validate=False. """ path = self.resources["cback.conf.15"] before = Config(xmlPath=path, validate=False) beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.assertEqual(before, after) def testExtractXml_037a(self): """ Extract complete document containing all required and optional fields, "dependency" extensions, validate=False. """ path = self.resources["cback.conf.20"] before = Config(xmlPath=path, validate=False) beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.assertEqual(before, after) def testExtractXml_038(self): """ Extract complete document containing all required and optional fields, "index" extensions, validate=True. """ path = self.resources["cback.conf.15"] before = Config(xmlPath=path, validate=True) beforeXml = before.extractXml(validate=True) after = Config(xmlData=beforeXml, validate=True) self.assertEqual(before, after) def testExtractXml_038a(self): """ Extract complete document containing all required and optional fields, "dependency" extensions, validate=True. """ path = self.resources["cback.conf.20"] before = Config(xmlPath=path, validate=True) beforeXml = before.extractXml(validate=True) after = Config(xmlData=beforeXml, validate=True) self.assertEqual(before, after) def testExtractXml_039(self): """ Extract a sample from Cedar Backup v1.x, which must still be valid, validate=False. 
""" path = self.resources["cback.conf.1"] before = Config(xmlPath=path, validate=False) beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.assertEqual(before, after) def testExtractXml_040(self): """ Extract a sample from Cedar Backup v1.x, which must still be valid, validate=True. """ path = self.resources["cback.conf.1"] before = Config(xmlPath=path, validate=True) beforeXml = before.extractXml(validate=True) after = Config(xmlData=beforeXml, validate=True) self.assertEqual(before, after) def testExtractXml_041(self): """ Extract complete document containing all required and optional fields, using a peers configuration section, validate=True. """ path = self.resources["cback.conf.21"] before = Config(xmlPath=path, validate=True) beforeXml = before.extractXml(validate=True) after = Config(xmlData=beforeXml, validate=True) self.assertEqual(before, after) ####################################################################### # Suite definition ####################################################################### def suite(): """Returns a suite containing all the test cases in this module.""" tests = [ ] tests.append(unittest.makeSuite(TestByteQuantity, 'test')) tests.append(unittest.makeSuite(TestActionDependencies, 'test')) tests.append(unittest.makeSuite(TestActionHook, 'test')) tests.append(unittest.makeSuite(TestPreActionHook, 'test')) tests.append(unittest.makeSuite(TestPostActionHook, 'test')) tests.append(unittest.makeSuite(TestBlankBehavior, 'test')) tests.append(unittest.makeSuite(TestExtendedAction, 'test')) tests.append(unittest.makeSuite(TestCommandOverride, 'test')) tests.append(unittest.makeSuite(TestCollectFile, 'test')) tests.append(unittest.makeSuite(TestCollectDir, 'test')) tests.append(unittest.makeSuite(TestPurgeDir, 'test')) tests.append(unittest.makeSuite(TestLocalPeer, 'test')) tests.append(unittest.makeSuite(TestRemotePeer, 'test')) tests.append(unittest.makeSuite(TestReferenceConfig, 
'test')) tests.append(unittest.makeSuite(TestExtensionsConfig, 'test')) tests.append(unittest.makeSuite(TestOptionsConfig, 'test')) tests.append(unittest.makeSuite(TestPeersConfig, 'test')) tests.append(unittest.makeSuite(TestCollectConfig, 'test')) tests.append(unittest.makeSuite(TestStageConfig, 'test')) tests.append(unittest.makeSuite(TestStoreConfig, 'test')) tests.append(unittest.makeSuite(TestPurgeConfig, 'test')) tests.append(unittest.makeSuite(TestConfig, 'test')) return unittest.TestSuite(tests) CedarBackup3-3.1.6/testcase/dvdwritertests.py # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2007,2010,2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Tests DVD writer functionality. 
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup3/writers/dvdwriter.py. Code Coverage ============= This module contains individual tests for the public classes implemented in dvdwriter.py. Unfortunately, it's rather difficult to test this code in an automated fashion, even if you have access to a physical DVD writer drive. It's even more difficult to test it if you are running on some build daemon (think of a Debian autobuilder) which can't be expected to have any hardware or any media that you could write to. Because of this, there aren't any tests below that actually cause DVD media to be written to. As a compromise, complicated parts of the implementation are written in terms of private static methods with well-defined behaviors. Normally, I prefer to only test the public interface to a class, but in this case, testing these few private methods will help give us some reasonable confidence in the code, even if we can't write a physical disc or can't run all of the tests. Naming Conventions ================== I prefer to avoid large unit tests which validate more than one piece of functionality, and I prefer to avoid using overly descriptive (read: long) test names, as well. Instead, I use lots of very small tests that each validate one specific thing. These small tests are then named with an index number, yielding something like C{testAddDir_001} or C{testValidate_010}. Each method has a docstring describing what it's supposed to accomplish. I feel that this makes it easier to judge how important a given failure is, and also makes it somewhat easier to diagnose and fix individual problems. Full vs. Reduced Tests ====================== Some Cedar Backup regression tests require a specialized environment in order to run successfully. 
This environment won't necessarily be available on every build system out there (for instance, on a Debian autobuilder). Because of this, the default behavior is to run a "reduced feature set" test suite that has no surprising system, kernel or network requirements. There are no special dependencies for these tests. @author Kenneth J. Pronovici """ ######################################################################## # Import modules and do runtime validations ######################################################################## import os import unittest import tempfile from CedarBackup3.writers.dvdwriter import MediaDefinition, MediaCapacity, DvdWriter from CedarBackup3.writers.dvdwriter import MEDIA_DVDPLUSR, MEDIA_DVDPLUSRW from CedarBackup3.testutil import findResources, buildPath, removedir, extractTar ####################################################################### # Module-wide configuration and constants ####################################################################### GB44 = (4.4*1024.0*1024.0*1024.0) # 4.4 GB GB44SECTORS = GB44/2048.0 # 4.4 GB in 2048-byte sectors DATA_DIRS = [ "./data", "./testcase/data", ] RESOURCES = [ "tree9.tar.gz", ] ####################################################################### # Test Case Classes ####################################################################### ############################ # TestMediaDefinition class ############################ class TestMediaDefinition(unittest.TestCase): """Tests for the MediaDefinition class.""" def testConstructor_001(self): """ Test the constructor with an invalid media type. """ self.assertRaises(ValueError, MediaDefinition, 100) def testConstructor_002(self): """ Test the constructor with the C{MEDIA_DVDPLUSR} media type. 
""" media = MediaDefinition(MEDIA_DVDPLUSR) self.assertEqual(MEDIA_DVDPLUSR, media.mediaType) self.assertEqual(False, media.rewritable) self.assertEqual(GB44SECTORS, media.capacity) def testConstructor_003(self): """ Test the constructor with the C{MEDIA_DVDPLUSRW} media type. """ media = MediaDefinition(MEDIA_DVDPLUSRW) self.assertEqual(MEDIA_DVDPLUSRW, media.mediaType) self.assertEqual(True, media.rewritable) self.assertEqual(GB44SECTORS, media.capacity) ########################## # TestMediaCapacity class ########################## class TestMediaCapacity(unittest.TestCase): """Tests for the MediaCapacity class.""" def testConstructor_001(self): """ Test the constructor with valid, zero values """ capacity = MediaCapacity(0.0, 0.0) self.assertEqual(0.0, capacity.bytesUsed) self.assertEqual(0.0, capacity.bytesAvailable) def testConstructor_002(self): """ Test the constructor with valid, non-zero values. """ capacity = MediaCapacity(1.1, 2.2) self.assertEqual(1.1, capacity.bytesUsed) self.assertEqual(2.2, capacity.bytesAvailable) def testConstructor_003(self): """ Test the constructor with bytesUsed that is not a float. """ self.assertRaises(ValueError, MediaCapacity, 0.0, "ken") def testConstructor_004(self): """ Test the constructor with bytesAvailable that is not a float. 
""" self.assertRaises(ValueError, MediaCapacity, "a", 0.0) ###################### # TestDvdWriter class ###################### class TestDvdWriter(unittest.TestCase): """Tests for the DvdWriter class.""" ################ # Setup methods ################ def setUp(self): try: self.tmpdir = tempfile.mkdtemp() self.resources = findResources(RESOURCES, DATA_DIRS) except Exception as e: self.fail(e) def tearDown(self): removedir(self.tmpdir) ################## # Utility methods ################## def extractTar(self, tarname): """Extracts a tarfile with a particular name.""" extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname]) def buildPath(self, components): """Builds a complete search path from a list of components.""" components.insert(0, self.tmpdir) return buildPath(components) def getFileContents(self, resource): """Gets contents of named resource as a list of strings.""" path = self.resources[resource] with open(path) as f: return f.readlines() ################### # Test constructor ################### def testConstructor_001(self): """ Test with an empty device. """ self.assertRaises(ValueError, DvdWriter, None) def testConstructor_002(self): """ Test with a device only. """ dvdwriter = DvdWriter("/dev/dvd", unittest=True) self.assertEqual("/dev/dvd", dvdwriter.device) self.assertEqual(None, dvdwriter.scsiId) self.assertEqual("/dev/dvd", dvdwriter.hardwareId) self.assertEqual(None, dvdwriter.driveSpeed) self.assertEqual(MEDIA_DVDPLUSRW, dvdwriter.media.mediaType) self.assertEqual(True, dvdwriter.deviceHasTray) self.assertEqual(True, dvdwriter.deviceCanEject) def testConstructor_003(self): """ Test with a device and valid SCSI id. 
""" dvdwriter = DvdWriter("/dev/dvd", scsiId="ATA:1,0,0", unittest=True) self.assertEqual("/dev/dvd", dvdwriter.device) self.assertEqual("ATA:1,0,0", dvdwriter.scsiId) self.assertEqual("/dev/dvd", dvdwriter.hardwareId) self.assertEqual(None, dvdwriter.driveSpeed) self.assertEqual(MEDIA_DVDPLUSRW, dvdwriter.media.mediaType) self.assertEqual(True, dvdwriter.deviceHasTray) self.assertEqual(True, dvdwriter.deviceCanEject) def testConstructor_004(self): """ Test with a device and valid drive speed. """ dvdwriter = DvdWriter("/dev/dvd", driveSpeed=3, unittest=True) self.assertEqual("/dev/dvd", dvdwriter.device) self.assertEqual(None, dvdwriter.scsiId) self.assertEqual("/dev/dvd", dvdwriter.hardwareId) self.assertEqual(3, dvdwriter.driveSpeed) self.assertEqual(MEDIA_DVDPLUSRW, dvdwriter.media.mediaType) self.assertEqual(True, dvdwriter.deviceHasTray) self.assertEqual(True, dvdwriter.deviceCanEject) def testConstructor_005(self): """ Test with a device with media type MEDIA_DVDPLUSR. """ dvdwriter = DvdWriter("/dev/dvd", mediaType=MEDIA_DVDPLUSR, unittest=True) self.assertEqual("/dev/dvd", dvdwriter.device) self.assertEqual(None, dvdwriter.scsiId) self.assertEqual("/dev/dvd", dvdwriter.hardwareId) self.assertEqual(None, dvdwriter.driveSpeed) self.assertEqual(MEDIA_DVDPLUSR, dvdwriter.media.mediaType) self.assertEqual(True, dvdwriter.deviceHasTray) self.assertEqual(True, dvdwriter.deviceCanEject) def testConstructor_006(self): """ Test with a device with media type MEDIA_DVDPLUSRW. """ dvdwriter = DvdWriter("/dev/dvd", mediaType=MEDIA_DVDPLUSR, unittest=True) self.assertEqual("/dev/dvd", dvdwriter.device) self.assertEqual(None, dvdwriter.scsiId) self.assertEqual("/dev/dvd", dvdwriter.hardwareId) self.assertEqual(None, dvdwriter.driveSpeed) self.assertEqual(MEDIA_DVDPLUSR, dvdwriter.media.mediaType) self.assertEqual(True, dvdwriter.deviceHasTray) self.assertEqual(True, dvdwriter.deviceCanEject) def testConstructor_007(self): """ Test with a device and invalid SCSI id. 
""" dvdwriter = DvdWriter("/dev/dvd", scsiId="00000000", unittest=True) self.assertEqual("/dev/dvd", dvdwriter.device) self.assertEqual("00000000", dvdwriter.scsiId) self.assertEqual("/dev/dvd", dvdwriter.hardwareId) self.assertEqual(None, dvdwriter.driveSpeed) self.assertEqual(MEDIA_DVDPLUSRW, dvdwriter.media.mediaType) self.assertEqual(True, dvdwriter.deviceHasTray) self.assertEqual(True, dvdwriter.deviceCanEject) def testConstructor_008(self): """ Test with a device and invalid drive speed. """ self.assertRaises(ValueError, DvdWriter, "/dev/dvd", driveSpeed="KEN", unittest=True) def testConstructor_009(self): """ Test with a device and invalid media type. """ self.assertRaises(ValueError, DvdWriter, "/dev/dvd", mediaType=999, unittest=True) def testConstructor_010(self): """ Test with all valid parameters, but no device, unittest=True. """ self.assertRaises(ValueError, DvdWriter, None, "ATA:1,0,0", 1, MEDIA_DVDPLUSRW, unittest=True) def testConstructor_011(self): """ Test with all valid parameters, but no device, unittest=False. """ self.assertRaises(ValueError, DvdWriter, None, "ATA:1,0,0", 1, MEDIA_DVDPLUSRW, unittest=False) def testConstructor_012(self): """ Test with all valid parameters, and an invalid device (not absolute path), unittest=True. """ self.assertRaises(ValueError, DvdWriter, "dev/dvd", "ATA:1,0,0", 1, MEDIA_DVDPLUSRW, unittest=True) def testConstructor_013(self): """ Test with all valid parameters, and an invalid device (not absolute path), unittest=False. """ self.assertRaises(ValueError, DvdWriter, "dev/dvd", "ATA:1,0,0", 1, MEDIA_DVDPLUSRW, unittest=False) def testConstructor_014(self): """ Test with all valid parameters, and an invalid device (path does not exist), unittest=False. """ self.assertRaises(ValueError, DvdWriter, "/dev/bogus", "ATA:1,0,0", 1, MEDIA_DVDPLUSRW, unittest=False) def testConstructor_015(self): """ Test with all valid parameters. 
""" dvdwriter = DvdWriter("/dev/dvd", "ATA:1,0,0", 1, MEDIA_DVDPLUSR, noEject=False, unittest=True) self.assertEqual("/dev/dvd", dvdwriter.device) self.assertEqual("ATA:1,0,0", dvdwriter.scsiId) self.assertEqual("/dev/dvd", dvdwriter.hardwareId) self.assertEqual(1, dvdwriter.driveSpeed) self.assertEqual(MEDIA_DVDPLUSR, dvdwriter.media.mediaType) self.assertEqual(True, dvdwriter.deviceHasTray) self.assertEqual(True, dvdwriter.deviceCanEject) def testConstructor_016(self): """ Test with all valid parameters. """ dvdwriter = DvdWriter("/dev/dvd", "ATA:1,0,0", 1, MEDIA_DVDPLUSR, noEject=True, unittest=True) self.assertEqual("/dev/dvd", dvdwriter.device) self.assertEqual("ATA:1,0,0", dvdwriter.scsiId) self.assertEqual("/dev/dvd", dvdwriter.hardwareId) self.assertEqual(1, dvdwriter.driveSpeed) self.assertEqual(MEDIA_DVDPLUSR, dvdwriter.media.mediaType) self.assertEqual(False, dvdwriter.deviceHasTray) self.assertEqual(False, dvdwriter.deviceCanEject) ###################### # Test isRewritable() ###################### def testIsRewritable_001(self): """ Test with MEDIA_DVDPLUSR. """ dvdwriter = DvdWriter("/dev/dvd", mediaType=MEDIA_DVDPLUSR, unittest=True) self.assertEqual(False, dvdwriter.isRewritable()) def testIsRewritable_002(self): """ Test with MEDIA_DVDPLUSRW. """ dvdwriter = DvdWriter("/dev/dvd", mediaType=MEDIA_DVDPLUSRW, unittest=True) self.assertEqual(True, dvdwriter.isRewritable()) ######################### # Test initializeImage() ######################### def testInitializeImage_001(self): """ Test with newDisc=False, tmpdir=None. """ dvdwriter = DvdWriter("/dev/dvd", unittest=True) dvdwriter.initializeImage(False, None) self.assertEqual(False, dvdwriter._image.newDisc) self.assertEqual(None, dvdwriter._image.tmpdir) self.assertEqual({}, dvdwriter._image.entries) def testInitializeImage_002(self): """ Test with newDisc=True, tmpdir not None. 
""" dvdwriter = DvdWriter("/dev/dvd", unittest=True) dvdwriter.initializeImage(True, "/path/to/somewhere") self.assertEqual(True, dvdwriter._image.newDisc) self.assertEqual("/path/to/somewhere", dvdwriter._image.tmpdir) self.assertEqual({}, dvdwriter._image.entries) ####################### # Test addImageEntry() ####################### def testAddImageEntry_001(self): """ Add a valid path with no graft point, before calling initializeImage(). """ self.extractTar("tree9") path = self.buildPath([ "tree9", "dir002", ]) self.assertTrue(os.path.exists(path)) dvdwriter = DvdWriter("/dev/dvd", unittest=True) self.assertRaises(ValueError, dvdwriter.addImageEntry, path, None) def testAddImageEntry_002(self): """ Add a valid path with a graft point, before calling initializeImage(). """ self.extractTar("tree9") path = self.buildPath([ "tree9", "dir002", ]) self.assertTrue(os.path.exists(path)) dvdwriter = DvdWriter("/dev/dvd", unittest=True) self.assertRaises(ValueError, dvdwriter.addImageEntry, path, "ken") def testAddImageEntry_003(self): """ Add a non-existent path with no graft point, before calling initializeImage(). """ self.extractTar("tree9") path = self.buildPath([ "tree9", "bogus", ]) self.assertFalse(os.path.exists(path)) dvdwriter = DvdWriter("/dev/dvd", unittest=True) self.assertRaises(ValueError, dvdwriter.addImageEntry, path, None) def testAddImageEntry_004(self): """ Add a non-existent path with a graft point, before calling initializeImage(). """ self.extractTar("tree9") path = self.buildPath([ "tree9", "bogus", ]) self.assertFalse(os.path.exists(path)) dvdwriter = DvdWriter("/dev/dvd", unittest=True) self.assertRaises(ValueError, dvdwriter.addImageEntry, path, "ken") def testAddImageEntry_005(self): """ Add a valid path with no graft point, after calling initializeImage(). 
""" self.extractTar("tree9") path = self.buildPath([ "tree9", "dir002", ]) self.assertTrue(os.path.exists(path)) dvdwriter = DvdWriter("/dev/dvd", unittest=True) dvdwriter.initializeImage(False, None) dvdwriter.addImageEntry(path, None) self.assertEqual({ path:None, }, dvdwriter._image.entries) def testAddImageEntry_006(self): """ Add a valid path with a graft point, after calling initializeImage(). """ self.extractTar("tree9") path = self.buildPath([ "tree9", "dir002", ]) self.assertTrue(os.path.exists(path)) dvdwriter = DvdWriter("/dev/dvd", unittest=True) dvdwriter.initializeImage(False, None) dvdwriter.addImageEntry(path, "ken") self.assertEqual({ path:"ken", }, dvdwriter._image.entries) def testAddImageEntry_007(self): """ Add a non-existent path with no graft point, after calling initializeImage(). """ self.extractTar("tree9") path = self.buildPath([ "tree9", "bogus", ]) self.assertFalse(os.path.exists(path)) dvdwriter = DvdWriter("/dev/dvd", unittest=True) dvdwriter.initializeImage(False, None) self.assertRaises(ValueError, dvdwriter.addImageEntry, path, None) def testAddImageEntry_008(self): """ Add a non-existent path with a graft point, after calling initializeImage(). """ self.extractTar("tree9") path = self.buildPath([ "tree9", "bogus", ]) self.assertFalse(os.path.exists(path)) dvdwriter = DvdWriter("/dev/dvd", unittest=True) dvdwriter.initializeImage(False, None) self.assertRaises(ValueError, dvdwriter.addImageEntry, path, "ken") def testAddImageEntry_009(self): """ Add the same path several times. 
""" self.extractTar("tree9") path = self.buildPath([ "tree9", "dir002", ]) self.assertTrue(os.path.exists(path)) dvdwriter = DvdWriter("/dev/dvd", unittest=True) dvdwriter.initializeImage(False, None) dvdwriter.addImageEntry(path, "ken") self.assertEqual({ path:"ken", }, dvdwriter._image.entries) dvdwriter.addImageEntry(path, "ken") self.assertEqual({ path:"ken", }, dvdwriter._image.entries) dvdwriter.addImageEntry(path, "ken") self.assertEqual({ path:"ken", }, dvdwriter._image.entries) dvdwriter.addImageEntry(path, "ken") self.assertEqual({ path:"ken", }, dvdwriter._image.entries) def testAddImageEntry_010(self): """ Add several paths. """ self.extractTar("tree9") path1 = self.buildPath([ "tree9", "dir001", ]) path2 = self.buildPath([ "tree9", "dir002", ]) path3 = self.buildPath([ "tree9", "dir001", "dir001", ]) self.assertTrue(os.path.exists(path1)) self.assertTrue(os.path.exists(path2)) self.assertTrue(os.path.exists(path3)) dvdwriter = DvdWriter("/dev/dvd", unittest=True) dvdwriter.initializeImage(False, None) dvdwriter.addImageEntry(path1, None) self.assertEqual({ path1:None, }, dvdwriter._image.entries) dvdwriter.addImageEntry(path2, "ken") self.assertEqual({ path1:None, path2:"ken", }, dvdwriter._image.entries) dvdwriter.addImageEntry(path3, "another") self.assertEqual({ path1:None, path2:"ken", path3:"another", }, dvdwriter._image.entries) ############################ # Test _searchForOverburn() ############################ def testSearchForOverburn_001(self): """ Test with output=None. """ output = None DvdWriter._searchForOverburn(output) # no exception should be thrown def testSearchForOverburn_002(self): """ Test with output=[]. """ output = [] DvdWriter._searchForOverburn(output) # no exception should be thrown def testSearchForOverburn_003(self): """ Test with one-line output, not containing the pattern. 
""" output = [ "This line does not contain the pattern", ] DvdWriter._searchForOverburn(output) # no exception should be thrown output = [ ":-( /dev/cdrom: blocks are free, to be written!", ] DvdWriter._searchForOverburn(output) # no exception should be thrown output = [ ":-) /dev/cdrom: 89048 blocks are free, 2033746 to be written!", ] DvdWriter._searchForOverburn(output) # no exception should be thrown output = [ ":-( /dev/cdrom: 894048blocks are free, 2033746to be written!", ] DvdWriter._searchForOverburn(output) # no exception should be thrown def testSearchForOverburn_004(self): """ Test with one-line output(s), containing the pattern. """ output = [ ":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!", ] self.assertRaises(IOError, DvdWriter._searchForOverburn, output) output = [ ":-( /dev/cdrom: XXXX blocks are free, XXXX to be written!", ] self.assertRaises(IOError, DvdWriter._searchForOverburn, output) output = [ ":-( /dev/cdrom: 1 blocks are free, 1 to be written!", ] self.assertRaises(IOError, DvdWriter._searchForOverburn, output) output = [ ":-( /dev/cdrom: 0 blocks are free, 0 to be written!", ] self.assertRaises(IOError, DvdWriter._searchForOverburn, output) output = [ ":-( /dev/dvd: 0 blocks are free, 0 to be written!", ] self.assertRaises(IOError, DvdWriter._searchForOverburn, output) output = [ ":-( /dev/writer: 0 blocks are free, 0 to be written!", ] self.assertRaises(IOError, DvdWriter._searchForOverburn, output) output = [ ":-( bogus: 0 blocks are free, 0 to be written!", ] self.assertRaises(IOError, DvdWriter._searchForOverburn, output) def testSearchForOverburn_005(self): """ Test with multi-line output, not containing the pattern. 
""" output = [] output.append("Executing 'mkisofs -C 973744,1401056 -M /dev/fd/3 -r -graft-points music4/=music | builtin_dd of=/dev/cdrom obs=32k seek=87566'") output.append("Rock Ridge signatures found") output.append("Using THE_K000 for music4/The_Kings_Singers (The_Kingston_Trio)") output.append("Using COCKT000 for music/Various_Artists/Cocktail_Classics_-_Beethovens_Fifth_and_Others (Cocktail_Classics_-_Pachelbels_Canon_and_Others)") output.append("Using THE_V000 for music/Brahms/The_Violin_Sonatas (The_Viola_Sonatas) Using COMPL000 for music/Gershwin/Complete_Gershwin_2 (Complete_Gershwin_1)") output.append("Using SELEC000.MP3;1 for music/Marquette_Chorus/Selected_Christmas_Carols_For_Double_Choir.mp3 (Selected_Choruses_from_The_Lark.mp3)") output.append("Using SELEC001.MP3;1 for music/Marquette_Chorus/Selected_Choruses_from_The_Lark.mp3 (Selected_Choruses_from_Messiah.mp3)") output.append("Using IN_TH000.MP3;1 for music/Marquette_Chorus/In_the_Bleak_Midwinter.mp3 (In_the_Beginning.mp3) Using AFRIC000.MP3;1 for music/Marquette_Chorus/African_Noel-tb.mp3 (African_Noel-satb.mp3)") DvdWriter._searchForOverburn(output) # no exception should be thrown") def testSearchForOverburn_006(self): """ Test with multi-line output, containing the pattern at the top. 
""" output = [] output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") output.append("Executing 'mkisofs -C 973744,1401056 -M /dev/fd/3 -r -graft-points music4/=music | builtin_dd of=/dev/cdrom obs=32k seek=87566'") output.append("Rock Ridge signatures found") output.append("Using THE_K000 for music4/The_Kings_Singers (The_Kingston_Trio)") output.append("Using COCKT000 for music/Various_Artists/Cocktail_Classics_-_Beethovens_Fifth_and_Others (Cocktail_Classics_-_Pachelbels_Canon_and_Others)") output.append("Using THE_V000 for music/Brahms/The_Violin_Sonatas (The_Viola_Sonatas) Using COMPL000 for music/Gershwin/Complete_Gershwin_2 (Complete_Gershwin_1)") output.append("Using SELEC000.MP3;1 for music/Marquette_Chorus/Selected_Christmas_Carols_For_Double_Choir.mp3 (Selected_Choruses_from_The_Lark.mp3)") output.append("Using SELEC001.MP3;1 for music/Marquette_Chorus/Selected_Choruses_from_The_Lark.mp3 (Selected_Choruses_from_Messiah.mp3)") output.append("Using IN_TH000.MP3;1 for music/Marquette_Chorus/In_the_Bleak_Midwinter.mp3 (In_the_Beginning.mp3) Using AFRIC000.MP3;1 for music/Marquette_Chorus/African_Noel-tb.mp3 (African_Noel-satb.mp3)") self.assertRaises(IOError, DvdWriter._searchForOverburn, output) def testSearchForOverburn_007(self): """ Test with multi-line output, containing the pattern at the bottom. 
""" output = [] output.append("Executing 'mkisofs -C 973744,1401056 -M /dev/fd/3 -r -graft-points music4/=music | builtin_dd of=/dev/cdrom obs=32k seek=87566'") output.append("Rock Ridge signatures found") output.append("Using THE_K000 for music4/The_Kings_Singers (The_Kingston_Trio)") output.append("Using COCKT000 for music/Various_Artists/Cocktail_Classics_-_Beethovens_Fifth_and_Others (Cocktail_Classics_-_Pachelbels_Canon_and_Others)") output.append("Using THE_V000 for music/Brahms/The_Violin_Sonatas (The_Viola_Sonatas) Using COMPL000 for music/Gershwin/Complete_Gershwin_2 (Complete_Gershwin_1)") output.append("Using SELEC000.MP3;1 for music/Marquette_Chorus/Selected_Christmas_Carols_For_Double_Choir.mp3 (Selected_Choruses_from_The_Lark.mp3)") output.append("Using SELEC001.MP3;1 for music/Marquette_Chorus/Selected_Choruses_from_The_Lark.mp3 (Selected_Choruses_from_Messiah.mp3)") output.append("Using IN_TH000.MP3;1 for music/Marquette_Chorus/In_the_Bleak_Midwinter.mp3 (In_the_Beginning.mp3) Using AFRIC000.MP3;1 for music/Marquette_Chorus/African_Noel-tb.mp3 (African_Noel-satb.mp3)") output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") self.assertRaises(IOError, DvdWriter._searchForOverburn, output) def testSearchForOverburn_008(self): """ Test with multi-line output, containing the pattern in the middle. 
""" output = [] output.append("Executing 'mkisofs -C 973744,1401056 -M /dev/fd/3 -r -graft-points music4/=music | builtin_dd of=/dev/cdrom obs=32k seek=87566'") output.append("Rock Ridge signatures found") output.append("Using THE_K000 for music4/The_Kings_Singers (The_Kingston_Trio)") output.append("Using COCKT000 for music/Various_Artists/Cocktail_Classics_-_Beethovens_Fifth_and_Others (Cocktail_Classics_-_Pachelbels_Canon_and_Others)") output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") output.append("Using THE_V000 for music/Brahms/The_Violin_Sonatas (The_Viola_Sonatas) Using COMPL000 for music/Gershwin/Complete_Gershwin_2 (Complete_Gershwin_1)") output.append("Using SELEC000.MP3;1 for music/Marquette_Chorus/Selected_Christmas_Carols_For_Double_Choir.mp3 (Selected_Choruses_from_The_Lark.mp3)") output.append("Using SELEC001.MP3;1 for music/Marquette_Chorus/Selected_Choruses_from_The_Lark.mp3 (Selected_Choruses_from_Messiah.mp3)") output.append("Using IN_TH000.MP3;1 for music/Marquette_Chorus/In_the_Bleak_Midwinter.mp3 (In_the_Beginning.mp3) Using AFRIC000.MP3;1 for music/Marquette_Chorus/African_Noel-tb.mp3 (African_Noel-satb.mp3)") self.assertRaises(IOError, DvdWriter._searchForOverburn, output) def testSearchForOverburn_009(self): """ Test with multi-line output, containing the pattern several times. 
""" output = [] output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") output.append("Executing 'mkisofs -C 973744,1401056 -M /dev/fd/3 -r -graft-points music4/=music | builtin_dd of=/dev/cdrom obs=32k seek=87566'") output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") output.append("Rock Ridge signatures found") output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") output.append("Using THE_K000 for music4/The_Kings_Singers (The_Kingston_Trio)") output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") output.append("Using COCKT000 for music/Various_Artists/Cocktail_Classics_-_Beethovens_Fifth_and_Others (Cocktail_Classics_-_Pachelbels_Canon_and_Others)") output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") output.append("Using THE_V000 for music/Brahms/The_Violin_Sonatas (The_Viola_Sonatas) Using COMPL000 for music/Gershwin/Complete_Gershwin_2 (Complete_Gershwin_1)") output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") output.append("Using SELEC000.MP3;1 for music/Marquette_Chorus/Selected_Christmas_Carols_For_Double_Choir.mp3 (Selected_Choruses_from_The_Lark.mp3)") output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") output.append("Using SELEC001.MP3;1 for music/Marquette_Chorus/Selected_Choruses_from_The_Lark.mp3 (Selected_Choruses_from_Messiah.mp3)") output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") output.append("Using IN_TH000.MP3;1 for music/Marquette_Chorus/In_the_Bleak_Midwinter.mp3 (In_the_Beginning.mp3) Using AFRIC000.MP3;1 for music/Marquette_Chorus/African_Noel-tb.mp3 (African_Noel-satb.mp3)") output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") self.assertRaises(IOError, DvdWriter._searchForOverburn, output) ########################### # 
Test _parseSectorsUsed() ########################### def testParseSectorsUsed_001(self): """ Test with output=None. """ output = None sectorsUsed = DvdWriter._parseSectorsUsed(output) self.assertEqual(0.0, sectorsUsed) def testParseSectorsUsed_002(self): """ Test with output=[]. """ output = [] sectorsUsed = DvdWriter._parseSectorsUsed(output) self.assertEqual(0.0, sectorsUsed) def testParseSectorsUsed_003(self): """ Test with one-line output, not containing the pattern. """ output = [ "This line does not contain the pattern", ] sectorsUsed = DvdWriter._parseSectorsUsed(output) self.assertEqual(0.0, sectorsUsed) def testParseSectorsUsed_004(self): """ Test with one-line output(s), containing the pattern. """ output = [ "'seek=10'", ] sectorsUsed = DvdWriter._parseSectorsUsed(output) self.assertEqual(10.0*16.0, sectorsUsed) output = [ "' seek= 10 '", ] sectorsUsed = DvdWriter._parseSectorsUsed(output) self.assertEqual(10.0*16.0, sectorsUsed) output = [ "Executing 'mkisofs -C 973744,1401056 -M /dev/fd/3 -r -graft-points music4/=music | builtin_dd of=/dev/cdrom obs=32k seek=87566'", ] sectorsUsed = DvdWriter._parseSectorsUsed(output) self.assertEqual(87566*16.0, sectorsUsed) def testParseSectorsUsed_005(self): """ Test with real growisofs output. 
""" output = [] output.append("Executing 'mkisofs -C 973744,1401056 -M /dev/fd/3 -r -graft-points music4/=music | builtin_dd of=/dev/cdrom obs=32k seek=87566'") output.append("Rock Ridge signatures found") output.append("Using THE_K000 for music4/The_Kings_Singers (The_Kingston_Trio)") output.append("Using COCKT000 for music/Various_Artists/Cocktail_Classics_-_Beethovens_Fifth_and_Others (Cocktail_Classics_-_Pachelbels_Canon_and_Others)") output.append("Using THE_V000 for music/Brahms/The_Violin_Sonatas (The_Viola_Sonatas) Using COMPL000 for music/Gershwin/Complete_Gershwin_2 (Complete_Gershwin_1)") output.append("Using SELEC000.MP3;1 for music/Marquette_Chorus/Selected_Christmas_Carols_For_Double_Choir.mp3 (Selected_Choruses_from_The_Lark.mp3)") output.append("Using SELEC001.MP3;1 for music/Marquette_Chorus/Selected_Choruses_from_The_Lark.mp3 (Selected_Choruses_from_Messiah.mp3)") output.append("Using IN_TH000.MP3;1 for music/Marquette_Chorus/In_the_Bleak_Midwinter.mp3 (In_the_Beginning.mp3) Using AFRIC000.MP3;1 for music/Marquette_Chorus/African_Noel-tb.mp3 (African_Noel-satb.mp3)") sectorsUsed = DvdWriter._parseSectorsUsed(output) self.assertEqual(87566*16.0, sectorsUsed) ######################### # Test _buildWriteArgs() ######################### def testBuildWriteArgs_001(self): """ Test with newDisc=False, hardwareId="/dev/dvd", driveSpeed=None, imagePath=None, entries=None, mediaLabel=None,dryRun=False. """ newDisc = False hardwareId = "/dev/dvd" driveSpeed = None imagePath = None entries = None mediaLabel = None dryRun = False self.assertRaises(ValueError, DvdWriter._buildWriteArgs, newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) def testBuildWriteArgs_002(self): """ Test with newDisc=False, hardwareId="/dev/dvd", driveSpeed=None, imagePath=None, entries=None, mediaLabel=None, dryRun=True. 
""" newDisc = False hardwareId = "/dev/dvd" driveSpeed = None imagePath = None entries = None mediaLabel = None dryRun = True self.assertRaises(ValueError, DvdWriter._buildWriteArgs, newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) def testBuildWriteArgs_003(self): """ Test with newDisc=False, hardwareId="/dev/dvd", driveSpeed=None, imagePath="/path/to/image", entries=None, mediaLabel=None, dryRun=False. """ newDisc = False hardwareId = "/dev/dvd" driveSpeed = None imagePath = "/path/to/image" entries = None mediaLabel = None dryRun = False expected = [ "-use-the-force-luke=tty", "-M", "/dev/dvd=/path/to/image", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.assertEqual(actual, expected) def testBuildWriteArgs_004(self): """ Test with newDisc=False, hardwareId="/dev/dvd", driveSpeed=None, imagePath="/path/to/image", entries=None, mediaLabel=None, dryRun=True. """ newDisc = False hardwareId = "/dev/dvd" driveSpeed = None imagePath = "/path/to/image" entries = None mediaLabel = None dryRun = True expected = [ "-use-the-force-luke=tty", "-dry-run", "-M", "/dev/dvd=/path/to/image", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.assertEqual(actual, expected) def testBuildWriteArgs_005(self): """ Test with newDisc=True, hardwareId="/dev/dvd", driveSpeed=None, imagePath="/path/to/image", entries=None, mediaLabel=None, dryRun=False. 
""" newDisc = True hardwareId = "/dev/dvd" driveSpeed = None imagePath = "/path/to/image" entries = None mediaLabel = None dryRun = False expected = [ "-use-the-force-luke=tty", "-Z", "/dev/dvd=/path/to/image", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.assertEqual(actual, expected) def testBuildWriteArgs_006(self): """ Test with newDisc=True, hardwareId="/dev/dvd", driveSpeed=None, imagePath="/path/to/image", entries=None, mediaLabel=None, dryRun=True. """ newDisc = True hardwareId = "/dev/dvd" driveSpeed = None imagePath = "/path/to/image" entries = None mediaLabel = None dryRun = True expected = [ "-use-the-force-luke=tty", "-dry-run", "-Z", "/dev/dvd=/path/to/image", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.assertEqual(actual, expected) def testBuildWriteArgs_007(self): """ Test with newDisc=False, hardwareId="/dev/dvd", driveSpeed=1, imagePath="/path/to/image", entries=None, mediaLabel=None, dryRun=False. """ newDisc = False hardwareId = "/dev/dvd" driveSpeed = 1 imagePath = "/path/to/image" entries = None mediaLabel = None dryRun = False expected = [ "-use-the-force-luke=tty", "-speed=1", "-M", "/dev/dvd=/path/to/image", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.assertEqual(actual, expected) def testBuildWriteArgs_008(self): """ Test with newDisc=False, hardwareId="/dev/dvd", driveSpeed=2, imagePath="/path/to/image", entries=None, mediaLabel=None, dryRun=True. 
""" newDisc = False hardwareId = "/dev/dvd" driveSpeed = 2 imagePath = "/path/to/image" entries = None mediaLabel = None dryRun = True expected = [ "-use-the-force-luke=tty", "-dry-run", "-speed=2", "-M", "/dev/dvd=/path/to/image", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.assertEqual(actual, expected) def testBuildWriteArgs_009(self): """ Test with newDisc=True, hardwareId="/dev/dvd", driveSpeed=3, imagePath="/path/to/image", entries=None, mediaLabel=None, dryRun=False. """ newDisc = True hardwareId = "/dev/dvd" driveSpeed = 3 imagePath = "/path/to/image" entries = None mediaLabel = None dryRun = False expected = [ "-use-the-force-luke=tty", "-speed=3", "-Z", "/dev/dvd=/path/to/image", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.assertEqual(actual, expected) def testBuildWriteArgs_010(self): """ Test with newDisc=True, hardwareId="/dev/dvd", driveSpeed=4, imagePath="/path/to/image", entries=None, mediaLabel=None, dryRun=True. """ newDisc = True hardwareId = "/dev/dvd" driveSpeed = 4 imagePath = "/path/to/image" entries = None mediaLabel = None dryRun = True expected = [ "-use-the-force-luke=tty", "-dry-run", "-speed=4", "-Z", "/dev/dvd=/path/to/image", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.assertEqual(actual, expected) def testBuildWriteArgs_011(self): """ Test with newDisc=False, hardwareId="/dev/dvd", driveSpeed=None, imagePath=None, entries=, mediaLabel=None, dryRun=False. 
""" newDisc = False hardwareId = "/dev/dvd" driveSpeed = None imagePath = None entries = { "path1":None, } mediaLabel = None dryRun = False expected = [ "-use-the-force-luke=tty", "-M", "/dev/dvd", "-r", "-graft-points", "path1", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.assertEqual(actual, expected) def testBuildWriteArgs_012(self): """ Test with newDisc=False, hardwareId="/dev/dvd", driveSpeed=None, imagePath=None, entries=, mediaLabel=None, dryRun=True. """ newDisc = False hardwareId = "/dev/dvd" driveSpeed = None imagePath = None entries = { "path1":None, } mediaLabel = None dryRun = True expected = [ "-use-the-force-luke=tty", "-dry-run", "-M", "/dev/dvd", "-r", "-graft-points", "path1", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.assertEqual(actual, expected) def testBuildWriteArgs_013(self): """ Test with newDisc=True, hardwareId="/dev/dvd", driveSpeed=None, imagePath=None, entries=, mediaLabel=None, dryRun=False. """ newDisc = True hardwareId = "/dev/dvd" driveSpeed = None imagePath = None entries = { "path1":None, "path2":"graft2", "path3":"/path/to/graft3", } mediaLabel = None dryRun = False expected = [ "-use-the-force-luke=tty", "-Z", "/dev/dvd", "-r", "-graft-points", "path1", "graft2/=path2", "path/to/graft3/=path3", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.assertEqual(actual, expected) def testBuildWriteArgs_014(self): """ Test with newDisc=True, hardwareId="/dev/dvd", driveSpeed=None, imagePath=None, entries=, mediaLabel=None, dryRun=True. 
""" newDisc = True hardwareId = "/dev/dvd" driveSpeed = None imagePath = None entries = { "path1":None, "path2":"graft2", "path3":"/path/to/graft3", } mediaLabel = None dryRun = True expected = [ "-use-the-force-luke=tty", "-dry-run", "-Z", "/dev/dvd", "-r", "-graft-points", "path1", "graft2/=path2", "path/to/graft3/=path3", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.assertEqual(actual, expected) def testBuildWriteArgs_015(self): """ Test with newDisc=False, hardwareId="/dev/dvd", driveSpeed=1, imagePath=None, entries=, mediaLabel=None, dryRun=False. """ newDisc = False hardwareId = "/dev/dvd" driveSpeed = 1 imagePath = None entries = { "path1":None, "path2":"graft2", } mediaLabel = None dryRun = False expected = [ "-use-the-force-luke=tty", "-speed=1", "-M", "/dev/dvd", "-r", "-graft-points", "path1", "graft2/=path2", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.assertEqual(actual, expected) def testBuildWriteArgs_016(self): """ Test with newDisc=False, hardwareId="/dev/dvd", driveSpeed=2, imagePath=None, entries=, mediaLabel=None, dryRun=True. """ newDisc = False hardwareId = "/dev/dvd" driveSpeed = 2 imagePath = None entries = { "path1":None, "path2":"graft2", } mediaLabel = None dryRun = True expected = [ "-use-the-force-luke=tty", "-dry-run", "-speed=2", "-M", "/dev/dvd", "-r", "-graft-points", "path1", "graft2/=path2", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.assertEqual(actual, expected) def testBuildWriteArgs_017(self): """ Test with newDisc=True, hardwareId="/dev/dvd", driveSpeed=3, imagePath=None, entries=, mediaLabel=None, dryRun=False. 
""" newDisc = True hardwareId = "/dev/dvd" driveSpeed = 3 imagePath = None entries = { "path1":None, "/path/to/path2":None, "/path/to/path3/":"/path/to/graft3/", } mediaLabel = None dryRun = False expected = [ "-use-the-force-luke=tty", "-speed=3", "-Z", "/dev/dvd", "-r", "-graft-points", "/path/to/path2", "path/to/graft3/=/path/to/path3/", "path1", ] # sorted order actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.assertEqual(actual, expected) def testBuildWriteArgs_018(self): """ Test with newDisc=True, hardwareId="/dev/dvd", driveSpeed=4, imagePath=None, entries=, mediaLabel=None, dryRun=True. """ newDisc = True hardwareId = "/dev/dvd" driveSpeed = 4 imagePath = None entries = { "path1":None, "/path/to/path2":None, "/path/to/path3/":"/path/to/graft3/", } mediaLabel = None dryRun = True expected = [ "-use-the-force-luke=tty", "-dry-run", "-speed=4", "-Z", "/dev/dvd", "-r", "-graft-points", "/path/to/path2", "path/to/graft3/=/path/to/path3/", "path1", ] # sorted order actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.assertEqual(actual, expected) def testBuildWriteArgs_019(self): """ Test with newDisc=True, hardwareId="/dev/dvd", driveSpeed=3, imagePath="/path/to/image", entries=None, mediaLabel="BACKUP", dryRun=False. """ newDisc = True hardwareId = "/dev/dvd" driveSpeed = 3 imagePath = "/path/to/image" entries = None mediaLabel = "BACKUP" dryRun = False expected = [ "-use-the-force-luke=tty", "-speed=3", "-Z", "/dev/dvd=/path/to/image", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.assertEqual(actual, expected) def testBuildWriteArgs_020(self): """ Test with newDisc=True, hardwareId="/dev/dvd", driveSpeed=4, imagePath=None, entries=, mediaLabel="BACKUP", dryRun=True. 
""" newDisc = True hardwareId = "/dev/dvd" driveSpeed = 4 imagePath = None entries = { "path1":None, "/path/to/path2":None, "/path/to/path3/":"/path/to/graft3/", } mediaLabel = "BACKUP" dryRun = True expected = [ "-use-the-force-luke=tty", "-dry-run", "-speed=4", "-Z", "/dev/dvd", "-V", "BACKUP", "-r", "-graft-points", "/path/to/path2", "path/to/graft3/=/path/to/path3/", "path1", ] # sorted order actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.assertEqual(actual, expected) ####################################################################### # Suite definition ####################################################################### def suite(): """Returns a suite containing all the test cases in this module.""" tests = [ ] tests.append(unittest.makeSuite(TestMediaDefinition, 'test')) tests.append(unittest.makeSuite(TestMediaCapacity, 'test')) tests.append(unittest.makeSuite(TestDvdWriter, 'test')) return unittest.TestSuite(tests) CedarBackup3-3.1.6/testcase/capacitytests.py0000664000175000017500000007640112560007330022543 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2008,2010,2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
# # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Tests capacity extension functionality. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup3/extend/capacity.py. Code Coverage ============= This module contains individual tests for the the public classes implemented in extend/capacity.py. Naming Conventions ================== I prefer to avoid large unit tests which validate more than one piece of functionality, and I prefer to avoid using overly descriptive (read: long) test names, as well. Instead, I use lots of very small tests that each validate one specific thing. These small tests are then named with an index number, yielding something like C{testAddDir_001} or C{testValidate_010}. Each method has a docstring describing what it's supposed to accomplish. I feel that this makes it easier to judge how important a given failure is, and also makes it somewhat easier to diagnose and fix individual problems. Testing XML Extraction ====================== It's difficult to validated that generated XML is exactly "right", especially when dealing with pretty-printed XML. We can't just provide a constant string and say "the result must match this". Instead, what we do is extract a node, build some XML from it, and then feed that XML back into another object's constructor. If that parse process succeeds and the old object is equal to the new object, we assume that the extract was successful. 
It would arguably be better if we could do a completely independent check - but implementing that check would be equivalent to re-implementing all of the existing functionality that we're validating here! After all, the most important thing is that data can move seamlessly from object to XML document and back to object. Full vs. Reduced Tests ====================== All of the tests in this module are considered safe to be run in an average build environment. There is a no need to use a CAPACITYTESTS_FULL environment variable to provide a "reduced feature set" test suite as for some of the other test modules. @author Kenneth J. Pronovici """ ######################################################################## # Import modules and do runtime validations ######################################################################## # System modules import unittest # Cedar Backup modules from CedarBackup3.util import UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES from CedarBackup3.testutil import findResources, failUnlessAssignRaises from CedarBackup3.xmlutil import createOutputDom, serializeDom from CedarBackup3.extend.capacity import LocalConfig, CapacityConfig, ByteQuantity, PercentageQuantity ####################################################################### # Module-wide configuration and constants ####################################################################### DATA_DIRS = [ "./data", "./testcase/data", ] RESOURCES = [ "capacity.conf.1", "capacity.conf.2", "capacity.conf.3", "capacity.conf.4", ] ####################################################################### # Test Case Classes ####################################################################### ############################### # TestPercentageQuantity class ############################### class TestPercentageQuantity(unittest.TestCase): """Tests for the PercentageQuantity class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, 
prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = PercentageQuantity() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ quantity = PercentageQuantity() self.assertEqual(None, quantity.quantity) self.assertEqual(0.0, quantity.percentage) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ quantity = PercentageQuantity("6") self.assertEqual("6", quantity.quantity) self.assertEqual(6.0, quantity.percentage) def testConstructor_003(self): """ Test assignment of quantity attribute, None value. """ quantity = PercentageQuantity(quantity="1.0") self.assertEqual("1.0", quantity.quantity) self.assertEqual(1.0, quantity.percentage) quantity.quantity = None self.assertEqual(None, quantity.quantity) self.assertEqual(0.0, quantity.percentage) def testConstructor_004(self): """ Test assignment of quantity attribute, valid values. 
""" quantity = PercentageQuantity() self.assertEqual(None, quantity.quantity) self.assertEqual(0.0, quantity.percentage) quantity.quantity = "1.0" self.assertEqual("1.0", quantity.quantity) self.assertEqual(1.0, quantity.percentage) quantity.quantity = ".1" self.assertEqual(".1", quantity.quantity) self.assertEqual(0.1, quantity.percentage) quantity.quantity = "12" self.assertEqual("12", quantity.quantity) self.assertEqual(12.0, quantity.percentage) quantity.quantity = "0.5" self.assertEqual("0.5", quantity.quantity) self.assertEqual(0.5, quantity.percentage) quantity.quantity = "0.25E2" self.assertEqual("0.25E2", quantity.quantity) self.assertEqual(0.25e2, quantity.percentage) def testConstructor_005(self): """ Test assignment of quantity attribute, invalid value (empty). """ quantity = PercentageQuantity() self.assertEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "") self.assertEqual(None, quantity.quantity) def testConstructor_006(self): """ Test assignment of quantity attribute, invalid value (not a floating point number). """ quantity = PercentageQuantity() self.assertEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "blech") self.assertEqual(None, quantity.quantity) def testConstructor_007(self): """ Test assignment of quantity attribute, invalid value (negative number). 
""" quantity = PercentageQuantity() self.assertEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "-3") self.assertEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "-6.8") self.assertEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "-0.2") self.assertEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "-.1") self.assertEqual(None, quantity.quantity) def testConstructor_008(self): """ Test assignment of quantity attribute, invalid value (larger than 100%). """ quantity = PercentageQuantity() self.assertEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "100.0001") self.assertEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "101") self.assertEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "1e6") self.assertEqual(None, quantity.quantity) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ quantity1 = PercentageQuantity() quantity2 = PercentageQuantity() self.assertEqual(quantity1, quantity2) self.assertTrue(quantity1 == quantity2) self.assertTrue(not quantity1 < quantity2) self.assertTrue(quantity1 <= quantity2) self.assertTrue(not quantity1 > quantity2) self.assertTrue(quantity1 >= quantity2) self.assertTrue(not quantity1 != quantity2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. 
""" quantity1 = PercentageQuantity("12") quantity2 = PercentageQuantity("12") self.assertEqual(quantity1, quantity2) self.assertTrue(quantity1 == quantity2) self.assertTrue(not quantity1 < quantity2) self.assertTrue(quantity1 <= quantity2) self.assertTrue(not quantity1 > quantity2) self.assertTrue(quantity1 >= quantity2) self.assertTrue(not quantity1 != quantity2) def testComparison_003(self): """ Test comparison of two differing objects, quantity differs (one None). """ quantity1 = PercentageQuantity() quantity2 = PercentageQuantity(quantity="12") self.assertNotEqual(quantity1, quantity2) self.assertTrue(not quantity1 == quantity2) self.assertTrue(quantity1 < quantity2) self.assertTrue(quantity1 <= quantity2) self.assertTrue(not quantity1 > quantity2) self.assertTrue(not quantity1 >= quantity2) self.assertTrue(quantity1 != quantity2) def testComparison_004(self): """ Test comparison of two differing objects, quantity differs. """ quantity1 = PercentageQuantity("10") quantity2 = PercentageQuantity("12") self.assertNotEqual(quantity1, quantity2) self.assertTrue(not quantity1 == quantity2) self.assertTrue(quantity1 < quantity2) self.assertTrue(quantity1 <= quantity2) self.assertTrue(not quantity1 > quantity2) self.assertTrue(not quantity1 >= quantity2) self.assertTrue(quantity1 != quantity2) ########################## # TestCapacityConfig class ########################## class TestCapacityConfig(unittest.TestCase): """Tests for the CapacityConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). 
""" obj = CapacityConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ capacity = CapacityConfig() self.assertEqual(None, capacity.maxPercentage) self.assertEqual(None, capacity.minBytes) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ capacity = CapacityConfig(PercentageQuantity("63.2"), ByteQuantity("2.0", UNIT_KBYTES)) self.assertEqual(PercentageQuantity("63.2"), capacity.maxPercentage) self.assertEqual(ByteQuantity("2.0", UNIT_KBYTES), capacity.minBytes) def testConstructor_003(self): """ Test assignment of maxPercentage attribute, None value. """ capacity = CapacityConfig(maxPercentage=PercentageQuantity("63.2")) self.assertEqual(PercentageQuantity("63.2"), capacity.maxPercentage) capacity.maxPercentage = None self.assertEqual(None, capacity.maxPercentage) def testConstructor_004(self): """ Test assignment of maxPercentage attribute, valid value. """ capacity = CapacityConfig() self.assertEqual(None, capacity.maxPercentage) capacity.maxPercentage = PercentageQuantity("63.2") self.assertEqual(PercentageQuantity("63.2"), capacity.maxPercentage) def testConstructor_005(self): """ Test assignment of maxPercentage attribute, invalid value (empty). """ capacity = CapacityConfig() self.assertEqual(None, capacity.maxPercentage) self.failUnlessAssignRaises(ValueError, capacity, "maxPercentage", "") self.assertEqual(None, capacity.maxPercentage) def testConstructor_006(self): """ Test assignment of maxPercentage attribute, invalid value (not a PercentageQuantity). """ capacity = CapacityConfig() self.assertEqual(None, capacity.maxPercentage) self.failUnlessAssignRaises(ValueError, capacity, "maxPercentage", "1.0 GB") self.assertEqual(None, capacity.maxPercentage) def testConstructor_007(self): """ Test assignment of minBytes attribute, None value. 
""" capacity = CapacityConfig(minBytes=ByteQuantity("1.00", UNIT_KBYTES)) self.assertEqual(ByteQuantity("1.00", UNIT_KBYTES), capacity.minBytes) capacity.minBytes = None self.assertEqual(None, capacity.minBytes) def testConstructor_008(self): """ Test assignment of minBytes attribute, valid value. """ capacity = CapacityConfig() self.assertEqual(None, capacity.minBytes) capacity.minBytes = ByteQuantity("1.00", UNIT_KBYTES) self.assertEqual(ByteQuantity("1.00", UNIT_KBYTES), capacity.minBytes) def testConstructor_009(self): """ Test assignment of minBytes attribute, invalid value (empty). """ capacity = CapacityConfig() self.assertEqual(None, capacity.minBytes) self.failUnlessAssignRaises(ValueError, capacity, "minBytes", "") self.assertEqual(None, capacity.minBytes) def testConstructor_010(self): """ Test assignment of minBytes attribute, invalid value (not a ByteQuantity). """ capacity = CapacityConfig() self.assertEqual(None, capacity.minBytes) self.failUnlessAssignRaises(ValueError, capacity, "minBytes", 12) self.assertEqual(None, capacity.minBytes) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ capacity1 = CapacityConfig() capacity2 = CapacityConfig() self.assertEqual(capacity1, capacity2) self.assertTrue(capacity1 == capacity2) self.assertTrue(not capacity1 < capacity2) self.assertTrue(capacity1 <= capacity2) self.assertTrue(not capacity1 > capacity2) self.assertTrue(capacity1 >= capacity2) self.assertTrue(not capacity1 != capacity2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. 
""" capacity1 = CapacityConfig(PercentageQuantity("63.2"), ByteQuantity("1.00", UNIT_MBYTES)) capacity2 = CapacityConfig(PercentageQuantity("63.2"), ByteQuantity("1.00", UNIT_MBYTES)) self.assertEqual(capacity1, capacity2) self.assertTrue(capacity1 == capacity2) self.assertTrue(not capacity1 < capacity2) self.assertTrue(capacity1 <= capacity2) self.assertTrue(not capacity1 > capacity2) self.assertTrue(capacity1 >= capacity2) self.assertTrue(not capacity1 != capacity2) def testComparison_003(self): """ Test comparison of two differing objects, maxPercentage differs (one None). """ capacity1 = CapacityConfig() capacity2 = CapacityConfig(maxPercentage=PercentageQuantity("63.2")) self.assertNotEqual(capacity1, capacity2) self.assertTrue(not capacity1 == capacity2) self.assertTrue(capacity1 < capacity2) self.assertTrue(capacity1 <= capacity2) self.assertTrue(not capacity1 > capacity2) self.assertTrue(not capacity1 >= capacity2) self.assertTrue(capacity1 != capacity2) def testComparison_004(self): """ Test comparison of two differing objects, maxPercentage differs. """ capacity1 = CapacityConfig(PercentageQuantity("15.0"), ByteQuantity("1.00", UNIT_MBYTES)) capacity2 = CapacityConfig(PercentageQuantity("63.2"), ByteQuantity("1.00", UNIT_MBYTES)) self.assertNotEqual(capacity1, capacity2) self.assertTrue(not capacity1 == capacity2) self.assertTrue(capacity1 < capacity2) self.assertTrue(capacity1 <= capacity2) self.assertTrue(not capacity1 > capacity2) self.assertTrue(not capacity1 >= capacity2) self.assertTrue(capacity1 != capacity2) def testComparison_005(self): """ Test comparison of two differing objects, minBytes differs (one None). 
""" capacity1 = CapacityConfig() capacity2 = CapacityConfig(minBytes=ByteQuantity("1.00", UNIT_MBYTES)) self.assertNotEqual(capacity1, capacity2) self.assertTrue(not capacity1 == capacity2) self.assertTrue(capacity1 < capacity2) self.assertTrue(capacity1 <= capacity2) self.assertTrue(not capacity1 > capacity2) self.assertTrue(not capacity1 >= capacity2) self.assertTrue(capacity1 != capacity2) def testComparison_006(self): """ Test comparison of two differing objects, minBytes differs. """ capacity1 = CapacityConfig(PercentageQuantity("63.2"), ByteQuantity("0.5", UNIT_MBYTES)) capacity2 = CapacityConfig(PercentageQuantity("63.2"), ByteQuantity("1.00", UNIT_MBYTES)) self.assertNotEqual(capacity1, capacity2) self.assertTrue(not capacity1 == capacity2) self.assertTrue(capacity1 < capacity2) self.assertTrue(capacity1 <= capacity2) self.assertTrue(not capacity1 > capacity2) self.assertTrue(not capacity1 >= capacity2) self.assertTrue(capacity1 != capacity2) ######################## # TestLocalConfig class ######################## class TestLocalConfig(unittest.TestCase): """Tests for the LocalConfig class.""" ################ # Setup methods ################ def setUp(self): try: self.resources = findResources(RESOURCES, DATA_DIRS) except Exception as e: self.fail(e) def tearDown(self): pass ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) def validateAddConfig(self, origConfig): """ Validates that document dumped from C{LocalConfig.addConfig} results in identical object. We dump a document containing just the capacity configuration, and then make sure that if we push that document back into the C{LocalConfig} object, that the resulting object matches the original. 
The C{self.failUnlessEqual} method is used for the validation, so if the method call returns normally, everything is OK. @param origConfig: Original configuration. """ (xmlDom, parentNode) = createOutputDom() origConfig.addConfig(xmlDom, parentNode) xmlData = serializeDom(xmlDom) newConfig = LocalConfig(xmlData=xmlData, validate=False) self.assertEqual(origConfig, newConfig) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = LocalConfig() obj.__repr__() obj.__str__() ##################################################### # Test basic constructor and attribute functionality ##################################################### def testConstructor_001(self): """ Test empty constructor, validate=False. """ config = LocalConfig(validate=False) self.assertEqual(None, config.capacity) def testConstructor_002(self): """ Test empty constructor, validate=True. """ config = LocalConfig(validate=True) self.assertEqual(None, config.capacity) def testConstructor_003(self): """ Test with empty config document as both data and file, validate=False. """ path = self.resources["capacity.conf.1"] with open(path) as f: contents = f.read() self.assertRaises(ValueError, LocalConfig, xmlData=contents, xmlPath=path, validate=False) def testConstructor_004(self): """ Test assignment of capacity attribute, None value. """ config = LocalConfig() config.capacity = None self.assertEqual(None, config.capacity) def testConstructor_005(self): """ Test assignment of capacity attribute, valid value. """ config = LocalConfig() config.capacity = CapacityConfig() self.assertEqual(CapacityConfig(), config.capacity) def testConstructor_006(self): """ Test assignment of capacity attribute, invalid value (not CapacityConfig). 
""" config = LocalConfig() self.failUnlessAssignRaises(ValueError, config, "capacity", "STRING!") ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ config1 = LocalConfig() config2 = LocalConfig() self.assertEqual(config1, config2) self.assertTrue(config1 == config2) self.assertTrue(not config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(config1 >= config2) self.assertTrue(not config1 != config2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ config1 = LocalConfig() config1.capacity = CapacityConfig() config2 = LocalConfig() config2.capacity = CapacityConfig() self.assertEqual(config1, config2) self.assertTrue(config1 == config2) self.assertTrue(not config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(config1 >= config2) self.assertTrue(not config1 != config2) def testComparison_003(self): """ Test comparison of two differing objects, capacity differs (one None). """ config1 = LocalConfig() config2 = LocalConfig() config2.capacity = CapacityConfig() self.assertNotEqual(config1, config2) self.assertTrue(not config1 == config2) self.assertTrue(config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(not config1 >= config2) self.assertTrue(config1 != config2) def testComparison_004(self): """ Test comparison of two differing objects, capacity differs. 
""" config1 = LocalConfig() config1.capacity = CapacityConfig(minBytes=ByteQuantity("0.1", UNIT_MBYTES)) config2 = LocalConfig() config2.capacity = CapacityConfig(minBytes=ByteQuantity("1.00", UNIT_MBYTES)) self.assertNotEqual(config1, config2) self.assertTrue(not config1 == config2) self.assertTrue(config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(not config1 >= config2) self.assertTrue(config1 != config2) ###################### # Test validate logic ###################### def testValidate_001(self): """ Test validate on a None capacity section. """ config = LocalConfig() config.capacity = None self.assertRaises(ValueError, config.validate) def testValidate_002(self): """ Test validate on an empty capacity section. """ config = LocalConfig() config.capacity = CapacityConfig() self.assertRaises(ValueError, config.validate) def testValidate_003(self): """ Test validate on a non-empty capacity section with no values filled in. """ config = LocalConfig() config.capacity = CapacityConfig(None, None) self.assertRaises(ValueError, config.validate) def testValidate_004(self): """ Test validate on a non-empty capacity section with both max percentage and min bytes filled in. """ config = LocalConfig() config.capacity = CapacityConfig(PercentageQuantity("63.2"), ByteQuantity("1.00", UNIT_MBYTES)) self.assertRaises(ValueError, config.validate) def testValidate_005(self): """ Test validate on a non-empty capacity section with only max percentage filled in. """ config = LocalConfig() config.capacity = CapacityConfig(maxPercentage=PercentageQuantity("63.2")) config.validate() def testValidate_006(self): """ Test validate on a non-empty capacity section with only min bytes filled in. 
""" config = LocalConfig() config.capacity = CapacityConfig(minBytes=ByteQuantity("1.00", UNIT_MBYTES)) config.validate() ############################ # Test parsing of documents ############################ # Some of the byte-size parsing logic is tested more fully in splittests.py. # I decided not to duplicate it here, since it's shared from config.py. def testParse_001(self): """ Parse empty config document. """ path = self.resources["capacity.conf.1"] with open(path) as f: contents = f.read() self.assertRaises(ValueError, LocalConfig, xmlPath=path, validate=True) self.assertRaises(ValueError, LocalConfig, xmlData=contents, validate=True) config = LocalConfig(xmlPath=path, validate=False) self.assertEqual(None, config.capacity) config = LocalConfig(xmlData=contents, validate=False) self.assertEqual(None, config.capacity) def testParse_002(self): """ Parse config document that configures max percentage. """ path = self.resources["capacity.conf.2"] with open(path) as f: contents = f.read() config = LocalConfig(xmlPath=path, validate=False) self.assertNotEqual(None, config.capacity) self.assertEqual(PercentageQuantity("63.2"), config.capacity.maxPercentage) self.assertEqual(None, config.capacity.minBytes) config = LocalConfig(xmlData=contents, validate=False) self.assertNotEqual(None, config.capacity) self.assertEqual(PercentageQuantity("63.2"), config.capacity.maxPercentage) self.assertEqual(None, config.capacity.minBytes) def testParse_003(self): """ Parse config document that configures min bytes, size in bytes. 
""" path = self.resources["capacity.conf.3"] with open(path) as f: contents = f.read() config = LocalConfig(xmlPath=path, validate=False) self.assertNotEqual(None, config.capacity) self.assertEqual(None, config.capacity.maxPercentage) self.assertEqual(ByteQuantity("18", UNIT_BYTES), config.capacity.minBytes) config = LocalConfig(xmlData=contents, validate=False) self.assertNotEqual(None, config.capacity) self.assertEqual(None, config.capacity.maxPercentage) self.assertEqual(ByteQuantity("18", UNIT_BYTES), config.capacity.minBytes) def testParse_004(self): """ Parse config document with filled-in values, size in KB. """ path = self.resources["capacity.conf.4"] with open(path) as f: contents = f.read() config = LocalConfig(xmlPath=path, validate=False) self.assertNotEqual(None, config.capacity) self.assertEqual(None, config.capacity.maxPercentage) self.assertEqual(ByteQuantity("1.25", UNIT_KBYTES), config.capacity.minBytes) config = LocalConfig(xmlData=contents, validate=False) self.assertNotEqual(None, config.capacity) self.assertEqual(None, config.capacity.maxPercentage) self.assertEqual(ByteQuantity("1.25", UNIT_KBYTES), config.capacity.minBytes) ################### # Test addConfig() ################### def testAddConfig_001(self): """ Test with empty config document. """ capacity = CapacityConfig() config = LocalConfig() config.capacity = capacity self.validateAddConfig(config) def testAddConfig_002(self): """ Test with max percentage value set. """ capacity = CapacityConfig(maxPercentage=PercentageQuantity("63.29128310980123")) config = LocalConfig() config.capacity = capacity self.validateAddConfig(config) def testAddConfig_003(self): """ Test with min bytes value set, byte values. """ capacity = CapacityConfig(minBytes=ByteQuantity("121231", UNIT_BYTES)) config = LocalConfig() config.capacity = capacity self.validateAddConfig(config) def testAddConfig_004(self): """ Test with min bytes value set, KB values. 
""" capacity = CapacityConfig(minBytes=ByteQuantity("63352", UNIT_KBYTES)) config = LocalConfig() config.capacity = capacity self.validateAddConfig(config) def testAddConfig_005(self): """ Test with min bytes value set, MB values. """ capacity = CapacityConfig(minBytes=ByteQuantity("63352", UNIT_MBYTES)) config = LocalConfig() config.capacity = capacity self.validateAddConfig(config) def testAddConfig_006(self): """ Test with min bytes value set, GB values. """ capacity = CapacityConfig(minBytes=ByteQuantity("63352", UNIT_GBYTES)) config = LocalConfig() config.capacity = capacity self.validateAddConfig(config) ####################################################################### # Suite definition ####################################################################### def suite(): """Returns a suite containing all the test cases in this module.""" tests = [ ] tests.append(unittest.makeSuite(TestPercentageQuantity, 'test')) tests.append(unittest.makeSuite(TestCapacityConfig, 'test')) tests.append(unittest.makeSuite(TestLocalConfig, 'test')) return unittest.TestSuite(tests) CedarBackup3-3.1.6/testcase/actionsutiltests.py0000664000175000017500000002257012560007330023302 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2007,2010,2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Language : Python 3 (>= 3.4)
# Project  : Cedar Backup, release 3
# Purpose  : Tests action utility functionality.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Unit tests for CedarBackup3/actions/util.py.

Code Coverage
=============

This module contains individual tests for the public functions and classes
implemented in actions/util.py.

Naming Conventions
==================

I prefer to avoid large unit tests which validate more than one piece of
functionality, and I prefer to avoid using overly descriptive (read: long)
test names, as well.  Instead, I use lots of very small tests that each
validate one specific thing.  These small tests are then named with an index
number, yielding something like C{testAddDir_001} or C{testValidate_010}.
Each method has a docstring describing what it's supposed to accomplish.  I
feel that this makes it easier to judge how important a given failure is,
and also makes it somewhat easier to diagnose and fix individual problems.

Full vs. Reduced Tests
======================

All of the tests in this module are considered safe to be run in an average
build environment.  There is no need to use an ACTIONSUTILTESTS_FULL
environment variable to provide a "reduced feature set" test suite as for
some of the other test modules.

@author Kenneth J.
Pronovici """ ######################################################################## # Import modules and do runtime validations ######################################################################## import os import unittest import tempfile from CedarBackup3.testutil import findResources, buildPath, removedir, extractTar from CedarBackup3.actions.util import findDailyDirs, writeIndicatorFile from CedarBackup3.extend.encrypt import ENCRYPT_INDICATOR ####################################################################### # Module-wide configuration and constants ####################################################################### DATA_DIRS = [ "./data", "./testcase/data", ] RESOURCES = [ "tree1.tar.gz", "tree8.tar.gz", "tree15.tar.gz", "tree17.tar.gz", "tree18.tar.gz", "tree19.tar.gz", "tree20.tar.gz", ] INVALID_PATH = "bogus" # This path name should never exist ####################################################################### # Test Case Classes ####################################################################### ###################### # TestFunctions class ###################### class TestFunctions(unittest.TestCase): """Tests for the various public functions.""" ################ # Setup methods ################ def setUp(self): try: self.tmpdir = tempfile.mkdtemp() self.resources = findResources(RESOURCES, DATA_DIRS) except Exception as e: self.fail(e) def tearDown(self): try: removedir(self.tmpdir) except: pass ################## # Utility methods ################## def extractTar(self, tarname): """Extracts a tarfile with a particular name.""" extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname]) def buildPath(self, components): """Builds a complete search path from a list of components.""" components.insert(0, self.tmpdir) return buildPath(components) ####################### # Test findDailyDirs() ####################### def testFindDailyDirs_001(self): """ Test with a nonexistent staging directory. 
""" stagingDir = self.buildPath([INVALID_PATH]) self.assertRaises(ValueError, findDailyDirs, stagingDir, ENCRYPT_INDICATOR) def testFindDailyDirs_002(self): """ Test with an empty staging directory. """ self.extractTar("tree8") stagingDir = self.buildPath(["tree8", "dir001", ]) dailyDirs = findDailyDirs(stagingDir, ENCRYPT_INDICATOR) self.assertEqual([], dailyDirs) def testFindDailyDirs_003(self): """ Test with a staging directory containing only files. """ self.extractTar("tree1") stagingDir = self.buildPath(["tree1", ]) dailyDirs = findDailyDirs(stagingDir, ENCRYPT_INDICATOR) self.assertEqual([], dailyDirs) def testFindDailyDirs_004(self): """ Test with a staging directory containing only links. """ self.extractTar("tree15") stagingDir = self.buildPath(["tree15", "dir001", ]) dailyDirs = findDailyDirs(stagingDir, ENCRYPT_INDICATOR) self.assertEqual([], dailyDirs) def testFindDailyDirs_005(self): """ Test with a valid staging directory, where the daily directories do NOT contain the encrypt indicator. """ self.extractTar("tree17") stagingDir = self.buildPath(["tree17" ]) dailyDirs = findDailyDirs(stagingDir, ENCRYPT_INDICATOR) self.assertEqual(6, len(dailyDirs)) self.assertTrue(self.buildPath([ "tree17", "2006", "12", "29", ]) in dailyDirs) self.assertTrue(self.buildPath([ "tree17", "2006", "12", "30", ]) in dailyDirs) self.assertTrue(self.buildPath([ "tree17", "2006", "12", "31", ]) in dailyDirs) self.assertTrue(self.buildPath([ "tree17", "2007", "01", "01", ]) in dailyDirs) self.assertTrue(self.buildPath([ "tree17", "2007", "01", "02", ]) in dailyDirs) self.assertTrue(self.buildPath([ "tree17", "2007", "01", "03", ]) in dailyDirs) def testFindDailyDirs_006(self): """ Test with a valid staging directory, where the daily directories DO contain the encrypt indicator. 
""" self.extractTar("tree18") stagingDir = self.buildPath(["tree18" ]) dailyDirs = findDailyDirs(stagingDir, ENCRYPT_INDICATOR) self.assertEqual([], dailyDirs) def testFindDailyDirs_007(self): """ Test with a valid staging directory, where some daily directories contain the encrypt indicator and others do not. """ self.extractTar("tree19") stagingDir = self.buildPath(["tree19" ]) dailyDirs = findDailyDirs(stagingDir, ENCRYPT_INDICATOR) self.assertEqual(3, len(dailyDirs)) self.assertTrue(self.buildPath([ "tree19", "2006", "12", "30", ]) in dailyDirs) self.assertTrue(self.buildPath([ "tree19", "2007", "01", "01", ]) in dailyDirs) self.assertTrue(self.buildPath([ "tree19", "2007", "01", "03", ]) in dailyDirs) def testFindDailyDirs_008(self): """ Test for case where directories other than daily directories contain the encrypt indicator (the indicator should be ignored). """ self.extractTar("tree20") stagingDir = self.buildPath(["tree20", ]) dailyDirs = findDailyDirs(stagingDir, ENCRYPT_INDICATOR) self.assertEqual(6, len(dailyDirs)) self.assertTrue(self.buildPath([ "tree20", "2006", "12", "29", ]) in dailyDirs) self.assertTrue(self.buildPath([ "tree20", "2006", "12", "30", ]) in dailyDirs) self.assertTrue(self.buildPath([ "tree20", "2006", "12", "31", ]) in dailyDirs) self.assertTrue(self.buildPath([ "tree20", "2007", "01", "01", ]) in dailyDirs) self.assertTrue(self.buildPath([ "tree20", "2007", "01", "02", ]) in dailyDirs) self.assertTrue(self.buildPath([ "tree20", "2007", "01", "03", ]) in dailyDirs) ############################ # Test writeIndicatorFile() ############################ def testWriteIndicatorFile_001(self): """ Test with a nonexistent staging directory. """ stagingDir = self.buildPath([INVALID_PATH]) self.assertRaises(IOError, writeIndicatorFile, stagingDir, ENCRYPT_INDICATOR, None, None) def testWriteIndicatorFile_002(self): """ Test with a valid staging directory. 
""" self.extractTar("tree8") stagingDir = self.buildPath(["tree8", "dir001", ]) writeIndicatorFile(stagingDir, ENCRYPT_INDICATOR, None, None) self.assertTrue(os.path.exists(self.buildPath(["tree8", "dir001", ENCRYPT_INDICATOR, ]))) ####################################################################### # Suite definition ####################################################################### def suite(): """Returns a suite containing all the test cases in this module.""" tests = [ ] tests.append(unittest.makeSuite(TestFunctions, 'test')) return unittest.TestSuite(tests) CedarBackup3-3.1.6/testcase/__init__.py0000664000175000017500000000145512560007330021417 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Provides package initialization. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Package initialization ######################################################################## """ This causes the test directory to be a package. """ __all__ = [ ] CedarBackup3-3.1.6/testcase/clitests.py0000664000175000017500000230550712560007330021521 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2005,2007,2010,2015 Kenneth J. Pronovici. # All rights reserved. 
# # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Tests command-line interface functionality. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup3/cli.py. Code Coverage ============= This module contains individual tests for the many of the public functions and classes implemented in cli.py. Where possible, we test functions that print output by passing a custom file descriptor. Sometimes, we only ensure that a function or method runs without failure, and we don't validate what its result is or what it prints out. Naming Conventions ================== I prefer to avoid large unit tests which validate more than one piece of functionality, and I prefer to avoid using overly descriptive (read: long) test names, as well. Instead, I use lots of very small tests that each validate one specific thing. These small tests are then named with an index number, yielding something like C{testAddDir_001} or C{testValidate_010}. Each method has a docstring describing what it's supposed to accomplish. 
I feel that this makes it easier to judge how important a given failure is,
and also makes it somewhat easier to diagnose and fix individual problems.

Full vs. Reduced Tests
======================

All of the tests in this module are considered safe to be run in an average
build environment.  There is no need to use a CLITESTS_FULL environment
variable to provide a "reduced feature set" test suite as for some of the
other test modules.

@author Kenneth J. Pronovici
"""

########################################################################
# Import modules and do runtime validations
########################################################################

import unittest
from os.path import isdir, isfile, islink, isabs, exists
from getopt import GetoptError

from CedarBackup3.testutil import failUnlessAssignRaises, captureOutput
from CedarBackup3.config import OptionsConfig, PeersConfig, ExtensionsConfig
from CedarBackup3.config import LocalPeer, RemotePeer
from CedarBackup3.config import ExtendedAction, ActionDependencies, PreActionHook, PostActionHook
from CedarBackup3.cli import _usage, _version, _diagnostics
from CedarBackup3.cli import Options
from CedarBackup3.cli import _ActionSet
from CedarBackup3.action import executeCollect, executeStage, executeStore, executePurge, executeRebuild, executeValidate

#######################################################################
# Test Case Classes
#######################################################################

######################
# TestFunctions class
######################

class TestFunctions(unittest.TestCase):
   """Tests for the public functions."""

   ################
   # Setup methods
   ################

   def setUp(self):
      pass

   def tearDown(self):
      pass

   ########################
   # Test simple functions
   ########################

   def testSimpleFuncs_001(self):
      """
      Test that the _usage() function runs without errors.
      We don't care what the output is, and we don't check.
""" captureOutput(_usage) def testSimpleFuncs_002(self): """ Test that the _version() function runs without errors. We don't care what the output is, and we don't check. """ captureOutput(_version) def testSimpleFuncs_003(self): """ Test that the _diagnostics() function runs without errors. We don't care what the output is, and we don't check. """ captureOutput(_diagnostics) #################### # TestOptions class #################### class TestOptions(unittest.TestCase): """Tests for the Options class.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = Options() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no arguments. 
""" options = Options() self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_002(self): """ Test constructor with validate=False, no other arguments. """ options = Options(validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_003(self): """ Test constructor with argumentList=[], validate=False. 
""" options = Options(argumentList=[], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_004(self): """ Test constructor with argumentString="", validate=False. """ options = Options(argumentString="", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_005(self): """ Test constructor with argumentList=["--help", ], validate=False. 
""" options = Options(argumentList=["--help", ], validate=False) self.assertEqual(True, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_006(self): """ Test constructor with argumentString="--help", validate=False. """ options = Options(argumentString="--help", validate=False) self.assertEqual(True, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_007(self): """ Test constructor with argumentList=["-h", ], validate=False. 
""" options = Options(argumentList=["-h", ], validate=False) self.assertEqual(True, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_008(self): """ Test constructor with argumentString="-h", validate=False. """ options = Options(argumentString="-h", validate=False) self.assertEqual(True, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_009(self): """ Test constructor with argumentList=["--version", ], validate=False. 
""" options = Options(argumentList=["--version", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(True, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_010(self): """ Test constructor with argumentString="--version", validate=False. """ options = Options(argumentString="--version", validate=False) self.assertEqual(False, options.help) self.assertEqual(True, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_011(self): """ Test constructor with argumentList=["-V", ], validate=False. 
""" options = Options(argumentList=["-V", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(True, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_012(self): """ Test constructor with argumentString="-V", validate=False. """ options = Options(argumentString="-V", validate=False) self.assertEqual(False, options.help) self.assertEqual(True, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_013(self): """ Test constructor with argumentList=["--verbose", ], validate=False. 
""" options = Options(argumentList=["--verbose", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(True, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_014(self): """ Test constructor with argumentString="--verbose", validate=False. """ options = Options(argumentString="--verbose", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(True, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_015(self): """ Test constructor with argumentList=["-b", ], validate=False. 
""" options = Options(argumentList=["-b", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(True, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_016(self): """ Test constructor with argumentString="-b", validate=False. """ options = Options(argumentString="-b", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(True, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_017(self): """ Test constructor with argumentList=["--quiet", ], validate=False. 
""" options = Options(argumentList=["--quiet", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(True, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_018(self): """ Test constructor with argumentString="--quiet", validate=False. """ options = Options(argumentString="--quiet", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(True, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_019(self): """ Test constructor with argumentList=["-q", ], validate=False. 
""" options = Options(argumentList=["-q", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(True, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_020(self): """ Test constructor with argumentString="-q", validate=False. """ options = Options(argumentString="-q", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(True, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_021(self): """ Test constructor with argumentList=["--config", ], validate=False. """ self.assertRaises(GetoptError, Options, argumentList=["--config", ], validate=False) def testConstructor_022(self): """ Test constructor with argumentString="--config", validate=False. """ self.assertRaises(GetoptError, Options, argumentString="--config", validate=False) def testConstructor_023(self): """ Test constructor with argumentList=["-c", ], validate=False. 
""" self.assertRaises(GetoptError, Options, argumentList=["-c", ], validate=False) def testConstructor_024(self): """ Test constructor with argumentString="-c", validate=False. """ self.assertRaises(GetoptError, Options, argumentString="-c", validate=False) def testConstructor_025(self): """ Test constructor with argumentList=["--config", "something", ], validate=False. """ options = Options(argumentList=["--config", "something", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual("something", options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_026(self): """ Test constructor with argumentString="--config something", validate=False. 
""" options = Options(argumentString="--config something", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual("something", options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_027(self): """ Test constructor with argumentList=["-c", "something", ], validate=False. """ options = Options(argumentList=["-c", "something", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual("something", options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_028(self): """ Test constructor with argumentString="-c something", validate=False. 
""" options = Options(argumentString="-c something", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual("something", options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_029(self): """ Test constructor with argumentList=["--full", ], validate=False. """ options = Options(argumentList=["--full", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(True, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_030(self): """ Test constructor with argumentString="--full", validate=False. 
""" options = Options(argumentString="--full", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(True, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_031(self): """ Test constructor with argumentList=["-f", ], validate=False. """ options = Options(argumentList=["-f", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(True, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_032(self): """ Test constructor with argumentString="-f", validate=False. 
""" options = Options(argumentString="-f", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(True, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_033(self): """ Test constructor with argumentList=["--logfile", ], validate=False. """ self.assertRaises(GetoptError, Options, argumentList=["--logfile", ], validate=False) def testConstructor_034(self): """ Test constructor with argumentString="--logfile", validate=False. """ self.assertRaises(GetoptError, Options, argumentString="--logfile", validate=False) def testConstructor_035(self): """ Test constructor with argumentList=["-l", ], validate=False. """ self.assertRaises(GetoptError, Options, argumentList=["-l", ], validate=False) def testConstructor_036(self): """ Test constructor with argumentString="-l", validate=False. """ self.assertRaises(GetoptError, Options, argumentString="-l", validate=False) def testConstructor_037(self): """ Test constructor with argumentList=["--logfile", "something", ], validate=False. 
""" options = Options(argumentList=["--logfile", "something", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual("something", options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_038(self): """ Test constructor with argumentString="--logfile something", validate=False. """ options = Options(argumentString="--logfile something", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual("something", options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_039(self): """ Test constructor with argumentList=["-l", "something", ], validate=False. 
""" options = Options(argumentList=["-l", "something", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual("something", options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_040(self): """ Test constructor with argumentString="-l something", validate=False. """ options = Options(argumentString="-l something", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual("something", options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_041(self): """ Test constructor with argumentList=["--owner", ], validate=False. """ self.assertRaises(GetoptError, Options, argumentList=["--owner", ], validate=False) def testConstructor_042(self): """ Test constructor with argumentString="--owner", validate=False. 
""" self.assertRaises(GetoptError, Options, argumentString="--owner", validate=False) def testConstructor_043(self): """ Test constructor with argumentList=["-o", ], validate=False. """ self.assertRaises(GetoptError, Options, argumentList=["-o", ], validate=False) def testConstructor_044(self): """ Test constructor with argumentString="-o", validate=False. """ self.assertRaises(GetoptError, Options, argumentString="-o", validate=False) def testConstructor_045(self): """ Test constructor with argumentList=["--owner", "something", ], validate=False. """ self.assertRaises(ValueError, Options, argumentList=["--owner", "something", ], validate=False) def testConstructor_046(self): """ Test constructor with argumentString="--owner something", validate=False. """ self.assertRaises(ValueError, Options, argumentString="--owner something", validate=False) def testConstructor_047(self): """ Test constructor with argumentList=["-o", "something", ], validate=False. """ self.assertRaises(ValueError, Options, argumentList=["-o", "something", ], validate=False) def testConstructor_048(self): """ Test constructor with argumentString="-o something", validate=False. """ self.assertRaises(ValueError, Options, argumentString="-o something", validate=False) def testConstructor_049(self): """ Test constructor with argumentList=["--owner", "a:b", ], validate=False. 
""" options = Options(argumentList=["--owner", "a:b", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(("a", "b"), options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_050(self): """ Test constructor with argumentString="--owner a:b", validate=False. """ options = Options(argumentString="--owner a:b", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(("a", "b"), options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_051(self): """ Test constructor with argumentList=["-o", "a:b", ], validate=False. 
""" options = Options(argumentList=["-o", "a:b", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(("a", "b"), options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_052(self): """ Test constructor with argumentString="-o a:b", validate=False. """ options = Options(argumentString="-o a:b", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(("a", "b"), options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_053(self): """ Test constructor with argumentList=["--mode", ], validate=False. """ self.assertRaises(GetoptError, Options, argumentList=["--mode", ], validate=False) def testConstructor_054(self): """ Test constructor with argumentString="--mode", validate=False. """ self.assertRaises(GetoptError, Options, argumentString="--mode", validate=False) def testConstructor_055(self): """ Test constructor with argumentList=["-m", ], validate=False. 
""" self.assertRaises(GetoptError, Options, argumentList=["-m", ], validate=False) def testConstructor_056(self): """ Test constructor with argumentString="-m", validate=False. """ self.assertRaises(GetoptError, Options, argumentString="-m", validate=False) def testConstructor_057(self): """ Test constructor with argumentList=["--mode", "something", ], validate=False. """ self.assertRaises(ValueError, Options, argumentList=["--mode", "something", ], validate=False) def testConstructor_058(self): """ Test constructor with argumentString="--mode something", validate=False. """ self.assertRaises(ValueError, Options, argumentString="--mode something", validate=False) def testConstructor_059(self): """ Test constructor with argumentList=["-m", "something", ], validate=False. """ self.assertRaises(ValueError, Options, argumentList=["-m", "something", ], validate=False) def testConstructor_060(self): """ Test constructor with argumentString="-m something", validate=False. """ self.assertRaises(ValueError, Options, argumentString="-m something", validate=False) def testConstructor_061(self): """ Test constructor with argumentList=["--mode", "631", ], validate=False. """ options = Options(argumentList=["--mode", "631", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(0o631, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_062(self): """ Test constructor with argumentString="--mode 631", validate=False. 
""" options = Options(argumentString="--mode 631", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(0o631, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_063(self): """ Test constructor with argumentList=["-m", "631", ], validate=False. """ options = Options(argumentList=["-m", "631", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(0o631, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_064(self): """ Test constructor with argumentString="-m 631", validate=False. 
""" options = Options(argumentString="-m 631", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(0o631, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_065(self): """ Test constructor with argumentList=["--output", ], validate=False. """ options = Options(argumentList=["--output", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(True, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_066(self): """ Test constructor with argumentString="--output", validate=False. 
""" options = Options(argumentString="--output", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(True, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_067(self): """ Test constructor with argumentList=["-O", ], validate=False. """ options = Options(argumentList=["-O", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(True, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_068(self): """ Test constructor with argumentString="-O", validate=False. 
""" options = Options(argumentString="-O", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(True, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_069(self): """ Test constructor with argumentList=["--debug", ], validate=False. """ options = Options(argumentList=["--debug", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(True, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_070(self): """ Test constructor with argumentString="--debug", validate=False. 
""" options = Options(argumentString="--debug", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(True, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_071(self): """ Test constructor with argumentList=["-d", ], validate=False. """ options = Options(argumentList=["-d", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(True, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_072(self): """ Test constructor with argumentString="-d", validate=False. 
""" options = Options(argumentString="-d", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(True, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_073(self): """ Test constructor with argumentList=["--stack", ], validate=False. """ options = Options(argumentList=["--stack", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(True, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_074(self): """ Test constructor with argumentString="--stack", validate=False. 
""" options = Options(argumentString="--stack", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(True, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_075(self): """ Test constructor with argumentList=["-s", ], validate=False. """ options = Options(argumentList=["-s", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(True, options.stacktrace) self.assertEqual([], options.actions) def testConstructor_076(self): """ Test constructor with argumentString="-s", validate=False. 
""" options = Options(argumentString="-s", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(True, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_077(self): """ Test constructor with argumentList=["all", ], validate=False. """ options = Options(argumentList=["all", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["all", ], options.actions) def testConstructor_078(self): """ Test constructor with argumentString="all", validate=False. 
""" options = Options(argumentString="all", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["all", ], options.actions) def testConstructor_079(self): """ Test constructor with argumentList=["collect", ], validate=False. """ options = Options(argumentList=["collect", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["collect", ], options.actions) def testConstructor_080(self): """ Test constructor with argumentString="collect", validate=False. 
""" options = Options(argumentString="collect", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["collect", ], options.actions) def testConstructor_081(self): """ Test constructor with argumentList=["stage", ], validate=False. """ options = Options(argumentList=["stage", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["stage", ], options.actions) def testConstructor_082(self): """ Test constructor with argumentString="stage", validate=False. 
""" options = Options(argumentString="stage", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["stage", ], options.actions) def testConstructor_083(self): """ Test constructor with argumentList=["store", ], validate=False. """ options = Options(argumentList=["store", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["store", ], options.actions) def testConstructor_084(self): """ Test constructor with argumentString="store", validate=False. 
""" options = Options(argumentString="store", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["store", ], options.actions) def testConstructor_085(self): """ Test constructor with argumentList=["purge", ], validate=False. """ options = Options(argumentList=["purge", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["purge", ], options.actions) def testConstructor_086(self): """ Test constructor with argumentString="purge", validate=False. 
""" options = Options(argumentString="purge", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["purge", ], options.actions) def testConstructor_087(self): """ Test constructor with argumentList=["rebuild", ], validate=False. """ options = Options(argumentList=["rebuild", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["rebuild", ], options.actions) def testConstructor_088(self): """ Test constructor with argumentString="rebuild", validate=False. 
""" options = Options(argumentString="rebuild", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["rebuild", ], options.actions) def testConstructor_089(self): """ Test constructor with argumentList=["validate", ], validate=False. """ options = Options(argumentList=["validate", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["validate", ], options.actions) def testConstructor_090(self): """ Test constructor with argumentString="validate", validate=False. 
""" options = Options(argumentString="validate", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["validate", ], options.actions) def testConstructor_091(self): """ Test constructor with argumentList=["collect", "all", ], validate=False. """ options = Options(argumentList=["collect", "all", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["collect", "all", ], options.actions) def testConstructor_092(self): """ Test constructor with argumentString="collect all", validate=False. 
""" options = Options(argumentString="collect all", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["collect", "all", ], options.actions) def testConstructor_093(self): """ Test constructor with argumentList=["collect", "rebuild", ], validate=False. """ options = Options(argumentList=["collect", "rebuild", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["collect", "rebuild", ], options.actions) def testConstructor_094(self): """ Test constructor with argumentString="collect rebuild", validate=False. 
""" options = Options(argumentString="collect rebuild", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["collect", "rebuild", ], options.actions) def testConstructor_095(self): """ Test constructor with argumentList=["collect", "validate", ], validate=False. """ options = Options(argumentList=["collect", "validate", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["collect", "validate", ], options.actions) def testConstructor_096(self): """ Test constructor with argumentString="collect validate", validate=False. 
""" options = Options(argumentString="collect validate", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["collect", "validate", ], options.actions) def testConstructor_097(self): """ Test constructor with argumentList=["-d", "--verbose", "-O", "--mode", "600", "collect", "stage", ], validate=False. """ options = Options(argumentList=["-d", "--verbose", "-O", "--mode", "600", "collect", "stage", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(True, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(0o600, options.mode) self.assertEqual(True, options.output) self.assertEqual(True, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["collect", "stage", ], options.actions) def testConstructor_098(self): """ Test constructor with argumentString="-d --verbose -O --mode 600 collect stage", validate=False. 
""" options = Options(argumentString="-d --verbose -O --mode 600 collect stage", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(True, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(0o600, options.mode) self.assertEqual(True, options.output) self.assertEqual(True, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["collect", "stage", ], options.actions) def testConstructor_099(self): """ Test constructor with argumentList=[], validate=True. """ self.assertRaises(ValueError, Options, argumentList=[], validate=True) def testConstructor_100(self): """ Test constructor with argumentString="", validate=True. """ self.assertRaises(ValueError, Options, argumentString="", validate=True) def testConstructor_101(self): """ Test constructor with argumentList=["--help", ], validate=True. """ options = Options(argumentList=["--help", ], validate=True) self.assertEqual(True, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_102(self): """ Test constructor with argumentString="--help", validate=True. 
""" options = Options(argumentString="--help", validate=True) self.assertEqual(True, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_103(self): """ Test constructor with argumentList=["-h", ], validate=True. """ options = Options(argumentList=["-h", ], validate=True) self.assertEqual(True, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_104(self): """ Test constructor with argumentString="-h", validate=True. 
""" options = Options(argumentString="-h", validate=True) self.assertEqual(True, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_105(self): """ Test constructor with argumentList=["--version", ], validate=True. """ options = Options(argumentList=["--version", ], validate=True) self.assertEqual(False, options.help) self.assertEqual(True, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_106(self): """ Test constructor with argumentString="--version", validate=True. 
""" options = Options(argumentString="--version", validate=True) self.assertEqual(False, options.help) self.assertEqual(True, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_107(self): """ Test constructor with argumentList=["-V", ], validate=True. """ options = Options(argumentList=["-V", ], validate=True) self.assertEqual(False, options.help) self.assertEqual(True, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_108(self): """ Test constructor with argumentString="-V", validate=True. 
""" options = Options(argumentString="-V", validate=True) self.assertEqual(False, options.help) self.assertEqual(True, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_109(self): """ Test constructor with argumentList=["--verbose", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["--verbose", ], validate=True) def testConstructor_110(self): """ Test constructor with argumentString="--verbose", validate=True. """ self.assertRaises(ValueError, Options, argumentString="--verbose", validate=True) def testConstructor_111(self): """ Test constructor with argumentList=["-b", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["-b", ], validate=True) def testConstructor_112(self): """ Test constructor with argumentString="-b", validate=True. """ self.assertRaises(ValueError, Options, argumentString="-b", validate=True) def testConstructor_113(self): """ Test constructor with argumentList=["--quiet", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["--quiet", ], validate=True) def testConstructor_114(self): """ Test constructor with argumentString="--quiet", validate=True. """ self.assertRaises(ValueError, Options, argumentString="--quiet", validate=True) def testConstructor_115(self): """ Test constructor with argumentList=["-q", ], validate=True. 
""" self.assertRaises(ValueError, Options, argumentList=["-q", ], validate=True) def testConstructor_116(self): """ Test constructor with argumentString="-q", validate=True. """ self.assertRaises(ValueError, Options, argumentString="-q", validate=True) def testConstructor_117(self): """ Test constructor with argumentList=["--config", ], validate=True. """ self.assertRaises(GetoptError, Options, argumentList=["--config", ], validate=True) def testConstructor_118(self): """ Test constructor with argumentString="--config", validate=True. """ self.assertRaises(GetoptError, Options, argumentString="--config", validate=True) def testConstructor_119(self): """ Test constructor with argumentList=["-c", ], validate=True. """ self.assertRaises(GetoptError, Options, argumentList=["-c", ], validate=True) def testConstructor_120(self): """ Test constructor with argumentString="-c", validate=True. """ self.assertRaises(GetoptError, Options, argumentString="-c", validate=True) def testConstructor_121(self): """ Test constructor with argumentList=["--config", "something", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["--config", "something", ], validate=True) def testConstructor_122(self): """ Test constructor with argumentString="--config something", validate=True. """ self.assertRaises(ValueError, Options, argumentString="--config something", validate=True) def testConstructor_123(self): """ Test constructor with argumentList=["-c", "something", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["-c", "something", ], validate=True) def testConstructor_124(self): """ Test constructor with argumentString="-c something", validate=True. """ self.assertRaises(ValueError, Options, argumentString="-c something", validate=True) def testConstructor_125(self): """ Test constructor with argumentList=["--full", ], validate=True. 
""" self.assertRaises(ValueError, Options, argumentList=["--full", ], validate=True) def testConstructor_126(self): """ Test constructor with argumentString="--full", validate=True. """ self.assertRaises(ValueError, Options, argumentString="--full", validate=True) def testConstructor_127(self): """ Test constructor with argumentList=["-f", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["-f", ], validate=True) def testConstructor_128(self): """ Test constructor with argumentString="-f", validate=True. """ self.assertRaises(ValueError, Options, argumentString="-f", validate=True) def testConstructor_129(self): """ Test constructor with argumentList=["--logfile", ], validate=True. """ self.assertRaises(GetoptError, Options, argumentList=["--logfile", ], validate=True) def testConstructor_130(self): """ Test constructor with argumentString="--logfile", validate=True. """ self.assertRaises(GetoptError, Options, argumentString="--logfile", validate=True) def testConstructor_131(self): """ Test constructor with argumentList=["-l", ], validate=True. """ self.assertRaises(GetoptError, Options, argumentList=["-l", ], validate=True) def testConstructor_132(self): """ Test constructor with argumentString="-l", validate=True. """ self.assertRaises(GetoptError, Options, argumentString="-l", validate=True) def testConstructor_133(self): """ Test constructor with argumentList=["--logfile", "something", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["--logfile", "something", ], validate=True) def testConstructor_134(self): """ Test constructor with argumentString="--logfile something", validate=True. """ self.assertRaises(ValueError, Options, argumentString="--logfile something", validate=True) def testConstructor_135(self): """ Test constructor with argumentList=["-l", "something", ], validate=True. 
""" self.assertRaises(ValueError, Options, argumentList=["-l", "something", ], validate=True) def testConstructor_136(self): """ Test constructor with argumentString="-l something", validate=True. """ self.assertRaises(ValueError, Options, argumentString="-l something", validate=True) def testConstructor_137(self): """ Test constructor with argumentList=["--owner", ], validate=True. """ self.assertRaises(GetoptError, Options, argumentList=["--owner", ], validate=True) def testConstructor_138(self): """ Test constructor with argumentString="--owner", validate=True. """ self.assertRaises(GetoptError, Options, argumentString="--owner", validate=True) def testConstructor_139(self): """ Test constructor with argumentList=["-o", ], validate=True. """ self.assertRaises(GetoptError, Options, argumentList=["-o", ], validate=True) def testConstructor_140(self): """ Test constructor with argumentString="-o", validate=True. """ self.assertRaises(GetoptError, Options, argumentString="-o", validate=True) def testConstructor_141(self): """ Test constructor with argumentList=["--owner", "something", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["--owner", "something", ], validate=True) def testConstructor_142(self): """ Test constructor with argumentString="--owner something", validate=True. """ self.assertRaises(ValueError, Options, argumentString="--owner something", validate=True) def testConstructor_143(self): """ Test constructor with argumentList=["-o", "something", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["-o", "something", ], validate=True) def testConstructor_144(self): """ Test constructor with argumentString="-o something", validate=True. """ self.assertRaises(ValueError, Options, argumentString="-o something", validate=True) def testConstructor_145(self): """ Test constructor with argumentList=["--owner", "a:b", ], validate=True. 
""" self.assertRaises(ValueError, Options, argumentList=["--owner", "a:b", ], validate=True) def testConstructor_146(self): """ Test constructor with argumentString="--owner a:b", validate=True. """ self.assertRaises(ValueError, Options, argumentString="--owner a:b", validate=True) def testConstructor_147(self): """ Test constructor with argumentList=["-o", "a:b", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["-o", "a:b", ], validate=True) def testConstructor_148(self): """ Test constructor with argumentString="-o a:b", validate=True. """ self.assertRaises(ValueError, Options, argumentString="-o a:b", validate=True) def testConstructor_149(self): """ Test constructor with argumentList=["--mode", ], validate=True. """ self.assertRaises(GetoptError, Options, argumentList=["--mode", ], validate=True) def testConstructor_150(self): """ Test constructor with argumentString="--mode", validate=True. """ self.assertRaises(GetoptError, Options, argumentString="--mode", validate=True) def testConstructor_151(self): """ Test constructor with argumentList=["-m", ], validate=True. """ self.assertRaises(GetoptError, Options, argumentList=["-m", ], validate=True) def testConstructor_152(self): """ Test constructor with argumentString="-m", validate=True. """ self.assertRaises(GetoptError, Options, argumentString="-m", validate=True) def testConstructor_153(self): """ Test constructor with argumentList=["--mode", "something", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["--mode", "something", ], validate=True) def testConstructor_154(self): """ Test constructor with argumentString="--mode something", validate=True. """ self.assertRaises(ValueError, Options, argumentString="--mode something", validate=True) def testConstructor_155(self): """ Test constructor with argumentList=["-m", "something", ], validate=True. 
""" self.assertRaises(ValueError, Options, argumentList=["-m", "something", ], validate=True) def testConstructor_156(self): """ Test constructor with argumentString="-m something", validate=True. """ self.assertRaises(ValueError, Options, argumentString="-m something", validate=True) def testConstructor_157(self): """ Test constructor with argumentList=["--mode", "631", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["--mode", "631", ], validate=True) def testConstructor_158(self): """ Test constructor with argumentString="--mode 631", validate=True. """ self.assertRaises(ValueError, Options, argumentString="--mode 631", validate=True) def testConstructor_159(self): """ Test constructor with argumentList=["-m", "631", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["-m", "631", ], validate=True) def testConstructor_160(self): """ Test constructor with argumentString="-m 631", validate=True. """ self.assertRaises(ValueError, Options, argumentString="-m 631", validate=True) def testConstructor_161(self): """ Test constructor with argumentList=["--output", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["--output", ], validate=True) def testConstructor_162(self): """ Test constructor with argumentString="--output", validate=True. """ self.assertRaises(ValueError, Options, argumentString="--output", validate=True) def testConstructor_163(self): """ Test constructor with argumentList=["-O", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["-O", ], validate=True) def testConstructor_164(self): """ Test constructor with argumentString="-O", validate=True. """ self.assertRaises(ValueError, Options, argumentString="-O", validate=True) def testConstructor_165(self): """ Test constructor with argumentList=["--debug", ], validate=True. 
""" self.assertRaises(ValueError, Options, argumentList=["--debug", ], validate=True) def testConstructor_166(self): """ Test constructor with argumentString="--debug", validate=True. """ self.assertRaises(ValueError, Options, argumentString="--debug", validate=True) def testConstructor_167(self): """ Test constructor with argumentList=["-d", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["-d", ], validate=True) def testConstructor_168(self): """ Test constructor with argumentString="-d", validate=True. """ self.assertRaises(ValueError, Options, argumentString="-d", validate=True) def testConstructor_169(self): """ Test constructor with argumentList=["--stack", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["--stack", ], validate=True) def testConstructor_170(self): """ Test constructor with argumentString="--stack", validate=True. """ self.assertRaises(ValueError, Options, argumentString="--stack", validate=True) def testConstructor_171(self): """ Test constructor with argumentList=["-s", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["-s", ], validate=True) def testConstructor_172(self): """ Test constructor with argumentString="-s", validate=True. """ self.assertRaises(ValueError, Options, argumentString="-s", validate=True) def testConstructor_173(self): """ Test constructor with argumentList=["all", ], validate=True. 
""" options = Options(argumentList=["all", ], validate=True) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["all", ], options.actions) def testConstructor_174(self): """ Test constructor with argumentString="all", validate=True. """ options = Options(argumentString="all", validate=True) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["all", ], options.actions) def testConstructor_175(self): """ Test constructor with argumentList=["collect", ], validate=True. 
""" options = Options(argumentList=["collect", ], validate=True) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["collect", ], options.actions) def testConstructor_176(self): """ Test constructor with argumentString="collect", validate=True. """ options = Options(argumentString="collect", validate=True) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["collect", ], options.actions) def testConstructor_177(self): """ Test constructor with argumentList=["stage", ], validate=True. 
""" options = Options(argumentList=["stage", ], validate=True) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["stage", ], options.actions) def testConstructor_178(self): """ Test constructor with argumentString="stage", validate=True. """ options = Options(argumentString="stage", validate=True) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["stage", ], options.actions) def testConstructor_179(self): """ Test constructor with argumentList=["store", ], validate=True. 
""" options = Options(argumentList=["store", ], validate=True) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["store", ], options.actions) def testConstructor_180(self): """ Test constructor with argumentString="store", validate=True. """ options = Options(argumentString="store", validate=True) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["store", ], options.actions) def testConstructor_181(self): """ Test constructor with argumentList=["purge", ], validate=True. 
""" options = Options(argumentList=["purge", ], validate=True) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["purge", ], options.actions) def testConstructor_182(self): """ Test constructor with argumentString="purge", validate=True. """ options = Options(argumentString="purge", validate=True) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["purge", ], options.actions) def testConstructor_183(self): """ Test constructor with argumentList=["rebuild", ], validate=True. 
""" options = Options(argumentList=["rebuild", ], validate=True) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["rebuild", ], options.actions) def testConstructor_184(self): """ Test constructor with argumentString="rebuild", validate=True. """ options = Options(argumentString="rebuild", validate=True) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["rebuild", ], options.actions) def testConstructor_185(self): """ Test constructor with argumentList=["validate", ], validate=True. 
""" options = Options(argumentList=["validate", ], validate=True) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["validate", ], options.actions) def testConstructor_186(self): """ Test constructor with argumentString="validate", validate=True. """ options = Options(argumentString="validate", validate=True) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["validate", ], options.actions) def testConstructor_187(self): """ Test constructor with argumentList=["-d", "--verbose", "-O", "--mode", "600", "collect", "stage", ], validate=True. 
""" options = Options(argumentList=["-d", "--verbose", "-O", "--mode", "600", "collect", "stage", ], validate=True) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(True, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(0o600, options.mode) self.assertEqual(True, options.output) self.assertEqual(True, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["collect", "stage", ], options.actions) def testConstructor_188(self): """ Test constructor with argumentString="-d --verbose -O --mode 600 collect stage", validate=True. """ options = Options(argumentString="-d --verbose -O --mode 600 collect stage", validate=True) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(True, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(0o600, options.mode) self.assertEqual(True, options.output) self.assertEqual(True, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual(["collect", "stage", ], options.actions) def testConstructor_189(self): """ Test constructor with argumentList=["--managed", ], validate=False. 
""" options = Options(argumentList=["--managed", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(True, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_190(self): """ Test constructor with argumentString="--managed", validate=False. """ options = Options(argumentString="--managed", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(True, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_191(self): """ Test constructor with argumentList=["-M", ], validate=False. 
""" options = Options(argumentList=["-M", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(True, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_192(self): """ Test constructor with argumentString="-M", validate=False. """ options = Options(argumentString="-M", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(True, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_193(self): """ Test constructor with argumentList=["--managed-only", ], validate=False. 
""" options = Options(argumentList=["--managed-only", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(True, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_194(self): """ Test constructor with argumentString="--managed-only", validate=False. """ options = Options(argumentString="--managed-only", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(True, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_195(self): """ Test constructor with argumentList=["-N", ], validate=False. 
""" options = Options(argumentList=["-N", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(True, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_196(self): """ Test constructor with argumentString="-N", validate=False. """ options = Options(argumentString="-N", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(True, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(False, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_197(self): """ Test constructor with argumentList=["--managed", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["--managed", ], validate=True) def testConstructor_198(self): """ Test constructor with argumentString="--managed", validate=True. """ self.assertRaises(ValueError, Options, argumentString="--managed", validate=True) def testConstructor_199(self): """ Test constructor with argumentList=["-M", ], validate=True. 
""" self.assertRaises(ValueError, Options, argumentList=["-M", ], validate=True) def testConstructor_200(self): """ Test constructor with argumentString="-M", validate=True. """ self.assertRaises(ValueError, Options, argumentString="-M", validate=True) def testConstructor_201(self): """ Test constructor with argumentList=["--managed-only", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["--managed-only", ], validate=True) def testConstructor_202(self): """ Test constructor with argumentString="--managed-only", validate=True. """ self.assertRaises(ValueError, Options, argumentString="--managed-only", validate=True) def testConstructor_203(self): """ Test constructor with argumentList=["-N", ], validate=True. """ self.assertRaises(ValueError, Options, argumentList=["-N", ], validate=True) def testConstructor_204(self): """ Test constructor with argumentString="-N", validate=True. """ self.assertRaises(ValueError, Options, argumentString="-N", validate=True) def testConstructor_205(self): """ Test constructor with argumentList=["--diagnostics", ], validate=False. """ options = Options(argumentList=["--diagnostics", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(True, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_206(self): """ Test constructor with argumentString="--diagnostics", validate=False. 
""" options = Options(argumentString="--diagnostics", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(True, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_207(self): """ Test constructor with argumentList=["-D", ], validate=False. """ options = Options(argumentList=["-D", ], validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(True, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_208(self): """ Test constructor with argumentString="-D", validate=False. 
""" options = Options(argumentString="-D", validate=False) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(True, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_209(self): """ Test constructor with argumentList=["--diagnostics", ], validate=True. """ options = Options(argumentList=["--diagnostics", ], validate=True) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(True, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_210(self): """ Test constructor with argumentString="--diagnostics", validate=True. 
""" options = Options(argumentString="--diagnostics", validate=True) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(True, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_211(self): """ Test constructor with argumentList=["-D", ], validate=True. """ options = Options(argumentList=["-D", ], validate=True) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(True, options.diagnostics) self.assertEqual([], options.actions) def testConstructor_212(self): """ Test constructor with argumentString="-D", validate=True. 
""" options = Options(argumentString="-D", validate=True) self.assertEqual(False, options.help) self.assertEqual(False, options.version) self.assertEqual(False, options.verbose) self.assertEqual(False, options.quiet) self.assertEqual(None, options.config) self.assertEqual(False, options.full) self.assertEqual(False, options.managed) self.assertEqual(False, options.managedOnly) self.assertEqual(None, options.logfile) self.assertEqual(None, options.owner) self.assertEqual(None, options.mode) self.assertEqual(False, options.output) self.assertEqual(False, options.debug) self.assertEqual(False, options.stacktrace) self.assertEqual(True, options.diagnostics) self.assertEqual([], options.actions) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes at defaults. """ options1 = Options() options2 = Options() self.assertEqual(options1, options2) self.assertTrue(options1 == options2) self.assertTrue(not options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(options1 >= options2) self.assertTrue(not options1 != options2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes filled in and same. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0o631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.assertEqual(options1, options2) self.assertTrue(options1 == options2) self.assertTrue(not options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(options1 >= options2) self.assertTrue(not options1 != options2) def testComparison_003(self): """ Test comparison of two identical objects, all attributes filled in, help different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = False options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0o631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(not options1 < options2) self.assertTrue(not options1 <= options2) self.assertTrue(options1 > options2) self.assertTrue(options1 >= options2) self.assertTrue(options1 != options2) def testComparison_004(self): """ Test comparison of two identical objects, all attributes filled in, version different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = False options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0o631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_005(self): """ Test comparison of two identical objects, all attributes filled in, verbose different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = False options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0o631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_006(self): """ Test comparison of two identical objects, all attributes filled in, quiet different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = False options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0o631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(not options1 < options2) self.assertTrue(not options1 <= options2) self.assertTrue(options1 > options2) self.assertTrue(options1 >= options2) self.assertTrue(options1 != options2) def testComparison_007(self): """ Test comparison of two identical objects, all attributes filled in, config different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "whatever" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0o631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(not options1 < options2) self.assertTrue(not options1 <= options2) self.assertTrue(options1 > options2) self.assertTrue(options1 >= options2) self.assertTrue(options1 != options2) def testComparison_008(self): """ Test comparison of two identical objects, all attributes filled in, full different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = False options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0o631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_009(self): """ Test comparison of two identical objects, all attributes filled in, logfile different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "stuff" options2.owner = ("a", "b") options2.mode = 0o631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_010(self): """ Test comparison of two identical objects, all attributes filled in, owner different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("c", "d") options2.mode = 0o631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_011(self): """ Test comparison of two identical objects, all attributes filled in, mode different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = 0o600 options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0o631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_012(self): """ Test comparison of two identical objects, all attributes filled in, output different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = False options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0o631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_013(self): """ Test comparison of two identical objects, all attributes filled in, debug different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0o631 options2.output = True options2.debug = False options1.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(not options1 < options2) self.assertTrue(not options1 <= options2) self.assertTrue(options1 > options2) self.assertTrue(options1 >= options2) self.assertTrue(options1 != options2) def testComparison_014(self): """ Test comparison of two identical objects, all attributes filled in, stacktrace different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0o631 options2.output = True options2.debug = True options2.stacktrace = True options2.diagnostics = False options2.actions = ["collect", ] self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_015(self): """ Test comparison of two identical objects, all attributes filled in, managed different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = False options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0o631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_016(self): """ Test comparison of two identical objects, all attributes filled in, managedOnly different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = False options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0o631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) def testComparison_017(self): """ Test comparison of two identical objects, all attributes filled in, diagnostics different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = 0o631 options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0o631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = True options2.actions = ["collect", ] self.assertNotEqual(options1, options2) self.assertTrue(not options1 == options2) self.assertTrue(options1 < options2) self.assertTrue(options1 <= options2) self.assertTrue(not options1 > options2) self.assertTrue(not options1 >= options2) self.assertTrue(options1 != options2) ########################### # Test buildArgumentList() ########################### def testBuildArgumentList_001(self): """Test with no values set, validate=False.""" options = Options() argumentList = options.buildArgumentList(validate=False) self.assertEqual([], argumentList) def testBuildArgumentList_002(self): """Test with help set, validate=False.""" options = Options() options.help = True argumentList = options.buildArgumentList(validate=False) self.assertEqual(["--help", ], argumentList) def testBuildArgumentList_003(self): """Test with version set, validate=False.""" options = Options() options.version = True argumentList = options.buildArgumentList(validate=False) self.assertEqual(["--version", ], argumentList) def testBuildArgumentList_004(self): """Test with verbose set, validate=False.""" options = Options() options.verbose = True 
argumentList = options.buildArgumentList(validate=False) self.assertEqual(["--verbose", ], argumentList) def testBuildArgumentList_005(self): """Test with quiet set, validate=False.""" options = Options() options.quiet = True argumentList = options.buildArgumentList(validate=False) self.assertEqual(["--quiet", ], argumentList) def testBuildArgumentList_006(self): """Test with config set, validate=False.""" options = Options() options.config = "stuff" argumentList = options.buildArgumentList(validate=False) self.assertEqual(["--config", "stuff", ], argumentList) def testBuildArgumentList_007(self): """Test with full set, validate=False.""" options = Options() options.full = True argumentList = options.buildArgumentList(validate=False) self.assertEqual(["--full", ], argumentList) def testBuildArgumentList_008(self): """Test with logfile set, validate=False.""" options = Options() options.logfile = "bogus" argumentList = options.buildArgumentList(validate=False) self.assertEqual(["--logfile", "bogus", ], argumentList) def testBuildArgumentList_009(self): """Test with owner set, validate=False.""" options = Options() options.owner = ("ken", "group") argumentList = options.buildArgumentList(validate=False) self.assertEqual(["--owner", "ken:group", ], argumentList) def testBuildArgumentList_010(self): """Test with mode set, validate=False.""" options = Options() options.mode = 0o644 argumentList = options.buildArgumentList(validate=False) self.assertEqual(["--mode", "644", ], argumentList) def testBuildArgumentList_011(self): """Test with output set, validate=False.""" options = Options() options.output = True argumentList = options.buildArgumentList(validate=False) self.assertEqual(["--output", ], argumentList) def testBuildArgumentList_012(self): """Test with debug set, validate=False.""" options = Options() options.debug = True argumentList = options.buildArgumentList(validate=False) self.assertEqual(["--debug", ], argumentList) def testBuildArgumentList_013(self): 
"""Test with stacktrace set, validate=False.""" options = Options() options.stacktrace = True argumentList = options.buildArgumentList(validate=False) self.assertEqual(["--stack", ], argumentList) def testBuildArgumentList_014(self): """Test with actions containing one item, validate=False.""" options = Options() options.actions = [ "collect", ] argumentList = options.buildArgumentList(validate=False) self.assertEqual(["collect", ], argumentList) def testBuildArgumentList_015(self): """Test with actions containing multiple items, validate=False.""" options = Options() options.actions = [ "collect", "stage", "store", "purge", ] argumentList = options.buildArgumentList(validate=False) self.assertEqual(["collect", "stage", "store", "purge", ], argumentList) def testBuildArgumentList_016(self): """Test with all values set, actions containing one item, validate=False.""" options = Options() options.help = True options.version = True options.verbose = True options.quiet = True options.config = "config" options.full = True options.managed = True options.managedOnly = True options.logfile = "logfile" options.owner = ("a", "b") options.mode = "631" options.output = True options.debug = True options.stacktrace = True options.diagnostics = True options.actions = ["collect", ] argumentList = options.buildArgumentList(validate=False) self.assertEqual(["--help", "--version", "--verbose", "--quiet", "--config", "config", "--full", "--managed", "--managed-only", "--logfile", "logfile", "--owner", "a:b", "--mode", "631", "--output", "--debug", "--stack", "--diagnostics", "collect", ], argumentList) def testBuildArgumentList_017(self): """Test with all values set, actions containing multiple items, validate=False.""" options = Options() options.help = True options.version = True options.verbose = True options.quiet = True options.config = "config" options.full = True options.managed = True options.managedOnly = True options.logfile = "logfile" options.owner = ("a", "b") options.mode 
= "631" options.output = True options.debug = True options.stacktrace = True options.diagnostics = True options.actions = ["collect", "stage", ] argumentList = options.buildArgumentList(validate=False) self.assertEqual(["--help", "--version", "--verbose", "--quiet", "--config", "config", "--full", "--managed", "--managed-only", "--logfile", "logfile", "--owner", "a:b", "--mode", "631", "--output", "--debug", "--stack", "--diagnostics", "collect", "stage", ], argumentList) def testBuildArgumentList_018(self): """Test with no values set, validate=True.""" options = Options() self.assertRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_019(self): """Test with help set, validate=True.""" options = Options() options.help = True argumentList = options.buildArgumentList(validate=True) self.assertEqual(["--help", ], argumentList) def testBuildArgumentList_020(self): """Test with version set, validate=True.""" options = Options() options.version = True argumentList = options.buildArgumentList(validate=True) self.assertEqual(["--version", ], argumentList) def testBuildArgumentList_021(self): """Test with verbose set, validate=True.""" options = Options() options.verbose = True self.assertRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_022(self): """Test with quiet set, validate=True.""" options = Options() options.quiet = True self.assertRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_023(self): """Test with config set, validate=True.""" options = Options() options.config = "stuff" self.assertRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_024(self): """Test with full set, validate=True.""" options = Options() options.full = True self.assertRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_025(self): """Test with logfile set, validate=True.""" options = Options() options.logfile = 
"bogus" self.assertRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_026(self): """Test with owner set, validate=True.""" options = Options() options.owner = ("ken", "group") self.assertRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_027(self): """Test with mode set, validate=True.""" options = Options() options.mode = 0o644 self.assertRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_028(self): """Test with output set, validate=True.""" options = Options() options.output = True self.assertRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_029(self): """Test with debug set, validate=True.""" options = Options() options.debug = True self.assertRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_030(self): """Test with stacktrace set, validate=True.""" options = Options() options.stacktrace = True self.assertRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_031(self): """Test with actions containing one item, validate=True.""" options = Options() options.actions = [ "collect", ] argumentList = options.buildArgumentList(validate=True) self.assertEqual(["collect", ], argumentList) def testBuildArgumentList_032(self): """Test with actions containing multiple items, validate=True.""" options = Options() options.actions = [ "collect", "stage", "store", "purge", ] argumentList = options.buildArgumentList(validate=True) self.assertEqual(["collect", "stage", "store", "purge", ], argumentList) def testBuildArgumentList_033(self): """Test with all values set (except managed ones), actions containing one item, validate=True.""" options = Options() options.help = True options.version = True options.verbose = True options.quiet = True options.config = "config" options.full = True options.logfile = "logfile" options.owner = ("a", "b") options.mode = "631" 
        options.output = True
        options.debug = True
        options.stacktrace = True
        options.diagnostics = True
        options.actions = ["collect", ]
        argumentList = options.buildArgumentList(validate=True)
        self.assertEqual(["--help", "--version", "--verbose", "--quiet", "--config", "config", "--full", "--logfile", "logfile", "--owner", "a:b", "--mode", "631", "--output", "--debug", "--stack", "--diagnostics", "collect", ], argumentList)

    def testBuildArgumentList_034(self):
        """Test with all values set (except managed ones), actions containing multiple items, validate=True."""
        options = Options()
        options.help = True
        options.version = True
        options.verbose = True
        options.quiet = True
        options.config = "config"
        options.full = True
        options.logfile = "logfile"
        options.owner = ("a", "b")
        options.mode = "631"
        options.output = True
        options.debug = True
        options.stacktrace = True
        options.diagnostics = True
        options.actions = ["collect", "stage", ]
        argumentList = options.buildArgumentList(validate=True)
        self.assertEqual(["--help", "--version", "--verbose", "--quiet", "--config", "config", "--full", "--logfile", "logfile", "--owner", "a:b", "--mode", "631", "--output", "--debug", "--stack", "--diagnostics", "collect", "stage", ], argumentList)

    def testBuildArgumentList_035(self):
        """Test with managed set, validate=False."""
        options = Options()
        options.managed = True
        argumentList = options.buildArgumentList(validate=False)
        self.assertEqual(["--managed", ], argumentList)

    def testBuildArgumentList_036(self):
        """Test with managed set, validate=True."""
        options = Options()
        options.managed = True
        self.assertRaises(ValueError, options.buildArgumentList, validate=True)

    def testBuildArgumentList_037(self):
        """Test with managedOnly set, validate=False."""
        options = Options()
        options.managedOnly = True
        argumentList = options.buildArgumentList(validate=False)
        self.assertEqual(["--managed-only", ], argumentList)

    def testBuildArgumentList_038(self):
        """Test with managedOnly set, validate=True."""
        options = Options()
        options.managedOnly = True
        self.assertRaises(ValueError, options.buildArgumentList, validate=True)

    def testBuildArgumentList_039(self):
        """Test with all values set, actions containing one item, validate=True."""
        options = Options()
        options.help = True
        options.version = True
        options.verbose = True
        options.quiet = True
        options.config = "config"
        options.full = True
        options.managed = True
        options.managedOnly = True
        options.logfile = "logfile"
        options.owner = ("a", "b")
        options.mode = "631"
        options.output = True
        options.debug = True
        options.stacktrace = True
        options.diagnostics = True
        options.actions = ["collect", ]
        self.assertRaises(ValueError, options.buildArgumentList, validate=True)

    def testBuildArgumentList_040(self):
        """Test with all values set, actions containing multiple items, validate=True."""
        options = Options()
        options.help = True
        options.version = True
        options.verbose = True
        options.quiet = True
        options.config = "config"
        options.full = True
        options.managed = True
        options.managedOnly = True
        options.logfile = "logfile"
        options.owner = ("a", "b")
        options.mode = "631"
        options.output = True
        options.debug = True
        options.stacktrace = True
        options.diagnostics = True
        options.actions = ["collect", "stage", ]
        self.assertRaises(ValueError, options.buildArgumentList, validate=True)

    def testBuildArgumentList_041(self):
        """Test with diagnostics set, validate=False."""
        options = Options()
        options.diagnostics = True
        argumentList = options.buildArgumentList(validate=False)
        self.assertEqual(["--diagnostics", ], argumentList)

    def testBuildArgumentList_042(self):
        """Test with diagnostics set, validate=True."""
        options = Options()
        options.diagnostics = True
        argumentList = options.buildArgumentList(validate=True)
        self.assertEqual(["--diagnostics", ], argumentList)

    #############################
    # Test buildArgumentString()
    #############################

    def testBuildArgumentString_001(self):
        """Test with no values set, validate=False."""
        options = Options()
        argumentString = options.buildArgumentString(validate=False)
        self.assertEqual("", argumentString)

    def testBuildArgumentString_002(self):
        """Test with help set, validate=False."""
        options = Options()
        options.help = True
        argumentString = options.buildArgumentString(validate=False)
        self.assertEqual("--help ", argumentString)

    def testBuildArgumentString_003(self):
        """Test with version set, validate=False."""
        options = Options()
        options.version = True
        argumentString = options.buildArgumentString(validate=False)
        self.assertEqual("--version ", argumentString)

    def testBuildArgumentString_004(self):
        """Test with verbose set, validate=False."""
        options = Options()
        options.verbose = True
        argumentString = options.buildArgumentString(validate=False)
        self.assertEqual("--verbose ", argumentString)

    def testBuildArgumentString_005(self):
        """Test with quiet set, validate=False."""
        options = Options()
        options.quiet = True
        argumentString = options.buildArgumentString(validate=False)
        self.assertEqual("--quiet ", argumentString)

    def testBuildArgumentString_006(self):
        """Test with config set, validate=False."""
        options = Options()
        options.config = "stuff"
        argumentString = options.buildArgumentString(validate=False)
        self.assertEqual('--config "stuff" ', argumentString)

    def testBuildArgumentString_007(self):
        """Test with full set, validate=False."""
        options = Options()
        options.full = True
        argumentString = options.buildArgumentString(validate=False)
        self.assertEqual("--full ", argumentString)

    def testBuildArgumentString_008(self):
        """Test with logfile set, validate=False."""
        options = Options()
        options.logfile = "bogus"
        argumentString = options.buildArgumentString(validate=False)
        self.assertEqual('--logfile "bogus" ', argumentString)

    def testBuildArgumentString_009(self):
        """Test with owner set, validate=False."""
        options = Options()
        options.owner = ("ken", "group")
        argumentString = options.buildArgumentString(validate=False)
        self.assertEqual('--owner "ken:group" ', argumentString)

    def testBuildArgumentString_010(self):
        """Test with mode set, validate=False."""
        options = Options()
        options.mode = 0o644
        argumentString = options.buildArgumentString(validate=False)
        self.assertEqual('--mode 644 ', argumentString)

    def testBuildArgumentString_011(self):
        """Test with output set, validate=False."""
        options = Options()
        options.output = True
        argumentString = options.buildArgumentString(validate=False)
        self.assertEqual("--output ", argumentString)

    def testBuildArgumentString_012(self):
        """Test with debug set, validate=False."""
        options = Options()
        options.debug = True
        argumentString = options.buildArgumentString(validate=False)
        self.assertEqual("--debug ", argumentString)

    def testBuildArgumentString_013(self):
        """Test with stacktrace set, validate=False."""
        options = Options()
        options.stacktrace = True
        argumentString = options.buildArgumentString(validate=False)
        self.assertEqual("--stack ", argumentString)

    def testBuildArgumentString_014(self):
        """Test with actions containing one item, validate=False."""
        options = Options()
        options.actions = [ "collect", ]
        argumentString = options.buildArgumentString(validate=False)
        self.assertEqual('"collect" ', argumentString)

    def testBuildArgumentString_015(self):
        """Test with actions containing multiple items, validate=False."""
        options = Options()
        options.actions = [ "collect", "stage", "store", "purge", ]
        argumentString = options.buildArgumentString(validate=False)
        self.assertEqual('"collect" "stage" "store" "purge" ', argumentString)

    def testBuildArgumentString_016(self):
        """Test with all values set, actions containing one item, validate=False."""
        options = Options()
        options.help = True
        options.version = True
        options.verbose = True
        options.quiet = True
        options.config = "config"
        options.full = True
        options.managed = True
        options.managedOnly = True
        options.logfile = "logfile"
        options.owner = ("a", "b")
        options.mode = "631"
        options.output = True
        options.debug = True
        options.stacktrace = True
        options.diagnostics = True
options.actions = ["collect", ] argumentString = options.buildArgumentString(validate=False) self.assertEqual('--help --version --verbose --quiet --config "config" --full --managed --managed-only --logfile "logfile" --owner "a:b" --mode 631 --output --debug --stack --diagnostics "collect" ', argumentString) def testBuildArgumentString_017(self): """Test with all values set, actions containing multiple items, validate=False.""" options = Options() options.help = True options.version = True options.verbose = True options.quiet = True options.config = "config" options.full = True options.logfile = "logfile" options.owner = ("a", "b") options.mode = "631" options.output = True options.debug = True options.stacktrace = True options.diagnostics = True options.actions = ["collect", "stage", ] argumentString = options.buildArgumentString(validate=False) self.assertEqual('--help --version --verbose --quiet --config "config" --full --logfile "logfile" --owner "a:b" --mode 631 --output --debug --stack --diagnostics "collect" "stage" ', argumentString) def testBuildArgumentString_018(self): """Test with no values set, validate=True.""" options = Options() self.assertRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_019(self): """Test with help set, validate=True.""" options = Options() options.help = True argumentString = options.buildArgumentString(validate=True) self.assertEqual("--help ", argumentString) def testBuildArgumentString_020(self): """Test with version set, validate=True.""" options = Options() options.version = True argumentString = options.buildArgumentString(validate=True) self.assertEqual("--version ", argumentString) def testBuildArgumentString_021(self): """Test with verbose set, validate=True.""" options = Options() options.verbose = True self.assertRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_022(self): """Test with quiet set, validate=True.""" options = Options() 
        options.quiet = True
        self.assertRaises(ValueError, options.buildArgumentString, validate=True)

    def testBuildArgumentString_023(self):
        """Test with config set, validate=True."""
        options = Options()
        options.config = "stuff"
        self.assertRaises(ValueError, options.buildArgumentString, validate=True)

    def testBuildArgumentString_024(self):
        """Test with full set, validate=True."""
        options = Options()
        options.full = True
        self.assertRaises(ValueError, options.buildArgumentString, validate=True)

    def testBuildArgumentString_025(self):
        """Test with logfile set, validate=True."""
        options = Options()
        options.logfile = "bogus"
        self.assertRaises(ValueError, options.buildArgumentString, validate=True)

    def testBuildArgumentString_026(self):
        """Test with owner set, validate=True."""
        options = Options()
        options.owner = ("ken", "group")
        self.assertRaises(ValueError, options.buildArgumentString, validate=True)

    def testBuildArgumentString_027(self):
        """Test with mode set, validate=True."""
        options = Options()
        options.mode = 0o644
        self.assertRaises(ValueError, options.buildArgumentString, validate=True)

    def testBuildArgumentString_028(self):
        """Test with output set, validate=True."""
        options = Options()
        options.output = True
        self.assertRaises(ValueError, options.buildArgumentString, validate=True)

    def testBuildArgumentString_029(self):
        """Test with debug set, validate=True."""
        options = Options()
        options.debug = True
        self.assertRaises(ValueError, options.buildArgumentString, validate=True)

    def testBuildArgumentString_030(self):
        """Test with stacktrace set, validate=True."""
        options = Options()
        options.stacktrace = True
        self.assertRaises(ValueError, options.buildArgumentString, validate=True)

    def testBuildArgumentString_031(self):
        """Test with actions containing one item, validate=True."""
        options = Options()
        options.actions = [ "collect", ]
        argumentString = options.buildArgumentString(validate=True)
        self.assertEqual('"collect" ', argumentString)

    def testBuildArgumentString_032(self):
"""Test with actions containing multiple items, validate=True.""" options = Options() options.actions = [ "collect", "stage", "store", "purge", ] argumentString = options.buildArgumentString(validate=True) self.assertEqual('"collect" "stage" "store" "purge" ', argumentString) def testBuildArgumentString_033(self): """Test with all values set (except managed ones), actions containing one item, validate=True.""" options = Options() options.help = True options.version = True options.verbose = True options.quiet = True options.config = "config" options.full = True options.logfile = "logfile" options.owner = ("a", "b") options.mode = "631" options.output = True options.debug = True options.stacktrace = True options.diagnostics = True options.actions = ["collect", ] argumentString = options.buildArgumentString(validate=True) self.assertEqual('--help --version --verbose --quiet --config "config" --full --logfile "logfile" --owner "a:b" --mode 631 --output --debug --stack --diagnostics "collect" ', argumentString) def testBuildArgumentString_034(self): """Test with all values set (except managed ones), actions containing multiple items, validate=True.""" options = Options() options.help = True options.version = True options.verbose = True options.quiet = True options.config = "config" options.full = True options.logfile = "logfile" options.owner = ("a", "b") options.mode = "631" options.output = True options.debug = True options.stacktrace = True options.diagnostics = True options.actions = ["collect", "stage", ] argumentString = options.buildArgumentString(validate=True) self.assertEqual('--help --version --verbose --quiet --config "config" --full --logfile "logfile" --owner "a:b" --mode 631 --output --debug --stack --diagnostics "collect" "stage" ', argumentString) def testBuildArgumentString_035(self): """Test with managed set, validate=False.""" options = Options() options.managed = True argumentString = options.buildArgumentString(validate=False) 
self.assertEqual("--managed ", argumentString) def testBuildArgumentString_036(self): """Test with managed set, validate=True.""" options = Options() options.managed = True self.assertRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_037(self): """Test with full set, validate=False.""" options = Options() options.managedOnly = True argumentString = options.buildArgumentString(validate=False) self.assertEqual("--managed-only ", argumentString) def testBuildArgumentString_038(self): """Test with managedOnly set, validate=True.""" options = Options() options.managedOnly = True self.assertRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_039(self): """Test with all values set (except managed ones), actions containing one item, validate=True.""" options = Options() options.help = True options.version = True options.verbose = True options.quiet = True options.config = "config" options.full = True options.managed = True options.managedOnly = True options.logfile = "logfile" options.owner = ("a", "b") options.mode = "631" options.output = True options.debug = True options.stacktrace = True options.diagnostics = True options.actions = ["collect", ] self.assertRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_040(self): """Test with all values set (except managed ones), actions containing multiple items, validate=True.""" options = Options() options.help = True options.version = True options.verbose = True options.quiet = True options.config = "config" options.full = True options.managed = True options.managedOnly = True options.logfile = "logfile" options.owner = ("a", "b") options.mode = "631" options.output = True options.debug = True options.stacktrace = True options.diagnostics = True options.actions = ["collect", "stage", ] self.assertRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_041(self): """Test with 
diagnostics set, validate=False.""" options = Options() options.diagnostics = True argumentString = options.buildArgumentString(validate=False) self.assertEqual("--diagnostics ", argumentString) def testBuildArgumentString_042(self): """Test with diagnostics set, validate=True.""" options = Options() options.diagnostics = True argumentString = options.buildArgumentString(validate=True) self.assertEqual("--diagnostics ", argumentString) ###################### # TestActionSet class ###################### class TestActionSet(unittest.TestCase): """Tests for the _ActionSet class.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ####################################### # Test constructor, "index" order mode ####################################### def testActionSet_001(self): """ Test with actions=None, extensions=None. """ actions = None extensions = ExtensionsConfig(None, None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_002(self): """ Test with actions=[], extensions=None. """ actions = [] extensions = ExtensionsConfig(None, None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_003(self): """ Test with actions=[], extensions=[]. """ actions = [] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_004(self): """ Test with actions=[ collect ], extensions=[]. 
""" actions = [ "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertFalse(actionSet.actionSet is None) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) def testActionSet_005(self): """ Test with actions=[ stage ], extensions=[]. """ actions = [ "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertFalse(actionSet.actionSet is None) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(200, actionSet.actionSet[0].index) self.assertEqual("stage", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeStage, actionSet.actionSet[0].function) def testActionSet_006(self): """ Test with actions=[ store ], extensions=[]. """ actions = [ "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertFalse(actionSet.actionSet is None) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(300, actionSet.actionSet[0].index) self.assertEqual("store", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeStore, actionSet.actionSet[0].function) def testActionSet_007(self): """ Test with actions=[ purge ], extensions=[]. 
""" actions = [ "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertFalse(actionSet.actionSet is None) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(400, actionSet.actionSet[0].index) self.assertEqual("purge", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executePurge, actionSet.actionSet[0].function) def testActionSet_008(self): """ Test with actions=[ all ], extensions=[]. """ actions = [ "all", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertFalse(actionSet.actionSet is None) self.assertTrue(len(actionSet.actionSet) == 4) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) self.assertEqual(200, actionSet.actionSet[1].index) self.assertEqual("stage", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStage, actionSet.actionSet[1].function) self.assertEqual(300, actionSet.actionSet[2].index) self.assertEqual("store", actionSet.actionSet[2].name) self.assertEqual(None, actionSet.actionSet[2].preHooks) self.assertEqual(None, actionSet.actionSet[2].postHooks) self.assertEqual(executeStore, actionSet.actionSet[2].function) self.assertEqual(400, actionSet.actionSet[3].index) self.assertEqual("purge", actionSet.actionSet[3].name) self.assertEqual(None, actionSet.actionSet[3].preHooks) self.assertEqual(None, actionSet.actionSet[3].postHooks) self.assertEqual(executePurge, 
actionSet.actionSet[3].function) def testActionSet_009(self): """ Test with actions=[ rebuild ], extensions=[]. """ actions = [ "rebuild", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(0, actionSet.actionSet[0].index) self.assertEqual("rebuild", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeRebuild, actionSet.actionSet[0].function) def testActionSet_010(self): """ Test with actions=[ validate ], extensions=[]. """ actions = [ "validate", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(0, actionSet.actionSet[0].index) self.assertEqual("validate", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeValidate, actionSet.actionSet[0].function) def testActionSet_011(self): """ Test with actions=[ collect, collect ], extensions=[]. 
""" actions = [ "collect", "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) self.assertEqual(100, actionSet.actionSet[1].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[1].function) def testActionSet_012(self): """ Test with actions=[ collect, stage ], extensions=[]. """ actions = [ "collect", "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) self.assertEqual(200, actionSet.actionSet[1].index) self.assertEqual("stage", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStage, actionSet.actionSet[1].function) def testActionSet_013(self): """ Test with actions=[ collect, store ], extensions=[]. 
""" actions = [ "collect", "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) self.assertEqual(300, actionSet.actionSet[1].index) self.assertEqual("store", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStore, actionSet.actionSet[1].function) def testActionSet_014(self): """ Test with actions=[ collect, purge ], extensions=[]. """ actions = [ "collect", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) self.assertEqual(400, actionSet.actionSet[1].index) self.assertEqual("purge", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executePurge, actionSet.actionSet[1].function) def testActionSet_015(self): """ Test with actions=[ collect, all ], extensions=[]. 
""" actions = [ "collect", "all", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_016(self): """ Test with actions=[ collect, rebuild ], extensions=[]. """ actions = [ "collect", "rebuild", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_017(self): """ Test with actions=[ collect, validate ], extensions=[]. """ actions = [ "collect", "validate", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_018(self): """ Test with actions=[ stage, collect ], extensions=[]. """ actions = [ "stage", "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) self.assertEqual(200, actionSet.actionSet[1].index) self.assertEqual("stage", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStage, actionSet.actionSet[1].function) def testActionSet_019(self): """ Test with actions=[ stage, stage ], extensions=[]. 
""" actions = [ "stage", "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(200, actionSet.actionSet[0].index) self.assertEqual("stage", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeStage, actionSet.actionSet[0].function) self.assertEqual(200, actionSet.actionSet[1].index) self.assertEqual("stage", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStage, actionSet.actionSet[1].function) def testActionSet_020(self): """ Test with actions=[ stage, store ], extensions=[]. """ actions = [ "stage", "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(200, actionSet.actionSet[0].index) self.assertEqual("stage", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeStage, actionSet.actionSet[0].function) self.assertEqual(300, actionSet.actionSet[1].index) self.assertEqual("store", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStore, actionSet.actionSet[1].function) def testActionSet_021(self): """ Test with actions=[ stage, purge ], extensions=[]. 
""" actions = [ "stage", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(200, actionSet.actionSet[0].index) self.assertEqual("stage", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeStage, actionSet.actionSet[0].function) self.assertEqual(400, actionSet.actionSet[1].index) self.assertEqual("purge", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executePurge, actionSet.actionSet[1].function) def testActionSet_022(self): """ Test with actions=[ stage, all ], extensions=[]. """ actions = [ "stage", "all", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_023(self): """ Test with actions=[ stage, rebuild ], extensions=[]. """ actions = [ "stage", "rebuild", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_024(self): """ Test with actions=[ stage, validate ], extensions=[]. """ actions = [ "stage", "validate", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_025(self): """ Test with actions=[ store, collect ], extensions=[]. 
""" actions = [ "store", "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) self.assertEqual(300, actionSet.actionSet[1].index) self.assertEqual("store", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStore, actionSet.actionSet[1].function) def testActionSet_026(self): """ Test with actions=[ store, stage ], extensions=[]. """ actions = [ "store", "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(200, actionSet.actionSet[0].index) self.assertEqual("stage", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeStage, actionSet.actionSet[0].function) self.assertEqual(300, actionSet.actionSet[1].index) self.assertEqual("store", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStore, actionSet.actionSet[1].function) def testActionSet_027(self): """ Test with actions=[ store, store ], extensions=[]. 
""" actions = [ "store", "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(300, actionSet.actionSet[0].index) self.assertEqual("store", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeStore, actionSet.actionSet[0].function) self.assertEqual(300, actionSet.actionSet[1].index) self.assertEqual("store", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStore, actionSet.actionSet[1].function) def testActionSet_028(self): """ Test with actions=[ store, purge ], extensions=[]. """ actions = [ "store", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(300, actionSet.actionSet[0].index) self.assertEqual("store", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeStore, actionSet.actionSet[0].function) self.assertEqual(400, actionSet.actionSet[1].index) self.assertEqual("purge", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executePurge, actionSet.actionSet[1].function) def testActionSet_029(self): """ Test with actions=[ store, all ], extensions=[]. 
""" actions = [ "store", "all", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_030(self): """ Test with actions=[ store, rebuild ], extensions=[]. """ actions = [ "store", "rebuild", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_031(self): """ Test with actions=[ store, validate ], extensions=[]. """ actions = [ "store", "validate", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_032(self): """ Test with actions=[ purge, collect ], extensions=[]. """ actions = [ "purge", "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) self.assertEqual(400, actionSet.actionSet[1].index) self.assertEqual("purge", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executePurge, actionSet.actionSet[1].function) def testActionSet_033(self): """ Test with actions=[ purge, stage ], extensions=[]. 
""" actions = [ "purge", "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(200, actionSet.actionSet[0].index) self.assertEqual("stage", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeStage, actionSet.actionSet[0].function) self.assertEqual(400, actionSet.actionSet[1].index) self.assertEqual("purge", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executePurge, actionSet.actionSet[1].function) def testActionSet_034(self): """ Test with actions=[ purge, store ], extensions=[]. """ actions = [ "purge", "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(300, actionSet.actionSet[0].index) self.assertEqual("store", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeStore, actionSet.actionSet[0].function) self.assertEqual(400, actionSet.actionSet[1].index) self.assertEqual("purge", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executePurge, actionSet.actionSet[1].function) def testActionSet_035(self): """ Test with actions=[ purge, purge ], extensions=[]. 
""" actions = [ "purge", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(400, actionSet.actionSet[0].index) self.assertEqual("purge", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executePurge, actionSet.actionSet[0].function) self.assertEqual(400, actionSet.actionSet[1].index) self.assertEqual("purge", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executePurge, actionSet.actionSet[1].function) def testActionSet_036(self): """ Test with actions=[ purge, all ], extensions=[]. """ actions = [ "purge", "all", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_037(self): """ Test with actions=[ purge, rebuild ], extensions=[]. """ actions = [ "purge", "rebuild", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_038(self): """ Test with actions=[ purge, validate ], extensions=[]. """ actions = [ "purge", "validate", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_039(self): """ Test with actions=[ all, collect ], extensions=[]. """ actions = [ "all", "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_040(self): """ Test with actions=[ all, stage ], extensions=[]. 
""" actions = [ "all", "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_041(self): """ Test with actions=[ all, store ], extensions=[]. """ actions = [ "all", "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_042(self): """ Test with actions=[ all, purge ], extensions=[]. """ actions = [ "all", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_043(self): """ Test with actions=[ all, all ], extensions=[]. """ actions = [ "all", "all", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_044(self): """ Test with actions=[ all, rebuild ], extensions=[]. """ actions = [ "all", "rebuild", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_045(self): """ Test with actions=[ all, validate ], extensions=[]. """ actions = [ "all", "validate", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_046(self): """ Test with actions=[ rebuild, collect ], extensions=[]. """ actions = [ "rebuild", "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_047(self): """ Test with actions=[ rebuild, stage ], extensions=[]. 
""" actions = [ "rebuild", "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_048(self): """ Test with actions=[ rebuild, store ], extensions=[]. """ actions = [ "rebuild", "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_049(self): """ Test with actions=[ rebuild, purge ], extensions=[]. """ actions = [ "rebuild", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_050(self): """ Test with actions=[ rebuild, all ], extensions=[]. """ actions = [ "rebuild", "all", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_051(self): """ Test with actions=[ rebuild, rebuild ], extensions=[]. """ actions = [ "rebuild", "rebuild", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_052(self): """ Test with actions=[ rebuild, validate ], extensions=[]. """ actions = [ "rebuild", "validate", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_053(self): """ Test with actions=[ validate, collect ], extensions=[]. """ actions = [ "validate", "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_054(self): """ Test with actions=[ validate, stage ], extensions=[]. 
""" actions = [ "validate", "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_055(self): """ Test with actions=[ validate, store ], extensions=[]. """ actions = [ "validate", "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_056(self): """ Test with actions=[ validate, purge ], extensions=[]. """ actions = [ "validate", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_057(self): """ Test with actions=[ validate, all ], extensions=[]. """ actions = [ "validate", "all", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_058(self): """ Test with actions=[ validate, rebuild ], extensions=[]. """ actions = [ "validate", "rebuild", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_059(self): """ Test with actions=[ validate, validate ], extensions=[]. """ actions = [ "validate", "validate", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_060(self): """ Test with actions=[ bogus ], extensions=[]. """ actions = [ "bogus", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_061(self): """ Test with actions=[ bogus, collect ], extensions=[]. 
""" actions = [ "bogus", "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_062(self): """ Test with actions=[ bogus, stage ], extensions=[]. """ actions = [ "bogus", "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_063(self): """ Test with actions=[ bogus, store ], extensions=[]. """ actions = [ "bogus", "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_064(self): """ Test with actions=[ bogus, purge ], extensions=[]. """ actions = [ "bogus", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_065(self): """ Test with actions=[ bogus, all ], extensions=[]. """ actions = [ "bogus", "all", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_066(self): """ Test with actions=[ bogus, rebuild ], extensions=[]. """ actions = [ "bogus", "rebuild", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_067(self): """ Test with actions=[ bogus, validate ], extensions=[]. """ actions = [ "bogus", "validate", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_068(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 50) ]. 
""" actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(50, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual(100, actionSet.actionSet[1].index) self.assertEqual("collect", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[1].function) def testActionSet_069(self): """ Test with actions=[ stage, one ], extensions=[ (one, index 50) ]. """ actions = [ "stage", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(50, actionSet.actionSet[0].index) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(200, actionSet.actionSet[1].index) self.assertEqual("stage", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStage, actionSet.actionSet[1].function) def testActionSet_070(self): """ Test with actions=[ store, one ], extensions=[ (one, index 50) ]. 
""" actions = [ "store", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(50, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual(300, actionSet.actionSet[1].index) self.assertEqual("store", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStore, actionSet.actionSet[1].function) def testActionSet_071(self): """ Test with actions=[ purge, one ], extensions=[ (one, index 50) ]. """ actions = [ "purge", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(50, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual(400, actionSet.actionSet[1].index) self.assertEqual("purge", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executePurge, actionSet.actionSet[1].function) def testActionSet_072(self): """ Test with actions=[ all, one ], extensions=[ (one, index 50) ]. 
""" actions = [ "all", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_073(self): """ Test with actions=[ rebuild, one ], extensions=[ (one, index 50) ]. """ actions = [ "rebuild", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_074(self): """ Test with actions=[ validate, one ], extensions=[ (one, index 50) ]. """ actions = [ "validate", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_075(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 150) ]. """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) self.assertEqual(150, actionSet.actionSet[1].index) self.assertEqual("one", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(isdir, actionSet.actionSet[1].function) def testActionSet_076(self): """ Test with actions=[ stage, one ], extensions=[ (one, index 150) ]. 
""" actions = [ "stage", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(150, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual(200, actionSet.actionSet[1].index) self.assertEqual("stage", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStage, actionSet.actionSet[1].function) def testActionSet_077(self): """ Test with actions=[ store, one ], extensions=[ (one, index 150) ]. """ actions = [ "store", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(150, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual(300, actionSet.actionSet[1].index) self.assertEqual("store", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStore, actionSet.actionSet[1].function) def testActionSet_078(self): """ Test with actions=[ purge, one ], extensions=[ (one, index 150) ]. 
""" actions = [ "purge", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(150, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual(400, actionSet.actionSet[1].index) self.assertEqual("purge", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executePurge, actionSet.actionSet[1].function) def testActionSet_079(self): """ Test with actions=[ all, one ], extensions=[ (one, index 150) ]. """ actions = [ "all", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_080(self): """ Test with actions=[ rebuild, one ], extensions=[ (one, index 150) ]. """ actions = [ "rebuild", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_081(self): """ Test with actions=[ validate, one ], extensions=[ (one, index 150) ]. """ actions = [ "validate", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_082(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 250) ]. 
""" actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 250), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) self.assertEqual(250, actionSet.actionSet[1].index) self.assertEqual("one", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(isdir, actionSet.actionSet[1].function) def testActionSet_083(self): """ Test with actions=[ stage, one ], extensions=[ (one, index 250) ]. """ actions = [ "stage", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 250), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(200, actionSet.actionSet[0].index) self.assertEqual("stage", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeStage, actionSet.actionSet[0].function) self.assertEqual(250, actionSet.actionSet[1].index) self.assertEqual("one", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(isdir, actionSet.actionSet[1].function) def testActionSet_084(self): """ Test with actions=[ store, one ], extensions=[ (one, index 250) ]. 
""" actions = [ "store", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 250), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(250, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual(300, actionSet.actionSet[1].index) self.assertEqual("store", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStore, actionSet.actionSet[1].function) def testActionSet_085(self): """ Test with actions=[ purge, one ], extensions=[ (one, index 250) ]. """ actions = [ "purge", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 250), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(250, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual(400, actionSet.actionSet[1].index) self.assertEqual("purge", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executePurge, actionSet.actionSet[1].function) def testActionSet_086(self): """ Test with actions=[ all, one ], extensions=[ (one, index 250) ]. 
""" actions = [ "all", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 250), ], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_087(self): """ Test with actions=[ rebuild, one ], extensions=[ (one, index 250) ]. """ actions = [ "rebuild", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 250), ], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_088(self): """ Test with actions=[ validate, one ], extensions=[ (one, index 250) ]. """ actions = [ "validate", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 250), ], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_089(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 350) ]. """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 350), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) self.assertEqual(350, actionSet.actionSet[1].index) self.assertEqual("one", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(isdir, actionSet.actionSet[1].function) def testActionSet_090(self): """ Test with actions=[ stage, one ], extensions=[ (one, index 350) ]. 
""" actions = [ "stage", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 350), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(200, actionSet.actionSet[0].index) self.assertEqual("stage", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeStage, actionSet.actionSet[0].function) self.assertEqual(350, actionSet.actionSet[1].index) self.assertEqual("one", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(isdir, actionSet.actionSet[1].function) def testActionSet_091(self): """ Test with actions=[ store, one ], extensions=[ (one, index 350) ]. """ actions = [ "store", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 350), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(300, actionSet.actionSet[0].index) self.assertEqual("store", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeStore, actionSet.actionSet[0].function) self.assertEqual(350, actionSet.actionSet[1].index) self.assertEqual("one", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(isdir, actionSet.actionSet[1].function) def testActionSet_092(self): """ Test with actions=[ purge, one ], extensions=[ (one, index 350) ]. 
""" actions = [ "purge", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 350), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(350, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual(400, actionSet.actionSet[1].index) self.assertEqual("purge", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executePurge, actionSet.actionSet[1].function) def testActionSet_093(self): """ Test with actions=[ all, one ], extensions=[ (one, index 350) ]. """ actions = [ "all", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 350), ], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_094(self): """ Test with actions=[ rebuild, one ], extensions=[ (one, index 350) ]. """ actions = [ "rebuild", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 350), ], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_095(self): """ Test with actions=[ validate, one ], extensions=[ (one, index 350) ]. """ actions = [ "validate", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 350), ], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_096(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 450) ]. 
""" actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 450), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) self.assertEqual(450, actionSet.actionSet[1].index) self.assertEqual("one", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(isdir, actionSet.actionSet[1].function) def testActionSet_097(self): """ Test with actions=[ stage, one ], extensions=[ (one, index 450) ]. """ actions = [ "stage", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 450), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(200, actionSet.actionSet[0].index) self.assertEqual("stage", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeStage, actionSet.actionSet[0].function) self.assertEqual(450, actionSet.actionSet[1].index) self.assertEqual("one", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(isdir, actionSet.actionSet[1].function) def testActionSet_098(self): """ Test with actions=[ store, one ], extensions=[ (one, index 450) ]. 
""" actions = [ "store", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 450), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(300, actionSet.actionSet[0].index) self.assertEqual("store", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeStore, actionSet.actionSet[0].function) self.assertEqual(450, actionSet.actionSet[1].index) self.assertEqual("one", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(isdir, actionSet.actionSet[1].function) def testActionSet_099(self): """ Test with actions=[ purge, one ], extensions=[ (one, index 450) ]. """ actions = [ "purge", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 450), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(400, actionSet.actionSet[0].index) self.assertEqual("purge", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executePurge, actionSet.actionSet[0].function) self.assertEqual(450, actionSet.actionSet[1].index) self.assertEqual("one", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(isdir, actionSet.actionSet[1].function) def testActionSet_100(self): """ Test with actions=[ all, one ], extensions=[ (one, index 450) ]. 
""" actions = [ "all", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 450), ], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_101(self): """ Test with actions=[ rebuild, one ], extensions=[ (one, index 450) ]. """ actions = [ "rebuild", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 450), ], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_102(self): """ Test with actions=[ validate, one ], extensions=[ (one, index 450) ]. """ actions = [ "validate", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 450), ], None) options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_103(self): """ Test with actions=[ one, one ], extensions=[ (one, index 450) ]. """ actions = [ "one", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 450), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(450, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual(450, actionSet.actionSet[1].index) self.assertEqual("one", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(isdir, actionSet.actionSet[1].function) def testActionSet_104(self): """ Test with actions=[ collect, stage, store, purge ], extensions=[]. 
""" actions = [ "collect", "stage", "store", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 4) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) self.assertEqual(200, actionSet.actionSet[1].index) self.assertEqual("stage", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStage, actionSet.actionSet[1].function) self.assertEqual(300, actionSet.actionSet[2].index) self.assertEqual("store", actionSet.actionSet[2].name) self.assertEqual(None, actionSet.actionSet[2].preHooks) self.assertEqual(None, actionSet.actionSet[2].postHooks) self.assertEqual(executeStore, actionSet.actionSet[2].function) self.assertEqual(400, actionSet.actionSet[3].index) self.assertEqual("purge", actionSet.actionSet[3].name) self.assertEqual(None, actionSet.actionSet[3].preHooks) self.assertEqual(None, actionSet.actionSet[3].postHooks) self.assertEqual(executePurge, actionSet.actionSet[3].function) def testActionSet_105(self): """ Test with actions=[ stage, purge, collect, store ], extensions=[]. 
""" actions = [ "stage", "purge", "collect", "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 4) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) self.assertEqual(200, actionSet.actionSet[1].index) self.assertEqual("stage", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStage, actionSet.actionSet[1].function) self.assertEqual(300, actionSet.actionSet[2].index) self.assertEqual("store", actionSet.actionSet[2].name) self.assertEqual(None, actionSet.actionSet[2].preHooks) self.assertEqual(None, actionSet.actionSet[2].postHooks) self.assertEqual(executeStore, actionSet.actionSet[2].function) self.assertEqual(400, actionSet.actionSet[3].index) self.assertEqual("purge", actionSet.actionSet[3].name) self.assertEqual(None, actionSet.actionSet[3].preHooks) self.assertEqual(None, actionSet.actionSet[3].postHooks) self.assertEqual(executePurge, actionSet.actionSet[3].function) def testActionSet_106(self): """ Test with actions=[ collect, stage, store, purge, one, two, three, four, five ], extensions=[ (index 50, 150, 250, 350, 450)]. 
""" actions = [ "collect", "stage", "store", "purge", "one", "two", "three", "four", "five", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ExtendedAction("two", "os.path", "isfile", 150), ExtendedAction("three", "os.path", "islink", 250), ExtendedAction("four", "os.path", "isabs", 350), ExtendedAction("five", "os.path", "exists", 450), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 9) self.assertEqual(50, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual(100, actionSet.actionSet[1].index) self.assertEqual("collect", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[1].function) self.assertEqual(150, actionSet.actionSet[2].index) self.assertEqual("two", actionSet.actionSet[2].name) self.assertEqual(None, actionSet.actionSet[2].preHooks) self.assertEqual(None, actionSet.actionSet[2].postHooks) self.assertEqual(isfile, actionSet.actionSet[2].function) self.assertEqual(200, actionSet.actionSet[3].index) self.assertEqual("stage", actionSet.actionSet[3].name) self.assertEqual(None, actionSet.actionSet[3].preHooks) self.assertEqual(None, actionSet.actionSet[3].postHooks) self.assertEqual(executeStage, actionSet.actionSet[3].function) self.assertEqual(250, actionSet.actionSet[4].index) self.assertEqual("three", actionSet.actionSet[4].name) self.assertEqual(None, actionSet.actionSet[4].preHooks) self.assertEqual(None, actionSet.actionSet[4].postHooks) self.assertEqual(islink, actionSet.actionSet[4].function) self.assertEqual(300, actionSet.actionSet[5].index) 
self.assertEqual("store", actionSet.actionSet[5].name) self.assertEqual(None, actionSet.actionSet[5].preHooks) self.assertEqual(None, actionSet.actionSet[5].postHooks) self.assertEqual(executeStore, actionSet.actionSet[5].function) self.assertEqual(350, actionSet.actionSet[6].index) self.assertEqual("four", actionSet.actionSet[6].name) self.assertEqual(None, actionSet.actionSet[6].preHooks) self.assertEqual(None, actionSet.actionSet[6].postHooks) self.assertEqual(isabs, actionSet.actionSet[6].function) self.assertEqual(400, actionSet.actionSet[7].index) self.assertEqual("purge", actionSet.actionSet[7].name) self.assertEqual(None, actionSet.actionSet[7].preHooks) self.assertEqual(None, actionSet.actionSet[7].postHooks) self.assertEqual(executePurge, actionSet.actionSet[7].function) self.assertEqual(450, actionSet.actionSet[8].index) self.assertEqual("five", actionSet.actionSet[8].name) self.assertEqual(None, actionSet.actionSet[8].preHooks) self.assertEqual(None, actionSet.actionSet[8].postHooks) self.assertEqual(exists, actionSet.actionSet[8].function) def testActionSet_107(self): """ Test with actions=[ one, five, collect, store, three, stage, four, purge, two ], extensions=[ (index 50, 150, 250, 350, 450)]. 
""" actions = [ "one", "five", "collect", "store", "three", "stage", "four", "purge", "two", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ExtendedAction("two", "os.path", "isfile", 150), ExtendedAction("three", "os.path", "islink", 250), ExtendedAction("four", "os.path", "isabs", 350), ExtendedAction("five", "os.path", "exists", 450), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 9) self.assertEqual(50, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual(100, actionSet.actionSet[1].index) self.assertEqual("collect", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[1].function) self.assertEqual(150, actionSet.actionSet[2].index) self.assertEqual("two", actionSet.actionSet[2].name) self.assertEqual(None, actionSet.actionSet[2].preHooks) self.assertEqual(None, actionSet.actionSet[2].postHooks) self.assertEqual(isfile, actionSet.actionSet[2].function) self.assertEqual(200, actionSet.actionSet[3].index) self.assertEqual("stage", actionSet.actionSet[3].name) self.assertEqual(None, actionSet.actionSet[3].preHooks) self.assertEqual(None, actionSet.actionSet[3].postHooks) self.assertEqual(executeStage, actionSet.actionSet[3].function) self.assertEqual(250, actionSet.actionSet[4].index) self.assertEqual("three", actionSet.actionSet[4].name) self.assertEqual(None, actionSet.actionSet[4].preHooks) self.assertEqual(None, actionSet.actionSet[4].postHooks) self.assertEqual(islink, actionSet.actionSet[4].function) self.assertEqual(300, actionSet.actionSet[5].index) 
self.assertEqual("store", actionSet.actionSet[5].name) self.assertEqual(None, actionSet.actionSet[5].preHooks) self.assertEqual(None, actionSet.actionSet[5].postHooks) self.assertEqual(executeStore, actionSet.actionSet[5].function) self.assertEqual(350, actionSet.actionSet[6].index) self.assertEqual("four", actionSet.actionSet[6].name) self.assertEqual(None, actionSet.actionSet[6].preHooks) self.assertEqual(None, actionSet.actionSet[6].postHooks) self.assertEqual(isabs, actionSet.actionSet[6].function) self.assertEqual(400, actionSet.actionSet[7].index) self.assertEqual("purge", actionSet.actionSet[7].name) self.assertEqual(None, actionSet.actionSet[7].preHooks) self.assertEqual(None, actionSet.actionSet[7].postHooks) self.assertEqual(executePurge, actionSet.actionSet[7].function) self.assertEqual(450, actionSet.actionSet[8].index) self.assertEqual("five", actionSet.actionSet[8].name) self.assertEqual(None, actionSet.actionSet[8].preHooks) self.assertEqual(None, actionSet.actionSet[8].postHooks) self.assertEqual(exists, actionSet.actionSet[8].function) def testActionSet_108(self): """ Test with actions=[ one ], extensions=[ (one, index 50) ]. 
""" actions = [ "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(50, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) def testActionSet_109(self): """ Test with actions=[ collect ], extensions=[], hooks=[] """ actions = [ "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.hooks = [] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertFalse(actionSet.actionSet is None) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) def testActionSet_110(self): """ Test with actions=[ collect ], extensions=[], pre-hook on 'stage' action. 
""" actions = [ "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.hooks = [ PreActionHook("stage", "something") ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertFalse(actionSet.actionSet is None) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) def testActionSet_111(self): """ Test with actions=[ collect ], extensions=[], post-hook on 'stage' action. """ actions = [ "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.hooks = [ PostActionHook("stage", "something") ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertFalse(actionSet.actionSet is None) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) def testActionSet_112(self): """ Test with actions=[ collect ], extensions=[], pre-hook on 'collect' action. 
""" actions = [ "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.hooks = [ PreActionHook("collect", "something") ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertFalse(actionSet.actionSet is None) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual([ PreActionHook("collect", "something"), ], actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) def testActionSet_113(self): """ Test with actions=[ collect ], extensions=[], post-hook on 'collect' action. """ actions = [ "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.hooks = [ PostActionHook("collect", "something") ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertFalse(actionSet.actionSet is None) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual([ PostActionHook("collect", "something"), ], actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) def testActionSet_114(self): """ Test with actions=[ collect ], extensions=[], pre- and post-hook on 'collect' action. 
""" actions = [ "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.hooks = [ PreActionHook("collect", "something1"), PostActionHook("collect", "something2") ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertFalse(actionSet.actionSet is None) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual([ PreActionHook("collect", "something1"), ], actionSet.actionSet[0].preHooks) self.assertEqual([ PostActionHook("collect", "something2"), ], actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) def testActionSet_115(self): """ Test with actions=[ one ], extensions=[ (one, index 50) ], hooks=[] """ actions = [ "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(50, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) def testActionSet_116(self): """ Test with actions=[ one ], extensions=[ (one, index 50) ], pre-hook on "store" action. 
""" actions = [ "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [ PreActionHook("store", "whatever"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(50, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) def testActionSet_117(self): """ Test with actions=[ one ], extensions=[ (one, index 50) ], post-hook on "store" action. """ actions = [ "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [ PostActionHook("store", "whatever"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(50, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) def testActionSet_118(self): """ Test with actions=[ one ], extensions=[ (one, index 50) ], pre-hook on "one" action. 
""" actions = [ "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [ PreActionHook("one", "extension"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(50, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual([ PreActionHook("one", "extension"), ], actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) def testActionSet_119(self): """ Test with actions=[ one ], extensions=[ (one, index 50) ], post-hook on "one" action. """ actions = [ "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [ PostActionHook("one", "extension"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(50, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual([ PostActionHook("one", "extension"), ], actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) def testActionSet_120(self): """ Test with actions=[ one ], extensions=[ (one, index 50) ], pre- and post-hook on "one" action. 
""" actions = [ "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [ PostActionHook("one", "extension2"), PreActionHook("one", "extension1"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(50, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual([ PreActionHook("one", "extension1"), ], actionSet.actionSet[0].preHooks) self.assertEqual([ PostActionHook("one", "extension2"), ], actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) def testActionSet_121(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 50) ], hooks=[] """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(50, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual(100, actionSet.actionSet[1].index) self.assertEqual("collect", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[1].function) def testActionSet_122(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 50) ], pre-hook on "purge" action """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [ PreActionHook("purge", "rm -f"), ] actionSet = 
_ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(50, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual(100, actionSet.actionSet[1].index) self.assertEqual("collect", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[1].function) def testActionSet_123(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 50) ], post-hook on "purge" action """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [ PostActionHook("purge", "rm -f"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(50, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual(100, actionSet.actionSet[1].index) self.assertEqual("collect", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[1].function) def testActionSet_124(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 50) ], pre-hook on "collect" action """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [ 
PreActionHook("collect", "something"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(50, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual(100, actionSet.actionSet[1].index) self.assertEqual("collect", actionSet.actionSet[1].name) self.assertEqual([ PreActionHook("collect", "something"), ], actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[1].function) def testActionSet_125(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 50) ], post-hook on "collect" action """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [ PostActionHook("collect", "something"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(50, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual(100, actionSet.actionSet[1].index) self.assertEqual("collect", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual([ PostActionHook("collect", "something"), ], actionSet.actionSet[1].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[1].function) def testActionSet_126(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 50) ], pre-hook on "one" action """ actions = [ "collect", "one", ] extensions = 
ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [ PreActionHook("one", "extension"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(50, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual([ PreActionHook("one", "extension"), ], actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual(100, actionSet.actionSet[1].index) self.assertEqual("collect", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[1].function) def testActionSet_127(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 50) ], post-hook on "one" action """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [ PostActionHook("one", "extension"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(50, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual([ PostActionHook("one", "extension"), ], actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual(100, actionSet.actionSet[1].index) self.assertEqual("collect", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[1].function) def testActionSet_128(self): """ Test with actions=[ collect, one ], 
extensions=[ (one, index 50) ], set of various pre- and post hooks. """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [ PostActionHook("one", "extension"), PreActionHook("collect", "something1"), PreActionHook("collect", "something2"), PostActionHook("stage", "whatever1"), PostActionHook("stage", "whatever2"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(50, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual([ PostActionHook("one", "extension"), ], actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual(100, actionSet.actionSet[1].index) self.assertEqual("collect", actionSet.actionSet[1].name) self.assertEqual([ PreActionHook("collect", "something1"), PreActionHook("collect", "something2") ], actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[1].function) def testActionSet_129(self): """ Test with actions=[ stage, one ], extensions=[ (one, index 50) ], set of various pre- and post hooks. 
""" actions = [ "stage", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [ PostActionHook("one", "extension"), PreActionHook("collect", "something1"), PreActionHook("collect", "something2"), PostActionHook("stage", "whatever1"), PostActionHook("stage", "whatever2"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(50, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual([ PostActionHook("one", "extension"), ], actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual(200, actionSet.actionSet[1].index) self.assertEqual("stage", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual([ PostActionHook("stage", "whatever1"), PostActionHook("stage", "whatever2") ], actionSet.actionSet[1].postHooks) self.assertEqual(executeStage, actionSet.actionSet[1].function) ############################################ # Test constructor, "dependency" order mode ############################################ def testDependencyMode_001(self): """ Test with actions=None, extensions=None. """ actions = None extensions = ExtensionsConfig(None, "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_002(self): """ Test with actions=[], extensions=None. """ actions = [] extensions = ExtensionsConfig(None, "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_003(self): """ Test with actions=[], extensions=[]. 
""" actions = [] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_004(self): """ Test with actions=[ collect ], extensions=[]. """ actions = [ "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertFalse(actionSet.actionSet is None) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) def testDependencyMode_005(self): """ Test with actions=[ stage ], extensions=[]. """ actions = [ "stage", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertFalse(actionSet.actionSet is None) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(200, actionSet.actionSet[0].index) self.assertEqual("stage", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeStage, actionSet.actionSet[0].function) def testDependencyMode_006(self): """ Test with actions=[ store ], extensions=[]. 
""" actions = [ "store", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertFalse(actionSet.actionSet is None) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(300, actionSet.actionSet[0].index) self.assertEqual("store", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeStore, actionSet.actionSet[0].function) def testDependencyMode_007(self): """ Test with actions=[ purge ], extensions=[]. """ actions = [ "purge", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertFalse(actionSet.actionSet is None) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(400, actionSet.actionSet[0].index) self.assertEqual("purge", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executePurge, actionSet.actionSet[0].function) def testDependencyMode_008(self): """ Test with actions=[ all ], extensions=[]. 
""" actions = [ "all", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertFalse(actionSet.actionSet is None) self.assertTrue(len(actionSet.actionSet) == 4) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) self.assertEqual(200, actionSet.actionSet[1].index) self.assertEqual("stage", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStage, actionSet.actionSet[1].function) self.assertEqual(300, actionSet.actionSet[2].index) self.assertEqual("store", actionSet.actionSet[2].name) self.assertEqual(None, actionSet.actionSet[2].preHooks) self.assertEqual(None, actionSet.actionSet[2].postHooks) self.assertEqual(executeStore, actionSet.actionSet[2].function) self.assertEqual(400, actionSet.actionSet[3].index) self.assertEqual("purge", actionSet.actionSet[3].name) self.assertEqual(None, actionSet.actionSet[3].preHooks) self.assertEqual(None, actionSet.actionSet[3].postHooks) self.assertEqual(executePurge, actionSet.actionSet[3].function) def testDependencyMode_009(self): """ Test with actions=[ rebuild ], extensions=[]. 
""" actions = [ "rebuild", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(0, actionSet.actionSet[0].index) self.assertEqual("rebuild", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeRebuild, actionSet.actionSet[0].function) def testDependencyMode_010(self): """ Test with actions=[ validate ], extensions=[]. """ actions = [ "validate", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(0, actionSet.actionSet[0].index) self.assertEqual("validate", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeValidate, actionSet.actionSet[0].function) def testDependencyMode_011(self): """ Test with actions=[ collect, collect ], extensions=[]. 
""" actions = [ "collect", "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) self.assertEqual(100, actionSet.actionSet[1].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[1].function) def testDependencyMode_012(self): """ Test with actions=[ collect, stage ], extensions=[]. """ actions = [ "collect", "stage", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) self.assertEqual(200, actionSet.actionSet[1].index) self.assertEqual("stage", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStage, actionSet.actionSet[1].function) def testDependencyMode_013(self): """ Test with actions=[ collect, store ], extensions=[]. 
""" actions = [ "collect", "store", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) self.assertEqual(300, actionSet.actionSet[1].index) self.assertEqual("store", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStore, actionSet.actionSet[1].function) def testDependencyMode_014(self): """ Test with actions=[ collect, purge ], extensions=[]. """ actions = [ "collect", "purge", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) self.assertEqual(400, actionSet.actionSet[1].index) self.assertEqual("purge", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_015(self): """ Test with actions=[ collect, all ], extensions=[]. 
""" actions = [ "collect", "all", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_016(self): """ Test with actions=[ collect, rebuild ], extensions=[]. """ actions = [ "collect", "rebuild", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_017(self): """ Test with actions=[ collect, validate ], extensions=[]. """ actions = [ "collect", "validate", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_018(self): """ Test with actions=[ stage, collect ], extensions=[]. """ actions = [ "stage", "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) self.assertEqual(200, actionSet.actionSet[1].index) self.assertEqual("stage", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStage, actionSet.actionSet[1].function) def testDependencyMode_019(self): """ Test with actions=[ stage, stage ], extensions=[]. 
""" actions = [ "stage", "stage", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(200, actionSet.actionSet[0].index) self.assertEqual("stage", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeStage, actionSet.actionSet[0].function) self.assertEqual(200, actionSet.actionSet[1].index) self.assertEqual("stage", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStage, actionSet.actionSet[1].function) def testDependencyMode_020(self): """ Test with actions=[ stage, store ], extensions=[]. """ actions = [ "stage", "store", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(200, actionSet.actionSet[0].index) self.assertEqual("stage", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeStage, actionSet.actionSet[0].function) self.assertEqual(300, actionSet.actionSet[1].index) self.assertEqual("store", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStore, actionSet.actionSet[1].function) def testDependencyMode_021(self): """ Test with actions=[ stage, purge ], extensions=[]. 
""" actions = [ "stage", "purge", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(200, actionSet.actionSet[0].index) self.assertEqual("stage", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeStage, actionSet.actionSet[0].function) self.assertEqual(400, actionSet.actionSet[1].index) self.assertEqual("purge", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_022(self): """ Test with actions=[ stage, all ], extensions=[]. """ actions = [ "stage", "all", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_023(self): """ Test with actions=[ stage, rebuild ], extensions=[]. """ actions = [ "stage", "rebuild", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_024(self): """ Test with actions=[ stage, validate ], extensions=[]. """ actions = [ "stage", "validate", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_025(self): """ Test with actions=[ store, collect ], extensions=[]. 
""" actions = [ "store", "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) self.assertEqual(300, actionSet.actionSet[1].index) self.assertEqual("store", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStore, actionSet.actionSet[1].function) def testDependencyMode_026(self): """ Test with actions=[ store, stage ], extensions=[]. """ actions = [ "store", "stage", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(200, actionSet.actionSet[0].index) self.assertEqual("stage", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeStage, actionSet.actionSet[0].function) self.assertEqual(300, actionSet.actionSet[1].index) self.assertEqual("store", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStore, actionSet.actionSet[1].function) def testDependencyMode_027(self): """ Test with actions=[ store, store ], extensions=[]. 
""" actions = [ "store", "store", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(300, actionSet.actionSet[0].index) self.assertEqual("store", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeStore, actionSet.actionSet[0].function) self.assertEqual(300, actionSet.actionSet[1].index) self.assertEqual("store", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStore, actionSet.actionSet[1].function) def testDependencyMode_028(self): """ Test with actions=[ store, purge ], extensions=[]. """ actions = [ "store", "purge", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(300, actionSet.actionSet[0].index) self.assertEqual("store", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeStore, actionSet.actionSet[0].function) self.assertEqual(400, actionSet.actionSet[1].index) self.assertEqual("purge", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_029(self): """ Test with actions=[ store, all ], extensions=[]. 
""" actions = [ "store", "all", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_030(self): """ Test with actions=[ store, rebuild ], extensions=[]. """ actions = [ "store", "rebuild", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_031(self): """ Test with actions=[ store, validate ], extensions=[]. """ actions = [ "store", "validate", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_032(self): """ Test with actions=[ purge, collect ], extensions=[]. """ actions = [ "purge", "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) self.assertEqual(400, actionSet.actionSet[1].index) self.assertEqual("purge", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_033(self): """ Test with actions=[ purge, stage ], extensions=[]. 
""" actions = [ "purge", "stage", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(200, actionSet.actionSet[0].index) self.assertEqual("stage", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeStage, actionSet.actionSet[0].function) self.assertEqual(400, actionSet.actionSet[1].index) self.assertEqual("purge", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_034(self): """ Test with actions=[ purge, store ], extensions=[]. """ actions = [ "purge", "store", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(300, actionSet.actionSet[0].index) self.assertEqual("store", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeStore, actionSet.actionSet[0].function) self.assertEqual(400, actionSet.actionSet[1].index) self.assertEqual("purge", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_035(self): """ Test with actions=[ purge, purge ], extensions=[]. 
""" actions = [ "purge", "purge", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(400, actionSet.actionSet[0].index) self.assertEqual("purge", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executePurge, actionSet.actionSet[0].function) self.assertEqual(400, actionSet.actionSet[1].index) self.assertEqual("purge", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_036(self): """ Test with actions=[ purge, all ], extensions=[]. """ actions = [ "purge", "all", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_037(self): """ Test with actions=[ purge, rebuild ], extensions=[]. """ actions = [ "purge", "rebuild", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_038(self): """ Test with actions=[ purge, validate ], extensions=[]. """ actions = [ "purge", "validate", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_039(self): """ Test with actions=[ all, collect ], extensions=[]. 
""" actions = [ "all", "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_040(self): """ Test with actions=[ all, stage ], extensions=[]. """ actions = [ "all", "stage", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_041(self): """ Test with actions=[ all, store ], extensions=[]. """ actions = [ "all", "store", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_042(self): """ Test with actions=[ all, purge ], extensions=[]. """ actions = [ "all", "purge", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_043(self): """ Test with actions=[ all, all ], extensions=[]. """ actions = [ "all", "all", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_044(self): """ Test with actions=[ all, rebuild ], extensions=[]. """ actions = [ "all", "rebuild", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_045(self): """ Test with actions=[ all, validate ], extensions=[]. 
""" actions = [ "all", "validate", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_046(self): """ Test with actions=[ rebuild, collect ], extensions=[]. """ actions = [ "rebuild", "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_047(self): """ Test with actions=[ rebuild, stage ], extensions=[]. """ actions = [ "rebuild", "stage", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_048(self): """ Test with actions=[ rebuild, store ], extensions=[]. """ actions = [ "rebuild", "store", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_049(self): """ Test with actions=[ rebuild, purge ], extensions=[]. """ actions = [ "rebuild", "purge", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_050(self): """ Test with actions=[ rebuild, all ], extensions=[]. """ actions = [ "rebuild", "all", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_051(self): """ Test with actions=[ rebuild, rebuild ], extensions=[]. 
""" actions = [ "rebuild", "rebuild", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_052(self): """ Test with actions=[ rebuild, validate ], extensions=[]. """ actions = [ "rebuild", "validate", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_053(self): """ Test with actions=[ validate, collect ], extensions=[]. """ actions = [ "validate", "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_054(self): """ Test with actions=[ validate, stage ], extensions=[]. """ actions = [ "validate", "stage", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_055(self): """ Test with actions=[ validate, store ], extensions=[]. """ actions = [ "validate", "store", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_056(self): """ Test with actions=[ validate, purge ], extensions=[]. """ actions = [ "validate", "purge", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_057(self): """ Test with actions=[ validate, all ], extensions=[]. 
""" actions = [ "validate", "all", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_058(self): """ Test with actions=[ validate, rebuild ], extensions=[]. """ actions = [ "validate", "rebuild", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_059(self): """ Test with actions=[ validate, validate ], extensions=[]. """ actions = [ "validate", "validate", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_060(self): """ Test with actions=[ bogus ], extensions=[]. """ actions = [ "bogus", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_061(self): """ Test with actions=[ bogus, collect ], extensions=[]. """ actions = [ "bogus", "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_062(self): """ Test with actions=[ bogus, stage ], extensions=[]. """ actions = [ "bogus", "stage", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_063(self): """ Test with actions=[ bogus, store ], extensions=[]. 
""" actions = [ "bogus", "store", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_064(self): """ Test with actions=[ bogus, purge ], extensions=[]. """ actions = [ "bogus", "purge", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_065(self): """ Test with actions=[ bogus, all ], extensions=[]. """ actions = [ "bogus", "all", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_066(self): """ Test with actions=[ bogus, rebuild ], extensions=[]. """ actions = [ "bogus", "rebuild", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_067(self): """ Test with actions=[ bogus, validate ], extensions=[]. """ actions = [ "bogus", "validate", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_068(self): """ Test with actions=[ collect, one ], extensions=[ (one, before collect) ]. 
""" actions = [ "collect", "one", ] dependencies = ActionDependencies(["collect", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual("collect", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[1].function) def testDependencyMode_069(self): """ Test with actions=[ stage, one ], extensions=[ (one, before stage) ]. """ actions = [ "stage", "one", ] dependencies = ActionDependencies(["stage", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual("stage", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStage, actionSet.actionSet[1].function) def testDependencyMode_070(self): """ Test with actions=[ store, one ], extensions=[ (one, before store) ]. 
""" actions = [ "store", "one", ] dependencies = ActionDependencies(["store", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual("store", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStore, actionSet.actionSet[1].function) def testDependencyMode_071(self): """ Test with actions=[ purge, one ], extensions=[ (one, before purge) ]. """ actions = [ "purge", "one", ] dependencies = ActionDependencies(["purge", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual("purge", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_072(self): """ Test with actions=[ all, one ], extensions=[ (one, before collect) ]. 
""" actions = [ "all", "one", ] dependencies = ActionDependencies(["collect", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_073(self): """ Test with actions=[ rebuild, one ], extensions=[ (one, before collect) ]. """ actions = [ "rebuild", "one", ] dependencies = ActionDependencies(["collect", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_074(self): """ Test with actions=[ validate, one ], extensions=[ (one, before collect) ]. """ actions = [ "validate", "one", ] dependencies = ActionDependencies(["stage", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_075(self): """ Test with actions=[ collect, one ], extensions=[ (one, after collect) ]. 
""" actions = [ "collect", "one", ] dependencies = ActionDependencies([], ["collect", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) self.assertEqual("one", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(isdir, actionSet.actionSet[1].function) def testDependencyMode_076(self): """ Test with actions=[ stage, one ], extensions=[ (one, after collect) ]. """ actions = [ "stage", "one", ] dependencies = ActionDependencies(None, ["collect", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual("stage", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStage, actionSet.actionSet[1].function) def testDependencyMode_077(self): """ Test with actions=[ store, one ], extensions=[ (one, after collect) ]. 
""" actions = [ "store", "one", ] dependencies = ActionDependencies([], ["collect", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual("store", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStore, actionSet.actionSet[1].function) def testDependencyMode_078(self): """ Test with actions=[ purge, one ], extensions=[ (one, after collect) ]. """ actions = [ "purge", "one", ] dependencies = ActionDependencies(None, ["collect", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual("purge", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_079(self): """ Test with actions=[ stage, one ], extensions=[ (one, before stage) ]. 
""" actions = [ "stage", "one", ] dependencies = ActionDependencies(["stage", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual("stage", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStage, actionSet.actionSet[1].function) def testDependencyMode_080(self): """ Test with actions=[ store, one ], extensions=[ (one, before stage ) ]. """ actions = [ "store", "one", ] dependencies = ActionDependencies(["stage", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual("store", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStore, actionSet.actionSet[1].function) def testDependencyMode_081(self): """ Test with actions=[ purge, one ], extensions=[ (one, before stage) ]. 
""" actions = [ "purge", "one", ] dependencies = ActionDependencies(["stage", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual("purge", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_082(self): """ Test with actions=[ all, one ], extensions=[ (one, after collect) ]. """ actions = [ "all", "one", ] dependencies = ActionDependencies(None, ["collect", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_083(self): """ Test with actions=[ rebuild, one ], extensions=[ (one, after collect) ]. """ actions = [ "rebuild", "one", ] dependencies = ActionDependencies([], ["collect", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_084(self): """ Test with actions=[ validate, one ], extensions=[ (one, after collect) ]. 
""" actions = [ "validate", "one", ] dependencies = ActionDependencies(None, ["collect", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_085(self): """ Test with actions=[ collect, one ], extensions=[ (one, after stage) ]. """ actions = [ "collect", "one", ] dependencies = ActionDependencies([], ["stage", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) self.assertEqual("one", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(isdir, actionSet.actionSet[1].function) def testDependencyMode_086(self): """ Test with actions=[ stage, one ], extensions=[ (one, after stage) ]. 
""" actions = [ "stage", "one", ] dependencies = ActionDependencies([], ["stage", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("stage", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeStage, actionSet.actionSet[0].function) self.assertEqual("one", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(isdir, actionSet.actionSet[1].function) def testDependencyMode_087(self): """ Test with actions=[ store, one ], extensions=[ (one, after stage) ]. """ actions = [ "store", "one", ] dependencies = ActionDependencies(None, ["stage", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual("store", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStore, actionSet.actionSet[1].function) def testDependencyMode_088(self): """ Test with actions=[ purge, one ], extensions=[ (one, after stage) ]. 
""" actions = [ "purge", "one", ] dependencies = ActionDependencies([], ["stage", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual("purge", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_089(self): """ Test with actions=[ collect, one ], extensions=[ (one, before store) ]. """ actions = [ "collect", "one", ] dependencies = ActionDependencies(["store", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual("collect", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[1].function) def testDependencyMode_090(self): """ Test with actions=[ stage, one ], extensions=[ (one, before store) ]. 
""" actions = [ "stage", "one", ] dependencies = ActionDependencies(["store", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual("stage", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStage, actionSet.actionSet[1].function) def testDependencyMode_091(self): """ Test with actions=[ store, one ], extensions=[ (one, before store) ]. """ actions = [ "store", "one", ] dependencies = ActionDependencies(["store", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual("store", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStore, actionSet.actionSet[1].function) def testDependencyMode_092(self): """ Test with actions=[ purge, one ], extensions=[ (one, before store) ]. 
""" actions = [ "purge", "one", ] dependencies = ActionDependencies(["store", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual("purge", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_093(self): """ Test with actions=[ all, one ], extensions=[ (one, after stage) ]. """ actions = [ "all", "one", ] dependencies = ActionDependencies(None, ["stage", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_094(self): """ Test with actions=[ rebuild, one ], extensions=[ (one, after stage) ]. """ actions = [ "rebuild", "one", ] dependencies = ActionDependencies([], ["stage", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_095(self): """ Test with actions=[ validate, one ], extensions=[ (one, after stage) ]. 
""" actions = [ "validate", "one", ] dependencies = ActionDependencies(None, ["stage", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_096(self): """ Test with actions=[ collect, one ], extensions=[ (one, after store) ]. """ actions = [ "collect", "one", ] dependencies = ActionDependencies(["store", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual("collect", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[1].function) def testDependencyMode_097(self): """ Test with actions=[ stage, one ], extensions=[ (one, after store) ]. 
""" actions = [ "stage", "one", ] dependencies = ActionDependencies(["store", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual("stage", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStage, actionSet.actionSet[1].function) def testDependencyMode_098(self): """ Test with actions=[ store, one ], extensions=[ (one, after store) ]. """ actions = [ "store", "one", ] dependencies = ActionDependencies(["store", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual("store", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStore, actionSet.actionSet[1].function) def testDependencyMode_099(self): """ Test with actions=[ purge, one ], extensions=[ (one, after store) ]. 
""" actions = [ "purge", "one", ] dependencies = ActionDependencies(["store", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual("purge", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_100(self): """ Test with actions=[ collect, one ], extensions=[ (one, before purge) ]. """ actions = [ "collect", "one", ] dependencies = ActionDependencies([], ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("one", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(isdir, actionSet.actionSet[1].function) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) def testDependencyMode_101(self): """ Test with actions=[ stage, one ], extensions=[ (one, before purge) ]. 
""" actions = [ "stage", "one", ] dependencies = ActionDependencies(None, ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("one", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(isdir, actionSet.actionSet[1].function) self.assertEqual("stage", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeStage, actionSet.actionSet[0].function) def testDependencyMode_102(self): """ Test with actions=[ store, one ], extensions=[ (one, before purge) ]. """ actions = [ "store", "one", ] dependencies = ActionDependencies([], ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("one", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(isdir, actionSet.actionSet[1].function) self.assertEqual("store", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeStore, actionSet.actionSet[0].function) def testDependencyMode_103(self): """ Test with actions=[ purge, one ], extensions=[ (one, before purge) ]. 
""" actions = [ "purge", "one", ] dependencies = ActionDependencies(None, ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("purge", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executePurge, actionSet.actionSet[0].function) self.assertEqual("one", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(isdir, actionSet.actionSet[1].function) def testDependencyMode_104(self): """ Test with actions=[ all, one ], extensions=[ (one, after store) ]. """ actions = [ "all", "one", ] dependencies = ActionDependencies(["store", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_105(self): """ Test with actions=[ rebuild, one ], extensions=[ (one, after store) ]. """ actions = [ "rebuild", "one", ] dependencies = ActionDependencies(["store", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_106(self): """ Test with actions=[ validate, one ], extensions=[ (one, after store) ]. 
""" actions = [ "validate", "one", ] dependencies = ActionDependencies(["store", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_107(self): """ Test with actions=[ collect, one ], extensions=[ (one, after purge) ]. """ actions = [ "collect", "one", ] dependencies = ActionDependencies(None, ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("one", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(isdir, actionSet.actionSet[1].function) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) def testDependencyMode_108(self): """ Test with actions=[ stage, one ], extensions=[ (one, after purge) ]. 
""" actions = [ "stage", "one", ] dependencies = ActionDependencies([], ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("one", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(isdir, actionSet.actionSet[1].function) self.assertEqual("stage", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeStage, actionSet.actionSet[0].function) def testDependencyMode_109(self): """ Test with actions=[ store, one ], extensions=[ (one, after purge) ]. """ actions = [ "store", "one", ] dependencies = ActionDependencies(None, ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("one", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(isdir, actionSet.actionSet[1].function) self.assertEqual("store", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeStore, actionSet.actionSet[0].function) def testDependencyMode_110(self): """ Test with actions=[ purge, one ], extensions=[ (one, after purge) ]. 
""" actions = [ "purge", "one", ] dependencies = ActionDependencies([], ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("purge", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executePurge, actionSet.actionSet[0].function) self.assertEqual("one", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(isdir, actionSet.actionSet[1].function) def testDependencyMode_111(self): """ Test with actions=[ all, one ], extensions=[ (one, after purge) ]. """ actions = [ "all", "one", ] dependencies = ActionDependencies(None, ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_112(self): """ Test with actions=[ rebuild, one ], extensions=[ (one, after purge) ]. """ actions = [ "rebuild", "one", ] dependencies = ActionDependencies([], ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_113(self): """ Test with actions=[ validate, one ], extensions=[ (one, after purge) ]. 
""" actions = [ "validate", "one", ] dependencies = ActionDependencies(None, ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_114(self): """ Test with actions=[ one, one ], extensions=[ (one, after purge) ]. """ actions = [ "one", "one", ] dependencies = ActionDependencies([], ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual("one", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(isdir, actionSet.actionSet[1].function) def testDependencyMode_115(self): """ Test with actions=[ collect, stage, store, purge ], extensions=[]. 
""" actions = [ "collect", "stage", "store", "purge", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 4) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) self.assertEqual(200, actionSet.actionSet[1].index) self.assertEqual("stage", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStage, actionSet.actionSet[1].function) self.assertEqual(300, actionSet.actionSet[2].index) self.assertEqual("store", actionSet.actionSet[2].name) self.assertEqual(None, actionSet.actionSet[2].preHooks) self.assertEqual(None, actionSet.actionSet[2].postHooks) self.assertEqual(executeStore, actionSet.actionSet[2].function) self.assertEqual(400, actionSet.actionSet[3].index) self.assertEqual("purge", actionSet.actionSet[3].name) self.assertEqual(None, actionSet.actionSet[3].preHooks) self.assertEqual(None, actionSet.actionSet[3].postHooks) self.assertEqual(executePurge, actionSet.actionSet[3].function) def testDependencyMode_116(self): """ Test with actions=[ stage, purge, collect, store ], extensions=[]. 
""" actions = [ "stage", "purge", "collect", "store", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 4) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) self.assertEqual(200, actionSet.actionSet[1].index) self.assertEqual("stage", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeStage, actionSet.actionSet[1].function) self.assertEqual(300, actionSet.actionSet[2].index) self.assertEqual("store", actionSet.actionSet[2].name) self.assertEqual(None, actionSet.actionSet[2].preHooks) self.assertEqual(None, actionSet.actionSet[2].postHooks) self.assertEqual(executeStore, actionSet.actionSet[2].function) self.assertEqual(400, actionSet.actionSet[3].index) self.assertEqual("purge", actionSet.actionSet[3].name) self.assertEqual(None, actionSet.actionSet[3].preHooks) self.assertEqual(None, actionSet.actionSet[3].postHooks) self.assertEqual(executePurge, actionSet.actionSet[3].function) def testDependencyMode_117(self): """ Test with actions=[ collect, stage, store, purge, one, two, three, four, five ], extensions=[ one before collect, two before stage, etc. ]. 
""" actions = [ "collect", "stage", "store", "purge", "one", "two", "three", "four", "five", ] dependencies1 = ActionDependencies(["collect", "stage", "store", "purge", ], None) dependencies2 = ActionDependencies(["stage", "store", "purge", ], ["collect", ]) dependencies3 = ActionDependencies(["store", "purge", ], ["collect", "stage", ]) dependencies4 = ActionDependencies(["purge", ], ["collect", "stage", "store", ]) dependencies5 = ActionDependencies([], ["collect", "stage", "store", "purge", ]) eaction1 = ExtendedAction("one", "os.path", "isdir", dependencies=dependencies1) eaction2 = ExtendedAction("two", "os.path", "isfile", dependencies=dependencies2) eaction3 = ExtendedAction("three", "os.path", "islink", dependencies=dependencies3) eaction4 = ExtendedAction("four", "os.path", "isabs", dependencies=dependencies4) eaction5 = ExtendedAction("five", "os.path", "exists", dependencies=dependencies5) extensions = ExtensionsConfig([ eaction1, eaction2, eaction3, eaction4, eaction5, ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 9) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual("collect", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[1].function) self.assertEqual("two", actionSet.actionSet[2].name) self.assertEqual(None, actionSet.actionSet[2].preHooks) self.assertEqual(None, actionSet.actionSet[2].postHooks) self.assertEqual(isfile, actionSet.actionSet[2].function) self.assertEqual("stage", actionSet.actionSet[3].name) self.assertEqual(None, actionSet.actionSet[3].preHooks) self.assertEqual(None, 
actionSet.actionSet[3].postHooks) self.assertEqual(executeStage, actionSet.actionSet[3].function) self.assertEqual("three", actionSet.actionSet[4].name) self.assertEqual(None, actionSet.actionSet[4].preHooks) self.assertEqual(None, actionSet.actionSet[4].postHooks) self.assertEqual(islink, actionSet.actionSet[4].function) self.assertEqual("store", actionSet.actionSet[5].name) self.assertEqual(None, actionSet.actionSet[5].preHooks) self.assertEqual(None, actionSet.actionSet[5].postHooks) self.assertEqual(executeStore, actionSet.actionSet[5].function) self.assertEqual("four", actionSet.actionSet[6].name) self.assertEqual(None, actionSet.actionSet[6].preHooks) self.assertEqual(None, actionSet.actionSet[6].postHooks) self.assertEqual(isabs, actionSet.actionSet[6].function) self.assertEqual("purge", actionSet.actionSet[7].name) self.assertEqual(None, actionSet.actionSet[7].preHooks) self.assertEqual(None, actionSet.actionSet[7].postHooks) self.assertEqual(executePurge, actionSet.actionSet[7].function) self.assertEqual("five", actionSet.actionSet[8].name) self.assertEqual(None, actionSet.actionSet[8].preHooks) self.assertEqual(None, actionSet.actionSet[8].postHooks) self.assertEqual(exists, actionSet.actionSet[8].function) def testDependencyMode_118(self): """ Test with actions=[ one, five, collect, store, three, stage, four, purge, two ], extensions=[ one before collect, two before stage, etc. ]. 
""" actions = [ "one", "five", "collect", "store", "three", "stage", "four", "purge", "two", ] dependencies1 = ActionDependencies(["collect", "stage", "store", "purge", ], []) dependencies2 = ActionDependencies(["stage", "store", "purge", ], ["collect", ]) dependencies3 = ActionDependencies(["store", "purge", ], ["collect", "stage", ]) dependencies4 = ActionDependencies(["purge", ], ["collect", "stage", "store", ]) dependencies5 = ActionDependencies(None, ["collect", "stage", "store", "purge", ]) eaction1 = ExtendedAction("one", "os.path", "isdir", dependencies=dependencies1) eaction2 = ExtendedAction("two", "os.path", "isfile", dependencies=dependencies2) eaction3 = ExtendedAction("three", "os.path", "islink", dependencies=dependencies3) eaction4 = ExtendedAction("four", "os.path", "isabs", dependencies=dependencies4) eaction5 = ExtendedAction("five", "os.path", "exists", dependencies=dependencies5) extensions = ExtensionsConfig([ eaction1, eaction2, eaction3, eaction4, eaction5, ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 9) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual("collect", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[1].function) self.assertEqual("two", actionSet.actionSet[2].name) self.assertEqual(None, actionSet.actionSet[2].preHooks) self.assertEqual(None, actionSet.actionSet[2].postHooks) self.assertEqual(isfile, actionSet.actionSet[2].function) self.assertEqual("stage", actionSet.actionSet[3].name) self.assertEqual(None, actionSet.actionSet[3].preHooks) self.assertEqual(None, 
actionSet.actionSet[3].postHooks) self.assertEqual(executeStage, actionSet.actionSet[3].function) self.assertEqual("three", actionSet.actionSet[4].name) self.assertEqual(None, actionSet.actionSet[4].preHooks) self.assertEqual(None, actionSet.actionSet[4].postHooks) self.assertEqual(islink, actionSet.actionSet[4].function) self.assertEqual("store", actionSet.actionSet[5].name) self.assertEqual(None, actionSet.actionSet[5].preHooks) self.assertEqual(None, actionSet.actionSet[5].postHooks) self.assertEqual(executeStore, actionSet.actionSet[5].function) self.assertEqual("four", actionSet.actionSet[6].name) self.assertEqual(None, actionSet.actionSet[6].preHooks) self.assertEqual(None, actionSet.actionSet[6].postHooks) self.assertEqual(isabs, actionSet.actionSet[6].function) self.assertEqual("purge", actionSet.actionSet[7].name) self.assertEqual(None, actionSet.actionSet[7].preHooks) self.assertEqual(None, actionSet.actionSet[7].postHooks) self.assertEqual(executePurge, actionSet.actionSet[7].function) self.assertEqual("five", actionSet.actionSet[8].name) self.assertEqual(None, actionSet.actionSet[8].preHooks) self.assertEqual(None, actionSet.actionSet[8].postHooks) self.assertEqual(exists, actionSet.actionSet[8].function) def testDependencyMode_119(self): """ Test with actions=[ one ], extensions=[ (one, before collect) ]. 
""" actions = [ "one", ] dependencies = ActionDependencies(["collect", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) def testDependencyMode_120(self): """ Test with actions=[ collect ], extensions=[], hooks=[] """ actions = [ "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() options.hooks = [] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertFalse(actionSet.actionSet is None) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) def testDependencyMode_121(self): """ Test with actions=[ collect ], extensions=[], pre-hook on 'stage' action. 
""" actions = [ "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() options.hooks = [ PreActionHook("stage", "something") ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertFalse(actionSet.actionSet is None) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) def testDependencyMode_122(self): """ Test with actions=[ collect ], extensions=[], post-hook on 'stage' action. """ actions = [ "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() options.hooks = [ PostActionHook("stage", "something") ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertFalse(actionSet.actionSet is None) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) def testDependencyMode_123(self): """ Test with actions=[ collect ], extensions=[], pre-hook on 'collect' action. 
""" actions = [ "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() options.hooks = [ PreActionHook("collect", "something") ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertFalse(actionSet.actionSet is None) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual([ PreActionHook("collect", "something"), ], actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) def testDependencyMode_124(self): """ Test with actions=[ collect ], extensions=[], post-hook on 'collect' action. """ actions = [ "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() options.hooks = [ PostActionHook("collect", "something") ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertFalse(actionSet.actionSet is None) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual([ PostActionHook("collect", "something"), ], actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) def testDependencyMode_125(self): """ Test with actions=[ collect ], extensions=[], pre- and post-hook on 'collect' action. 
""" actions = [ "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() options.hooks = [ PreActionHook("collect", "something1"), PostActionHook("collect", "something2") ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertFalse(actionSet.actionSet is None) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual([ PreActionHook("collect", "something1"), ], actionSet.actionSet[0].preHooks) self.assertEqual([ PostActionHook("collect", "something2"), ], actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) def testDependencyMode_126(self): """ Test with actions=[ one ], extensions=[ (one, before collect) ], hooks=[] """ actions = [ "one", ] dependencies = ActionDependencies(["collect", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) def testDependencyMode_127(self): """ Test with actions=[ one ], extensions=[ (one, before collect) ], pre-hook on "store" action. 
""" actions = [ "one", ] dependencies = ActionDependencies(["collect", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [ PreActionHook("store", "whatever"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) def testDependencyMode_128(self): """ Test with actions=[ one ], extensions=[ (one, before collect) ], post-hook on "store" action. """ actions = [ "one", ] dependencies = ActionDependencies(["collect", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [ PostActionHook("store", "whatever"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) def testDependencyMode_129(self): """ Test with actions=[ one ], extensions=[ (one, before collect) ], pre-hook on "one" action. 
""" actions = [ "one", ] dependencies = ActionDependencies(["collect", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [ PreActionHook("one", "extension"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual([ PreActionHook("one", "extension"), ], actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) def testDependencyMode_130(self): """ Test with actions=[ one ], extensions=[ (one, before collect) ], post-hook on "one" action. """ actions = [ "one", ] dependencies = ActionDependencies(["collect", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [ PostActionHook("one", "extension"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual([ PostActionHook("one", "extension"), ], actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) def testDependencyMode_131(self): """ Test with actions=[ one ], extensions=[ (one, before collect) ], pre- and post-hook on "one" action. 
""" actions = [ "one", ] dependencies = ActionDependencies(["collect", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [ PostActionHook("one", "extension2"), PreActionHook("one", "extension1"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual([ PreActionHook("one", "extension1"), ], actionSet.actionSet[0].preHooks) self.assertEqual([ PostActionHook("one", "extension2"), ], actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) def testDependencyMode_132(self): """ Test with actions=[ collect, one ], extensions=[ (one, before collect) ], hooks=[] """ actions = [ "collect", "one", ] dependencies = ActionDependencies(["collect", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual("collect", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[1].function) def testDependencyMode_133(self): """ Test with actions=[ collect, one ], extensions=[ (one, before collect) ], pre-hook on "purge" action """ actions = [ "collect", "one", ] dependencies = ActionDependencies(["collect", ], None) extensions = ExtensionsConfig([ 
ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [ PreActionHook("purge", "rm -f"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual("collect", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[1].function) def testDependencyMode_134(self): """ Test with actions=[ collect, one ], extensions=[ (one, before collect) ], post-hook on "purge" action """ actions = [ "collect", "one", ] dependencies = ActionDependencies(["collect", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [ PostActionHook("purge", "rm -f"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual("collect", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[1].function) def testDependencyMode_135(self): """ Test with actions=[ collect, one ], extensions=[ (one, before collect) ], pre-hook on "collect" action """ actions = [ "collect", "one", ] dependencies = 
ActionDependencies(["collect", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [ PreActionHook("collect", "something"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual("collect", actionSet.actionSet[1].name) self.assertEqual([ PreActionHook("collect", "something"), ], actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[1].function) def testDependencyMode_136(self): """ Test with actions=[ collect, one ], extensions=[ (one, before collect) ], post-hook on "collect" action """ actions = [ "collect", "one", ] dependencies = ActionDependencies(["collect", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [ PostActionHook("collect", "something"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual("collect", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual([ PostActionHook("collect", "something"), ], actionSet.actionSet[1].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[1].function) def testDependencyMode_137(self): 
""" Test with actions=[ collect, one ], extensions=[ (one, before collect) ], pre-hook on "one" action """ actions = [ "collect", "one", ] dependencies = ActionDependencies(["collect", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [ PreActionHook("one", "extension"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual([ PreActionHook("one", "extension"), ], actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual("collect", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[1].function) def testDependencyMode_138(self): """ Test with actions=[ collect, one ], extensions=[ (one, before collect) ], post-hook on "one" action """ actions = [ "collect", "one", ] dependencies = ActionDependencies(["collect", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [ PostActionHook("one", "extension"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual([ PostActionHook("one", "extension"), ], actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual("collect", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual(None, 
actionSet.actionSet[1].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[1].function) def testDependencyMode_139a(self): """ Test with actions=[ collect, one ], extensions=[ (one, before collect) ], set of various pre- and post hooks. """ actions = [ "collect", "one", ] dependencies = ActionDependencies(["collect", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [ PostActionHook("one", "extension"), PreActionHook("collect", "something1"), PreActionHook("collect", "something2"), PostActionHook("stage", "whatever"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual([ PostActionHook("one", "extension"), ], actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual("collect", actionSet.actionSet[1].name) self.assertEqual([ PreActionHook("collect", "something1"), PreActionHook("collect", "something2"), ], actionSet.actionSet[1].preHooks) self.assertEqual(None, actionSet.actionSet[1].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[1].function) def testDependencyMode_139b(self): """ Test with actions=[ stage, one ], extensions=[ (one, before stage) ], set of various pre- and post hooks. 
""" actions = [ "stage", "one", ] dependencies = ActionDependencies(["stage", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [ PostActionHook("one", "extension"), PreActionHook("collect", "something1"), PostActionHook("stage", "whatever1"), PostActionHook("stage", "whatever2"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual([ PostActionHook("one", "extension"), ], actionSet.actionSet[0].postHooks) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual("stage", actionSet.actionSet[1].name) self.assertEqual(None, actionSet.actionSet[1].preHooks) self.assertEqual([ PostActionHook("stage", "whatever1"), PostActionHook("stage", "whatever2"), ], actionSet.actionSet[1].postHooks) self.assertEqual(executeStage, actionSet.actionSet[1].function) def testDependencyMode_140(self): """ Test with actions=[ one, five, collect, store, three, stage, four, purge, two ], extensions= [recursive loop]. 
""" actions = [ "one", "five", "collect", "store", "three", "stage", "four", "purge", "two", ] dependencies1 = ActionDependencies(["collect", "stage", "store", "purge", ], []) dependencies2 = ActionDependencies(["stage", "store", "purge", ], ["collect", ]) dependencies3 = ActionDependencies(["store", "purge", ], ["collect", "stage", ]) dependencies4 = ActionDependencies(["purge", ], ["collect", "stage", "store", ]) dependencies5 = ActionDependencies(["one", ], ["collect", "stage", "store", "purge", ]) eaction1 = ExtendedAction("one", "os.path", "isdir", dependencies=dependencies1) eaction2 = ExtendedAction("two", "os.path", "isfile", dependencies=dependencies2) eaction3 = ExtendedAction("three", "os.path", "islink", dependencies=dependencies3) eaction4 = ExtendedAction("four", "os.path", "isabs", dependencies=dependencies4) eaction5 = ExtendedAction("five", "os.path", "exists", dependencies=dependencies5) extensions = ExtensionsConfig([ eaction1, eaction2, eaction3, eaction4, eaction5, ], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_141(self): """ Test with actions=[ one, five, collect, store, three, stage, four, purge, two ], and one extension for which a dependency does not exist. 
""" actions = [ "one", "five", "collect", "store", "three", "stage", "four", "purge", "two", ] dependencies1 = ActionDependencies(["collect", "stage", "store", "purge", ], []) dependencies2 = ActionDependencies(["stage", "store", "purge", ], ["collect", ]) dependencies3 = ActionDependencies(["store", "bogus", ], ["collect", "stage", ]) dependencies4 = ActionDependencies(["purge", ], ["collect", "stage", "store", ]) dependencies5 = ActionDependencies([], ["collect", "stage", "store", "purge", ]) eaction1 = ExtendedAction("one", "os.path", "isdir", dependencies=dependencies1) eaction2 = ExtendedAction("two", "os.path", "isfile", dependencies=dependencies2) eaction3 = ExtendedAction("three", "os.path", "islink", dependencies=dependencies3) eaction4 = ExtendedAction("four", "os.path", "isabs", dependencies=dependencies4) eaction5 = ExtendedAction("five", "os.path", "exists", dependencies=dependencies5) extensions = ExtensionsConfig([ eaction1, eaction2, eaction3, eaction4, eaction5, ], "dependency") options = OptionsConfig() self.assertRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) ######################################### # Test constructor, with managed peers ######################################### def testManagedPeer_001(self): """ Test with actions=[ collect ], extensions=[], peers=None, managed=True, local=True """ actions = [ "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] actionSet = _ActionSet(actions, extensions, options, None, True, True) self.assertFalse(actionSet.actionSet is None) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(executeCollect, actionSet.actionSet[0].function) def testManagedPeer_002(self): """ Test with actions=[ stage ], extensions=[], peers=None, managed=True, local=True """ actions = [ "stage", 
                  ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      actionSet = _ActionSet(actions, extensions, options, None, True, True)
      self.assertFalse(actionSet.actionSet is None)
      self.assertTrue(len(actionSet.actionSet) == 1)
      self.assertEqual(200, actionSet.actionSet[0].index)
      self.assertEqual("stage", actionSet.actionSet[0].name)
      self.assertEqual(executeStage, actionSet.actionSet[0].function)

   def testManagedPeer_003(self):
      """
      Test with actions=[ store ], extensions=[], peers=None, managed=True, local=True
      """
      actions = [ "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      actionSet = _ActionSet(actions, extensions, options, None, True, True)
      self.assertFalse(actionSet.actionSet is None)
      self.assertTrue(len(actionSet.actionSet) == 1)
      self.assertEqual(300, actionSet.actionSet[0].index)
      self.assertEqual("store", actionSet.actionSet[0].name)
      self.assertEqual(executeStore, actionSet.actionSet[0].function)

   def testManagedPeer_004(self):
      """
      Test with actions=[ purge ], extensions=[], peers=None, managed=True, local=True
      """
      actions = [ "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      actionSet = _ActionSet(actions, extensions, options, None, True, True)
      self.assertFalse(actionSet.actionSet is None)
      self.assertTrue(len(actionSet.actionSet) == 1)
      self.assertEqual(400, actionSet.actionSet[0].index)
      self.assertEqual("purge", actionSet.actionSet[0].name)
      self.assertEqual(executePurge, actionSet.actionSet[0].function)

   def testManagedPeer_005(self):
      """
      Test with actions=[ all ], extensions=[], peers=None, managed=True, local=True
      """
      actions = [ "all", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      actionSet = _ActionSet(actions, extensions, options, None, True, True)
      self.assertFalse(actionSet.actionSet is None)
      self.assertTrue(len(actionSet.actionSet) == 4)
      self.assertEqual(100, actionSet.actionSet[0].index)
      self.assertEqual("collect", actionSet.actionSet[0].name)
      self.assertEqual(executeCollect, actionSet.actionSet[0].function)
      self.assertEqual(200, actionSet.actionSet[1].index)
      self.assertEqual("stage", actionSet.actionSet[1].name)
      self.assertEqual(executeStage, actionSet.actionSet[1].function)
      self.assertEqual(300, actionSet.actionSet[2].index)
      self.assertEqual("store", actionSet.actionSet[2].name)
      self.assertEqual(executeStore, actionSet.actionSet[2].function)
      self.assertEqual(400, actionSet.actionSet[3].index)
      self.assertEqual("purge", actionSet.actionSet[3].name)
      self.assertEqual(executePurge, actionSet.actionSet[3].function)

   def testManagedPeer_006(self):
      """
      Test with actions=[ rebuild ], extensions=[], peers=None, managed=True, local=True
      """
      actions = [ "rebuild", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      actionSet = _ActionSet(actions, extensions, options, None, True, True)
      self.assertTrue(len(actionSet.actionSet) == 1)
      self.assertEqual(0, actionSet.actionSet[0].index)
      self.assertEqual("rebuild", actionSet.actionSet[0].name)
      self.assertEqual(executeRebuild, actionSet.actionSet[0].function)

   def testManagedPeer_007(self):
      """
      Test with actions=[ validate ], extensions=[], peers=None, managed=True, local=True
      """
      actions = [ "validate", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      actionSet = _ActionSet(actions, extensions, options, None, True, True)
      self.assertTrue(len(actionSet.actionSet) == 1)
      self.assertEqual(0, actionSet.actionSet[0].index)
      self.assertEqual("validate", actionSet.actionSet[0].name)
      self.assertEqual(executeValidate, actionSet.actionSet[0].function)

   def testManagedPeer_008(self):
      """
      Test with actions=[ collect, stage ], extensions=[],
      peers=None, managed=True, local=True
      """
      actions = [ "collect", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      actionSet = _ActionSet(actions, extensions, options, None, True, True)
      self.assertTrue(len(actionSet.actionSet) == 2)
      self.assertEqual(100, actionSet.actionSet[0].index)
      self.assertEqual("collect", actionSet.actionSet[0].name)
      self.assertEqual(executeCollect, actionSet.actionSet[0].function)
      self.assertEqual(200, actionSet.actionSet[1].index)
      self.assertEqual("stage", actionSet.actionSet[1].name)
      self.assertEqual(executeStage, actionSet.actionSet[1].function)

   def testManagedPeer_009(self):
      """
      Test with actions=[ collect, store ], extensions=[],
      peers=None, managed=True, local=True
      """
      actions = [ "collect", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      actionSet = _ActionSet(actions, extensions, options, None, True, True)
      self.assertTrue(len(actionSet.actionSet) == 2)
      self.assertEqual(100, actionSet.actionSet[0].index)
      self.assertEqual("collect", actionSet.actionSet[0].name)
      self.assertEqual(executeCollect, actionSet.actionSet[0].function)
      self.assertEqual(300, actionSet.actionSet[1].index)
      self.assertEqual("store", actionSet.actionSet[1].name)
      self.assertEqual(executeStore, actionSet.actionSet[1].function)

   def testManagedPeer_010(self):
      """
      Test with actions=[ collect, purge ], extensions=[],
      peers=None, managed=True, local=True
      """
      actions = [ "collect", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      actionSet = _ActionSet(actions, extensions, options, None, True, True)
      self.assertTrue(len(actionSet.actionSet) == 2)
      self.assertEqual(100, actionSet.actionSet[0].index)
      self.assertEqual("collect", actionSet.actionSet[0].name)
      self.assertEqual(executeCollect, actionSet.actionSet[0].function)
      self.assertEqual(400, actionSet.actionSet[1].index)
      self.assertEqual("purge", actionSet.actionSet[1].name)
      self.assertEqual(executePurge, actionSet.actionSet[1].function)

   def testManagedPeer_011(self):
      """
      Test with actions=[ stage, collect ], extensions=[],
      peers=None, managed=True, local=True
      """
      actions = [ "stage", "collect", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      actionSet = _ActionSet(actions, extensions, options, None, True, True)
      self.assertTrue(len(actionSet.actionSet) == 2)
      self.assertEqual(100, actionSet.actionSet[0].index)
      self.assertEqual("collect", actionSet.actionSet[0].name)
      self.assertEqual(executeCollect, actionSet.actionSet[0].function)
      self.assertEqual(200, actionSet.actionSet[1].index)
      self.assertEqual("stage", actionSet.actionSet[1].name)
      self.assertEqual(executeStage, actionSet.actionSet[1].function)

   def testManagedPeer_012(self):
      """
      Test with actions=[ stage, stage ], extensions=[],
      peers=None, managed=True, local=True
      """
      actions = [ "stage", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      actionSet = _ActionSet(actions, extensions, options, None, True, True)
      self.assertTrue(len(actionSet.actionSet) == 2)
      self.assertEqual(200, actionSet.actionSet[0].index)
      self.assertEqual("stage", actionSet.actionSet[0].name)
      self.assertEqual(executeStage, actionSet.actionSet[0].function)
      self.assertEqual(200, actionSet.actionSet[1].index)
      self.assertEqual("stage", actionSet.actionSet[1].name)
      self.assertEqual(executeStage, actionSet.actionSet[1].function)

   def testManagedPeer_013(self):
      """
      Test with actions=[ stage, store ], extensions=[],
      peers=None, managed=True, local=True
      """
      actions = [ "stage", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      actionSet = _ActionSet(actions, extensions, options,
                              None, True, True)
      self.assertTrue(len(actionSet.actionSet) == 2)
      self.assertEqual(200, actionSet.actionSet[0].index)
      self.assertEqual("stage", actionSet.actionSet[0].name)
      self.assertEqual(executeStage, actionSet.actionSet[0].function)
      self.assertEqual(300, actionSet.actionSet[1].index)
      self.assertEqual("store", actionSet.actionSet[1].name)
      self.assertEqual(executeStore, actionSet.actionSet[1].function)

   def testManagedPeer_014(self):
      """
      Test with actions=[ stage, purge ], extensions=[],
      peers=None, managed=True, local=True
      """
      actions = [ "stage", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      actionSet = _ActionSet(actions, extensions, options, None, True, True)
      self.assertTrue(len(actionSet.actionSet) == 2)
      self.assertEqual(200, actionSet.actionSet[0].index)
      self.assertEqual("stage", actionSet.actionSet[0].name)
      self.assertEqual(executeStage, actionSet.actionSet[0].function)
      self.assertEqual(400, actionSet.actionSet[1].index)
      self.assertEqual("purge", actionSet.actionSet[1].name)
      self.assertEqual(executePurge, actionSet.actionSet[1].function)

   def testManagedPeer_015(self):
      """
      Test with actions=[ collect, one ], extensions=[ (one, index 50) ],
      peers=None, managed=True, local=True
      """
      actions = [ "collect", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      actionSet = _ActionSet(actions, extensions, options, None, True, True)
      self.assertTrue(len(actionSet.actionSet) == 2)
      self.assertEqual(50, actionSet.actionSet[0].index)
      self.assertEqual("one", actionSet.actionSet[0].name)
      self.assertEqual(isdir, actionSet.actionSet[0].function)
      self.assertEqual(100, actionSet.actionSet[1].index)
      self.assertEqual("collect", actionSet.actionSet[1].name)
      self.assertEqual(executeCollect, actionSet.actionSet[1].function)

   def testManagedPeer_016(self):
      """
      Test with actions=[ store, one ], extensions=[ (one, index 50) ],
      peers=None, managed=True, local=True
      """
      actions = [ "store", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      actionSet = _ActionSet(actions, extensions, options, None, True, True)
      self.assertTrue(len(actionSet.actionSet) == 2)
      self.assertEqual(50, actionSet.actionSet[0].index)
      self.assertEqual("one", actionSet.actionSet[0].name)
      self.assertEqual(isdir, actionSet.actionSet[0].function)
      self.assertEqual(300, actionSet.actionSet[1].index)
      self.assertEqual("store", actionSet.actionSet[1].name)
      self.assertEqual(executeStore, actionSet.actionSet[1].function)

   def testManagedPeer_017(self):
      """
      Test with actions=[ collect, one ], extensions=[ (one, index 150) ],
      peers=None, managed=True, local=True
      """
      actions = [ "collect", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      actionSet = _ActionSet(actions, extensions, options, None, True, True)
      self.assertTrue(len(actionSet.actionSet) == 2)
      self.assertEqual(100, actionSet.actionSet[0].index)
      self.assertEqual("collect", actionSet.actionSet[0].name)
      self.assertEqual(executeCollect, actionSet.actionSet[0].function)
      self.assertEqual(150, actionSet.actionSet[1].index)
      self.assertEqual("one", actionSet.actionSet[1].name)
      self.assertEqual(isdir, actionSet.actionSet[1].function)

   def testManagedPeer_018(self):
      """
      Test with actions=[ store, one ], extensions=[ (one, index 150) ],
      peers=None, managed=True, local=True
      """
      actions = [ "store", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      actionSet = _ActionSet(actions, extensions, options, None, True, True)
      self.assertTrue(len(actionSet.actionSet) == 2)
      self.assertEqual(150, actionSet.actionSet[0].index)
      self.assertEqual("one", actionSet.actionSet[0].name)
      self.assertEqual(isdir, actionSet.actionSet[0].function)
      self.assertEqual(300, actionSet.actionSet[1].index)
      self.assertEqual("store", actionSet.actionSet[1].name)
      self.assertEqual(executeStore, actionSet.actionSet[1].function)

   def testManagedPeer_019(self):
      """
      Test with actions=[ collect, stage, store, purge ], extensions=[],
      peers=None, managed=True, local=True
      """
      actions = [ "collect", "stage", "store", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      actionSet = _ActionSet(actions, extensions, options, None, True, True)
      self.assertTrue(len(actionSet.actionSet) == 4)
      self.assertEqual(100, actionSet.actionSet[0].index)
      self.assertEqual("collect", actionSet.actionSet[0].name)
      self.assertEqual(executeCollect, actionSet.actionSet[0].function)
      self.assertEqual(200, actionSet.actionSet[1].index)
      self.assertEqual("stage", actionSet.actionSet[1].name)
      self.assertEqual(executeStage, actionSet.actionSet[1].function)
      self.assertEqual(300, actionSet.actionSet[2].index)
      self.assertEqual("store", actionSet.actionSet[2].name)
      self.assertEqual(executeStore, actionSet.actionSet[2].function)
      self.assertEqual(400, actionSet.actionSet[3].index)
      self.assertEqual("purge", actionSet.actionSet[3].name)
      self.assertEqual(executePurge, actionSet.actionSet[3].function)

   def testManagedPeer_020(self):
      """
      Test with actions=[ collect, stage, store, purge, one, two, three, four, five ],
      extensions=[ (index 50, 150, 250, 350, 450) ], peers=None, managed=True, local=True
      """
      actions = [ "collect", "stage", "store", "purge", "one", "two", "three", "four", "five", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50),
                                      ExtendedAction("two", "os.path", "isfile", 150),
                                      ExtendedAction("three", "os.path", "islink", 250),
                                      ExtendedAction("four", "os.path", "isabs", 350),
                                      ExtendedAction("five", "os.path", "exists", 450), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      actionSet = _ActionSet(actions, extensions, options, None, True, True)
      self.assertTrue(len(actionSet.actionSet) == 9)
      self.assertEqual(50, actionSet.actionSet[0].index)
      self.assertEqual("one", actionSet.actionSet[0].name)
      self.assertEqual(isdir, actionSet.actionSet[0].function)
      self.assertEqual(100, actionSet.actionSet[1].index)
      self.assertEqual("collect", actionSet.actionSet[1].name)
      self.assertEqual(executeCollect, actionSet.actionSet[1].function)
      self.assertEqual(150, actionSet.actionSet[2].index)
      self.assertEqual("two", actionSet.actionSet[2].name)
      self.assertEqual(isfile, actionSet.actionSet[2].function)
      self.assertEqual(200, actionSet.actionSet[3].index)
      self.assertEqual("stage", actionSet.actionSet[3].name)
      self.assertEqual(executeStage, actionSet.actionSet[3].function)
      self.assertEqual(250, actionSet.actionSet[4].index)
      self.assertEqual("three", actionSet.actionSet[4].name)
      self.assertEqual(islink, actionSet.actionSet[4].function)
      self.assertEqual(300, actionSet.actionSet[5].index)
      self.assertEqual("store", actionSet.actionSet[5].name)
      self.assertEqual(executeStore, actionSet.actionSet[5].function)
      self.assertEqual(350, actionSet.actionSet[6].index)
      self.assertEqual("four", actionSet.actionSet[6].name)
      self.assertEqual(isabs, actionSet.actionSet[6].function)
      self.assertEqual(400, actionSet.actionSet[7].index)
      self.assertEqual("purge", actionSet.actionSet[7].name)
      self.assertEqual(executePurge, actionSet.actionSet[7].function)
      self.assertEqual(450, actionSet.actionSet[8].index)
      self.assertEqual("five", actionSet.actionSet[8].name)
      self.assertEqual(exists, actionSet.actionSet[8].function)

   def testManagedPeer_021(self):
      """
      Test with actions=[ one ], extensions=[ (one, index 50) ],
      peers=None, managed=True, local=True
      """
      actions = [ "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one",
"os.path", "isdir", 50), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] actionSet = _ActionSet(actions, extensions, options, None, True, True) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(50, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(isdir, actionSet.actionSet[0].function) def testManagedPeer_022(self): """ Test with actions=[ collect ], extensions=[], no peers, managed=True, local=True """ actions = [ "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.assertFalse(actionSet.actionSet is None) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(executeCollect, actionSet.actionSet[0].function) def testManagedPeer_023(self): """ Test with actions=[ stage ], extensions=[], no peers, managed=True, local=True """ actions = [ "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.assertFalse(actionSet.actionSet is None) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(200, actionSet.actionSet[0].index) self.assertEqual("stage", actionSet.actionSet[0].name) self.assertEqual(executeStage, actionSet.actionSet[0].function) def testManagedPeer_024(self): """ Test with actions=[ store ], extensions=[], no peers, managed=True, local=True """ actions = [ "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() actionSet = _ActionSet(actions, extensions, options, peers, True, 
True) self.assertFalse(actionSet.actionSet is None) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(300, actionSet.actionSet[0].index) self.assertEqual("store", actionSet.actionSet[0].name) self.assertEqual(executeStore, actionSet.actionSet[0].function) def testManagedPeer_025(self): """ Test with actions=[ purge ], extensions=[], no peers, managed=True, local=True """ actions = [ "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.assertFalse(actionSet.actionSet is None) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(400, actionSet.actionSet[0].index) self.assertEqual("purge", actionSet.actionSet[0].name) self.assertEqual(executePurge, actionSet.actionSet[0].function) def testManagedPeer_026(self): """ Test with actions=[ all ], extensions=[], no peers, managed=True, local=True """ actions = [ "all", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.assertFalse(actionSet.actionSet is None) self.assertTrue(len(actionSet.actionSet) == 4) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(executeCollect, actionSet.actionSet[0].function) self.assertEqual(200, actionSet.actionSet[1].index) self.assertEqual("stage", actionSet.actionSet[1].name) self.assertEqual(executeStage, actionSet.actionSet[1].function) self.assertEqual(300, actionSet.actionSet[2].index) self.assertEqual("store", actionSet.actionSet[2].name) self.assertEqual(executeStore, actionSet.actionSet[2].function) self.assertEqual(400, actionSet.actionSet[3].index) self.assertEqual("purge", actionSet.actionSet[3].name) self.assertEqual(executePurge, 
                       actionSet.actionSet[3].function)

   def testManagedPeer_027(self):
      """
      Test with actions=[ rebuild ], extensions=[], no peers, managed=True, local=True
      """
      actions = [ "rebuild", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.assertTrue(len(actionSet.actionSet) == 1)
      self.assertEqual(0, actionSet.actionSet[0].index)
      self.assertEqual("rebuild", actionSet.actionSet[0].name)
      self.assertEqual(executeRebuild, actionSet.actionSet[0].function)

   def testManagedPeer_028(self):
      """
      Test with actions=[ validate ], extensions=[], no peers, managed=True, local=True
      """
      actions = [ "validate", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.assertTrue(len(actionSet.actionSet) == 1)
      self.assertEqual(0, actionSet.actionSet[0].index)
      self.assertEqual("validate", actionSet.actionSet[0].name)
      self.assertEqual(executeValidate, actionSet.actionSet[0].function)

   def testManagedPeer_029(self):
      """
      Test with actions=[ collect, stage ], extensions=[], no peers, managed=True, local=True
      """
      actions = [ "collect", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.assertTrue(len(actionSet.actionSet) == 2)
      self.assertEqual(100, actionSet.actionSet[0].index)
      self.assertEqual("collect", actionSet.actionSet[0].name)
      self.assertEqual(executeCollect, actionSet.actionSet[0].function)
      self.assertEqual(200, actionSet.actionSet[1].index)
      self.assertEqual("stage", actionSet.actionSet[1].name)
      self.assertEqual(executeStage, actionSet.actionSet[1].function)

   def testManagedPeer_030(self):
      """
      Test with actions=[ collect, store ], extensions=[], no peers, managed=True, local=True
      """
      actions = [ "collect", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.assertTrue(len(actionSet.actionSet) == 2)
      self.assertEqual(100, actionSet.actionSet[0].index)
      self.assertEqual("collect", actionSet.actionSet[0].name)
      self.assertEqual(executeCollect, actionSet.actionSet[0].function)
      self.assertEqual(300, actionSet.actionSet[1].index)
      self.assertEqual("store", actionSet.actionSet[1].name)
      self.assertEqual(executeStore, actionSet.actionSet[1].function)

   def testManagedPeer_031(self):
      """
      Test with actions=[ collect, purge ], extensions=[], no peers, managed=True, local=True
      """
      actions = [ "collect", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.assertTrue(len(actionSet.actionSet) == 2)
      self.assertEqual(100, actionSet.actionSet[0].index)
      self.assertEqual("collect", actionSet.actionSet[0].name)
      self.assertEqual(executeCollect, actionSet.actionSet[0].function)
      self.assertEqual(400, actionSet.actionSet[1].index)
      self.assertEqual("purge", actionSet.actionSet[1].name)
      self.assertEqual(executePurge, actionSet.actionSet[1].function)

   def testManagedPeer_032(self):
      """
      Test with actions=[ stage, collect ], extensions=[], no peers, managed=True, local=True
      """
      actions = [ "stage", "collect", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.assertTrue(len(actionSet.actionSet) == 2)
      self.assertEqual(100, actionSet.actionSet[0].index)
self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(executeCollect, actionSet.actionSet[0].function) self.assertEqual(200, actionSet.actionSet[1].index) self.assertEqual("stage", actionSet.actionSet[1].name) self.assertEqual(executeStage, actionSet.actionSet[1].function) def testManagedPeer_033(self): """ Test with actions=[ stage, stage ], extensions=[], no peers, managed=True, local=True """ actions = [ "stage", "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(200, actionSet.actionSet[0].index) self.assertEqual("stage", actionSet.actionSet[0].name) self.assertEqual(executeStage, actionSet.actionSet[0].function) self.assertEqual(200, actionSet.actionSet[1].index) self.assertEqual("stage", actionSet.actionSet[1].name) self.assertEqual(executeStage, actionSet.actionSet[1].function) def testManagedPeer_034(self): """ Test with actions=[ stage, store ], extensions=[], no peers, managed=True, local=True """ actions = [ "stage", "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(200, actionSet.actionSet[0].index) self.assertEqual("stage", actionSet.actionSet[0].name) self.assertEqual(executeStage, actionSet.actionSet[0].function) self.assertEqual(300, actionSet.actionSet[1].index) self.assertEqual("store", actionSet.actionSet[1].name) self.assertEqual(executeStore, actionSet.actionSet[1].function) def testManagedPeer_035(self): """ Test with actions=[ stage, purge ], extensions=[], no peers, managed=True, local=True """ actions = [ "stage", "purge", ] extensions = 
ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(200, actionSet.actionSet[0].index) self.assertEqual("stage", actionSet.actionSet[0].name) self.assertEqual(executeStage, actionSet.actionSet[0].function) self.assertEqual(400, actionSet.actionSet[1].index) self.assertEqual("purge", actionSet.actionSet[1].name) self.assertEqual(executePurge, actionSet.actionSet[1].function) def testManagedPeer_036(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 50) ], no peers, managed=True, local=True """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(50, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertEqual(isdir, actionSet.actionSet[0].function) self.assertEqual(100, actionSet.actionSet[1].index) self.assertEqual("collect", actionSet.actionSet[1].name) self.assertEqual(executeCollect, actionSet.actionSet[1].function) def testManagedPeer_037(self): """ Test with actions=[ store, one ], extensions=[ (one, index 50) ], no peers, managed=True, local=True """ actions = [ "store", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(50, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) 
      self.assertEqual(isdir, actionSet.actionSet[0].function)
      self.assertEqual(300, actionSet.actionSet[1].index)
      self.assertEqual("store", actionSet.actionSet[1].name)
      self.assertEqual(executeStore, actionSet.actionSet[1].function)

   def testManagedPeer_038(self):
      """
      Test with actions=[ collect, one ], extensions=[ (one, index 150) ], no peers, managed=True, local=True
      """
      actions = [ "collect", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.assertTrue(len(actionSet.actionSet) == 2)
      self.assertEqual(100, actionSet.actionSet[0].index)
      self.assertEqual("collect", actionSet.actionSet[0].name)
      self.assertEqual(executeCollect, actionSet.actionSet[0].function)
      self.assertEqual(150, actionSet.actionSet[1].index)
      self.assertEqual("one", actionSet.actionSet[1].name)
      self.assertEqual(isdir, actionSet.actionSet[1].function)

   def testManagedPeer_039(self):
      """
      Test with actions=[ store, one ], extensions=[ (one, index 150) ], no peers, managed=True, local=True
      """
      actions = [ "store", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.assertTrue(len(actionSet.actionSet) == 2)
      self.assertEqual(150, actionSet.actionSet[0].index)
      self.assertEqual("one", actionSet.actionSet[0].name)
      self.assertEqual(isdir, actionSet.actionSet[0].function)
      self.assertEqual(300, actionSet.actionSet[1].index)
      self.assertEqual("store", actionSet.actionSet[1].name)
      self.assertEqual(executeStore, actionSet.actionSet[1].function)

   def testManagedPeer_040(self):
      """
      Test with actions=[ collect, stage, store, purge ], extensions=[], no peers, managed=True, local=True
      """
      actions = [ "collect", "stage", "store", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.assertTrue(len(actionSet.actionSet) == 4)
      self.assertEqual(100, actionSet.actionSet[0].index)
      self.assertEqual("collect", actionSet.actionSet[0].name)
      self.assertEqual(executeCollect, actionSet.actionSet[0].function)
      self.assertEqual(200, actionSet.actionSet[1].index)
      self.assertEqual("stage", actionSet.actionSet[1].name)
      self.assertEqual(executeStage, actionSet.actionSet[1].function)
      self.assertEqual(300, actionSet.actionSet[2].index)
      self.assertEqual("store", actionSet.actionSet[2].name)
      self.assertEqual(executeStore, actionSet.actionSet[2].function)
      self.assertEqual(400, actionSet.actionSet[3].index)
      self.assertEqual("purge", actionSet.actionSet[3].name)
      self.assertEqual(executePurge, actionSet.actionSet[3].function)

   def testManagedPeer_041(self):
      """
      Test with actions=[ collect, stage, store, purge, one, two, three, four, five ],
      extensions=[ (index 50, 150, 250, 350, 450)], no peers, managed=True, local=True
      """
      actions = [ "collect", "stage", "store", "purge", "one", "two", "three", "four", "five", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50),
                                      ExtendedAction("two", "os.path", "isfile", 150),
                                      ExtendedAction("three", "os.path", "islink", 250),
                                      ExtendedAction("four", "os.path", "isabs", 350),
                                      ExtendedAction("five", "os.path", "exists", 450), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.assertTrue(len(actionSet.actionSet) == 9)
      self.assertEqual(50, actionSet.actionSet[0].index)
      self.assertEqual("one", actionSet.actionSet[0].name)
      self.assertEqual(isdir, actionSet.actionSet[0].function)
      self.assertEqual(100, actionSet.actionSet[1].index)
      self.assertEqual("collect", actionSet.actionSet[1].name)
      self.assertEqual(executeCollect, actionSet.actionSet[1].function)
      self.assertEqual(150, actionSet.actionSet[2].index)
      self.assertEqual("two", actionSet.actionSet[2].name)
      self.assertEqual(isfile, actionSet.actionSet[2].function)
      self.assertEqual(200, actionSet.actionSet[3].index)
      self.assertEqual("stage", actionSet.actionSet[3].name)
      self.assertEqual(executeStage, actionSet.actionSet[3].function)
      self.assertEqual(250, actionSet.actionSet[4].index)
      self.assertEqual("three", actionSet.actionSet[4].name)
      self.assertEqual(islink, actionSet.actionSet[4].function)
      self.assertEqual(300, actionSet.actionSet[5].index)
      self.assertEqual("store", actionSet.actionSet[5].name)
      self.assertEqual(executeStore, actionSet.actionSet[5].function)
      self.assertEqual(350, actionSet.actionSet[6].index)
      self.assertEqual("four", actionSet.actionSet[6].name)
      self.assertEqual(isabs, actionSet.actionSet[6].function)
      self.assertEqual(400, actionSet.actionSet[7].index)
      self.assertEqual("purge", actionSet.actionSet[7].name)
      self.assertEqual(executePurge, actionSet.actionSet[7].function)
      self.assertEqual(450, actionSet.actionSet[8].index)
      self.assertEqual("five", actionSet.actionSet[8].name)
      self.assertEqual(exists, actionSet.actionSet[8].function)

   def testManagedPeer_042(self):
      """
      Test with actions=[ one ], extensions=[ (one, index 50) ], no peers, managed=True, local=True
      """
      actions = [ "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.assertTrue(len(actionSet.actionSet) == 1)
      self.assertEqual(50, actionSet.actionSet[0].index)
      self.assertEqual("one", actionSet.actionSet[0].name)
      self.assertEqual(isdir, actionSet.actionSet[0].function)

   def testManagedPeer_043(self):
      """
      Test with actions=[ collect ], extensions=[], no peers, managed=True, local=False
      """
      actions = [ "collect", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertFalse(actionSet.actionSet is None)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_044(self):
      """
      Test with actions=[ stage ], extensions=[], no peers, managed=True, local=False
      """
      actions = [ "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertFalse(actionSet.actionSet is None)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_045(self):
      """
      Test with actions=[ store ], extensions=[], no peers, managed=True, local=False
      """
      actions = [ "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertFalse(actionSet.actionSet is None)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_046(self):
      """
      Test with actions=[ purge ], extensions=[], no peers, managed=True, local=False
      """
      actions = [ "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertFalse(actionSet.actionSet is None)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_047(self):
      """
      Test with actions=[ all ], extensions=[], no peers, managed=True, local=False
      """
      actions = [ "all", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertFalse(actionSet.actionSet is None)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_048(self):
      """
      Test with actions=[ rebuild ], extensions=[], no peers, managed=True, local=False
      """
      actions = [ "rebuild", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_049(self):
      """
      Test with actions=[ validate ], extensions=[], no peers, managed=True, local=False
      """
      actions = [ "validate", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_050(self):
      """
      Test with actions=[ collect, stage ], extensions=[], no peers, managed=True, local=False
      """
      actions = [ "collect", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_051(self):
      """
      Test with actions=[ collect, store ], extensions=[], no peers, managed=True, local=False
      """
      actions = [ "collect", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_052(self):
      """
      Test with actions=[ collect, purge ], extensions=[], no peers, managed=True, local=False
      """
      actions = [ "collect", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_053(self):
      """
      Test with actions=[ stage, collect ], extensions=[], no peers, managed=True, local=False
      """
      actions = [ "stage", "collect", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_054(self):
      """
      Test with actions=[ stage, stage ], extensions=[], no peers, managed=True, local=False
      """
      actions = [ "stage", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_055(self):
      """
      Test with actions=[ stage, store ], extensions=[], no peers, managed=True, local=False
      """
      actions = [ "stage", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_056(self):
      """
      Test with actions=[ stage, purge ], extensions=[], no peers, managed=True, local=False
      """
      actions = [ "stage", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_057(self):
      """
      Test with actions=[ collect, one ], extensions=[ (one, index 50) ], no peers, managed=True, local=False
      """
      actions = [ "collect", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_058(self):
      """
      Test with actions=[ store, one ], extensions=[ (one, index 50) ], no peers, managed=True, local=False
      """
      actions = [ "store", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_059(self):
      """
      Test with actions=[ collect, one ], extensions=[ (one, index 150) ], no peers, managed=True, local=False
      """
      actions = [ "collect", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_060(self):
      """
      Test with actions=[ store, one ], extensions=[ (one, index 150) ], no peers, managed=True, local=False
      """
      actions = [ "store", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_061(self):
      """
      Test with actions=[ collect, stage, store, purge ], extensions=[], no peers, managed=True, local=False
      """
      actions = [ "collect", "stage", "store", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_062(self):
      """
      Test with actions=[ collect, stage, store, purge, one, two, three, four, five ],
      extensions=[ (index 50, 150, 250, 350, 450)], no peers, managed=True, local=False
      """
      actions = [ "collect", "stage", "store", "purge", "one", "two", "three", "four", "five", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50),
                                      ExtendedAction("two", "os.path", "isfile", 150),
                                      ExtendedAction("three", "os.path", "islink", 250),
                                      ExtendedAction("four", "os.path", "isabs", 350),
                                      ExtendedAction("five", "os.path", "exists", 450), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_063(self):
      """
      Test with actions=[ one ], extensions=[ (one, index 50) ], no peers, managed=True, local=False
      """
      actions = [ "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_064(self):
      """
      Test with actions=[ collect ], extensions=[], one peer (not managed), managed=True, local=False
      """
      actions = [ "collect", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertFalse(actionSet.actionSet is None)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_065(self):
      """
      Test with actions=[ stage ], extensions=[], one peer (not managed), managed=True, local=False
      """
      actions = [ "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertFalse(actionSet.actionSet is None)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_066(self):
      """
      Test with actions=[ store ], extensions=[], one peer (not managed), managed=True, local=False
      """
      actions = [ "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertFalse(actionSet.actionSet is None)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_067(self):
      """
      Test with actions=[ purge ], extensions=[], one peer (not managed), managed=True, local=False
      """
      actions = [ "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertFalse(actionSet.actionSet is None)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_068(self):
      """
      Test with actions=[ all ], extensions=[], one peer (not managed), managed=True, local=False
      """
      actions = [ "all", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertFalse(actionSet.actionSet is None)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_069(self):
      """
      Test with actions=[ rebuild ], extensions=[], one peer (not managed), managed=True, local=False
      """
      actions = [ "rebuild", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_070(self):
      """
      Test with actions=[ validate ], extensions=[], one peer (not managed), managed=True, local=False
      """
      actions = [ "validate", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_071(self):
      """
      Test with actions=[ collect, stage ], extensions=[], one peer (not managed), managed=True, local=False
      """
      actions = [ "collect", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_072(self):
      """
      Test with actions=[ collect, store ], extensions=[], one peer (not managed), managed=True, local=False
      """
      actions = [ "collect", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_073(self):
      """
      Test with actions=[ collect, purge ], extensions=[], one peer (not managed), managed=True, local=False
      """
      actions = [ "collect", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_074(self):
      """
      Test with actions=[ stage, collect ], extensions=[], one peer (not managed), managed=True, local=False
      """
      actions = [ "stage", "collect", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_075(self):
      """
      Test with actions=[ stage, stage ], extensions=[], one peer (not managed), managed=True, local=False
      """
      actions = [ "stage", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_076(self):
      """
      Test with actions=[ stage, store ], extensions=[], one peer (not managed), managed=True, local=False
      """
      actions = [ "stage", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_077(self):
      """
      Test with actions=[ stage, purge ], extensions=[], one peer (not managed), managed=True, local=False
      """
      actions = [ "stage", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_078(self):
      """
      Test with actions=[ collect, one ], extensions=[ (one, index 50) ], one peer (not managed), managed=True, local=False
      """
      actions = [ "collect", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_079(self):
      """
      Test with actions=[ store, one ], extensions=[ (one, index 50) ], one peer (not managed), managed=True, local=False
      """
      actions = [ "store", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_080(self):
      """
      Test with actions=[ collect, one ], extensions=[ (one, index 150) ], one peer (not managed), managed=True, local=False
      """
      actions = [ "collect", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_081(self):
      """
      Test with actions=[ store, one ], extensions=[ (one, index 150) ], one peer (not managed), managed=True, local=False
      """
      actions = [ "store", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_082(self):
      """
      Test with actions=[ collect, stage, store, purge ], extensions=[], one peer (not managed), managed=True, local=False
      """
      actions = [ "collect", "stage", "store", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_083(self):
      """
      Test with actions=[ collect, stage, store, purge, one, two, three, four, five ],
      extensions=[ (index 50, 150, 250, 350, 450)], one peer (not managed), managed=True, local=False
      """
      actions = [ "collect", "stage", "store", "purge", "one", "two", "three", "four", "five", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50),
                                      ExtendedAction("two", "os.path", "isfile", 150),
                                      ExtendedAction("three", "os.path", "islink", 250),
                                      ExtendedAction("four", "os.path", "isabs", 350),
                                      ExtendedAction("five", "os.path", "exists", 450), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_084(self):
      """
      Test with actions=[ one ], extensions=[ (one, index 50) ], one peer (not managed), managed=True, local=False
      """
      actions = [ "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_085(self):
      """
      Test with actions=[ collect ], extensions=[], one peer (managed), managed=True, local=False
      """
      actions = [ "collect", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", None, "rsh", "cback", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertFalse(actionSet.actionSet is None)
      self.assertTrue(len(actionSet.actionSet) == 1)
      self.assertEqual(100, actionSet.actionSet[0].index)
      self.assertEqual("collect", actionSet.actionSet[0].name)
      self.assertFalse(actionSet.actionSet[0].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 1)
      self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)

   def testManagedPeer_086(self):
      """
      Test with actions=[ stage ], extensions=[], one peer (managed), managed=True, local=False
      """
      actions = [ "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertFalse(actionSet.actionSet is None)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_087(self):
      """
      Test with actions=[ store ], extensions=[], one peer (managed), managed=True, local=False
      """
      actions = [ "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertFalse(actionSet.actionSet is None)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_088(self):
      """
      Test with actions=[ purge ], extensions=[], one peer (managed), managed=True, local=False
      """
      actions = [ "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertFalse(actionSet.actionSet is None)
      self.assertTrue(len(actionSet.actionSet) == 1)
      self.assertEqual(400, actionSet.actionSet[0].index)
      self.assertEqual("purge", actionSet.actionSet[0].name)
      self.assertFalse(actionSet.actionSet[0].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 1)
      self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)

   def testManagedPeer_089(self):
      """
      Test with actions=[ all ], extensions=[], one peer (managed), managed=True, local=False
      """
      actions = [ "all", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertFalse(actionSet.actionSet is None)
      self.assertTrue(len(actionSet.actionSet) == 2)
      self.assertEqual(100, actionSet.actionSet[0].index)
      self.assertEqual("collect", actionSet.actionSet[0].name)
      self.assertFalse(actionSet.actionSet[0].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 1)
      self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.assertEqual(400, actionSet.actionSet[1].index)
self.assertEqual("purge", actionSet.actionSet[1].name) self.assertFalse(actionSet.actionSet[1].remotePeers is None) self.assertTrue(len(actionSet.actionSet[1].remotePeers) == 1) self.assertEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.assertEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.assertEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.assertEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.assertEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) def testManagedPeer_090(self): """ Test with actions=[ rebuild ], extensions=[], one peer (managed), managed=True, local=False """ actions = [ "rebuild", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.assertTrue(len(actionSet.actionSet) == 0) def testManagedPeer_091(self): """ Test with actions=[ validate ], extensions=[], one peer (managed), managed=True, local=False """ actions = [ "validate", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.assertTrue(len(actionSet.actionSet) == 0) def testManagedPeer_092(self): """ Test with actions=[ collect, stage ], extensions=[], one peer (managed), managed=True, local=False """ actions = [ "collect", "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = 
PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertFalse(actionSet.actionSet[0].remotePeers is None) self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 1) self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) def testManagedPeer_093(self): """ Test with actions=[ collect, store ], extensions=[], one peer (managed), managed=True, local=False """ actions = [ "collect", "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertFalse(actionSet.actionSet[0].remotePeers is None) self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 1) self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.assertEqual("rsh", 
actionSet.actionSet[0].remotePeers[0].rshCommand) self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) def testManagedPeer_094(self): """ Test with actions=[ collect, purge ], extensions=[], one peer (managed), managed=True, local=False """ actions = [ "collect", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertFalse(actionSet.actionSet[0].remotePeers is None) self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 1) self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.assertEqual(400, actionSet.actionSet[1].index) self.assertEqual("purge", actionSet.actionSet[1].name) self.assertFalse(actionSet.actionSet[1].remotePeers is None) self.assertTrue(len(actionSet.actionSet[1].remotePeers) == 1) self.assertEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.assertEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.assertEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.assertEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.assertEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) def testManagedPeer_095(self): """ Test with actions=[ stage, collect ], 
extensions=[], one peer (managed), managed=True, local=False """ actions = [ "stage", "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertFalse(actionSet.actionSet[0].remotePeers is None) self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 1) self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) def testManagedPeer_096(self): """ Test with actions=[ stage, stage ], extensions=[], one peer (managed), managed=True, local=False """ actions = [ "stage", "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.assertTrue(len(actionSet.actionSet) == 0) def testManagedPeer_097(self): """ Test with actions=[ stage, store ], extensions=[], one peer (managed), managed=True, local=False """ actions = [ "stage", "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers 
= PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.assertTrue(len(actionSet.actionSet) == 0) def testManagedPeer_098(self): """ Test with actions=[ stage, purge ], extensions=[], one peer (managed), managed=True, local=False """ actions = [ "stage", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(400, actionSet.actionSet[0].index) self.assertEqual("purge", actionSet.actionSet[0].name) self.assertFalse(actionSet.actionSet[0].remotePeers is None) self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 1) self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) def testManagedPeer_099(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 50) ], one peer (managed), managed=True, local=False """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", 
managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(50, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertFalse(actionSet.actionSet[0].remotePeers is None) self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 1) self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.assertEqual(100, actionSet.actionSet[1].index) self.assertEqual("collect", actionSet.actionSet[1].name) self.assertFalse(actionSet.actionSet[1].remotePeers is None) self.assertTrue(len(actionSet.actionSet[1].remotePeers) == 1) self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) def testManagedPeer_100(self): """ Test with actions=[ store, one ], extensions=[ (one, index 50) ], one peer (managed), managed=True, local=False """ actions = [ "store", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.assertTrue(len(actionSet.actionSet) == 1) 
self.assertEqual(50, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertFalse(actionSet.actionSet[0].remotePeers is None) self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 1) self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) def testManagedPeer_101(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 150) ], one peer (managed), managed=True, local=False """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertFalse(actionSet.actionSet[0].remotePeers is None) self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 1) self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.assertEqual(150, actionSet.actionSet[1].index) self.assertEqual("one", actionSet.actionSet[1].name) 
self.assertFalse(actionSet.actionSet[1].remotePeers is None) self.assertTrue(len(actionSet.actionSet[1].remotePeers) == 1) self.assertEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.assertEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.assertEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.assertEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.assertEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) def testManagedPeer_102(self): """ Test with actions=[ store, one ], extensions=[ (one, index 150) ], one peer (managed), managed=True, local=False """ actions = [ "store", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(150, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertFalse(actionSet.actionSet[0].remotePeers is None) self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 1) self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) def testManagedPeer_103(self): """ Test with actions=[ collect, stage, store, purge ], extensions=[], one peer (managed), managed=True, local=False """ actions = [ "collect", "stage", "store", "purge", ] extensions = ExtensionsConfig([], None) options = 
OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertFalse(actionSet.actionSet[0].remotePeers is None) self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 1) self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.assertEqual(400, actionSet.actionSet[1].index) self.assertEqual("purge", actionSet.actionSet[1].name) self.assertFalse(actionSet.actionSet[1].remotePeers is None) self.assertTrue(len(actionSet.actionSet[1].remotePeers) == 1) self.assertEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.assertEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.assertEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.assertEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.assertEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) def testManagedPeer_104(self): """ Test with actions=[ collect, stage, store, purge, one, two, three, four, five ], extensions=[ (index 50, 150, 250, 350, 450)], one peer (managed), managed=True, local=False """ actions = [ "collect", "stage", "store", "purge", "one", "two", "three", "four", "five", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ExtendedAction("two", "os.path", "isfile", 
150), ExtendedAction("three", "os.path", "islink", 250), ExtendedAction("four", "os.path", "isabs", 350), ExtendedAction("five", "os.path", "exists", 450), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.assertTrue(len(actionSet.actionSet) == 3) self.assertEqual(50, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertFalse(actionSet.actionSet[0].remotePeers is None) self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 1) self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.assertEqual(100, actionSet.actionSet[1].index) self.assertEqual("collect", actionSet.actionSet[1].name) self.assertFalse(actionSet.actionSet[1].remotePeers is None) self.assertTrue(len(actionSet.actionSet[1].remotePeers) == 1) self.assertEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.assertEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.assertEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.assertEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.assertEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) self.assertEqual(400, actionSet.actionSet[2].index) self.assertEqual("purge", actionSet.actionSet[2].name) self.assertFalse(actionSet.actionSet[2].remotePeers is None) self.assertTrue(len(actionSet.actionSet[2].remotePeers) == 1) self.assertEqual("remote", 
actionSet.actionSet[2].remotePeers[0].name) self.assertEqual("ruser", actionSet.actionSet[2].remotePeers[0].remoteUser) self.assertEqual(None, actionSet.actionSet[2].remotePeers[0].localUser) self.assertEqual("rsh", actionSet.actionSet[2].remotePeers[0].rshCommand) self.assertEqual("cback", actionSet.actionSet[2].remotePeers[0].cbackCommand) def testManagedPeer_105(self): """ Test with actions=[ one ], extensions=[ (one, index 50) ], one peer (managed), managed=True, local=False """ actions = [ "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(50, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertFalse(actionSet.actionSet[0].remotePeers is None) self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 1) self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) def testManagedPeer_106(self): """ Test with actions=[ collect ], extensions=[], two peers (one managed, one not), managed=True, local=False """ actions = [ "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ 
LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", None, "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.assertFalse(actionSet.actionSet is None) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertFalse(actionSet.actionSet[0].remotePeers is None) self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 1) self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) def testManagedPeer_107(self): """ Test with actions=[ stage ], extensions=[], two peers (one managed, one not), managed=True, local=False """ actions = [ "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.assertFalse(actionSet.actionSet is None) self.assertTrue(len(actionSet.actionSet) == 0) def testManagedPeer_108(self): """ Test with actions=[ store ], extensions=[], two peers (one managed, one not), managed=True, local=False """ actions = [ "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() 
peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.assertFalse(actionSet.actionSet is None) self.assertTrue(len(actionSet.actionSet) == 0) def testManagedPeer_109(self): """ Test with actions=[ purge ], extensions=[], two peers (one managed, one not), managed=True, local=False """ actions = [ "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.assertFalse(actionSet.actionSet is None) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(400, actionSet.actionSet[0].index) self.assertEqual("purge", actionSet.actionSet[0].name) self.assertFalse(actionSet.actionSet[0].remotePeers is None) self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 1) self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) def testManagedPeer_110(self): """ Test with actions=[ all ], extensions=[], two peers (one managed, one not), managed=True, local=False """ actions = [ "all", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers 
= PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.assertFalse(actionSet.actionSet is None) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertFalse(actionSet.actionSet[0].remotePeers is None) self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 1) self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.assertEqual(400, actionSet.actionSet[1].index) self.assertEqual("purge", actionSet.actionSet[1].name) self.assertFalse(actionSet.actionSet[1].remotePeers is None) self.assertTrue(len(actionSet.actionSet[1].remotePeers) == 1) self.assertEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.assertEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.assertEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.assertEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.assertEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) def testManagedPeer_111(self): """ Test with actions=[ rebuild ], extensions=[], two peers (one managed, one not), managed=True, local=False """ actions = [ "rebuild", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] 
peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.assertTrue(len(actionSet.actionSet) == 0) def testManagedPeer_112(self): """ Test with actions=[ validate ], extensions=[], two peers (one managed, one not), managed=True, local=False """ actions = [ "validate", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.assertTrue(len(actionSet.actionSet) == 0) def testManagedPeer_113(self): """ Test with actions=[ collect, stage ], extensions=[], two peers (one managed, one not), managed=True, local=False """ actions = [ "collect", "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertFalse(actionSet.actionSet[0].remotePeers is None) self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 1) self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.assertEqual("ruser", 
actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)

   def testManagedPeer_114(self):
      """
      Test with actions=[ collect, store ], extensions=[], two peers (one managed, one not), managed=True, local=False
      """
      actions = [ "collect", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 1)
      self.assertEqual(100, actionSet.actionSet[0].index)
      self.assertEqual("collect", actionSet.actionSet[0].name)
      self.assertFalse(actionSet.actionSet[0].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 1)
      self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)

   def testManagedPeer_115(self):
      """
      Test with actions=[ collect, purge ], extensions=[], two peers (one managed, one not), managed=True, local=False
      """
      actions = [ "collect", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 2)
      self.assertEqual(100, actionSet.actionSet[0].index)
      self.assertEqual("collect", actionSet.actionSet[0].name)
      self.assertFalse(actionSet.actionSet[0].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 1)
      self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.assertEqual(400, actionSet.actionSet[1].index)
      self.assertEqual("purge", actionSet.actionSet[1].name)
      self.assertFalse(actionSet.actionSet[1].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[1].remotePeers) == 1)
      self.assertEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)

   def testManagedPeer_116(self):
      """
      Test with actions=[ stage, collect ], extensions=[], two peers (one managed, one not), managed=True, local=False
      """
      actions = [ "stage", "collect", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 1)
      self.assertEqual(100, actionSet.actionSet[0].index)
      self.assertEqual("collect", actionSet.actionSet[0].name)
      self.assertFalse(actionSet.actionSet[0].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 1)
      self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)

   def testManagedPeer_117(self):
      """
      Test with actions=[ stage, stage ], extensions=[], two peers (one managed, one not), managed=True, local=False
      """
      actions = [ "stage", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_118(self):
      """
      Test with actions=[ stage, store ], extensions=[], two peers (one managed, one not), managed=True, local=False
      """
      actions = [ "stage", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_119(self):
      """
      Test with actions=[ stage, purge ], extensions=[], two peers (one managed, one not), managed=True, local=False
      """
      actions = [ "stage", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 1)
      self.assertEqual(400, actionSet.actionSet[0].index)
      self.assertEqual("purge", actionSet.actionSet[0].name)
      self.assertFalse(actionSet.actionSet[0].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 1)
      self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)

   def testManagedPeer_120(self):
      """
      Test with actions=[ collect, one ], extensions=[ (one, index 50) ], two peers (one managed, one not), managed=True, local=False
      """
      actions = [ "collect", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 2)
      self.assertEqual(50, actionSet.actionSet[0].index)
      self.assertEqual("one", actionSet.actionSet[0].name)
      self.assertFalse(actionSet.actionSet[0].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 1)
      self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.assertEqual(100, actionSet.actionSet[1].index)
      self.assertEqual("collect", actionSet.actionSet[1].name)
      self.assertFalse(actionSet.actionSet[1].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[1].remotePeers) == 1)
      self.assertEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)

   def testManagedPeer_121(self):
      """
      Test with actions=[ store, one ], extensions=[ (one, index 50) ], two peers (one managed, one not), managed=True, local=False
      """
      actions = [ "store", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 1)
      self.assertEqual(50, actionSet.actionSet[0].index)
      self.assertEqual("one", actionSet.actionSet[0].name)
      self.assertFalse(actionSet.actionSet[0].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 1)
      self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)

   def testManagedPeer_122(self):
      """
      Test with actions=[ collect, one ], extensions=[ (one, index 150) ], two peers (one managed, one not), managed=True, local=False
      """
      actions = [ "collect", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 2)
      self.assertEqual(100, actionSet.actionSet[0].index)
      self.assertEqual("collect", actionSet.actionSet[0].name)
      self.assertFalse(actionSet.actionSet[0].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 1)
      self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.assertEqual(150, actionSet.actionSet[1].index)
      self.assertEqual("one", actionSet.actionSet[1].name)
      self.assertFalse(actionSet.actionSet[1].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[1].remotePeers) == 1)
      self.assertEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)

   def testManagedPeer_123(self):
      """
      Test with actions=[ store, one ], extensions=[ (one, index 150) ], two peers (one managed, one not), managed=True, local=False
      """
      actions = [ "store", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 1)
      self.assertEqual(150, actionSet.actionSet[0].index)
      self.assertEqual("one", actionSet.actionSet[0].name)
      self.assertFalse(actionSet.actionSet[0].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 1)
      self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)

   def testManagedPeer_124(self):
      """
      Test with actions=[ collect, stage, store, purge ], extensions=[], two peers (one managed, one not), managed=True, local=False
      """
      actions = [ "collect", "stage", "store", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 2)
      self.assertEqual(100, actionSet.actionSet[0].index)
      self.assertEqual("collect", actionSet.actionSet[0].name)
      self.assertFalse(actionSet.actionSet[0].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 1)
      self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.assertEqual(400, actionSet.actionSet[1].index)
      self.assertEqual("purge", actionSet.actionSet[1].name)
      self.assertFalse(actionSet.actionSet[1].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[1].remotePeers) == 1)
      self.assertEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)

   def testManagedPeer_125(self):
      """
      Test with actions=[ collect, stage, store, purge, one, two, three, four, five ], extensions=[ (index 50, 150, 250, 350, 450) ],
      two peers (one managed, one not), managed=True, local=False
      """
      actions = [ "collect", "stage", "store", "purge", "one", "two", "three", "four", "five", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50),
                                      ExtendedAction("two", "os.path", "isfile", 150),
                                      ExtendedAction("three", "os.path", "islink", 250),
                                      ExtendedAction("four", "os.path", "isabs", 350),
                                      ExtendedAction("five", "os.path", "exists", 450), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 3)
      self.assertEqual(50, actionSet.actionSet[0].index)
      self.assertEqual("one", actionSet.actionSet[0].name)
      self.assertFalse(actionSet.actionSet[0].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 1)
      self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.assertEqual(100, actionSet.actionSet[1].index)
      self.assertEqual("collect", actionSet.actionSet[1].name)
      self.assertFalse(actionSet.actionSet[1].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[1].remotePeers) == 1)
      self.assertEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)
      self.assertEqual(400, actionSet.actionSet[2].index)
      self.assertEqual("purge", actionSet.actionSet[2].name)
      self.assertFalse(actionSet.actionSet[2].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[2].remotePeers) == 1)
      self.assertEqual("remote", actionSet.actionSet[2].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[2].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[2].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[2].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[2].remotePeers[0].cbackCommand)

   def testManagedPeer_126(self):
      """
      Test with actions=[ one ], extensions=[ (one, index 50) ], two peers (one managed, one not), managed=True, local=False
      """
      actions = [ "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 1)
      self.assertEqual(50, actionSet.actionSet[0].index)
      self.assertEqual("one", actionSet.actionSet[0].name)
      self.assertFalse(actionSet.actionSet[0].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 1)
      self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)

   def testManagedPeer_127(self):
      """
      Test with actions=[ collect ], extensions=[], two peers (both managed), managed=True, local=False
      """
      actions = [ "collect", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", None, "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertFalse(actionSet.actionSet is None)
      self.assertTrue(len(actionSet.actionSet) == 1)
      self.assertEqual(100, actionSet.actionSet[0].index)
      self.assertEqual("collect", actionSet.actionSet[0].name)
      self.assertFalse(actionSet.actionSet[0].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 2)
      self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.assertEqual("remote2", actionSet.actionSet[0].remotePeers[1].name)
      self.assertEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser)
      self.assertEqual(None, actionSet.actionSet[0].remotePeers[1].localUser)
      self.assertEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand)
      self.assertEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand)

   def testManagedPeer_128(self):
      """
      Test with actions=[ stage ], extensions=[], two peers (both managed), managed=True, local=False
      """
      actions = [ "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertFalse(actionSet.actionSet is None)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_129(self):
      """
      Test with actions=[ store ], extensions=[], two peers (both managed), managed=True, local=False
      """
      actions = [ "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertFalse(actionSet.actionSet is None)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_130(self):
      """
      Test with actions=[ purge ], extensions=[], two peers (both managed), managed=True, local=False
      """
      actions = [ "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertFalse(actionSet.actionSet is None)
      self.assertTrue(len(actionSet.actionSet) == 1)
      self.assertEqual(400, actionSet.actionSet[0].index)
      self.assertEqual("purge", actionSet.actionSet[0].name)
      self.assertFalse(actionSet.actionSet[0].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 2)
      self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.assertEqual("remote2", actionSet.actionSet[0].remotePeers[1].name)
      self.assertEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser)
      self.assertEqual(None, actionSet.actionSet[0].remotePeers[1].localUser)
      self.assertEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand)
      self.assertEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand)

   def testManagedPeer_131(self):
      """
      Test with actions=[ all ], extensions=[], two peers (both managed), managed=True, local=False
      """
      actions = [ "all", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertFalse(actionSet.actionSet is None)
      self.assertTrue(len(actionSet.actionSet) == 2)
      self.assertEqual(100, actionSet.actionSet[0].index)
      self.assertEqual("collect", actionSet.actionSet[0].name)
      self.assertFalse(actionSet.actionSet[0].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 2)
      self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.assertEqual("remote2", actionSet.actionSet[0].remotePeers[1].name)
      self.assertEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser)
      self.assertEqual(None, actionSet.actionSet[0].remotePeers[1].localUser)
      self.assertEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand)
      self.assertEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand)
      self.assertEqual(400, actionSet.actionSet[1].index)
      self.assertEqual("purge", actionSet.actionSet[1].name)
      self.assertFalse(actionSet.actionSet[1].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[1].remotePeers) == 2)
      self.assertEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)
      self.assertEqual("remote2", actionSet.actionSet[1].remotePeers[1].name)
      self.assertEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[1].localUser)
      self.assertEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand)
      self.assertEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand)

   def testManagedPeer_132(self):
      """
      Test with actions=[ rebuild ], extensions=[], two peers (both managed), managed=True, local=False
      """
      actions = [ "rebuild", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_133(self):
      """
      Test with actions=[ validate ], extensions=[], two peers (both managed), managed=True, local=False
      """
      actions = [ "validate", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_134(self):
      """
      Test with actions=[ collect, stage ], extensions=[], two peers (both managed), managed=True, local=False
      """
      actions = [ "collect", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 1)
      self.assertEqual(100, actionSet.actionSet[0].index)
      self.assertEqual("collect", actionSet.actionSet[0].name)
      self.assertFalse(actionSet.actionSet[0].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 2)
      self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.assertEqual("remote2", actionSet.actionSet[0].remotePeers[1].name)
      self.assertEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser)
      self.assertEqual(None, actionSet.actionSet[0].remotePeers[1].localUser)
      self.assertEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand)
      self.assertEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand)

   def testManagedPeer_135(self):
      """
      Test with actions=[ collect, store ], extensions=[], two peers (both managed), managed=True, local=False
      """
      actions = [ "collect", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 1)
      self.assertEqual(100, actionSet.actionSet[0].index)
      self.assertEqual("collect", actionSet.actionSet[0].name)
      self.assertFalse(actionSet.actionSet[0].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 2)
      self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.assertEqual("remote2", actionSet.actionSet[0].remotePeers[1].name)
      self.assertEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser)
      self.assertEqual(None, actionSet.actionSet[0].remotePeers[1].localUser)
      self.assertEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand)
      self.assertEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand)

   def testManagedPeer_136(self):
      """
      Test with actions=[ collect, purge ], extensions=[], two peers (both managed), managed=True, local=False
      """
      actions = [ "collect", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 2)
      self.assertEqual(100, actionSet.actionSet[0].index)
      self.assertEqual("collect", actionSet.actionSet[0].name)
      self.assertFalse(actionSet.actionSet[0].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 2)
      self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.assertEqual("remote2", actionSet.actionSet[0].remotePeers[1].name)
      self.assertEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser)
      self.assertEqual(None, actionSet.actionSet[0].remotePeers[1].localUser)
      self.assertEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand)
      self.assertEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand)
      self.assertEqual(400, actionSet.actionSet[1].index)
      self.assertEqual("purge", actionSet.actionSet[1].name)
      self.assertFalse(actionSet.actionSet[1].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[1].remotePeers) == 2)
      self.assertEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)
      self.assertEqual("remote2", actionSet.actionSet[1].remotePeers[1].name)
      self.assertEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[1].localUser)
      self.assertEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand)
      self.assertEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand)

   def testManagedPeer_137(self):
      """
      Test with actions=[ stage, collect ], extensions=[], two peers (both managed), managed=True, local=False
      """
      actions = [ "stage", "collect", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 1)
      self.assertEqual(100, actionSet.actionSet[0].index)
      self.assertEqual("collect", actionSet.actionSet[0].name)
      self.assertFalse(actionSet.actionSet[0].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 2)
      self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.assertEqual("remote2", actionSet.actionSet[0].remotePeers[1].name)
      self.assertEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser)
      self.assertEqual(None, actionSet.actionSet[0].remotePeers[1].localUser)
      self.assertEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand)
      self.assertEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand)

   def testManagedPeer_138(self):
      """
      Test with actions=[ stage, stage ], extensions=[], two peers (both managed), managed=True, local=False
      """
      actions = [ "stage", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_139(self):
      """
      Test with actions=[ stage, store ], extensions=[], two peers (both managed), managed=True, local=False
      """
      actions = [ "stage", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 0)

   def testManagedPeer_140(self):
      """
      Test with actions=[ stage, purge ], extensions=[], two peers (both managed), managed=True, local=False
      """
      actions = [ "stage", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 1)
      self.assertEqual(400, actionSet.actionSet[0].index)
      self.assertEqual("purge", actionSet.actionSet[0].name)
      self.assertFalse(actionSet.actionSet[0].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 2)
      self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.assertEqual("remote2", actionSet.actionSet[0].remotePeers[1].name)
      self.assertEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser)
      self.assertEqual(None, actionSet.actionSet[0].remotePeers[1].localUser)
      self.assertEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand)
      self.assertEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand)

   def testManagedPeer_141(self):
      """
      Test with actions=[ collect, one ], extensions=[ (one, index 50) ], two peers (both managed), managed=True, local=False
      """
      actions = [ "collect", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.assertTrue(len(actionSet.actionSet) == 2)
      self.assertEqual(50, actionSet.actionSet[0].index)
      self.assertEqual("one", actionSet.actionSet[0].name)
      self.assertFalse(actionSet.actionSet[0].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 2)
      self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.assertEqual("remote2", actionSet.actionSet[0].remotePeers[1].name)
      self.assertEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser)
      self.assertEqual(None, actionSet.actionSet[0].remotePeers[1].localUser)
      self.assertEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand)
      self.assertEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand)
      self.assertEqual(100, actionSet.actionSet[1].index)
      self.assertEqual("collect", actionSet.actionSet[1].name)
      self.assertFalse(actionSet.actionSet[1].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[1].remotePeers) == 2)
      self.assertEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)
      self.assertEqual("remote2", actionSet.actionSet[1].remotePeers[1].name)
      self.assertEqual("ruser2",
actionSet.actionSet[1].remotePeers[1].remoteUser) self.assertEqual(None, actionSet.actionSet[1].remotePeers[1].localUser) self.assertEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand) self.assertEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand) def testManagedPeer_142(self): """ Test with actions=[ store, one ], extensions=[ (one, index 50) ], two peers (both managed), managed=True, local=False """ actions = [ "store", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(50, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertFalse(actionSet.actionSet[0].remotePeers is None) self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 2) self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.assertEqual("remote2", actionSet.actionSet[0].remotePeers[1].name) self.assertEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser) self.assertEqual(None, actionSet.actionSet[0].remotePeers[1].localUser) self.assertEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand) self.assertEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand) def 
testManagedPeer_143(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 150) ], two peers (both managed), managed=True, local=False """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertFalse(actionSet.actionSet[0].remotePeers is None) self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 2) self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.assertEqual("remote2", actionSet.actionSet[0].remotePeers[1].name) self.assertEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser) self.assertEqual(None, actionSet.actionSet[0].remotePeers[1].localUser) self.assertEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand) self.assertEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand) self.assertEqual(150, actionSet.actionSet[1].index) self.assertEqual("one", actionSet.actionSet[1].name) self.assertFalse(actionSet.actionSet[1].remotePeers is None) self.assertTrue(len(actionSet.actionSet[1].remotePeers) == 2) self.assertEqual("remote", 
actionSet.actionSet[1].remotePeers[0].name) self.assertEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.assertEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.assertEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.assertEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) self.assertEqual("remote2", actionSet.actionSet[1].remotePeers[1].name) self.assertEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser) self.assertEqual(None, actionSet.actionSet[1].remotePeers[1].localUser) self.assertEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand) self.assertEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand) def testManagedPeer_144(self): """ Test with actions=[ store, one ], extensions=[ (one, index 150) ], two peers (both managed), managed=True, local=False """ actions = [ "store", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(150, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertFalse(actionSet.actionSet[0].remotePeers is None) self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 2) self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.assertEqual("cback", 
actionSet.actionSet[0].remotePeers[0].cbackCommand) self.assertEqual("remote2", actionSet.actionSet[0].remotePeers[1].name) self.assertEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser) self.assertEqual(None, actionSet.actionSet[0].remotePeers[1].localUser) self.assertEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand) self.assertEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand) def testManagedPeer_145(self): """ Test with actions=[ collect, stage, store, purge ], extensions=[], two peers (both managed), managed=True, local=False """ actions = [ "collect", "stage", "store", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.assertTrue(len(actionSet.actionSet) == 2) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertFalse(actionSet.actionSet[0].remotePeers is None) self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 2) self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.assertEqual("remote2", actionSet.actionSet[0].remotePeers[1].name) self.assertEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser) self.assertEqual(None, actionSet.actionSet[0].remotePeers[1].localUser) self.assertEqual("rsh2", 
actionSet.actionSet[0].remotePeers[1].rshCommand) self.assertEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand) self.assertEqual(400, actionSet.actionSet[1].index) self.assertEqual("purge", actionSet.actionSet[1].name) self.assertFalse(actionSet.actionSet[1].remotePeers is None) self.assertTrue(len(actionSet.actionSet[1].remotePeers) == 2) self.assertEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.assertEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.assertEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.assertEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.assertEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) self.assertEqual("remote2", actionSet.actionSet[1].remotePeers[1].name) self.assertEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser) self.assertEqual(None, actionSet.actionSet[1].remotePeers[1].localUser) self.assertEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand) self.assertEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand) def testManagedPeer_146(self): """ Test with actions=[ collect, stage, store, purge, one, two, three, four, five ], extensions=[ (index 50, 150, 250, 350, 450)], two peers (both managed), managed=True, local=False """ actions = [ "collect", "stage", "store", "purge", "one", "two", "three", "four", "five", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ExtendedAction("two", "os.path", "isfile", 150), ExtendedAction("three", "os.path", "islink", 250), ExtendedAction("four", "os.path", "isabs", 350), ExtendedAction("five", "os.path", "exists", 450), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", 
"rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.assertTrue(len(actionSet.actionSet) == 3) self.assertEqual(50, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertFalse(actionSet.actionSet[0].remotePeers is None) self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 2) self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.assertEqual("remote2", actionSet.actionSet[0].remotePeers[1].name) self.assertEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser) self.assertEqual(None, actionSet.actionSet[0].remotePeers[1].localUser) self.assertEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand) self.assertEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand) self.assertEqual(100, actionSet.actionSet[1].index) self.assertEqual("collect", actionSet.actionSet[1].name) self.assertFalse(actionSet.actionSet[1].remotePeers is None) self.assertTrue(len(actionSet.actionSet[1].remotePeers) == 2) self.assertEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.assertEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.assertEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.assertEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.assertEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) self.assertEqual("remote2", actionSet.actionSet[1].remotePeers[1].name) self.assertEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser) self.assertEqual(None, actionSet.actionSet[1].remotePeers[1].localUser) self.assertEqual("rsh2", 
actionSet.actionSet[1].remotePeers[1].rshCommand) self.assertEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand) self.assertEqual(400, actionSet.actionSet[2].index) self.assertEqual("purge", actionSet.actionSet[2].name) self.assertFalse(actionSet.actionSet[2].remotePeers is None) self.assertTrue(len(actionSet.actionSet[2].remotePeers) == 2) self.assertEqual("remote", actionSet.actionSet[2].remotePeers[0].name) self.assertEqual("ruser", actionSet.actionSet[2].remotePeers[0].remoteUser) self.assertEqual(None, actionSet.actionSet[2].remotePeers[0].localUser) self.assertEqual("rsh", actionSet.actionSet[2].remotePeers[0].rshCommand) self.assertEqual("cback", actionSet.actionSet[2].remotePeers[0].cbackCommand) self.assertEqual("remote2", actionSet.actionSet[2].remotePeers[1].name) self.assertEqual("ruser2", actionSet.actionSet[2].remotePeers[1].remoteUser) self.assertEqual(None, actionSet.actionSet[2].remotePeers[1].localUser) self.assertEqual("rsh2", actionSet.actionSet[2].remotePeers[1].rshCommand) self.assertEqual("cback2", actionSet.actionSet[2].remotePeers[1].cbackCommand) def testManagedPeer_147(self): """ Test with actions=[ one ], extensions=[ (one, index 50) ], two peers (both managed), managed=True, local=False """ actions = [ "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.assertTrue(len(actionSet.actionSet) == 1) self.assertEqual(50, actionSet.actionSet[0].index) self.assertEqual("one", actionSet.actionSet[0].name) self.assertFalse(actionSet.actionSet[0].remotePeers is None) 
      self.assertTrue(len(actionSet.actionSet[0].remotePeers) == 2)
      self.assertEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.assertEqual("remote2", actionSet.actionSet[0].remotePeers[1].name)
      self.assertEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser)
      self.assertEqual(None, actionSet.actionSet[0].remotePeers[1].localUser)
      self.assertEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand)
      self.assertEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand)

   def testManagedPeer_148(self):
      """
      Test with actions=[ collect ], extensions=[], two peers (both managed), managed=True, local=True
      """
      actions = [ "collect", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", None, "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.assertFalse(actionSet.actionSet is None)
      self.assertTrue(len(actionSet.actionSet) == 2)
      self.assertEqual(100, actionSet.actionSet[0].index)
      self.assertEqual("collect", actionSet.actionSet[0].name)
      self.assertEqual(None, actionSet.actionSet[0].preHooks)
      self.assertEqual(None, actionSet.actionSet[0].postHooks)
      self.assertEqual(executeCollect, actionSet.actionSet[0].function)
      self.assertEqual(100, actionSet.actionSet[1].index)
      self.assertEqual("collect", actionSet.actionSet[1].name)
      self.assertFalse(actionSet.actionSet[1].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[1].remotePeers) == 2)
      self.assertEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)
      self.assertEqual("remote2", actionSet.actionSet[1].remotePeers[1].name)
      self.assertEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[1].localUser)
      self.assertEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand)
      self.assertEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand)

   def testManagedPeer_149(self):
      """
      Test with actions=[ stage ], extensions=[], two peers (both managed), managed=True, local=True
      """
      actions = [ "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.assertFalse(actionSet.actionSet is None)
      self.assertTrue(len(actionSet.actionSet) == 1)
      self.assertEqual(200, actionSet.actionSet[0].index)
      self.assertEqual("stage", actionSet.actionSet[0].name)
      self.assertEqual(None, actionSet.actionSet[0].preHooks)
      self.assertEqual(None, actionSet.actionSet[0].postHooks)
      self.assertEqual(executeStage, actionSet.actionSet[0].function)

   def testManagedPeer_150(self):
      """
      Test with actions=[ store ], extensions=[], two peers (both managed), managed=True, local=True
      """
      actions = [ "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.assertFalse(actionSet.actionSet is None)
      self.assertTrue(len(actionSet.actionSet) == 1)
      self.assertEqual(300, actionSet.actionSet[0].index)
      self.assertEqual("store", actionSet.actionSet[0].name)
      self.assertEqual(None, actionSet.actionSet[0].preHooks)
      self.assertEqual(None, actionSet.actionSet[0].postHooks)
      self.assertEqual(executeStore, actionSet.actionSet[0].function)

   def testManagedPeer_151(self):
      """
      Test with actions=[ purge ], extensions=[], two peers (both managed), managed=True, local=True
      """
      actions = [ "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.assertFalse(actionSet.actionSet is None)
      self.assertTrue(len(actionSet.actionSet) == 2)
      self.assertEqual(400, actionSet.actionSet[0].index)
      self.assertEqual("purge", actionSet.actionSet[0].name)
      self.assertEqual(None, actionSet.actionSet[0].preHooks)
      self.assertEqual(None, actionSet.actionSet[0].postHooks)
      self.assertEqual(executePurge, actionSet.actionSet[0].function)
      self.assertEqual(400, actionSet.actionSet[1].index)
      self.assertEqual("purge", actionSet.actionSet[1].name)
      self.assertFalse(actionSet.actionSet[1].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[1].remotePeers) == 2)
      self.assertEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)
      self.assertEqual("remote2", actionSet.actionSet[1].remotePeers[1].name)
      self.assertEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[1].localUser)
      self.assertEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand)
      self.assertEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand)

   def testManagedPeer_152(self):
      """
      Test with actions=[ all ], extensions=[], two peers (both managed), managed=True, local=True
      """
      actions = [ "all", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.assertFalse(actionSet.actionSet is None)
      self.assertTrue(len(actionSet.actionSet) == 6)
      self.assertEqual(100, actionSet.actionSet[0].index)
      self.assertEqual("collect", actionSet.actionSet[0].name)
      self.assertEqual(None, actionSet.actionSet[0].preHooks)
      self.assertEqual(None, actionSet.actionSet[0].postHooks)
      self.assertEqual(executeCollect, actionSet.actionSet[0].function)
      self.assertEqual(100, actionSet.actionSet[1].index)
      self.assertEqual("collect", actionSet.actionSet[1].name)
      self.assertFalse(actionSet.actionSet[1].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[1].remotePeers) == 2)
      self.assertEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)
      self.assertEqual("remote2", actionSet.actionSet[1].remotePeers[1].name)
      self.assertEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[1].localUser)
      self.assertEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand)
      self.assertEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand)
      self.assertEqual(200, actionSet.actionSet[2].index)
      self.assertEqual("stage", actionSet.actionSet[2].name)
      self.assertEqual(None, actionSet.actionSet[2].preHooks)
      self.assertEqual(None, actionSet.actionSet[2].postHooks)
      self.assertEqual(executeStage, actionSet.actionSet[2].function)
      self.assertEqual(300, actionSet.actionSet[3].index)
      self.assertEqual("store", actionSet.actionSet[3].name)
      self.assertEqual(None, actionSet.actionSet[3].preHooks)
      self.assertEqual(None, actionSet.actionSet[3].postHooks)
      self.assertEqual(executeStore, actionSet.actionSet[3].function)
      self.assertEqual(400, actionSet.actionSet[4].index)
      self.assertEqual("purge", actionSet.actionSet[4].name)
      self.assertEqual(None, actionSet.actionSet[4].preHooks)
      self.assertEqual(None, actionSet.actionSet[4].postHooks)
      self.assertEqual(executePurge, actionSet.actionSet[4].function)
      self.assertEqual(400, actionSet.actionSet[5].index)
      self.assertEqual("purge", actionSet.actionSet[5].name)
      self.assertFalse(actionSet.actionSet[5].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[5].remotePeers) == 2)
      self.assertEqual("remote", actionSet.actionSet[5].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[5].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[5].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[5].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[5].remotePeers[0].cbackCommand)
      self.assertEqual("remote2", actionSet.actionSet[5].remotePeers[1].name)
      self.assertEqual("ruser2", actionSet.actionSet[5].remotePeers[1].remoteUser)
      self.assertEqual(None, actionSet.actionSet[5].remotePeers[1].localUser)
      self.assertEqual("rsh2", actionSet.actionSet[5].remotePeers[1].rshCommand)
      self.assertEqual("cback2", actionSet.actionSet[5].remotePeers[1].cbackCommand)

   def testManagedPeer_153(self):
      """
      Test with actions=[ rebuild ], extensions=[], two peers (both managed), managed=True, local=True
      """
      actions = [ "rebuild", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.assertTrue(len(actionSet.actionSet) == 1)
      self.assertEqual(0, actionSet.actionSet[0].index)
      self.assertEqual("rebuild", actionSet.actionSet[0].name)
      self.assertEqual(None, actionSet.actionSet[0].preHooks)
      self.assertEqual(None, actionSet.actionSet[0].postHooks)
      self.assertEqual(executeRebuild, actionSet.actionSet[0].function)

   def testManagedPeer_154(self):
      """
      Test with actions=[ validate ], extensions=[], two peers (both managed), managed=True, local=True
      """
      actions = [ "validate", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.assertTrue(len(actionSet.actionSet) == 1)
      self.assertEqual(0, actionSet.actionSet[0].index)
      self.assertEqual("validate", actionSet.actionSet[0].name)
      self.assertEqual(None, actionSet.actionSet[0].preHooks)
      self.assertEqual(None, actionSet.actionSet[0].postHooks)
      self.assertEqual(executeValidate, actionSet.actionSet[0].function)

   def testManagedPeer_155(self):
      """
      Test with actions=[ collect, stage ], extensions=[], two peers (both managed), managed=True, local=True
      """
      actions = [ "collect", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.assertTrue(len(actionSet.actionSet) == 3)
      self.assertEqual(100, actionSet.actionSet[0].index)
      self.assertEqual("collect", actionSet.actionSet[0].name)
      self.assertEqual(None, actionSet.actionSet[0].preHooks)
      self.assertEqual(None, actionSet.actionSet[0].postHooks)
      self.assertEqual(executeCollect, actionSet.actionSet[0].function)
      self.assertEqual(100, actionSet.actionSet[1].index)
      self.assertEqual("collect", actionSet.actionSet[1].name)
      self.assertFalse(actionSet.actionSet[1].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[1].remotePeers) == 2)
      self.assertEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)
      self.assertEqual("remote2", actionSet.actionSet[1].remotePeers[1].name)
      self.assertEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[1].localUser)
      self.assertEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand)
      self.assertEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand)
      self.assertEqual(200, actionSet.actionSet[2].index)
      self.assertEqual("stage", actionSet.actionSet[2].name)
      self.assertEqual(None, actionSet.actionSet[2].preHooks)
      self.assertEqual(None, actionSet.actionSet[2].postHooks)
      self.assertEqual(executeStage, actionSet.actionSet[2].function)

   def testManagedPeer_156(self):
      """
      Test with actions=[ collect, store ], extensions=[], two peers (both managed), managed=True, local=True
      """
      actions = [ "collect", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.assertTrue(len(actionSet.actionSet) == 3)
      self.assertEqual(100, actionSet.actionSet[0].index)
      self.assertEqual("collect", actionSet.actionSet[0].name)
      self.assertEqual(None, actionSet.actionSet[0].preHooks)
      self.assertEqual(None, actionSet.actionSet[0].postHooks)
      self.assertEqual(executeCollect, actionSet.actionSet[0].function)
      self.assertEqual(100, actionSet.actionSet[1].index)
      self.assertEqual("collect", actionSet.actionSet[1].name)
      self.assertFalse(actionSet.actionSet[1].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[1].remotePeers) == 2)
      self.assertEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)
      self.assertEqual("remote2", actionSet.actionSet[1].remotePeers[1].name)
      self.assertEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[1].localUser)
      self.assertEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand)
      self.assertEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand)
      self.assertEqual(300, actionSet.actionSet[2].index)
      self.assertEqual("store", actionSet.actionSet[2].name)
      self.assertEqual(None, actionSet.actionSet[2].preHooks)
      self.assertEqual(None, actionSet.actionSet[2].postHooks)
      self.assertEqual(executeStore, actionSet.actionSet[2].function)

   def testManagedPeer_157(self):
      """
      Test with actions=[ collect, purge ], extensions=[], two peers (both managed), managed=True, local=True
      """
      actions = [ "collect", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.assertTrue(len(actionSet.actionSet) == 4)
      self.assertEqual(100, actionSet.actionSet[0].index)
      self.assertEqual("collect", actionSet.actionSet[0].name)
      self.assertEqual(None, actionSet.actionSet[0].preHooks)
      self.assertEqual(None, actionSet.actionSet[0].postHooks)
      self.assertEqual(executeCollect, actionSet.actionSet[0].function)
      self.assertEqual(100, actionSet.actionSet[1].index)
      self.assertEqual("collect", actionSet.actionSet[1].name)
      self.assertFalse(actionSet.actionSet[1].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[1].remotePeers) == 2)
      self.assertEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)
      self.assertEqual("remote2", actionSet.actionSet[1].remotePeers[1].name)
      self.assertEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[1].localUser)
      self.assertEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand)
      self.assertEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand)
      self.assertEqual(400, actionSet.actionSet[2].index)
      self.assertEqual("purge", actionSet.actionSet[2].name)
      self.assertEqual(None, actionSet.actionSet[2].preHooks)
      self.assertEqual(None, actionSet.actionSet[2].postHooks)
      self.assertEqual(executePurge, actionSet.actionSet[2].function)
      self.assertEqual(400, actionSet.actionSet[3].index)
      self.assertEqual("purge", actionSet.actionSet[3].name)
      self.assertFalse(actionSet.actionSet[3].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[3].remotePeers) == 2)
      self.assertEqual("remote", actionSet.actionSet[3].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[3].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[3].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[3].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[3].remotePeers[0].cbackCommand)
      self.assertEqual("remote2", actionSet.actionSet[3].remotePeers[1].name)
      self.assertEqual("ruser2", actionSet.actionSet[3].remotePeers[1].remoteUser)
      self.assertEqual(None,
actionSet.actionSet[3].remotePeers[1].localUser) self.assertEqual("rsh2", actionSet.actionSet[3].remotePeers[1].rshCommand) self.assertEqual("cback2", actionSet.actionSet[3].remotePeers[1].cbackCommand) def testManagedPeer_158(self): """ Test with actions=[ stage, collect ], extensions=[], two peers (both managed), managed=True, local=True """ actions = [ "stage", "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.assertTrue(len(actionSet.actionSet) == 3) self.assertEqual(100, actionSet.actionSet[0].index) self.assertEqual("collect", actionSet.actionSet[0].name) self.assertEqual(None, actionSet.actionSet[0].preHooks) self.assertEqual(None, actionSet.actionSet[0].postHooks) self.assertEqual(executeCollect, actionSet.actionSet[0].function) self.assertEqual(100, actionSet.actionSet[1].index) self.assertEqual("collect", actionSet.actionSet[1].name) self.assertFalse(actionSet.actionSet[1].remotePeers is None) self.assertTrue(len(actionSet.actionSet[1].remotePeers) == 2) self.assertEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.assertEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.assertEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.assertEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.assertEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) self.assertEqual("remote2", actionSet.actionSet[1].remotePeers[1].name) self.assertEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser) self.assertEqual(None, actionSet.actionSet[1].remotePeers[1].localUser) 
      self.assertEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand)
      self.assertEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand)
      self.assertEqual(200, actionSet.actionSet[2].index)
      self.assertEqual("stage", actionSet.actionSet[2].name)
      self.assertEqual(None, actionSet.actionSet[2].preHooks)
      self.assertEqual(None, actionSet.actionSet[2].postHooks)
      self.assertEqual(executeStage, actionSet.actionSet[2].function)

   def testManagedPeer_159(self):
      """
      Test with actions=[ stage, stage ], extensions=[], two peers (both managed), managed=True, local=True
      """
      actions = [ "stage", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.assertTrue(len(actionSet.actionSet) == 2)
      self.assertEqual(200, actionSet.actionSet[0].index)
      self.assertEqual("stage", actionSet.actionSet[0].name)
      self.assertEqual(None, actionSet.actionSet[0].preHooks)
      self.assertEqual(None, actionSet.actionSet[0].postHooks)
      self.assertEqual(executeStage, actionSet.actionSet[0].function)
      self.assertEqual(200, actionSet.actionSet[1].index)
      self.assertEqual("stage", actionSet.actionSet[1].name)
      self.assertEqual(None, actionSet.actionSet[1].preHooks)
      self.assertEqual(None, actionSet.actionSet[1].postHooks)
      self.assertEqual(executeStage, actionSet.actionSet[1].function)

   def testManagedPeer_160(self):
      """
      Test with actions=[ stage, store ], extensions=[], two peers (both managed), managed=True, local=True
      """
      actions = [ "stage", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.assertTrue(len(actionSet.actionSet) == 2)
      self.assertEqual(200, actionSet.actionSet[0].index)
      self.assertEqual("stage", actionSet.actionSet[0].name)
      self.assertEqual(None, actionSet.actionSet[0].preHooks)
      self.assertEqual(None, actionSet.actionSet[0].postHooks)
      self.assertEqual(executeStage, actionSet.actionSet[0].function)
      self.assertEqual(300, actionSet.actionSet[1].index)
      self.assertEqual("store", actionSet.actionSet[1].name)
      self.assertEqual(None, actionSet.actionSet[1].preHooks)
      self.assertEqual(None, actionSet.actionSet[1].postHooks)
      self.assertEqual(executeStore, actionSet.actionSet[1].function)

   def testManagedPeer_161(self):
      """
      Test with actions=[ stage, purge ], extensions=[], two peers (both managed), managed=True, local=True
      """
      actions = [ "stage", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.assertTrue(len(actionSet.actionSet) == 3)
      self.assertEqual(200, actionSet.actionSet[0].index)
      self.assertEqual("stage", actionSet.actionSet[0].name)
      self.assertEqual(None, actionSet.actionSet[0].preHooks)
      self.assertEqual(None, actionSet.actionSet[0].postHooks)
      self.assertEqual(executeStage, actionSet.actionSet[0].function)
      self.assertEqual(400, actionSet.actionSet[1].index)
      self.assertEqual("purge", actionSet.actionSet[1].name)
      self.assertEqual(None, actionSet.actionSet[1].preHooks)
      self.assertEqual(None, actionSet.actionSet[1].postHooks)
      self.assertEqual(executePurge, actionSet.actionSet[1].function)
      self.assertEqual(400, actionSet.actionSet[2].index)
      self.assertEqual("purge", actionSet.actionSet[2].name)
      self.assertFalse(actionSet.actionSet[2].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[2].remotePeers) == 2)
      self.assertEqual("remote", actionSet.actionSet[2].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[2].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[2].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[2].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[2].remotePeers[0].cbackCommand)
      self.assertEqual("remote2", actionSet.actionSet[2].remotePeers[1].name)
      self.assertEqual("ruser2", actionSet.actionSet[2].remotePeers[1].remoteUser)
      self.assertEqual(None, actionSet.actionSet[2].remotePeers[1].localUser)
      self.assertEqual("rsh2", actionSet.actionSet[2].remotePeers[1].rshCommand)
      self.assertEqual("cback2", actionSet.actionSet[2].remotePeers[1].cbackCommand)

   def testManagedPeer_162(self):
      """
      Test with actions=[ collect, one ], extensions=[ (one, index 50) ], two peers (both managed), managed=True, local=True
      """
      actions = [ "collect", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.assertTrue(len(actionSet.actionSet) == 4)
      self.assertEqual(50, actionSet.actionSet[0].index)
      self.assertEqual("one", actionSet.actionSet[0].name)
      self.assertEqual(None, actionSet.actionSet[0].preHooks)
      self.assertEqual(None, actionSet.actionSet[0].postHooks)
      self.assertEqual(isdir, actionSet.actionSet[0].function)
      self.assertEqual(50, actionSet.actionSet[1].index)
      self.assertEqual("one", actionSet.actionSet[1].name)
      self.assertFalse(actionSet.actionSet[1].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[1].remotePeers) == 2)
      self.assertEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)
      self.assertEqual("remote2", actionSet.actionSet[1].remotePeers[1].name)
      self.assertEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[1].localUser)
      self.assertEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand)
      self.assertEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand)
      self.assertEqual(100, actionSet.actionSet[2].index)
      self.assertEqual("collect", actionSet.actionSet[2].name)
      self.assertEqual(None, actionSet.actionSet[2].preHooks)
      self.assertEqual(None, actionSet.actionSet[2].postHooks)
      self.assertEqual(executeCollect, actionSet.actionSet[2].function)
      self.assertEqual(100, actionSet.actionSet[3].index)
      self.assertEqual("collect", actionSet.actionSet[3].name)
      self.assertFalse(actionSet.actionSet[3].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[3].remotePeers) == 2)
      self.assertEqual("remote", actionSet.actionSet[3].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[3].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[3].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[3].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[3].remotePeers[0].cbackCommand)
      self.assertEqual("remote2", actionSet.actionSet[3].remotePeers[1].name)
      self.assertEqual("ruser2", actionSet.actionSet[3].remotePeers[1].remoteUser)
      self.assertEqual(None, actionSet.actionSet[3].remotePeers[1].localUser)
      self.assertEqual("rsh2", actionSet.actionSet[3].remotePeers[1].rshCommand)
      self.assertEqual("cback2", actionSet.actionSet[3].remotePeers[1].cbackCommand)

   def testManagedPeer_163(self):
      """
      Test with actions=[ store, one ], extensions=[ (one, index 50) ], two peers (both managed), managed=True, local=True
      """
      actions = [ "store", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.assertTrue(len(actionSet.actionSet) == 3)
      self.assertEqual(50, actionSet.actionSet[0].index)
      self.assertEqual("one", actionSet.actionSet[0].name)
      self.assertEqual(None, actionSet.actionSet[0].preHooks)
      self.assertEqual(None, actionSet.actionSet[0].postHooks)
      self.assertEqual(isdir, actionSet.actionSet[0].function)
      self.assertEqual(50, actionSet.actionSet[1].index)
      self.assertEqual("one", actionSet.actionSet[1].name)
      self.assertFalse(actionSet.actionSet[1].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[1].remotePeers) == 2)
      self.assertEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)
      self.assertEqual("remote2", actionSet.actionSet[1].remotePeers[1].name)
      self.assertEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[1].localUser)
      self.assertEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand)
      self.assertEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand)
      self.assertEqual(300, actionSet.actionSet[2].index)
      self.assertEqual("store", actionSet.actionSet[2].name)
      self.assertEqual(None, actionSet.actionSet[2].preHooks)
      self.assertEqual(None, actionSet.actionSet[2].postHooks)
      self.assertEqual(executeStore, actionSet.actionSet[2].function)

   def testManagedPeer_164(self):
      """
      Test with actions=[ collect, one ], extensions=[ (one, index 150) ], two peers (both managed), managed=True, local=True
      """
      actions = [ "collect", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.assertTrue(len(actionSet.actionSet) == 4)
      self.assertEqual(100, actionSet.actionSet[0].index)
      self.assertEqual("collect", actionSet.actionSet[0].name)
      self.assertEqual(None, actionSet.actionSet[0].preHooks)
      self.assertEqual(None, actionSet.actionSet[0].postHooks)
      self.assertEqual(executeCollect, actionSet.actionSet[0].function)
      self.assertEqual(100, actionSet.actionSet[1].index)
      self.assertEqual("collect", actionSet.actionSet[1].name)
      self.assertFalse(actionSet.actionSet[1].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[1].remotePeers) == 2)
      self.assertEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)
      self.assertEqual("remote2", actionSet.actionSet[1].remotePeers[1].name)
      self.assertEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[1].localUser)
      self.assertEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand)
      self.assertEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand)
      self.assertEqual(150, actionSet.actionSet[2].index)
      self.assertEqual("one", actionSet.actionSet[2].name)
      self.assertEqual(None, actionSet.actionSet[2].preHooks)
      self.assertEqual(None, actionSet.actionSet[2].postHooks)
      self.assertEqual(isdir, actionSet.actionSet[2].function)
      self.assertEqual(150, actionSet.actionSet[3].index)
      self.assertEqual("one", actionSet.actionSet[3].name)
      self.assertFalse(actionSet.actionSet[3].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[3].remotePeers) == 2)
      self.assertEqual("remote", actionSet.actionSet[3].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[3].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[3].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[3].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[3].remotePeers[0].cbackCommand)
      self.assertEqual("remote2", actionSet.actionSet[3].remotePeers[1].name)
      self.assertEqual("ruser2", actionSet.actionSet[3].remotePeers[1].remoteUser)
      self.assertEqual(None, actionSet.actionSet[3].remotePeers[1].localUser)
      self.assertEqual("rsh2", actionSet.actionSet[3].remotePeers[1].rshCommand)
      self.assertEqual("cback2", actionSet.actionSet[3].remotePeers[1].cbackCommand)

   def testManagedPeer_165(self):
      """
      Test with actions=[ store, one ], extensions=[ (one, index 150) ], two peers (both managed), managed=True, local=True
      """
      actions = [ "store", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.assertTrue(len(actionSet.actionSet) == 3)
      self.assertEqual(150, actionSet.actionSet[0].index)
      self.assertEqual("one", actionSet.actionSet[0].name)
      self.assertEqual(None, actionSet.actionSet[0].preHooks)
      self.assertEqual(None, actionSet.actionSet[0].postHooks)
      self.assertEqual(isdir, actionSet.actionSet[0].function)
      self.assertEqual(150, actionSet.actionSet[1].index)
      self.assertEqual("one", actionSet.actionSet[1].name)
      self.assertFalse(actionSet.actionSet[1].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[1].remotePeers) == 2)
      self.assertEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)
      self.assertEqual("remote2", actionSet.actionSet[1].remotePeers[1].name)
      self.assertEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[1].localUser)
      self.assertEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand)
      self.assertEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand)
      self.assertEqual(300, actionSet.actionSet[2].index)
      self.assertEqual("store", actionSet.actionSet[2].name)
      self.assertEqual(None, actionSet.actionSet[2].preHooks)
      self.assertEqual(None, actionSet.actionSet[2].postHooks)
      self.assertEqual(executeStore, actionSet.actionSet[2].function)

   def testManagedPeer_166(self):
      """
      Test with actions=[ collect, stage, store, purge ], extensions=[], two peers (both managed), managed=True, local=True
      """
      actions = [ "collect", "stage", "store", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.assertTrue(len(actionSet.actionSet) == 6)
      self.assertEqual(100, actionSet.actionSet[0].index)
      self.assertEqual("collect", actionSet.actionSet[0].name)
      self.assertEqual(None, actionSet.actionSet[0].preHooks)
      self.assertEqual(None, actionSet.actionSet[0].postHooks)
      self.assertEqual(executeCollect, actionSet.actionSet[0].function)
      self.assertEqual(100, actionSet.actionSet[1].index)
      self.assertEqual("collect", actionSet.actionSet[1].name)
      self.assertFalse(actionSet.actionSet[1].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[1].remotePeers) == 2)
      self.assertEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)
      self.assertEqual("remote2", actionSet.actionSet[1].remotePeers[1].name)
      self.assertEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[1].localUser)
      self.assertEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand)
      self.assertEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand)
      self.assertEqual(200, actionSet.actionSet[2].index)
      self.assertEqual("stage", actionSet.actionSet[2].name)
      self.assertEqual(None, actionSet.actionSet[2].preHooks)
      self.assertEqual(None, actionSet.actionSet[2].postHooks)
      self.assertEqual(executeStage, actionSet.actionSet[2].function)
      self.assertEqual(300, actionSet.actionSet[3].index)
      self.assertEqual("store", actionSet.actionSet[3].name)
      self.assertEqual(None, actionSet.actionSet[3].preHooks)
      self.assertEqual(None, actionSet.actionSet[3].postHooks)
      self.assertEqual(executeStore, actionSet.actionSet[3].function)
      self.assertEqual(400, actionSet.actionSet[4].index)
      self.assertEqual("purge", actionSet.actionSet[4].name)
      self.assertEqual(None, actionSet.actionSet[4].preHooks)
      self.assertEqual(None, actionSet.actionSet[4].postHooks)
      self.assertEqual(executePurge, actionSet.actionSet[4].function)
      self.assertEqual(400, actionSet.actionSet[5].index)
      self.assertEqual("purge", actionSet.actionSet[5].name)
      self.assertFalse(actionSet.actionSet[5].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[5].remotePeers) == 2)
      self.assertEqual("remote", actionSet.actionSet[5].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[5].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[5].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[5].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[5].remotePeers[0].cbackCommand)
      self.assertEqual("remote2", actionSet.actionSet[5].remotePeers[1].name)
      self.assertEqual("ruser2", actionSet.actionSet[5].remotePeers[1].remoteUser)
      self.assertEqual(None, actionSet.actionSet[5].remotePeers[1].localUser)
      self.assertEqual("rsh2", actionSet.actionSet[5].remotePeers[1].rshCommand)
      self.assertEqual("cback2", actionSet.actionSet[5].remotePeers[1].cbackCommand)

   def testManagedPeer_167(self):
      """
      Test with actions=[ collect, stage, store, purge, one, two ],
      extensions=[ (one, index 50), (two, index 150), (three, index 250), (four, index 350), (five, index 450) ],
      two peers (both managed), managed=True, local=True
      """
      actions = [ "collect", "stage", "store", "purge", "one", "two", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50),
                                      ExtendedAction("two", "os.path", "isfile", 150),
                                      ExtendedAction("three", "os.path", "islink", 250),
                                      ExtendedAction("four", "os.path", "isabs", 350),
                                      ExtendedAction("five", "os.path", "exists", 450), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.assertTrue(len(actionSet.actionSet) == 9)
      self.assertEqual(50, actionSet.actionSet[0].index)
      self.assertEqual("one", actionSet.actionSet[0].name)
      self.assertEqual(None, actionSet.actionSet[0].preHooks)
      self.assertEqual(None, actionSet.actionSet[0].postHooks)
      self.assertEqual(isdir, actionSet.actionSet[0].function)
      self.assertEqual(50, actionSet.actionSet[1].index)
      self.assertEqual("one", actionSet.actionSet[1].name)
      self.assertFalse(actionSet.actionSet[1].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[1].remotePeers) == 2)
      self.assertEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)
      self.assertEqual("remote2", actionSet.actionSet[1].remotePeers[1].name)
      self.assertEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[1].localUser)
      self.assertEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand)
      self.assertEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand)
      self.assertEqual(100, actionSet.actionSet[2].index)
      self.assertEqual("collect", actionSet.actionSet[2].name)
      self.assertEqual(None, actionSet.actionSet[2].preHooks)
      self.assertEqual(None, actionSet.actionSet[2].postHooks)
      self.assertEqual(executeCollect, actionSet.actionSet[2].function)
      self.assertEqual(100, actionSet.actionSet[3].index)
      self.assertEqual("collect", actionSet.actionSet[3].name)
      self.assertFalse(actionSet.actionSet[3].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[3].remotePeers) == 2)
      self.assertEqual("remote", actionSet.actionSet[3].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[3].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[3].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[3].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[3].remotePeers[0].cbackCommand)
      self.assertEqual("remote2", actionSet.actionSet[3].remotePeers[1].name)
      self.assertEqual("ruser2", actionSet.actionSet[3].remotePeers[1].remoteUser)
      self.assertEqual(None, actionSet.actionSet[3].remotePeers[1].localUser)
      self.assertEqual("rsh2", actionSet.actionSet[3].remotePeers[1].rshCommand)
      self.assertEqual("cback2", actionSet.actionSet[3].remotePeers[1].cbackCommand)
      self.assertEqual(150, actionSet.actionSet[4].index)
      self.assertEqual("two", actionSet.actionSet[4].name)
      self.assertEqual(None, actionSet.actionSet[4].preHooks)
      self.assertEqual(None, actionSet.actionSet[4].postHooks)
      self.assertEqual(isfile, actionSet.actionSet[4].function)
      self.assertEqual(200, actionSet.actionSet[5].index)
      self.assertEqual("stage", actionSet.actionSet[5].name)
      self.assertEqual(None, actionSet.actionSet[5].preHooks)
      self.assertEqual(None, actionSet.actionSet[5].postHooks)
      self.assertEqual(executeStage, actionSet.actionSet[5].function)
      self.assertEqual(300, actionSet.actionSet[6].index)
      self.assertEqual("store", actionSet.actionSet[6].name)
      self.assertEqual(None, actionSet.actionSet[6].preHooks)
      self.assertEqual(None, actionSet.actionSet[6].postHooks)
      self.assertEqual(executeStore, actionSet.actionSet[6].function)
      self.assertEqual(400, actionSet.actionSet[7].index)
      self.assertEqual("purge", actionSet.actionSet[7].name)
      self.assertEqual(None, actionSet.actionSet[7].preHooks)
      self.assertEqual(None, actionSet.actionSet[7].postHooks)
      self.assertEqual(executePurge, actionSet.actionSet[7].function)
      self.assertEqual(400, actionSet.actionSet[8].index)
      self.assertEqual("purge", actionSet.actionSet[8].name)
      self.assertFalse(actionSet.actionSet[8].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[8].remotePeers) == 2)
      self.assertEqual("remote", actionSet.actionSet[8].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[8].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[8].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[8].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[8].remotePeers[0].cbackCommand)
      self.assertEqual("remote2", actionSet.actionSet[8].remotePeers[1].name)
      self.assertEqual("ruser2", actionSet.actionSet[8].remotePeers[1].remoteUser)
      self.assertEqual(None, actionSet.actionSet[8].remotePeers[1].localUser)
      self.assertEqual("rsh2", actionSet.actionSet[8].remotePeers[1].rshCommand)
      self.assertEqual("cback2", actionSet.actionSet[8].remotePeers[1].cbackCommand)

   def testManagedPeer_168(self):
      """
      Test with actions=[ one ], extensions=[ (one, index 50) ], two peers (both managed), managed=True, local=True
      """
      actions = [ "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.assertTrue(len(actionSet.actionSet) == 2)
      self.assertEqual(50, actionSet.actionSet[0].index)
      self.assertEqual("one", actionSet.actionSet[0].name)
      self.assertEqual(None, actionSet.actionSet[0].preHooks)
      self.assertEqual(None, actionSet.actionSet[0].postHooks)
      self.assertEqual(isdir, actionSet.actionSet[0].function)
      self.assertEqual(50, actionSet.actionSet[1].index)
      self.assertEqual("one", actionSet.actionSet[1].name)
      self.assertFalse(actionSet.actionSet[1].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[1].remotePeers) == 2)
      self.assertEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.assertEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.assertEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)
      self.assertEqual("remote2", actionSet.actionSet[1].remotePeers[1].name)
      self.assertEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser)
      self.assertEqual(None, actionSet.actionSet[1].remotePeers[1].localUser)
      self.assertEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand)
      self.assertEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand)

   def testManagedPeer_169(self):
      """
      Test to make sure that various options all seem to be pulled from the right places with mixed data.
      """
      actions = [ "collect", "stage", "store", "purge", "one", "two", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50),
                                      ExtendedAction("two", "os.path", "isfile", 150),
                                      ExtendedAction("three", "os.path", "islink", 250),
                                      ExtendedAction("four", "os.path", "isabs", 350),
                                      ExtendedAction("five", "os.path", "exists", 450), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      options.backupUser = "userZ"
      options.rshCommand = "rshZ"
      options.cbackCommand = "cbackZ"
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, None, None, None, "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", None, "rsh2", None, managed=True, managedActions=[ "stage", ]), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.assertTrue(len(actionSet.actionSet) == 10)
      self.assertEqual(50, actionSet.actionSet[0].index)
      self.assertEqual("one", actionSet.actionSet[0].name)
      self.assertEqual(None, actionSet.actionSet[0].preHooks)
      self.assertEqual(None, actionSet.actionSet[0].postHooks)
      self.assertEqual(isdir, actionSet.actionSet[0].function)
      self.assertEqual(50, actionSet.actionSet[1].index)
      self.assertEqual("one", actionSet.actionSet[1].name)
      self.assertFalse(actionSet.actionSet[1].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[1].remotePeers) == 1)
      self.assertEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.assertEqual("userZ", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.assertEqual("userZ", actionSet.actionSet[1].remotePeers[0].localUser)
      self.assertEqual("rshZ", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)
      self.assertEqual(100, actionSet.actionSet[2].index)
      self.assertEqual("collect", actionSet.actionSet[2].name)
      self.assertEqual(None, actionSet.actionSet[2].preHooks)
      self.assertEqual(None, actionSet.actionSet[2].postHooks)
      self.assertEqual(executeCollect, actionSet.actionSet[2].function)
      self.assertEqual(100, actionSet.actionSet[3].index)
      self.assertEqual("collect", actionSet.actionSet[3].name)
      self.assertFalse(actionSet.actionSet[3].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[3].remotePeers) == 1)
      self.assertEqual("remote", actionSet.actionSet[3].remotePeers[0].name)
      self.assertEqual("userZ", actionSet.actionSet[3].remotePeers[0].remoteUser)
      self.assertEqual("userZ", actionSet.actionSet[3].remotePeers[0].localUser)
      self.assertEqual("rshZ", actionSet.actionSet[3].remotePeers[0].rshCommand)
      self.assertEqual("cback", actionSet.actionSet[3].remotePeers[0].cbackCommand)
      self.assertEqual(150, actionSet.actionSet[4].index)
      self.assertEqual("two", actionSet.actionSet[4].name)
      self.assertEqual(None, actionSet.actionSet[4].preHooks)
      self.assertEqual(None, actionSet.actionSet[4].postHooks)
      self.assertEqual(isfile, actionSet.actionSet[4].function)
      self.assertEqual(200, actionSet.actionSet[5].index)
      self.assertEqual("stage", actionSet.actionSet[5].name)
      self.assertEqual(None, actionSet.actionSet[5].preHooks)
      self.assertEqual(None, actionSet.actionSet[5].postHooks)
      self.assertEqual(executeStage, actionSet.actionSet[5].function)
      self.assertEqual(200, actionSet.actionSet[6].index)
      self.assertEqual("stage", actionSet.actionSet[6].name)
      self.assertFalse(actionSet.actionSet[6].remotePeers is None)
      self.assertTrue(len(actionSet.actionSet[6].remotePeers) == 1)
      self.assertEqual("remote2", actionSet.actionSet[6].remotePeers[0].name)
      self.assertEqual("ruser2", actionSet.actionSet[6].remotePeers[0].remoteUser)
      self.assertEqual("userZ", actionSet.actionSet[6].remotePeers[0].localUser)
      self.assertEqual("rsh2", actionSet.actionSet[6].remotePeers[0].rshCommand)
      self.assertEqual("cbackZ", actionSet.actionSet[6].remotePeers[0].cbackCommand)
      self.assertEqual(300, actionSet.actionSet[7].index)
      self.assertEqual("store", actionSet.actionSet[7].name)
self.assertEqual(None, actionSet.actionSet[7].preHooks) self.assertEqual(None, actionSet.actionSet[7].postHooks) self.assertEqual(executeStore, actionSet.actionSet[7].function) self.assertEqual(400, actionSet.actionSet[8].index) self.assertEqual("purge", actionSet.actionSet[8].name) self.assertEqual(None, actionSet.actionSet[8].preHooks) self.assertEqual(None, actionSet.actionSet[8].postHooks) self.assertEqual(executePurge, actionSet.actionSet[8].function) self.assertEqual(400, actionSet.actionSet[9].index) self.assertEqual("purge", actionSet.actionSet[9].name) self.assertFalse(actionSet.actionSet[9].remotePeers is None) self.assertTrue(len(actionSet.actionSet[9].remotePeers) == 1) self.assertEqual("remote", actionSet.actionSet[9].remotePeers[0].name) self.assertEqual("userZ", actionSet.actionSet[9].remotePeers[0].remoteUser) self.assertEqual("userZ", actionSet.actionSet[9].remotePeers[0].localUser) self.assertEqual("rshZ", actionSet.actionSet[9].remotePeers[0].rshCommand) self.assertEqual("cback", actionSet.actionSet[9].remotePeers[0].cbackCommand) ####################################################################### # Suite definition ####################################################################### def suite(): """Returns a suite containing all the test cases in this module.""" tests = [ ] tests.append(unittest.makeSuite(TestFunctions, 'test')) tests.append(unittest.makeSuite(TestOptions, 'test')) tests.append(unittest.makeSuite(TestActionSet, 'test')) return unittest.TestSuite(tests) CedarBackup3-3.1.6/testcase/spantests.py0000664000175000017500000001204312560007330021677 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2007,2010,2015 Kenneth J. Pronovici. 
# All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Tests span tool functionality. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup3/tools/span.py. Code Coverage ============= This module contains individual tests for many of the public functions and classes implemented in tools/span.py. Where possible, we test functions that print output by passing a custom file descriptor. Sometimes, we only ensure that a function or method runs without failure, and we don't validate what its result is or what it prints out. Naming Conventions ================== I prefer to avoid large unit tests which validate more than one piece of functionality, and I prefer to avoid using overly descriptive (read: long) test names, as well. Instead, I use lots of very small tests that each validate one specific thing. These small tests are then named with an index number, yielding something like C{testAddDir_001} or C{testValidate_010}. Each method has a docstring describing what it's supposed to accomplish. 
I feel that this makes it easier to judge how important a given failure is, and also makes it somewhat easier to diagnose and fix individual problems. Full vs. Reduced Tests ====================== All of the tests in this module are considered safe to be run in an average build environment. There is no need to use a SPANTESTS_FULL environment variable to provide a "reduced feature set" test suite as for some of the other test modules. @author Kenneth J. Pronovici """ ######################################################################## # Import modules and do runtime validations ######################################################################## import unittest from CedarBackup3.testutil import captureOutput from CedarBackup3.tools.span import _usage, _version from CedarBackup3.tools.span import Options ####################################################################### # Test Case Classes ####################################################################### ###################### # TestFunctions class ###################### class TestFunctions(unittest.TestCase): """Tests for the public functions.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ######################## # Test simple functions ######################## def testSimpleFuncs_001(self): """ Test that the _usage() function runs without errors. We don't care what the output is, and we don't check. """ captureOutput(_usage) def testSimpleFuncs_002(self): """ Test that the _version() function runs without errors. We don't care what the output is, and we don't check. 
""" captureOutput(_version) ######################## # TestSpanOptions class ######################## class TestSpanOptions(unittest.TestCase): """Tests for the SpanOptions class.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = Options() obj.__repr__() obj.__str__() ####################################################################### # Suite definition ####################################################################### def suite(): """Returns a suite containing all the test cases in this module.""" tests = [ ] tests.append(unittest.makeSuite(TestFunctions, 'test')) tests.append(unittest.makeSuite(TestSpanOptions, 'test')) return unittest.TestSuite(tests) CedarBackup3-3.1.6/testcase/utiltests.py0000664000175000017500000041561512642032634021735 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2008,2010,2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. 
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Tests utility functionality. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # pylint: disable=C0322,C0324 ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup3/util.py. Code Coverage ============= This module contains individual tests for the public functions and classes implemented in util.py. Naming Conventions ================== I prefer to avoid large unit tests which validate more than one piece of functionality, and I prefer to avoid using overly descriptive (read: long) test names, as well. Instead, I use lots of very small tests that each validate one specific thing. These small tests are then named with an index number, yielding something like C{testAddDir_001} or C{testValidate_010}. Each method has a docstring describing what it's supposed to accomplish. I feel that this makes it easier to judge how important a given failure is, and also makes it somewhat easier to diagnose and fix individual problems. Full vs. Reduced Tests ====================== All of the tests in this module are considered safe to be run in an average build environment. There is no need to use a UTILTESTS_FULL environment variable to provide a "reduced feature set" test suite as for some of the other test modules. @author Kenneth J. 
Pronovici """ ######################################################################## # Import modules and do runtime validations ######################################################################## import sys import unittest import tempfile import time import logging import os from os.path import isdir from CedarBackup3.testutil import findResources, removedir, extractTar, buildPath, captureOutput from CedarBackup3.util import UnorderedList, AbsolutePathList, ObjectTypeList from CedarBackup3.util import RestrictedContentList, RegexMatchList, RegexList from CedarBackup3.util import DirectedGraph, PathResolverSingleton, Diagnostics, parseCommaSeparatedString from CedarBackup3.util import sortDict, resolveCommand, executeCommand, getFunctionReference, encodePath from CedarBackup3.util import convertSize, UNIT_BYTES, UNIT_SECTORS, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES from CedarBackup3.util import displayBytes, deriveDayOfWeek, isStartOfWeek, dereferenceLink from CedarBackup3.util import buildNormalizedPath, splitCommandLine, nullDevice ####################################################################### # Module-wide configuration and constants ####################################################################### DATA_DIRS = [ "./data", "./testcase/data" ] RESOURCES = [ "lotsoflines.py", "tree10.tar.gz", ] ####################################################################### # Test Case Classes ####################################################################### ########################## # TestUnorderedList class ########################## class TestUnorderedList(unittest.TestCase): """Tests for the UnorderedList class.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ################################## # Test unordered list comparisons ################################## def testComparison_001(self): """ Test two empty lists. 
""" list1 = UnorderedList() list2 = UnorderedList() self.assertEqual(list1, list2) self.assertEqual(list2, list1) def testComparison_002(self): """ Test empty vs. non-empty list. """ list1 = UnorderedList() list2 = UnorderedList() list1.append(1) list1.append(2) list1.append(3) list1.append(4) self.assertEqual([1,2,3,4, ], list1) self.assertEqual([2,3,4,1, ], list1) self.assertEqual([3,4,1,2, ], list1) self.assertEqual([4,1,2,3, ], list1) self.assertEqual(list1, [4,3,2,1, ]) self.assertEqual(list1, [3,2,1,4, ]) self.assertEqual(list1, [2,1,4,3, ]) self.assertEqual(list1, [1,4,3,2, ]) self.assertNotEqual(list1, list2) self.assertNotEqual(list2, list1) def testComparison_003(self): """ Test two non-empty lists, completely different contents. """ list1 = UnorderedList() list2 = UnorderedList() list1.append(1) list1.append(2) list1.append(3) list1.append(4) list2.append('a') list2.append('b') list2.append('c') list2.append('d') self.assertEqual([1,2,3,4, ], list1) self.assertEqual([2,3,4,1, ], list1) self.assertEqual([3,4,1,2, ], list1) self.assertEqual([4,1,2,3, ], list1) self.assertEqual(list1, [4,3,2,1, ]) self.assertEqual(list1, [3,2,1,4, ]) self.assertEqual(list1, [2,1,4,3, ]) self.assertEqual(list1, [1,4,3,2, ]) self.assertEqual(['a','b','c','d', ], list2) self.assertEqual(['b','c','d','a', ], list2) self.assertEqual(['c','d','a','b', ], list2) self.assertEqual(['d','a','b','c', ], list2) self.assertEqual(list2, ['d','c','b','a', ]) self.assertEqual(list2, ['c','b','a','d', ]) self.assertEqual(list2, ['b','a','d','c', ]) self.assertEqual(list2, ['a','d','c','b', ]) self.assertNotEqual(list1, list2) self.assertNotEqual(list2, list1) def testComparison_004(self): """ Test two non-empty lists, different but overlapping contents. 
""" list1 = UnorderedList() list2 = UnorderedList() list1.append(1) list1.append(2) list1.append(3) list1.append(4) list2.append(3) list2.append(4) list2.append('a') list2.append('b') self.assertEqual([1,2,3,4, ], list1) self.assertEqual([2,3,4,1, ], list1) self.assertEqual([3,4,1,2, ], list1) self.assertEqual([4,1,2,3, ], list1) self.assertEqual(list1, [4,3,2,1, ]) self.assertEqual(list1, [3,2,1,4, ]) self.assertEqual(list1, [2,1,4,3, ]) self.assertEqual(list1, [1,4,3,2, ]) self.assertEqual([3,4,'a','b', ], list2) self.assertEqual([4,'a','b',3, ], list2) self.assertEqual(['a','b',3,4, ], list2) self.assertEqual(['b',3,4,'a', ], list2) self.assertEqual(list2, ['b','a',4,3, ]) self.assertEqual(list2, ['a',4,3,'b', ]) self.assertEqual(list2, [4,3,'b','a', ]) self.assertEqual(list2, [3,'b','a',4, ]) self.assertNotEqual(list1, list2) self.assertNotEqual(list2, list1) def testComparison_005(self): """ Test two non-empty lists, exactly the same contents, same order. """ list1 = UnorderedList() list2 = UnorderedList() list1.append(1) list1.append(2) list1.append(3) list1.append(4) list2.append(1) list2.append(2) list2.append(3) list2.append(4) self.assertEqual([1,2,3,4, ], list1) self.assertEqual([2,3,4,1, ], list1) self.assertEqual([3,4,1,2, ], list1) self.assertEqual([4,1,2,3, ], list1) self.assertEqual(list1, [4,3,2,1, ]) self.assertEqual(list1, [3,2,1,4, ]) self.assertEqual(list1, [2,1,4,3, ]) self.assertEqual(list1, [1,4,3,2, ]) self.assertEqual([1,2,3,4, ], list2) self.assertEqual([2,3,4,1, ], list2) self.assertEqual([3,4,1,2, ], list2) self.assertEqual([4,1,2,3, ], list2) self.assertEqual(list2, [4,3,2,1, ]) self.assertEqual(list2, [3,2,1,4, ]) self.assertEqual(list2, [2,1,4,3, ]) self.assertEqual(list2, [1,4,3,2, ]) self.assertEqual(list1, list2) self.assertEqual(list2, list1) def testComparison_006(self): """ Test two non-empty lists, exactly the same contents, different order. 
""" list1 = UnorderedList() list2 = UnorderedList() list1.append(1) list1.append(2) list1.append(3) list1.append(4) list2.append(3) list2.append(1) list2.append(2) list2.append(4) self.assertEqual([1,2,3,4, ], list1) self.assertEqual([2,3,4,1, ], list1) self.assertEqual([3,4,1,2, ], list1) self.assertEqual([4,1,2,3, ], list1) self.assertEqual(list1, [4,3,2,1, ]) self.assertEqual(list1, [3,2,1,4, ]) self.assertEqual(list1, [2,1,4,3, ]) self.assertEqual(list1, [1,4,3,2, ]) self.assertEqual([1,2,3,4, ], list2) self.assertEqual([2,3,4,1, ], list2) self.assertEqual([3,4,1,2, ], list2) self.assertEqual([4,1,2,3, ], list2) self.assertEqual(list2, [4,3,2,1, ]) self.assertEqual(list2, [3,2,1,4, ]) self.assertEqual(list2, [2,1,4,3, ]) self.assertEqual(list2, [1,4,3,2, ]) self.assertEqual(list1, list2) self.assertEqual(list2, list1) def testComparison_007(self): """ Test two non-empty lists, exactly the same contents, some duplicates, same order. """ list1 = UnorderedList() list2 = UnorderedList() list1.append(1) list1.append(2) list1.append(2) list1.append(3) list1.append(4) list1.append(4) list2.append(1) list2.append(2) list2.append(2) list2.append(3) list2.append(4) list2.append(4) self.assertEqual([1,2,2,3,4,4, ], list1) self.assertEqual([2,2,3,4,1,4, ], list1) self.assertEqual([2,3,4,1,4,2, ], list1) self.assertEqual([2,4,1,4,2,3, ], list1) self.assertEqual(list1, [1,2,2,3,4,4, ]) self.assertEqual(list1, [2,2,3,4,1,4, ]) self.assertEqual(list1, [2,3,4,1,4,2, ]) self.assertEqual(list1, [2,4,1,4,2,3, ]) self.assertEqual([1,2,2,3,4,4, ], list2) self.assertEqual([2,2,3,4,1,4, ], list2) self.assertEqual([2,3,4,1,4,2, ], list2) self.assertEqual([2,4,1,4,2,3, ], list2) self.assertEqual(list2, [1,2,2,3,4,4, ]) self.assertEqual(list2, [2,2,3,4,1,4, ]) self.assertEqual(list2, [2,3,4,1,4,2, ]) self.assertEqual(list2, [2,4,1,4,2,3, ]) self.assertEqual(list1, list2) self.assertEqual(list2, list1) def testComparison_008(self): """ Test two non-empty lists, exactly the same contents, 
some duplicates, different order. """ list1 = UnorderedList() list2 = UnorderedList() list1.append(1) list1.append(2) list1.append(2) list1.append(3) list1.append(4) list1.append(4) list2.append(3) list2.append(1) list2.append(2) list2.append(2) list2.append(4) list2.append(4) self.assertEqual([1,2,2,3,4,4, ], list1) self.assertEqual([2,2,3,4,1,4, ], list1) self.assertEqual([2,3,4,1,4,2, ], list1) self.assertEqual([2,4,1,4,2,3, ], list1) self.assertEqual(list1, [1,2,2,3,4,4, ]) self.assertEqual(list1, [2,2,3,4,1,4, ]) self.assertEqual(list1, [2,3,4,1,4,2, ]) self.assertEqual(list1, [2,4,1,4,2,3, ]) self.assertEqual([1,2,2,3,4,4, ], list2) self.assertEqual([2,2,3,4,1,4, ], list2) self.assertEqual([2,3,4,1,4,2, ], list2) self.assertEqual([2,4,1,4,2,3, ], list2) self.assertEqual(list2, [1,2,2,3,4,4, ]) self.assertEqual(list2, [2,2,3,4,1,4, ]) self.assertEqual(list2, [2,3,4,1,4,2, ]) self.assertEqual(list2, [2,4,1,4,2,3, ]) self.assertEqual(list1, list2) self.assertEqual(list2, list1) ############################# # TestAbsolutePathList class ############################# class TestAbsolutePathList(unittest.TestCase): """Tests for the AbsolutePathList class.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ####################### # Test list operations ####################### def testListOperations_001(self): """ Test append() for a valid absolute path. """ list1 = AbsolutePathList() list1.append("/path/to/something/absolute") self.assertEqual(list1, [ "/path/to/something/absolute", ]) self.assertEqual(list1[0], "/path/to/something/absolute") list1.append("/path/to/something/else") self.assertEqual(list1, [ "/path/to/something/absolute", "/path/to/something/else", ]) self.assertEqual(list1[0], "/path/to/something/absolute") self.assertEqual(list1[1], "/path/to/something/else") def testListOperations_002(self): """ Test append() for an invalid, non-absolute path. 
""" list1 = AbsolutePathList() self.assertEqual(list1, []) self.assertRaises(ValueError, list1.append, "path/to/something/relative") self.assertEqual(list1, []) def testListOperations_003(self): """ Test insert() for a valid absolute path. """ list1 = AbsolutePathList() list1.insert(0, "/path/to/something/absolute") self.assertEqual(list1, [ "/path/to/something/absolute", ]) self.assertEqual(list1[0], "/path/to/something/absolute") list1.insert(0, "/path/to/something/else") self.assertEqual(list1, [ "/path/to/something/else", "/path/to/something/absolute", ]) self.assertEqual(list1[0], "/path/to/something/else") self.assertEqual(list1[1], "/path/to/something/absolute") def testListOperations_004(self): """ Test insert() for an invalid, non-absolute path. """ list1 = AbsolutePathList() self.assertRaises(ValueError, list1.insert, 0, "path/to/something/relative") def testListOperations_005(self): """ Test extend() for a valid absolute path. """ list1 = AbsolutePathList() list1.extend(["/path/to/something/absolute", ]) self.assertEqual(list1, [ "/path/to/something/absolute", ]) self.assertEqual(list1[0], "/path/to/something/absolute") list1.extend(["/path/to/something/else", ]) self.assertEqual(list1, [ "/path/to/something/absolute", "/path/to/something/else", ]) self.assertEqual(list1[0], "/path/to/something/absolute") self.assertEqual(list1[1], "/path/to/something/else") def testListOperations_006(self): """ Test extend() for an invalid, non-absolute path. 
""" list1 = AbsolutePathList() self.assertEqual(list1, []) self.assertRaises(ValueError, list1.extend, [ "path/to/something/relative", ]) self.assertEqual(list1, []) ########################### # TestObjectTypeList class ########################### class TestObjectTypeList(unittest.TestCase): """Tests for the ObjectTypeList class.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ####################### # Test list operations ####################### def testListOperations_001(self): """ Test append() for a valid object type. """ list1 = ObjectTypeList(str, "str") list1.append("string") self.assertEqual(list1, [ "string", ]) self.assertEqual(list1[0], "string") list1.append("string2") self.assertEqual(list1, [ "string", "string2", ]) self.assertEqual(list1[0], "string") self.assertEqual(list1[1], "string2") def testListOperations_002(self): """ Test append() for an invalid object type. """ list1 = ObjectTypeList(str, "str") self.assertEqual(list1, []) self.assertRaises(ValueError, list1.append, 1) self.assertEqual(list1, []) def testListOperations_003(self): """ Test insert() for a valid object type. """ list1 = ObjectTypeList(str, "str") list1.insert(0, "string") self.assertEqual(list1, [ "string", ]) self.assertEqual(list1[0], "string") list1.insert(0, "string2") self.assertEqual(list1, [ "string2", "string", ]) self.assertEqual(list1[0], "string2") self.assertEqual(list1[1], "string") def testListOperations_004(self): """ Test insert() for an invalid object type. """ list1 = ObjectTypeList(str, "str") self.assertEqual(list1, []) self.assertRaises(ValueError, list1.insert, 0, AbsolutePathList()) self.assertEqual(list1, []) def testListOperations_005(self): """ Test extend() for a valid object type. 
""" list1 = ObjectTypeList(str, "str") list1.extend(["string", ]) self.assertEqual(list1, [ "string", ]) self.assertEqual(list1[0], "string") list1.extend(["string2", ]) self.assertEqual(list1, [ "string", "string2", ]) self.assertEqual(list1[0], "string") self.assertEqual(list1[1], "string2") def testListOperations_006(self): """ Test extend() for an invalid object type. """ list1 = ObjectTypeList(str, "str") self.assertEqual(list1, []) self.assertRaises(ValueError, list1.extend, [ 12.0, ]) self.assertEqual(list1, []) ################################## # TestRestrictedContentList class ################################## class TestRestrictedContentList(unittest.TestCase): """Tests for the RestrictedContentList class.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ####################### # Test list operations ####################### def testListOperations_001(self): """ Test append() for a valid value. """ list1 = RestrictedContentList([ "a", "b", "c", ], "values") list1.append("a") self.assertEqual(list1, [ "a", ]) self.assertEqual(list1[0], "a") list1.append("b") self.assertEqual(list1, [ "a", "b", ]) self.assertEqual(list1[0], "a") self.assertEqual(list1[1], "b") list1.append("c") self.assertEqual(list1, [ "a", "b", "c", ]) self.assertEqual(list1[0], "a") self.assertEqual(list1[1], "b") self.assertEqual(list1[2], "c") def testListOperations_002(self): """ Test append() for an invalid value. """ list1 = RestrictedContentList([ "a", "b", "c", ], "values") self.assertEqual(list1, []) self.assertRaises(ValueError, list1.append, "d") self.assertEqual(list1, []) self.assertRaises(ValueError, list1.append, 1) self.assertEqual(list1, []) self.assertRaises(ValueError, list1.append, UnorderedList()) self.assertEqual(list1, []) def testListOperations_003(self): """ Test insert() for a valid value. 
""" list1 = RestrictedContentList([ "a", "b", "c", ], "values") list1.insert(0, "a") self.assertEqual(list1, [ "a", ]) self.assertEqual(list1[0], "a") list1.insert(0, "b") self.assertEqual(list1, [ "b", "a", ]) self.assertEqual(list1[0], "b") self.assertEqual(list1[1], "a") list1.insert(0, "c") self.assertEqual(list1, [ "c", "b", "a", ]) self.assertEqual(list1[0], "c") self.assertEqual(list1[1], "b") self.assertEqual(list1[2], "a") def testListOperations_004(self): """ Test insert() for an invalid value. """ list1 = RestrictedContentList([ "a", "b", "c", ], "values") self.assertEqual(list1, []) self.assertRaises(ValueError, list1.insert, 0, "d") self.assertEqual(list1, []) self.assertRaises(ValueError, list1.insert, 0, 1) self.assertEqual(list1, []) self.assertRaises(ValueError, list1.insert, 0, UnorderedList()) self.assertEqual(list1, []) def testListOperations_005(self): """ Test extend() for a valid value. """ list1 = RestrictedContentList([ "a", "b", "c", ], "values") list1.extend(["a", ]) self.assertEqual(list1, [ "a", ]) self.assertEqual(list1[0], "a") list1.extend(["b", ]) self.assertEqual(list1, [ "a", "b", ]) self.assertEqual(list1[0], "a") self.assertEqual(list1[1], "b") list1.extend(["c", ]) self.assertEqual(list1, [ "a", "b", "c", ]) self.assertEqual(list1[0], "a") self.assertEqual(list1[1], "b") self.assertEqual(list1[2], "c") def testListOperations_006(self): """ Test extend() for an invalid value. 
""" list1 = RestrictedContentList([ "a", "b", "c", ], "values") self.assertEqual(list1, []) self.assertRaises(ValueError, list1.extend, ["d", ]) self.assertEqual(list1, []) self.assertRaises(ValueError, list1.extend, [1, ]) self.assertEqual(list1, []) self.assertRaises(ValueError, list1.extend, [ UnorderedList(), ]) self.assertEqual(list1, []) ########################### # TestRegexMatchList class ########################### class TestRegexMatchList(unittest.TestCase): """Tests for the RegexMatchList class.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ####################### # Test list operations ####################### def testListOperations_001(self): """ Test append() for a valid value, emptyAllowed=True. """ list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=True) list1.append("a") self.assertEqual(list1, [ "a", ]) self.assertEqual(list1[0], "a") list1.append("1") self.assertEqual(list1, [ "a", "1", ]) self.assertEqual(list1[0], "a") self.assertEqual(list1[1], "1") list1.append("abcd12345") self.assertEqual(list1, [ "a", "1", "abcd12345", ]) self.assertEqual(list1[0], "a") self.assertEqual(list1[1], "1") self.assertEqual(list1[2], "abcd12345") list1.append("") self.assertEqual(list1, [ "a", "1", "abcd12345", "", ]) self.assertEqual(list1[0], "a") self.assertEqual(list1[1], "1") self.assertEqual(list1[2], "abcd12345") self.assertEqual(list1[3], "") def testListOperations_002(self): """ Test append() for an invalid value, emptyAllowed=True. 
""" list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=True) self.assertEqual(list1, []) self.assertRaises(ValueError, list1.append, "A") self.assertEqual(list1, []) self.assertRaises(ValueError, list1.append, "ABC") self.assertEqual(list1, []) self.assertRaises(TypeError, list1.append, 12) self.assertEqual(list1, []) self.assertRaises(ValueError, list1.append, "KEN_12") self.assertEqual(list1, []) self.assertRaises(ValueError, list1.append, None) self.assertEqual(list1, []) def testListOperations_003(self): """ Test insert() for a valid value, emptyAllowed=True. """ list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=True) list1.insert(0, "a") self.assertEqual(list1, [ "a", ]) self.assertEqual(list1[0], "a") list1.insert(0, "1") self.assertEqual(list1, [ "1", "a", ]) self.assertEqual(list1[0], "1") self.assertEqual(list1[1], "a") list1.insert(0, "abcd12345") self.assertEqual(list1, [ "abcd12345", "1", "a", ]) self.assertEqual(list1[0], "abcd12345") self.assertEqual(list1[1], "1") self.assertEqual(list1[2], "a") list1.insert(0, "") self.assertEqual(list1, [ "abcd12345", "1", "a", "", ]) self.assertEqual(list1[0], "") self.assertEqual(list1[1], "abcd12345") self.assertEqual(list1[2], "1") self.assertEqual(list1[3], "a") def testListOperations_004(self): """ Test insert() for an invalid value, emptyAllowed=True. """ list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=True) self.assertEqual(list1, []) self.assertRaises(ValueError, list1.insert, 0, "A") self.assertEqual(list1, []) self.assertRaises(ValueError, list1.insert, 0, "ABC") self.assertEqual(list1, []) self.assertRaises(TypeError, list1.insert, 0, 12) self.assertEqual(list1, []) self.assertRaises(ValueError, list1.insert, 0, "KEN_12") self.assertEqual(list1, []) self.assertRaises(ValueError, list1.insert, 0, None) self.assertEqual(list1, []) def testListOperations_005(self): """ Test extend() for a valid value, emptyAllowed=True. 
""" list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=True) list1.extend(["a", ]) self.assertEqual(list1, [ "a", ]) self.assertEqual(list1[0], "a") list1.extend(["1", ]) self.assertEqual(list1, [ "a", "1", ]) self.assertEqual(list1[0], "a") self.assertEqual(list1[1], "1") list1.extend(["abcd12345", ]) self.assertEqual(list1, [ "a", "1", "abcd12345", ]) self.assertEqual(list1[0], "a") self.assertEqual(list1[1], "1") self.assertEqual(list1[2], "abcd12345") list1.extend(["", ]) self.assertEqual(list1, [ "a", "1", "abcd12345", "", ]) self.assertEqual(list1[0], "a") self.assertEqual(list1[1], "1") self.assertEqual(list1[2], "abcd12345") self.assertEqual(list1[3], "") def testListOperations_006(self): """ Test extend() for an invalid value, emptyAllowed=True. """ list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=True) self.assertEqual(list1, []) self.assertRaises(ValueError, list1.extend, [ "A", ]) self.assertEqual(list1, []) self.assertRaises(ValueError, list1.extend, [ "ABC", ]) self.assertEqual(list1, []) self.assertRaises(TypeError, list1.extend, [ 12, ]) self.assertEqual(list1, []) self.assertRaises(ValueError, list1.extend, [ "KEN_12", ]) self.assertEqual(list1, []) self.assertRaises(ValueError, list1.extend, [ None, ]) self.assertEqual(list1, []) def testListOperations_007(self): """ Test append() for a valid value, emptyAllowed=False. """ list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=False) list1.append("a") self.assertEqual(list1, [ "a", ]) self.assertEqual(list1[0], "a") list1.append("1") self.assertEqual(list1, [ "a", "1", ]) self.assertEqual(list1[0], "a") self.assertEqual(list1[1], "1") list1.append("abcd12345") self.assertEqual(list1, [ "a", "1", "abcd12345", ]) self.assertEqual(list1[0], "a") self.assertEqual(list1[1], "1") self.assertEqual(list1[2], "abcd12345") def testListOperations_008(self): """ Test append() for an invalid value, emptyAllowed=False. 
""" list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=False) self.assertEqual(list1, []) self.assertRaises(ValueError, list1.append, "A") self.assertEqual(list1, []) self.assertRaises(ValueError, list1.append, "ABC") self.assertEqual(list1, []) self.assertRaises(TypeError, list1.append, 12) self.assertEqual(list1, []) self.assertRaises(ValueError, list1.append, "KEN_12") self.assertEqual(list1, []) self.assertRaises(ValueError, list1.append, "") self.assertEqual(list1, []) self.assertRaises(ValueError, list1.append, None) self.assertEqual(list1, []) def testListOperations_009(self): """ Test insert() for a valid value, emptyAllowed=False. """ list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=False) list1.insert(0, "a") self.assertEqual(list1, [ "a", ]) self.assertEqual(list1[0], "a") list1.insert(0, "1") self.assertEqual(list1, [ "1", "a", ]) self.assertEqual(list1[0], "1") self.assertEqual(list1[1], "a") list1.insert(0, "abcd12345") self.assertEqual(list1, [ "abcd12345", "1", "a", ]) self.assertEqual(list1[0], "abcd12345") self.assertEqual(list1[1], "1") self.assertEqual(list1[2], "a") def testListOperations_010(self): """ Test insert() for an invalid value, emptyAllowed=False. """ list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=False) self.assertEqual(list1, []) self.assertRaises(ValueError, list1.insert, 0, "A") self.assertEqual(list1, []) self.assertRaises(ValueError, list1.insert, 0, "ABC") self.assertEqual(list1, []) self.assertRaises(TypeError, list1.insert, 0, 12) self.assertEqual(list1, []) self.assertRaises(ValueError, list1.insert, 0, "KEN_12") self.assertEqual(list1, []) self.assertRaises(ValueError, list1.insert, 0, "") self.assertEqual(list1, []) self.assertRaises(ValueError, list1.insert, 0, None) self.assertEqual(list1, []) def testListOperations_011(self): """ Test extend() for a valid value, emptyAllowed=False. 
""" list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=False) list1.extend(["a", ]) self.assertEqual(list1, [ "a", ]) self.assertEqual(list1[0], "a") list1.extend(["1", ]) self.assertEqual(list1, [ "a", "1", ]) self.assertEqual(list1[0], "a") self.assertEqual(list1[1], "1") list1.extend(["abcd12345", ]) self.assertEqual(list1, [ "a", "1", "abcd12345", ]) self.assertEqual(list1[0], "a") self.assertEqual(list1[1], "1") self.assertEqual(list1[2], "abcd12345") def testListOperations_012(self): """ Test extend() for an invalid value, emptyAllowed=False. """ list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=False) self.assertEqual(list1, []) self.assertRaises(ValueError, list1.extend, [ "A", ]) self.assertEqual(list1, []) self.assertRaises(ValueError, list1.extend, [ "ABC", ]) self.assertEqual(list1, []) self.assertRaises(TypeError, list1.extend, [ 12, ]) self.assertEqual(list1, []) self.assertRaises(ValueError, list1.extend, [ "KEN_12", ]) self.assertEqual(list1, []) self.assertRaises(ValueError, list1.extend, [ "", ]) self.assertEqual(list1, []) self.assertRaises(ValueError, list1.extend, [ None, ]) self.assertEqual(list1, []) ###################### # TestRegexList class ###################### class TestRegexList(unittest.TestCase): """Tests for the RegexList class.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ####################### # Test list operations ####################### def testListOperations_001(self): """ Test append() for a valid regular expresson. """ list1 = RegexList() list1.append(r".*\.jpg") self.assertEqual(list1, [ r".*\.jpg", ]) self.assertEqual(list1[0], r".*\.jpg") list1.append("[a-zA-Z0-9]*") self.assertEqual(list1, [ r".*\.jpg", "[a-zA-Z0-9]*", ]) self.assertEqual(list1[0], r".*\.jpg") self.assertEqual(list1[1], "[a-zA-Z0-9]*") def testListOperations_002(self): """ Test append() for an invalid regular expression. 
""" list1 = RegexList() self.assertEqual(list1, []) self.assertRaises(ValueError, list1.append, "*.jpg") self.assertEqual(list1, []) def testListOperations_003(self): """ Test insert() for a valid regular expression. """ list1 = RegexList() list1.insert(0, r".*\.jpg") self.assertEqual(list1, [ r".*\.jpg", ]) self.assertEqual(list1[0], r".*\.jpg") list1.insert(0, "[a-zA-Z0-9]*") self.assertEqual(list1, [ "[a-zA-Z0-9]*", r".*\.jpg", ]) self.assertEqual(list1[0], "[a-zA-Z0-9]*") self.assertEqual(list1[1], r".*\.jpg") def testListOperations_004(self): """ Test insert() for an invalid regular expression. """ list1 = RegexList() self.assertRaises(ValueError, list1.insert, 0, "*.jpg") def testListOperations_005(self): """ Test extend() for a valid regular expression. """ list1 = RegexList() list1.extend([r".*\.jpg", ]) self.assertEqual(list1, [ r".*\.jpg", ]) self.assertEqual(list1[0], r".*\.jpg") list1.extend(["[a-zA-Z0-9]*", ]) self.assertEqual(list1, [ r".*\.jpg", "[a-zA-Z0-9]*", ]) self.assertEqual(list1[0], r".*\.jpg") self.assertEqual(list1[1], "[a-zA-Z0-9]*") def testListOperations_006(self): """ Test extend() for an invalid regular expression. """ list1 = RegexList() self.assertEqual(list1, []) self.assertRaises(ValueError, list1.extend, [ "*.jpg", ]) self.assertEqual(list1, []) ########################## # TestDirectedGraph class ########################## class TestDirectedGraph(unittest.TestCase): """Tests for the DirectedGraph class.""" ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = DirectedGraph("test") obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with a valid name filled in. 
""" graph = DirectedGraph("Ken") self.assertEqual("Ken", graph.name) def testConstructor_002(self): """ Test constructor with a C{None} name filled in. """ self.assertRaises(ValueError, DirectedGraph, None) ########################## # Test depth first search ########################## def testTopologicalSort_001(self): """ Empty graph. """ graph = DirectedGraph("test") path = graph.topologicalSort() self.assertEqual([], path) def testTopologicalSort_002(self): """ Graph with 1 vertex, no edges. """ graph = DirectedGraph("test") graph.createVertex("1") path = graph.topologicalSort() self.assertEqual([ "1", ], path) def testTopologicalSort_003(self): """ Graph with 2 vertices, no edges. """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") path = graph.topologicalSort() self.assertEqual([ "2", "1", ], path) def testTopologicalSort_004(self): """ Graph with 3 vertices, no edges. """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") path = graph.topologicalSort() self.assertEqual([ "3", "2", "1", ], path) def testTopologicalSort_005(self): """ Graph with 4 vertices, no edges. """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("1") graph.createVertex("2") graph.createVertex("4") path = graph.topologicalSort() self.assertEqual([ "4", "2", "1", "3", ], path) def testTopologicalSort_006(self): """ Graph with 4 vertices, no edges. 
""" graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("1") graph.createVertex("2") graph.createVertex("4") graph.createVertex("5") path = graph.topologicalSort() self.assertEqual([ "5", "4", "2", "1", "3", ], path) def testTopologicalSort_007(self): """ Graph with 3 vertices, in a chain (1->2->3), create order (1,2,3) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createEdge("1", "2") graph.createEdge("2", "3") path = graph.topologicalSort() self.assertEqual([ "1", "2", "3", ], path) def testTopologicalSort_008(self): """ Graph with 3 vertices, in a chain (1->2->3), create order (1,3,2) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("3") graph.createVertex("2") graph.createEdge("1", "2") graph.createEdge("2", "3") path = graph.topologicalSort() self.assertEqual([ "1", "2", "3", ], path) def testTopologicalSort_009(self): """ Graph with 3 vertices, in a chain (1->2->3), create order (2,3,1) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("3") graph.createVertex("1") graph.createEdge("1", "2") graph.createEdge("2", "3") path = graph.topologicalSort() self.assertEqual([ "1", "2", "3", ], path) def testTopologicalSort_010(self): """ Graph with 3 vertices, in a chain (1->2->3), create order (2,1,3) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("1") graph.createVertex("3") graph.createEdge("1", "2") graph.createEdge("2", "3") path = graph.topologicalSort() self.assertEqual([ "1", "2", "3", ], path) def testTopologicalSort_011(self): """ Graph with 3 vertices, in a chain (1->2->3), create order (3,1,2) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("1") graph.createVertex("2") graph.createEdge("1", "2") graph.createEdge("2", "3") path = graph.topologicalSort() self.assertEqual([ "1", "2", "3", ], path) def testTopologicalSort_012(self): """ Graph with 3 vertices, in a 
chain (1->2->3), create order (3,2,1) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("2") graph.createVertex("1") graph.createEdge("1", "2") graph.createEdge("2", "3") path = graph.topologicalSort() self.assertEqual([ "1", "2", "3", ], path) def testTopologicalSort_013(self): """ Graph with 3 vertices, in a chain (3->2->1), create order (1,2,3) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createEdge("3", "2") graph.createEdge("2", "1") path = graph.topologicalSort() self.assertEqual([ "3", "2", "1", ], path) def testTopologicalSort_014(self): """ Graph with 3 vertices, in a chain (3->2->1), create order (1,3,2) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("3") graph.createVertex("2") graph.createEdge("3", "2") graph.createEdge("2", "1") path = graph.topologicalSort() self.assertEqual([ "3", "2", "1", ], path) def testTopologicalSort_015(self): """ Graph with 3 vertices, in a chain (3->2->1), create order (2,3,1) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("3") graph.createVertex("1") graph.createEdge("3", "2") graph.createEdge("2", "1") path = graph.topologicalSort() self.assertEqual([ "3", "2", "1", ], path) def testTopologicalSort_016(self): """ Graph with 3 vertices, in a chain (3->2->1), create order (2,1,3) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("1") graph.createVertex("3") graph.createEdge("3", "2") graph.createEdge("2", "1") path = graph.topologicalSort() self.assertEqual([ "3", "2", "1", ], path) def testTopologicalSort_017(self): """ Graph with 3 vertices, in a chain (3->2->1), create order (3,1,2) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("1") graph.createVertex("2") graph.createEdge("3", "2") graph.createEdge("2", "1") path = graph.topologicalSort() self.assertEqual([ "3", "2", "1", ], path) def 
testTopologicalSort_018(self): """ Graph with 3 vertices, in a chain (3->2->1), create order (3,2,1) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("2") graph.createVertex("1") graph.createEdge("3", "2") graph.createEdge("2", "1") path = graph.topologicalSort() self.assertEqual([ "3", "2", "1", ], path) def testTopologicalSort_019(self): """ Graph with 3 vertices, chain and orphan (1->2,3), create order (1,2,3) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createEdge("1", "2") path = graph.topologicalSort() self.assertEqual([ "3", "1", "2", ], path) def testTopologicalSort_020(self): """ Graph with 3 vertices, chain and orphan (1->2,3), create order (1,3,2) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("3") graph.createVertex("2") graph.createEdge("1", "2") path = graph.topologicalSort() self.assertEqual([ "3", "1", "2", ], path) def testTopologicalSort_021(self): """ Graph with 3 vertices, chain and orphan (1->2,3), create order (2,3,1) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("3") graph.createVertex("1") graph.createEdge("1", "2") path = graph.topologicalSort() self.assertEqual([ "1", "3", "2", ], path) def testTopologicalSort_022(self): """ Graph with 3 vertices, chain and orphan (1->2,3), create order (2,1,3) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("1") graph.createVertex("3") graph.createEdge("1", "2") path = graph.topologicalSort() self.assertEqual([ "3", "1", "2", ], path) def testTopologicalSort_023(self): """ Graph with 3 vertices, chain and orphan (1->2,3), create order (3,1,2) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("1") graph.createVertex("2") graph.createEdge("1", "2") path = graph.topologicalSort() self.assertEqual([ "1", "2", "3", ], path) def testTopologicalSort_024(self): """ Graph with 3 vertices, chain and orphan 
(1->2,3), create order (3,2,1) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("2") graph.createVertex("1") graph.createEdge("1", "2") path = graph.topologicalSort() self.assertEqual([ "1", "2", "3", ], path) def testTopologicalSort_025(self): """ Graph with 3 vertices, chain and orphan (1->3,2), create order (1,2,3) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createEdge("1", "3") path = graph.topologicalSort() self.assertEqual([ "2", "1", "3", ], path) def testTopologicalSort_026(self): """ Graph with 3 vertices, chain and orphan (1->3,2), create order (1,3,2) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("3") graph.createVertex("2") graph.createEdge("1", "3") path = graph.topologicalSort() self.assertEqual([ "2", "1", "3", ], path) def testTopologicalSort_027(self): """ Graph with 3 vertices, chain and orphan (1->3,2), create order (2,3,1) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("3") graph.createVertex("1") graph.createEdge("1", "3") path = graph.topologicalSort() self.assertEqual([ "1", "3", "2", ], path) def testTopologicalSort_028(self): """ Graph with 3 vertices, chain and orphan (1->3,2), create order (2,1,3) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("1") graph.createVertex("3") graph.createEdge("1", "3") path = graph.topologicalSort() self.assertEqual([ "1", "3", "2", ], path) def testTopologicalSort_029(self): """ Graph with 3 vertices, chain and orphan (1->3,2), create order (3,1,2) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("1") graph.createVertex("2") graph.createEdge("1", "3") path = graph.topologicalSort() self.assertEqual([ "2", "1", "3", ], path) def testTopologicalSort_030(self): """ Graph with 3 vertices, chain and orphan (1->3,2), create order (3,2,1) """ graph = DirectedGraph("test") graph.createVertex("3") 
graph.createVertex("2") graph.createVertex("1") graph.createEdge("1", "3") path = graph.topologicalSort() self.assertEqual([ "1", "2", "3", ], path) def testTopologicalSort_031(self): """ Graph with 3 vertices, chain and orphan (2->3,1), create order (1,2,3) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createEdge("2", "3") path = graph.topologicalSort() self.assertEqual([ "2", "3", "1", ], path) def testTopologicalSort_032(self): """ Graph with 3 vertices, chain and orphan (2->3,1), create order (1,3,2) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("3") graph.createVertex("2") graph.createEdge("2", "3") path = graph.topologicalSort() self.assertEqual([ "2", "3", "1", ], path) def testTopologicalSort_033(self): """ Graph with 3 vertices, chain and orphan (2->3,1), create order (2,3,1) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("3") graph.createVertex("1") graph.createEdge("2", "3") path = graph.topologicalSort() self.assertEqual([ "1", "2", "3", ], path) def testTopologicalSort_034(self): """ Graph with 3 vertices, chain and orphan (2->3,1), create order (2,1,3) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("1") graph.createVertex("3") graph.createEdge("2", "3") path = graph.topologicalSort() self.assertEqual([ "1", "2", "3", ], path) def testTopologicalSort_035(self): """ Graph with 3 vertices, chain and orphan (2->3,1), create order (3,1,2) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("1") graph.createVertex("2") graph.createEdge("2", "3") path = graph.topologicalSort() self.assertEqual([ "2", "1", "3", ], path) def testTopologicalSort_036(self): """ Graph with 3 vertices, chain and orphan (2->3,1), create order (3,2,1) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("2") graph.createVertex("1") graph.createEdge("2", "3") path = 
graph.topologicalSort() self.assertEqual([ "1", "2", "3", ], path) def testTopologicalSort_037(self): """ Graph with 3 vertices, chain and orphan (2->1,3), create order (1,2,3) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createEdge("2", "1") path = graph.topologicalSort() self.assertEqual([ "3", "2", "1", ], path) def testTopologicalSort_038(self): """ Graph with 3 vertices, chain and orphan (2->1,3), create order (1,3,2) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("3") graph.createVertex("2") graph.createEdge("2", "1") path = graph.topologicalSort() self.assertEqual([ "2", "3", "1", ], path) def testTopologicalSort_039(self): """ Graph with 3 vertices, chain and orphan (2->1,3), create order (2,3,1) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("3") graph.createVertex("1") graph.createEdge("2", "1") path = graph.topologicalSort() self.assertEqual([ "3", "2", "1", ], path) def testTopologicalSort_040(self): """ Graph with 3 vertices, chain and orphan (2->1,3), create order (2,1,3) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("1") graph.createVertex("3") graph.createEdge("2", "1") path = graph.topologicalSort() self.assertEqual([ "3", "2", "1", ], path) def testTopologicalSort_041(self): """ Graph with 3 vertices, chain and orphan (2->1,3), create order (3,1,2) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("1") graph.createVertex("2") graph.createEdge("2", "1") path = graph.topologicalSort() self.assertEqual([ "2", "1", "3", ], path) def testTopologicalSort_042(self): """ Graph with 3 vertices, chain and orphan (2->1,3), create order (3,2,1) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("2") graph.createVertex("1") graph.createEdge("2", "1") path = graph.topologicalSort() self.assertEqual([ "2", "1", "3", ], path) def 
testTopologicalSort_043(self): """ Graph with 3 vertices, chain and orphan (3->1,2), create order (1,2,3) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createEdge("3", "1") path = graph.topologicalSort() self.assertEqual([ "3", "2", "1", ], path) def testTopologicalSort_044(self): """ Graph with 3 vertices, chain and orphan (3->1,2), create order (1,3,2) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("3") graph.createVertex("2") graph.createEdge("3", "1") path = graph.topologicalSort() self.assertEqual([ "2", "3", "1", ], path) def testTopologicalSort_045(self): """ Graph with 3 vertices, chain and orphan (3->1,2), create order (2,3,1) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("3") graph.createVertex("1") graph.createEdge("3", "1") path = graph.topologicalSort() self.assertEqual([ "3", "1", "2", ], path) def testTopologicalSort_046(self): """ Graph with 3 vertices, chain and orphan (3->1,2), create order (2,1,3) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("1") graph.createVertex("3") graph.createEdge("3", "1") path = graph.topologicalSort() self.assertEqual([ "3", "1", "2", ], path) def testTopologicalSort_047(self): """ Graph with 3 vertices, chain and orphan (3->1,2), create order (3,1,2) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("1") graph.createVertex("2") graph.createEdge("3", "1") path = graph.topologicalSort() self.assertEqual([ "2", "3", "1", ], path) def testTopologicalSort_048(self): """ Graph with 3 vertices, chain and orphan (3->1,2), create order (3,2,1) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("2") graph.createVertex("1") graph.createEdge("3", "1") path = graph.topologicalSort() self.assertEqual([ "2", "3", "1", ], path) def testTopologicalSort_049(self): """ Graph with 3 vertices, chain and orphan (3->2,1), create order 
(1,2,3) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createEdge("3", "2") path = graph.topologicalSort() self.assertEqual([ "3", "2", "1", ], path) def testTopologicalSort_050(self): """ Graph with 3 vertices, chain and orphan (3->2,1), create order (1,3,2) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("3") graph.createVertex("2") graph.createEdge("3", "2") path = graph.topologicalSort() self.assertEqual([ "3", "2", "1", ], path) def testTopologicalSort_051(self): """ Graph with 3 vertices, chain and orphan (3->2,1), create order (2,3,1) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("3") graph.createVertex("1") graph.createEdge("3", "2") path = graph.topologicalSort() self.assertEqual([ "1", "3", "2", ], path) def testTopologicalSort_052(self): """ Graph with 3 vertices, chain and orphan (3->2,1), create order (2,1,3) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("1") graph.createVertex("3") graph.createEdge("3", "2") path = graph.topologicalSort() self.assertEqual([ "3", "1", "2", ], path) def testTopologicalSort_053(self): """ Graph with 3 vertices, chain and orphan (3->2,1), create order (3,1,2) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("1") graph.createVertex("2") graph.createEdge("3", "2") path = graph.topologicalSort() self.assertEqual([ "1", "3", "2", ], path) def testTopologicalSort_054(self): """ Graph with 3 vertices, chain and orphan (3->2,1), create order (3,2,1) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("2") graph.createVertex("1") graph.createEdge("3", "2") path = graph.topologicalSort() self.assertEqual([ "1", "3", "2" ], path) def testTopologicalSort_055(self): """ Graph with 1 vertex, with an edge to itself (1->1). 
""" graph = DirectedGraph("test") graph.createVertex("1") graph.createEdge("1", "1") self.assertRaises(ValueError, graph.topologicalSort) def testTopologicalSort_056(self): """ Graph with 2 vertices, each with an edge to itself (1->1, 2->2). """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createEdge("1", "1") graph.createEdge("2", "2") self.assertRaises(ValueError, graph.topologicalSort) def testTopologicalSort_057(self): """ Graph with 3 vertices, each with an edge to itself (1->1, 2->2, 3->3). """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createEdge("1", "1") graph.createEdge("2", "2") graph.createEdge("3", "3") self.assertRaises(ValueError, graph.topologicalSort) def testTopologicalSort_058(self): """ Graph with 3 vertices, in a loop (1->2->3->1). """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createEdge("1", "2") graph.createEdge("2", "3") graph.createEdge("3", "1") self.assertRaises(ValueError, graph.topologicalSort) def testTopologicalSort_059(self): """ Graph with 5 vertices, (2, 1->3, 1->4, 1->5) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createVertex("4") graph.createVertex("5") graph.createEdge("1", "3") graph.createEdge("1", "4") graph.createEdge("1", "5") path = graph.topologicalSort() self.assertEqual([ "2", "1", "5", "4", "3", ], path) def testTopologicalSort_060(self): """ Graph with 5 vertices, (1->3, 1->4, 1->5, 2->5) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createVertex("4") graph.createVertex("5") graph.createEdge("1", "3") graph.createEdge("1", "4") graph.createEdge("1", "5") graph.createEdge("2", "5") path = graph.topologicalSort() self.assertEqual([ "2", "1", "5", "4", "3", ], path) def testTopologicalSort_061(self): """ Graph with 5 
vertices, (1->3, 1->4, 1->5, 2->5, 3->4) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createVertex("4") graph.createVertex("5") graph.createEdge("1", "3") graph.createEdge("1", "4") graph.createEdge("1", "5") graph.createEdge("2", "5") graph.createEdge("3", "4") path = graph.topologicalSort() self.assertEqual([ "2", "1", "5", "3", "4", ], path) def testTopologicalSort_062(self): """ Graph with 5 vertices, (1->3, 1->4, 1->5, 2->5, 3->4, 5->4) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createVertex("4") graph.createVertex("5") graph.createEdge("1", "3") graph.createEdge("1", "4") graph.createEdge("1", "5") graph.createEdge("2", "5") graph.createEdge("3", "4") graph.createEdge("5", "4") path = graph.topologicalSort() self.assertEqual([ "2", "1", "5", "3", "4", ], path) def testTopologicalSort_063(self): """ Graph with 5 vertices, (1->3, 1->4, 1->5, 2->5, 3->4, 5->4, 1->2) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createVertex("4") graph.createVertex("5") graph.createEdge("1", "3") graph.createEdge("1", "4") graph.createEdge("1", "5") graph.createEdge("2", "5") graph.createEdge("3", "4") graph.createEdge("5", "4") graph.createEdge("1", "2") path = graph.topologicalSort() self.assertEqual([ "1", "2", "5", "3", "4", ], path) def testTopologicalSort_064(self): """ Graph with 5 vertices, (1->3, 1->4, 1->5, 2->5, 3->4, 5->4, 1->2, 3->5) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createVertex("4") graph.createVertex("5") graph.createEdge("1", "3") graph.createEdge("1", "4") graph.createEdge("1", "5") graph.createEdge("2", "5") graph.createEdge("3", "4") graph.createEdge("5", "4") graph.createEdge("1", "2") graph.createEdge("3", "5") path = graph.topologicalSort() self.assertEqual([ "1", "2", "3", "5", 
"4", ], path) def testTopologicalSort_065(self): """ Graph with 5 vertices, (1->3, 1->4, 1->5, 2->5, 3->4, 5->4, 5->1) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createVertex("4") graph.createVertex("5") graph.createEdge("1", "3") graph.createEdge("1", "4") graph.createEdge("1", "5") graph.createEdge("2", "5") graph.createEdge("3", "4") graph.createEdge("5", "4") graph.createEdge("5", "1") self.assertRaises(ValueError, graph.topologicalSort) ################################## # TestPathResolverSingleton class ################################## class TestPathResolverSingleton(unittest.TestCase): """Tests for the PathResolverSingleton class.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ########################## # Test singleton behavior ########################## def testBehavior_001(self): """ Check behavior of constructor around filling and clearing instance variable. """ PathResolverSingleton._instance = None instance = PathResolverSingleton() self.assertNotEqual(None, PathResolverSingleton._instance) self.assertTrue(instance is PathResolverSingleton._instance) def testBehavior_002(self): """ Check behavior of getInstance() around filling and clearing instance variable. 
""" PathResolverSingleton._instance = None instance1 = PathResolverSingleton.getInstance() instance2 = PathResolverSingleton.getInstance() instance3 = PathResolverSingleton.getInstance() self.assertNotEqual(None, PathResolverSingleton._instance) self.assertTrue(instance1 is PathResolverSingleton._instance) self.assertTrue(instance1 is instance2) self.assertTrue(instance1 is instance3) PathResolverSingleton._instance = None PathResolverSingleton() instance4 = PathResolverSingleton.getInstance() instance5 = PathResolverSingleton.getInstance() instance6 = PathResolverSingleton.getInstance() self.assertTrue(instance1 is not instance4) self.assertTrue(instance4 is PathResolverSingleton._instance) self.assertTrue(instance4 is instance5) self.assertTrue(instance4 is instance6) PathResolverSingleton._instance = None instance7 = PathResolverSingleton.getInstance() instance8 = PathResolverSingleton.getInstance() instance9 = PathResolverSingleton.getInstance() self.assertTrue(instance1 is not instance7) self.assertTrue(instance4 is not instance7) self.assertTrue(instance7 is PathResolverSingleton._instance) self.assertTrue(instance7 is instance8) self.assertTrue(instance7 is instance9) ############################ # Test lookup functionality ############################ def testLookup_001(self): """ Test that lookup() always returns default when singleton is empty. """ PathResolverSingleton._instance = None instance = PathResolverSingleton.getInstance() result = instance.lookup("whatever") self.assertEqual(result, None) result = instance.lookup("whatever", None) self.assertEqual(result, None) result = instance.lookup("other") self.assertEqual(result, None) result = instance.lookup("other", "default") self.assertEqual(result, "default") def testLookup_002(self): """ Test that lookup() returns proper values when singleton is not empty. 
      """
      mappings = { "one" : "/path/to/one", "two" : "/path/to/two" }
      PathResolverSingleton._instance = None
      singleton = PathResolverSingleton()
      singleton.fill(mappings)
      instance = PathResolverSingleton.getInstance()
      result = instance.lookup("whatever")
      self.assertEqual(result, None)
      result = instance.lookup("whatever", None)
      self.assertEqual(result, None)
      result = instance.lookup("other")
      self.assertEqual(result, None)
      result = instance.lookup("other", "default")
      self.assertEqual(result, "default")
      result = instance.lookup("one")
      self.assertEqual(result, "/path/to/one")
      result = instance.lookup("one", None)
      self.assertEqual(result, "/path/to/one")
      result = instance.lookup("two", None)
      self.assertEqual(result, "/path/to/two")
      result = instance.lookup("two", "default")
      self.assertEqual(result, "/path/to/two")


########################
# TestDiagnostics class
########################

class TestDiagnostics(unittest.TestCase):

   """Tests for the Diagnostics class."""

   def testMethods_001(self):
      """
      Test the version attribute.
      """
      diagnostics = Diagnostics()
      self.assertFalse(diagnostics.version is None)
      self.assertNotEqual("", diagnostics.version)

   def testMethods_002(self):
      """
      Test the interpreter attribute.
      """
      diagnostics = Diagnostics()
      self.assertFalse(diagnostics.interpreter is None)
      self.assertNotEqual("", diagnostics.interpreter)

   def testMethods_003(self):
      """
      Test the platform attribute.
      """
      diagnostics = Diagnostics()
      self.assertFalse(diagnostics.platform is None)
      self.assertNotEqual("", diagnostics.platform)

   def testMethods_004(self):
      """
      Test the encoding attribute.
      """
      diagnostics = Diagnostics()
      self.assertFalse(diagnostics.encoding is None)
      self.assertNotEqual("", diagnostics.encoding)

   def testMethods_005(self):
      """
      Test the locale attribute.
      """
      # pylint: disable=W0104
      diagnostics = Diagnostics()
      diagnostics.locale  # might not be set, so just make sure method doesn't fail

   def testMethods_006(self):
      """
      Test the getValues() method.
      """
      diagnostics = Diagnostics()
      values = diagnostics.getValues()
      self.assertEqual(diagnostics.version, values['version'])
      self.assertEqual(diagnostics.interpreter, values['interpreter'])
      self.assertEqual(diagnostics.platform, values['platform'])
      self.assertEqual(diagnostics.encoding, values['encoding'])
      self.assertEqual(diagnostics.locale, values['locale'])
      self.assertEqual(diagnostics.timestamp, values['timestamp'])

   def testMethods_007(self):
      """
      Test the _buildDiagnosticLines() method.
      """
      values = Diagnostics().getValues()
      lines = Diagnostics()._buildDiagnosticLines()
      self.assertEqual(len(values), len(lines))

   def testMethods_008(self):
      """
      Test the printDiagnostics() method.
      """
      captureOutput(Diagnostics().printDiagnostics)

   def testMethods_009(self):
      """
      Test the logDiagnostics() method.
      """
      logger = logging.getLogger("CedarBackup3.test")
      Diagnostics().logDiagnostics(logger.info)

   def testMethods_010(self):
      """
      Test the timestamp attribute.
      """
      diagnostics = Diagnostics()
      self.assertFalse(diagnostics.timestamp is None)
      self.assertNotEqual("", diagnostics.timestamp)


######################
# TestFunctions class
######################

class TestFunctions(unittest.TestCase):

   """Tests for the various public functions."""

   ################
   # Setup methods
   ################

   def setUp(self):
      try:
         self.tmpdir = tempfile.mkdtemp()
         self.resources = findResources(RESOURCES, DATA_DIRS)
      except Exception as e:
         self.fail(e)

   def tearDown(self):
      removedir(self.tmpdir)

   ##################
   # Utility methods
   ##################

   def getTempfile(self):
      """Gets a path to a temporary file on disk."""
      (fd, name) = tempfile.mkstemp(dir=self.tmpdir)
      try:
         os.close(fd)
      except OSError:  # the descriptor may already be closed; either way the path is usable
         pass
      return name

   def extractTar(self, tarname):
      """Extracts a tarfile with a particular name."""
      extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname])

   def buildPath(self, components):
      """Builds a complete search path from a list of components."""
      components.insert(0, self.tmpdir)
      return buildPath(components)
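The sortDict() tests that follow expect a dictionary's keys ordered by their mapped values, with ties apparently broken by key (e.g. 'rebuild' sorts before 'validate' when both map to 0). A minimal standalone sketch of that ordering, independent of the CedarBackup3.util implementation — the helper name here is illustrative, not part of the module under test:

```python
def sort_dict_by_value(d):
    """Return the keys of d ordered by value, ties broken by key (illustrative sketch)."""
    return [k for k, _ in sorted(d.items(), key=lambda kv: (kv[1], kv[0]))]

# The same ordering the testSortDict_005 case below expects:
print(sort_dict_by_value({'rebuild': 0, 'purge': 400, 'collect': 100,
                          'validate': 0, 'store': 300, 'stage': 200}))
# → ['rebuild', 'validate', 'collect', 'stage', 'store', 'purge']
```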
   ##################
   # Test sortDict()
   ##################

   def testSortDict_001(self):
      """
      Test for empty dictionary.
      """
      d = {}
      result = sortDict(d)
      self.assertEqual([], result)

   def testSortDict_002(self):
      """
      Test for dictionary with one item.
      """
      d = {'a':1}
      result = sortDict(d)
      self.assertEqual(['a', ], result)

   def testSortDict_003(self):
      """
      Test for dictionary with two items, same value.
      """
      d = {'a':1, 'b':1, }
      result = sortDict(d)
      self.assertEqual(['a', 'b', ], result)

   def testSortDict_004(self):
      """
      Test for dictionary with two items, different values.
      """
      d = {'a':1, 'b':2, }
      result = sortDict(d)
      self.assertEqual(['a', 'b', ], result)

   def testSortDict_005(self):
      """
      Test for dictionary with many items, same and different values.
      """
      d = {'rebuild': 0, 'purge': 400, 'collect': 100, 'validate': 0, 'store': 300, 'stage': 200}
      result = sortDict(d)
      self.assertEqual(['rebuild', 'validate', 'collect', 'stage', 'store', 'purge', ], result)

   ##############################
   # Test getFunctionReference()
   ##############################

   def testGetFunctionReference_001(self):
      """
      Check that the search works within "standard" Python namespace.
      """
      module = "os.path"
      function = "isdir"
      reference = getFunctionReference(module, function)
      self.assertTrue(isdir is reference)

   def testGetFunctionReference_002(self):
      """
      Check that the search works for things within CedarBackup3.
      """
      module = "CedarBackup3.util"
      function = "executeCommand"
      reference = getFunctionReference(module, function)
      self.assertTrue(executeCommand is reference)

   ########################
   # Test resolveCommand()
   ########################

   def testResolveCommand_001(self):
      """
      Test that the command is echoed back unchanged when singleton is empty.
      """
      PathResolverSingleton._instance = None
      command = [ "BAD", ]
      expected = command[:]
      result = resolveCommand(command)
      self.assertEqual(expected, result)
      command = [ "GOOD", ]
      expected = command[:]
      result = resolveCommand(command)
      self.assertEqual(expected, result)
      command = [ "WHATEVER", "--verbose", "--debug", 'tvh:asa892831', "blech", "<", ]
      expected = command[:]
      result = resolveCommand(command)
      self.assertEqual(expected, result)

   def testResolveCommand_002(self):
      """
      Test that the command is echoed back unchanged when the mapping is not found.
      """
      PathResolverSingleton._instance = None
      mappings = { "one" : "/path/to/one", "two" : "/path/to/two" }
      singleton = PathResolverSingleton()
      singleton.fill(mappings)
      command = [ "BAD", ]
      expected = command[:]
      result = resolveCommand(command)
      self.assertEqual(expected, result)
      command = [ "GOOD", ]
      expected = command[:]
      result = resolveCommand(command)
      self.assertEqual(expected, result)
      command = [ "WHATEVER", "--verbose", "--debug", 'tvh:asa892831', "blech", "<", ]
      expected = command[:]
      result = resolveCommand(command)
      self.assertEqual(expected, result)

   def testResolveCommand_003(self):
      """
      Test that the command is resolved appropriately when a mapping is found.
      """
      PathResolverSingleton._instance = None
      mappings = { "one" : "/path/to/one", "two" : "/path/to/two" }
      singleton = PathResolverSingleton()
      singleton.fill(mappings)
      command = [ "one", ]
      expected = [ "/path/to/one", ]
      result = resolveCommand(command)
      self.assertEqual(expected, result)
      command = [ "two", ]
      expected = [ "/path/to/two", ]
      result = resolveCommand(command)
      self.assertEqual(expected, result)
      command = [ "two", "--verbose", "--debug", 'tvh:asa892831', "blech", "<", ]
      expected = [ "/path/to/two", "--verbose", "--debug", 'tvh:asa892831', "blech", "<", ]
      result = resolveCommand(command)
      self.assertEqual(expected, result)

   ########################
   # Test executeCommand()
   ########################

   def testExecuteCommand_001(self):
      """
      Execute a command that should succeed, no arguments, returnOutput=False
      Command-line: echo
      """
      command=["echo", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=False)
      self.assertEqual(0, result)
      self.assertEqual(None, output)

   def testExecuteCommand_002(self):
      """
      Execute a command that should succeed, one argument, returnOutput=False
      Command-line: python -V
      """
      command=[sys.executable, ]
      args=["-V", ]
      (result, output) = executeCommand(command, args, returnOutput=False)
      self.assertEqual(0, result)
      self.assertEqual(None, output)

   def testExecuteCommand_003(self):
      """
      Execute a command that should succeed, two arguments, returnOutput=False
      Command-line: python -c "import sys; print(sys.argv[1:]); sys.exit(0)"
      """
      command=[sys.executable, ]
      args=["-c", "import sys; print(sys.argv[1:]); sys.exit(0)", ]
      (result, output) = executeCommand(command, args, returnOutput=False)
      self.assertEqual(0, result)
      self.assertEqual(None, output)

   def testExecuteCommand_004(self):
      """
      Execute a command that should succeed, three arguments, returnOutput=False
      Command-line: python -c "import sys; print(sys.argv[1:]); sys.exit(0)" first
      """
      command=[sys.executable, ]
      args=["-c", "import sys; print(sys.argv[1:]); sys.exit(0)", "first", ]
      (result, output) = executeCommand(command, args, returnOutput=False)
      self.assertEqual(0, result)
      self.assertEqual(None, output)

   def testExecuteCommand_005(self):
      """
      Execute a command that should succeed, four arguments, returnOutput=False
      Command-line: python -c "import sys; print(sys.argv[1:]); sys.exit(0)" first second
      """
      command=[sys.executable, ]
      args=["-c", "import sys; print(sys.argv[1:]); sys.exit(0)", "first", "second", ]
      (result, output) = executeCommand(command, args, returnOutput=False)
      self.assertEqual(0, result)
      self.assertEqual(None, output)

   def testExecuteCommand_006(self):
      """
      Execute a command that should fail, returnOutput=False
      Command-line: python -c "import sys; print(sys.argv[1:]); sys.exit(1)"
      """
      command=[sys.executable, ]
      args=["-c", "import sys; print(sys.argv[1:]); sys.exit(1)", ]
      (result, output) = executeCommand(command, args, returnOutput=False)
      self.assertNotEqual(0, result)
      self.assertEqual(None, output)

   def testExecuteCommand_007(self):
      """
      Execute a command that should fail, more arguments, returnOutput=False
      Command-line: python -c "import sys; print(sys.argv[1:]); sys.exit(1)" first second
      """
      command=[sys.executable, ]
      args=["-c", "import sys; print(sys.argv[1:]); sys.exit(1)", "first", "second", ]
      (result, output) = executeCommand(command, args, returnOutput=False)
      self.assertNotEqual(0, result)
      self.assertEqual(None, output)

   def testExecuteCommand_008(self):
      """
      Execute a command that should succeed, no arguments, returnOutput=True
      Command-line: echo
      """
      command=["echo", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=True)
      self.assertEqual(0, result)
      self.assertEqual(1, len(output))
      self.assertEqual(os.linesep, output[0])

   def testExecuteCommand_009(self):
      """
      Execute a command that should succeed, one argument, returnOutput=True
      Command-line: python -V
      """
      command=[sys.executable, ]
      args=["-V", ]
      (result, output) = executeCommand(command, args, returnOutput=True)
      self.assertEqual(0, result)
      self.assertEqual(1, len(output))
      self.assertTrue(output[0].startswith("Python"))

   def testExecuteCommand_010(self):
      """
      Execute a command that should succeed, two arguments, returnOutput=True
      Command-line: python -c "import sys; print(''); sys.exit(0)"
      """
      command=[sys.executable, ]
      args=["-c", "import sys; print(''); sys.exit(0)", ]
      (result, output) = executeCommand(command, args, returnOutput=True)
      self.assertEqual(0, result)
      self.assertEqual(1, len(output))
      self.assertEqual(os.linesep, output[0])

   def testExecuteCommand_011(self):
      """
      Execute a command that should succeed, three arguments, returnOutput=True
      Command-line: python -c "import sys; print('%s' % (sys.argv[1])); sys.exit(0)" first
      """
      command=[sys.executable, ]
      args=["-c", "import sys; print('%s' % (sys.argv[1])); sys.exit(0)", "first", ]
      (result, output) = executeCommand(command, args, returnOutput=True)
      self.assertEqual(0, result)
      self.assertEqual(1, len(output))
      self.assertEqual("first%s" % os.linesep, output[0])

   def testExecuteCommand_012(self):
      """
      Execute a command that should succeed, four arguments, returnOutput=True
      Command-line: python -c "import sys; print('%s' % sys.argv[1]); print('%s' % sys.argv[2]); sys.exit(0)" first second
      """
      command=[sys.executable, ]
      args=["-c", "import sys; print('%s' % sys.argv[1]); print('%s' % sys.argv[2]); sys.exit(0)", "first", "second", ]
      (result, output) = executeCommand(command, args, returnOutput=True)
      self.assertEqual(0, result)
      self.assertEqual(2, len(output))
      self.assertEqual("first%s" % os.linesep, output[0])
      self.assertEqual("second%s" % os.linesep, output[1])

   def testExecuteCommand_013(self):
      """
      Execute a command that should fail, returnOutput=True
      Command-line: python -c "import sys; print(''); sys.exit(1)"
      """
      command=[sys.executable, ]
      args=["-c", "import sys; print(''); sys.exit(1)", ]
      (result, output) = executeCommand(command, args, returnOutput=True)
      self.assertNotEqual(0, result)
      self.assertEqual(1, len(output))
      self.assertEqual(os.linesep, output[0])

   def testExecuteCommand_014(self):
      """
      Execute a command that should fail, more arguments, returnOutput=True
      Command-line: python -c "import sys; print('%s' % sys.argv[1]); print('%s' % sys.argv[2]); sys.exit(1)" first second
      """
      command=[sys.executable, ]
      args=["-c", "import sys; print('%s' % sys.argv[1]); print('%s' % sys.argv[2]); sys.exit(1)", "first", "second", ]
      (result, output) = executeCommand(command, args, returnOutput=True)
      self.assertNotEqual(0, result)
      self.assertEqual(2, len(output))
      self.assertEqual("first%s" % os.linesep, output[0])
      self.assertEqual("second%s" % os.linesep, output[1])

   def testExecuteCommand_015(self):
      """
      Execute a command that should succeed, no arguments, returnOutput=False
      Do this all bundled into the command list, just to check that this works as expected.
      Command-line: echo
      """
      command=["echo", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=False)
      self.assertEqual(0, result)
      self.assertEqual(None, output)

   def testExecuteCommand_016(self):
      """
      Execute a command that should succeed, one argument, returnOutput=False
      Do this all bundled into the command list, just to check that this works as expected.
      Command-line: python -V
      """
      command=[sys.executable, "-V", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=False)
      self.assertEqual(0, result)
      self.assertEqual(None, output)

   def testExecuteCommand_017(self):
      """
      Execute a command that should succeed, two arguments, returnOutput=False
      Do this all bundled into the command list, just to check that this works as expected.
      Command-line: python -c "import sys; print(sys.argv[1:]); sys.exit(0)"
      """
      command=[sys.executable, "-c", "import sys; print(sys.argv[1:]); sys.exit(0)", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=False)
      self.assertEqual(0, result)
      self.assertEqual(None, output)

   def testExecuteCommand_018(self):
      """
      Execute a command that should succeed, three arguments, returnOutput=False
      Do this all bundled into the command list, just to check that this works as expected.
      Command-line: python -c "import sys; print(sys.argv[1:]); sys.exit(0)" first
      """
      command=[sys.executable, "-c", "import sys; print(sys.argv[1:]); sys.exit(0)", "first", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=False)
      self.assertEqual(0, result)
      self.assertEqual(None, output)

   def testExecuteCommand_019(self):
      """
      Execute a command that should succeed, four arguments, returnOutput=False
      Do this all bundled into the command list, just to check that this works as expected.
      Command-line: python -c "import sys; print(sys.argv[1:]); sys.exit(0)" first second
      """
      command=[sys.executable, "-c", "import sys; print(sys.argv[1:]); sys.exit(0)", "first", "second", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=False)
      self.assertEqual(0, result)
      self.assertEqual(None, output)

   def testExecuteCommand_020(self):
      """
      Execute a command that should fail, returnOutput=False
      Do this all bundled into the command list, just to check that this works as expected.
      Command-line: python -c "import sys; print(sys.argv[1:]); sys.exit(1)"
      """
      command=[sys.executable, "-c", "import sys; print(sys.argv[1:]); sys.exit(1)", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=False)
      self.assertNotEqual(0, result)
      self.assertEqual(None, output)

   def testExecuteCommand_021(self):
      """
      Execute a command that should fail, more arguments, returnOutput=False
      Do this all bundled into the command list, just to check that this works as expected.
      Command-line: python -c "import sys; print(sys.argv[1:]); sys.exit(1)" first second
      """
      command=[sys.executable, "-c", "import sys; print(sys.argv[1:]); sys.exit(1)", "first", "second", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=False)
      self.assertNotEqual(0, result)
      self.assertEqual(None, output)

   def testExecuteCommand_022(self):
      """
      Execute a command that should succeed, no arguments, returnOutput=True
      Do this all bundled into the command list, just to check that this works as expected.
      Command-line: echo
      """
      command=["echo", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=True)
      self.assertEqual(0, result)
      self.assertEqual(1, len(output))
      self.assertEqual(os.linesep, output[0])

   def testExecuteCommand_023(self):
      """
      Execute a command that should succeed, one argument, returnOutput=True
      Do this all bundled into the command list, just to check that this works as expected.
      Command-line: python -V
      """
      command=[sys.executable, "-V"]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=True)
      self.assertEqual(0, result)
      self.assertEqual(1, len(output))
      self.assertTrue(output[0].startswith("Python"))

   def testExecuteCommand_024(self):
      """
      Execute a command that should succeed, two arguments, returnOutput=True
      Do this all bundled into the command list, just to check that this works as expected.
      Command-line: python -c "import sys; print(''); sys.exit(0)"
      """
      command=[sys.executable, "-c", "import sys; print(''); sys.exit(0)", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=True)
      self.assertEqual(0, result)
      self.assertEqual(1, len(output))
      self.assertEqual(os.linesep, output[0])

   def testExecuteCommand_025(self):
      """
      Execute a command that should succeed, three arguments, returnOutput=True
      Do this all bundled into the command list, just to check that this works as expected.
      Command-line: python -c "import sys; print('%s' % (sys.argv[1])); sys.exit(0)" first
      """
      command=[sys.executable, "-c", "import sys; print('%s' % (sys.argv[1])); sys.exit(0)", "first", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=True)
      self.assertEqual(0, result)
      self.assertEqual(1, len(output))
      self.assertEqual("first%s" % os.linesep, output[0])

   def testExecuteCommand_026(self):
      """
      Execute a command that should succeed, four arguments, returnOutput=True
      Do this all bundled into the command list, just to check that this works as expected.
      Command-line: python -c "import sys; print('%s' % sys.argv[1]); print('%s' % sys.argv[2]); sys.exit(0)" first second
      """
      command=[sys.executable, "-c", "import sys; print('%s' % sys.argv[1]); print('%s' % sys.argv[2]); sys.exit(0)", "first", "second", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=True)
      self.assertEqual(0, result)
      self.assertEqual(2, len(output))
      self.assertEqual("first%s" % os.linesep, output[0])
      self.assertEqual("second%s" % os.linesep, output[1])

   def testExecuteCommand_027(self):
      """
      Execute a command that should fail, returnOutput=True
      Do this all bundled into the command list, just to check that this works as expected.
      Command-line: python -c "import sys; print(''); sys.exit(1)"
      """
      command=[sys.executable, "-c", "import sys; print(''); sys.exit(1)", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=True)
      self.assertNotEqual(0, result)
      self.assertEqual(1, len(output))
      self.assertEqual(os.linesep, output[0])

   def testExecuteCommand_028(self):
      """
      Execute a command that should fail, more arguments, returnOutput=True
      Do this all bundled into the command list, just to check that this works as expected.
      Command-line: python -c "import sys; print('%s' % sys.argv[1]); print('%s' % sys.argv[2]); sys.exit(1)" first second
      """
      command=[sys.executable, "-c", "import sys; print('%s' % sys.argv[1]); print('%s' % sys.argv[2]); sys.exit(1)", "first", "second", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=True)
      self.assertNotEqual(0, result)
      self.assertEqual(2, len(output))
      self.assertEqual("first%s" % os.linesep, output[0])
      self.assertEqual("second%s" % os.linesep, output[1])

   def testExecuteCommand_030(self):
      """
      Execute a command that should succeed, no arguments, returnOutput=False, ignoring stderr.
      Command-line: echo
      """
      command=["echo", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True)
      self.assertEqual(0, result)
      self.assertEqual(None, output)

   def testExecuteCommand_031(self):
      """
      Execute a command that should succeed, one argument, returnOutput=False, ignoring stderr.
      Command-line: python -V
      """
      command=[sys.executable, ]
      args=["-V", ]
      (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True)
      self.assertEqual(0, result)
      self.assertEqual(None, output)

   def testExecuteCommand_032(self):
      """
      Execute a command that should succeed, two arguments, returnOutput=False, ignoring stderr.
      Command-line: python -c "import sys; print(sys.argv[1:]); sys.exit(0)"
      """
      command=[sys.executable, ]
      args=["-c", "import sys; print(sys.argv[1:]); sys.exit(0)", ]
      (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True)
      self.assertEqual(0, result)
      self.assertEqual(None, output)

   def testExecuteCommand_033(self):
      """
      Execute a command that should succeed, three arguments, returnOutput=False, ignoring stderr.
      Command-line: python -c "import sys; print(sys.argv[1:]); sys.exit(0)" first
      """
      command=[sys.executable, ]
      args=["-c", "import sys; print(sys.argv[1:]); sys.exit(0)", "first", ]
      (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True)
      self.assertEqual(0, result)
      self.assertEqual(None, output)

   def testExecuteCommand_034(self):
      """
      Execute a command that should succeed, four arguments, returnOutput=False, ignoring stderr.
      Command-line: python -c "import sys; print(sys.argv[1:]); sys.exit(0)" first second
      """
      command=[sys.executable, ]
      args=["-c", "import sys; print(sys.argv[1:]); sys.exit(0)", "first", "second", ]
      (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True)
      self.assertEqual(0, result)
      self.assertEqual(None, output)

   def testExecuteCommand_035(self):
      """
      Execute a command that should fail, returnOutput=False, ignoring stderr.
      Command-line: python -c "import sys; print(sys.argv[1:]); sys.exit(1)"
      """
      command=[sys.executable, ]
      args=["-c", "import sys; print(sys.argv[1:]); sys.exit(1)", ]
      (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True)
      self.assertNotEqual(0, result)
      self.assertEqual(None, output)

   def testExecuteCommand_036(self):
      """
      Execute a command that should fail, more arguments, returnOutput=False, ignoring stderr.
      Command-line: python -c "import sys; print(sys.argv[1:]); sys.exit(1)" first second
      """
      command=[sys.executable, ]
      args=["-c", "import sys; print(sys.argv[1:]); sys.exit(1)", "first", "second", ]
      (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True)
      self.assertNotEqual(0, result)
      self.assertEqual(None, output)

   def testExecuteCommand_037(self):
      """
      Execute a command that should succeed, no arguments, returnOutput=True, ignoring stderr.
      Command-line: echo
      """
      command=["echo", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
      self.assertEqual(0, result)
      self.assertEqual(1, len(output))
      self.assertEqual(os.linesep, output[0])

   def testExecuteCommand_038(self):
      """
      Execute a command that should succeed, one argument, returnOutput=True, ignoring stderr.
      Command-line: python -c "import sys; print('X', file=sys.stderr)"
      """
      command=[sys.executable, ]
      args=["-c", "import sys; print('X', file=sys.stderr)", ]
      (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=False)
      self.assertEqual(0, result)
      self.assertEqual(1, len(output))
      self.assertEqual("X%s" % os.linesep, output[0])  # prove stderr is captured
      (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
      self.assertEqual(0, result)
      self.assertEqual(0, len(output))  # prove stderr is now ignored

   def testExecuteCommand_039(self):
      """
      Execute a command that should succeed, two arguments, returnOutput=True, ignoring stderr.
      Command-line: python -c "import sys; print(''); sys.exit(0)"
      """
      command=[sys.executable, ]
      args=["-c", "import sys; print(''); sys.exit(0)", ]
      (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
      self.assertEqual(0, result)
      self.assertEqual(1, len(output))
      self.assertEqual(os.linesep, output[0])

   def testExecuteCommand_040(self):
      """
      Execute a command that should succeed, three arguments, returnOutput=True, ignoring stderr.
      Command-line: python -c "import sys; print('%s' % (sys.argv[1])); sys.exit(0)" first
      """
      command=[sys.executable, ]
      args=["-c", "import sys; print('%s' % (sys.argv[1])); sys.exit(0)", "first", ]
      (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
      self.assertEqual(0, result)
      self.assertEqual(1, len(output))
      self.assertEqual("first%s" % os.linesep, output[0])

   def testExecuteCommand_041(self):
      """
      Execute a command that should succeed, four arguments, returnOutput=True, ignoring stderr.
      Command-line: python -c "import sys; print('%s' % sys.argv[1]); print('%s' % sys.argv[2]); sys.exit(0)" first second
      """
      command=[sys.executable, ]
      args=["-c", "import sys; print('%s' % sys.argv[1]); print('%s' % sys.argv[2]); sys.exit(0)", "first", "second", ]
      (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
      self.assertEqual(0, result)
      self.assertEqual(2, len(output))
      self.assertEqual("first%s" % os.linesep, output[0])
      self.assertEqual("second%s" % os.linesep, output[1])

   def testExecuteCommand_042(self):
      """
      Execute a command that should fail, returnOutput=True, ignoring stderr.
      Command-line: python -c "import sys; print(''); sys.exit(1)"
      """
      command=[sys.executable, ]
      args=["-c", "import sys; print(''); sys.exit(1)", ]
      (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
      self.assertNotEqual(0, result)
      self.assertEqual(1, len(output))
      self.assertEqual(os.linesep, output[0])

   def testExecuteCommand_043(self):
      """
      Execute a command that should fail, more arguments, returnOutput=True, ignoring stderr.
      Command-line: python -c "import sys; print('%s' % sys.argv[1]); print('%s' % sys.argv[2]); sys.exit(1)" first second
      """
      command=[sys.executable, ]
      args=["-c", "import sys; print('%s' % sys.argv[1]); print('%s' % sys.argv[2]); sys.exit(1)", "first", "second", ]
      (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
      self.assertNotEqual(0, result)
      self.assertEqual(2, len(output))
      self.assertEqual("first%s" % os.linesep, output[0])
      self.assertEqual("second%s" % os.linesep, output[1])

   def testExecuteCommand_044(self):
      """
      Execute a command that should succeed, no arguments, returnOutput=False, ignoring stderr.
      Do this all bundled into the command list, just to check that this works as expected.
      Command-line: echo
      """
      command=["echo", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True)
      self.assertEqual(0, result)
      self.assertEqual(None, output)

   def testExecuteCommand_045(self):
      """
      Execute a command that should succeed, one argument, returnOutput=False, ignoring stderr.
      Do this all bundled into the command list, just to check that this works as expected.
      Command-line: python -V
      """
      command=[sys.executable, "-V", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True)
      self.assertEqual(0, result)
      self.assertEqual(None, output)

   def testExecuteCommand_046(self):
      """
      Execute a command that should succeed, two arguments, returnOutput=False, ignoring stderr.
      Do this all bundled into the command list, just to check that this works as expected.
      Command-line: python -c "import sys; print(sys.argv[1:]); sys.exit(0)"
      """
      command=[sys.executable, "-c", "import sys; print(sys.argv[1:]); sys.exit(0)", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True)
      self.assertEqual(0, result)
      self.assertEqual(None, output)

   def testExecuteCommand_047(self):
      """
      Execute a command that should succeed, three arguments, returnOutput=False, ignoring stderr.
      Do this all bundled into the command list, just to check that this works as expected.
      Command-line: python -c "import sys; print(sys.argv[1:]); sys.exit(0)" first
      """
      command=[sys.executable, "-c", "import sys; print(sys.argv[1:]); sys.exit(0)", "first", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True)
      self.assertEqual(0, result)
      self.assertEqual(None, output)

   def testExecuteCommand_048(self):
      """
      Execute a command that should succeed, four arguments, returnOutput=False, ignoring stderr.
      Do this all bundled into the command list, just to check that this works as expected.
      Command-line: python -c "import sys; print(sys.argv[1:]); sys.exit(0)" first second
      """
      command=[sys.executable, "-c", "import sys; print(sys.argv[1:]); sys.exit(0)", "first", "second", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True)
      self.assertEqual(0, result)
      self.assertEqual(None, output)

   def testExecuteCommand_049(self):
      """
      Execute a command that should fail, returnOutput=False, ignoring stderr.
      Do this all bundled into the command list, just to check that this works as expected.
      Command-line: python -c "import sys; print(sys.argv[1:]); sys.exit(1)"
      """
      command=[sys.executable, "-c", "import sys; print(sys.argv[1:]); sys.exit(1)", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True)
      self.assertNotEqual(0, result)
      self.assertEqual(None, output)

   def testExecuteCommand_050(self):
      """
      Execute a command that should fail, more arguments, returnOutput=False, ignoring stderr.
      Do this all bundled into the command list, just to check that this works as expected.
      Command-line: python -c "import sys; print(sys.argv[1:]); sys.exit(1)" first second
      """
      command=[sys.executable, "-c", "import sys; print(sys.argv[1:]); sys.exit(1)", "first", "second", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True)
      self.assertNotEqual(0, result)
      self.assertEqual(None, output)

   def testExecuteCommand_051(self):
      """
      Execute a command that should succeed, no arguments, returnOutput=True, ignoring stderr.
      Do this all bundled into the command list, just to check that this works as expected.
      Command-line: echo
      """
      command=["echo", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
      self.assertEqual(0, result)
      self.assertEqual(1, len(output))
      self.assertEqual(os.linesep, output[0])

   def testExecuteCommand_052(self):
      """
      Execute a command that should succeed, one argument, returnOutput=True, ignoring stderr.
      Do this all bundled into the command list, just to check that this works as expected.
      Command-line: python -c "import sys; print('X', file=sys.stderr)"
      """
      command=[sys.executable, "-c", "import sys; print('X', file=sys.stderr)", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=False)
      self.assertEqual(0, result)
      self.assertEqual(1, len(output))
      self.assertEqual("X%s" % os.linesep, output[0])  # prove stderr is captured
      (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
      self.assertEqual(0, result)
      self.assertEqual(0, len(output))  # prove stderr is now ignored

   def testExecuteCommand_053(self):
      """
      Execute a command that should succeed, two arguments, returnOutput=True, ignoring stderr.
      Do this all bundled into the command list, just to check that this works as expected.
      Command-line: python -c "import sys; print(''); sys.exit(0)"
      """
      command=[sys.executable, "-c", "import sys; print(''); sys.exit(0)", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
      self.assertEqual(0, result)
      self.assertEqual(1, len(output))
      self.assertEqual(os.linesep, output[0])

   def testExecuteCommand_054(self):
      """
      Execute a command that should succeed, three arguments, returnOutput=True, ignoring stderr.
      Do this all bundled into the command list, just to check that this works as expected.
      Command-line: python -c "import sys; print('%s' % (sys.argv[1])); sys.exit(0)" first
      """
      command=[sys.executable, "-c", "import sys; print('%s' % (sys.argv[1])); sys.exit(0)", "first", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
      self.assertEqual(0, result)
      self.assertEqual(1, len(output))
      self.assertEqual("first%s" % os.linesep, output[0])

   def testExecuteCommand_055(self):
      """
      Execute a command that should succeed, four arguments, returnOutput=True, ignoring stderr.
      Do this all bundled into the command list, just to check that this works as expected.
Command-line: python -c "import sys; print('%s' % sys.argv[1]); print('%s' % sys.argv[2]); sys.exit(0)" first second """ command=[sys.executable, "-c", "import sys; print('%s' % sys.argv[1]); print('%s' % sys.argv[2]); sys.exit(0)", "first", "second", ] args=[] (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True) self.assertEqual(0, result) self.assertEqual(2, len(output)) self.assertEqual("first%s" % os.linesep, output[0]) self.assertEqual("second%s" % os.linesep, output[1]) def testExecuteCommand_056(self): """ Execute a command that should fail, returnOutput=True, ignoring stderr. Do this all bundled into the command list, just to check that this works as expected. Command-line: python -c "import sys; print(''); sys.exit(1)" """ command=[sys.executable, "-c", "import sys; print(''); sys.exit(1)", ] args=[] (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True) self.assertNotEqual(0, result) self.assertEqual(1, len(output)) self.assertEqual(os.linesep, output[0]) def testExecuteCommand_057(self): """ Execute a command that should fail, more arguments, returnOutput=True, ignoring stderr. Do this all bundled into the command list, just to check that this works as expected. Command-line: python -c "import sys; print('%s' % sys.argv[1]); print('%s' % sys.argv[2]); sys.exit(1)" first second """ command=[sys.executable, "-c", "import sys; print('%s' % sys.argv[1]); print('%s' % sys.argv[2]); sys.exit(1)", "first", "second", ] args=[] (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True) self.assertNotEqual(0, result) self.assertEqual(2, len(output)) self.assertEqual("first%s" % os.linesep, output[0]) self.assertEqual("second%s" % os.linesep, output[1]) def testExecuteCommand_058(self): """ Execute a command that should succeed, no arguments, returnOutput=False, using outputFile. Do this all bundled into the command list, just to check that this works as expected. 
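The returnOutput/ignoreStderr contract these tests exercise can be mimicked with the standard library alone. This is a sketch for illustration only: run_capture is a hypothetical helper, not Cedar Backup's executeCommand(), and it reproduces just the behavior the assertions above rely on.

```python
import subprocess
import sys

def run_capture(cmd, ignore_stderr=False):
    # Hypothetical stand-in for executeCommand(..., returnOutput=True):
    # returns (exit status, list of output lines with line endings kept).
    # With ignore_stderr=True stderr is discarded; otherwise it is merged
    # into the captured output, which is what the tests above assert.
    stderr = subprocess.DEVNULL if ignore_stderr else subprocess.STDOUT
    completed = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=stderr)
    lines = completed.stdout.decode().splitlines(keepends=True)
    return completed.returncode, lines

result, output = run_capture(
    [sys.executable, "-c", "import sys; print(sys.argv[1]); sys.exit(0)", "first"])
```

Here result is 0 and output holds a single line "first" plus the platform line ending, mirroring the single-argument cases above.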
Command-line: echo """ command=["echo", ] args=[] filename = self.getTempfile() with open(filename, "wb") as outputFile: result = executeCommand(command, args, returnOutput=False, outputFile=outputFile)[0] self.assertEqual(0, result) self.assertTrue(os.path.exists(filename)) with open(filename) as f: output = f.readlines() self.assertEqual(1, len(output)) self.assertEqual(os.linesep, output[0]) def testExecuteCommand_059(self): """ Execute a command that should succeed, one argument, returnOutput=False, using outputFile. Do this all bundled into the command list, just to check that this works as expected. Command-line: python -V """ command=[sys.executable, "-V"] args=[] filename = self.getTempfile() with open(filename, "wb") as outputFile: result = executeCommand(command, args, returnOutput=False, outputFile=outputFile)[0] self.assertEqual(0, result) self.assertTrue(os.path.exists(filename)) with open(filename) as f: output = f.readlines() self.assertEqual(1, len(output)) self.assertTrue(output[0].startswith("Python")) def testExecuteCommand_060(self): """ Execute a command that should succeed, two arguments, returnOutput=False, using outputFile. Do this all bundled into the command list, just to check that this works as expected. Command-line: python -c "import sys; print(''); sys.exit(0)" """ command=[sys.executable, "-c", "import sys; print(''); sys.exit(0)", ] args=[] filename = self.getTempfile() with open(filename, "wb") as outputFile: result = executeCommand(command, args, returnOutput=False, outputFile=outputFile)[0] self.assertEqual(0, result) self.assertTrue(os.path.exists(filename)) with open(filename) as f: output = f.readlines() self.assertEqual(1, len(output)) self.assertEqual(os.linesep, output[0]) def testExecuteCommand_061(self): """ Execute a command that should succeed, three arguments, returnOutput=False, using outputFile. Do this all bundled into the command list, just to check that this works as expected. 
Command-line: python -c "import sys; print('%s' % (sys.argv[1])); sys.exit(0)" first """ command=[sys.executable, "-c", "import sys; print('%s' % (sys.argv[1])); sys.exit(0)", "first", ] args=[] filename = self.getTempfile() with open(filename, "wb") as outputFile: result = executeCommand(command, args, returnOutput=False, outputFile=outputFile)[0] self.assertEqual(0, result) self.assertTrue(os.path.exists(filename)) with open(filename) as f: output = f.readlines() self.assertEqual(1, len(output)) self.assertEqual("first%s" % os.linesep, output[0]) def testExecuteCommand_062(self): """ Execute a command that should succeed, four arguments, returnOutput=False, using outputFile. Do this all bundled into the command list, just to check that this works as expected. Command-line: python -c "import sys; print('%s' % sys.argv[1]); print('%s' % sys.argv[2]); sys.exit(0)" first second """ command=[sys.executable, "-c", "import sys; print('%s' % sys.argv[1]); print('%s' % sys.argv[2]); sys.exit(0)", "first", "second", ] args=[] filename = self.getTempfile() with open(filename, "wb") as outputFile: result = executeCommand(command, args, returnOutput=False, outputFile=outputFile)[0] self.assertEqual(0, result) self.assertTrue(os.path.exists(filename)) with open(filename) as f: output = f.readlines() self.assertEqual(2, len(output)) self.assertEqual("first%s" % os.linesep, output[0]) self.assertEqual("second%s" % os.linesep, output[1]) def testExecuteCommand_063(self): """ Execute a command that should fail, returnOutput=False, using outputFile. Do this all bundled into the command list, just to check that this works as expected. 
Command-line: python -c "import sys; print(''); sys.exit(1)" """ command=[sys.executable, "-c", "import sys; print(''); sys.exit(1)", ] args=[] filename = self.getTempfile() with open(filename, "wb") as outputFile: result = executeCommand(command, args, returnOutput=False, outputFile=outputFile)[0] self.assertNotEqual(0, result) self.assertTrue(os.path.exists(filename)) with open(filename) as f: output = f.readlines() self.assertEqual(1, len(output)) self.assertEqual(os.linesep, output[0]) def testExecuteCommand_064(self): """ Execute a command that should fail, more arguments, returnOutput=False, using outputFile. Do this all bundled into the command list, just to check that this works as expected. Command-line: python -c "import sys; print('%s' % sys.argv[1]); print('%s' % sys.argv[2]); sys.exit(1)" first second """ command=[sys.executable, "-c", "import sys; print('%s' % sys.argv[1]); print('%s' % sys.argv[2]); sys.exit(1)", "first", "second", ] args=[] filename = self.getTempfile() with open(filename, "wb") as outputFile: result = executeCommand(command, args, returnOutput=False, outputFile=outputFile)[0] self.assertNotEqual(0, result) self.assertTrue(os.path.exists(filename)) with open(filename) as f: output = f.readlines() self.assertEqual(2, len(output)) self.assertEqual("first%s" % os.linesep, output[0]) self.assertEqual("second%s" % os.linesep, output[1]) def testExecuteCommand_065(self): """ Execute a command with a huge amount of output all on stdout. The output should contain only data on stdout, and ignoreStderr should be True. This test helps confirm that the function doesn't hang when there is either a lot of data or a lot of data to ignore. 
""" lotsoflines = self.resources['lotsoflines.py'] command=[sys.executable, lotsoflines, "stdout", ] args = [] filename = self.getTempfile() with open(filename, "wb") as outputFile: result = executeCommand(command, args, ignoreStderr=True, returnOutput=False, outputFile=outputFile)[0] self.assertEqual(0, result) length = 0 with open(filename) as contents: for i in contents: length += 1 self.assertEqual(100000, length) def testExecuteCommand_066(self): """ Execute a command with a huge amount of output all on stdout. The output should contain only data on stdout, and ignoreStderr should be False. This test helps confirm that the function doesn't hang when there is either a lot of data or a lot of data to ignore. """ lotsoflines = self.resources['lotsoflines.py'] command=[sys.executable, lotsoflines, "stdout", ] args = [] filename = self.getTempfile() with open(filename, "wb") as outputFile: result = executeCommand(command, args, ignoreStderr=False, returnOutput=False, outputFile=outputFile)[0] self.assertEqual(0, result) length = 0 with open(filename) as contents: for i in contents: length += 1 self.assertEqual(100000, length) def testExecuteCommand_067(self): """ Execute a command with a huge amount of output all on stderr. The output should contain only data on stderr, and ignoreStderr should be True. This test helps confirm that the function doesn't hang when there is either a lot of data or a lot of data to ignore. """ lotsoflines = self.resources['lotsoflines.py'] command=[sys.executable, lotsoflines, "stderr", ] args = [] filename = self.getTempfile() with open(filename, "wb") as outputFile: result = executeCommand(command, args, ignoreStderr=True, returnOutput=False, outputFile=outputFile)[0] self.assertEqual(0, result) length = 0 with open(filename) as contents: for i in contents: length += 1 self.assertEqual(0, length) def testExecuteCommand_068(self): """ Execute a command with a huge amount of output all on stderr. 
The output should contain only data on stderr, and ignoreStderr should be False. This test helps confirm that the function doesn't hang when there is either a lot of data or a lot of data to ignore. """ lotsoflines = self.resources['lotsoflines.py'] command=[sys.executable, lotsoflines, "stderr", ] args = [] filename = self.getTempfile() with open(filename, "wb") as outputFile: result = executeCommand(command, args, ignoreStderr=False, returnOutput=False, outputFile=outputFile)[0] self.assertEqual(0, result) length = 0 with open(filename) as contents: for i in contents: length += 1 self.assertEqual(100000, length) def testExecuteCommand_069(self): """ Execute a command with a huge amount of output on both stdout and stderr. The output should contain data on stdout and stderr, and ignoreStderr should be True. This test helps confirm that the function doesn't hang when there is either a lot of data or a lot of data to ignore. """ lotsoflines = self.resources['lotsoflines.py'] command=[sys.executable, lotsoflines, "both", ] args = [] filename = self.getTempfile() with open(filename, "wb") as outputFile: result = executeCommand(command, args, ignoreStderr=True, returnOutput=False, outputFile=outputFile)[0] self.assertEqual(0, result) length = 0 with open(filename) as contents: for i in contents: length += 1 self.assertEqual(100000, length) def testExecuteCommand_070(self): """ Execute a command with a huge amount of output on both stdout and stderr. The output should contain data on stdout and stderr, and ignoreStderr should be False. This test helps confirm that the function doesn't hang when there is either a lot of data or a lot of data to ignore. 
""" lotsoflines = self.resources['lotsoflines.py'] command=[sys.executable, lotsoflines, "both", ] args = [] filename = self.getTempfile() with open(filename, "wb") as outputFile: result = executeCommand(command, args, ignoreStderr=False, returnOutput=False, outputFile=outputFile)[0] self.assertEqual(0, result) length = 0 with open(filename) as contents: for i in contents: length += 1 self.assertEqual(100000*2, length) #################### # Test encodePath() #################### def testEncodePath_002(self): """ Test with a simple string, empty. """ path = "" safePath = encodePath(path) self.assertTrue(isinstance(safePath, str)) self.assertEqual(path, safePath) def testEncodePath_003(self): """ Test with an simple string, an ascii word. """ path = "whatever" safePath = encodePath(path) self.assertTrue(isinstance(safePath, str)) self.assertEqual(path, safePath) def testEncodePath_004(self): """ Test with simple string, a complete path. """ path = "/usr/share/doc/xmltv/README.Debian" safePath = encodePath(path) self.assertTrue(isinstance(safePath, str)) self.assertEqual(path, safePath) def testEncodePath_005(self): """ Test with simple string, a non-ascii path. """ path = "\xe2\x99\xaa\xe2\x99\xac" safePath = encodePath(path) self.assertTrue(isinstance(safePath, str)) self.assertEqual(path, safePath) def testEncodePath_006(self): """ Test with a simple string, empty. """ path = "" safePath = encodePath(path) self.assertTrue(isinstance(safePath, str)) self.assertEqual(path, safePath) def testEncodePath_007(self): """ Test with an simple string, an ascii word. """ path = "whatever" safePath = encodePath(path) self.assertTrue(isinstance(safePath, str)) self.assertEqual(path, safePath) def testEncodePath_008(self): """ Test with simple string, a complete path. 
""" path = "/usr/share/doc/xmltv/README.Debian" safePath = encodePath(path) self.assertTrue(isinstance(safePath, str)) self.assertEqual(path, safePath) def testEncodePath_009(self): """ Test with simple string, a non-ascii path. """ encoding = sys.getfilesystemencoding() or sys.getdefaultencoding() path = "\xe2\x99\xaa\xe2\x99\xac" safePath = encodePath(path) self.assertTrue(isinstance(safePath, str)) self.assertEqual("\xe2\x99\xaa\xe2\x99\xac", safePath) ##################### # Test convertSize() ###################### def testConvertSize_001(self): """ Test valid conversion from bytes to bytes. """ fromUnit = UNIT_BYTES toUnit = UNIT_BYTES size = 10.0 result = convertSize(size, fromUnit, toUnit) self.assertEqual(result, size) def testConvertSize_002(self): """ Test valid conversion from sectors to bytes and back. """ fromUnit = UNIT_SECTORS toUnit = UNIT_BYTES size = 10 result1 = convertSize(size, fromUnit, toUnit) self.assertEqual(10*2048, result1) result2 = convertSize(result1, toUnit, fromUnit) self.assertEqual(result2, size) def testConvertSize_003(self): """ Test valid conversion from kbytes to bytes and back. """ fromUnit = UNIT_KBYTES toUnit = UNIT_BYTES size = 10 result1 = convertSize(size, fromUnit, toUnit) self.assertEqual(10*1024, result1) result2 = convertSize(result1, toUnit, fromUnit) self.assertEqual(result2, size) def testConvertSize_004(self): """ Test valid conversion from mbytes to bytes and back. """ fromUnit = UNIT_MBYTES toUnit = UNIT_BYTES size = 10 result1 = convertSize(size, fromUnit, toUnit) self.assertEqual(10*1024*1024, result1) result2 = convertSize(result1, toUnit, fromUnit) self.assertEqual(result2, size) def testConvertSize_005(self): """ Test valid conversion from gbytes to bytes and back. 
""" fromUnit = UNIT_GBYTES toUnit = UNIT_BYTES size = 10 result1 = convertSize(size, fromUnit, toUnit) self.assertEqual(10*1024*1024*1024, result1) result2 = convertSize(result1, toUnit, fromUnit) self.assertEqual(result2, size) def testConvertSize_006(self): """ Test valid conversion from mbytes to kbytes and back. """ fromUnit = UNIT_MBYTES toUnit = UNIT_KBYTES size = 10 result1 = convertSize(size, fromUnit, toUnit) self.assertEqual(size*1024, result1) result2 = convertSize(result1, toUnit, fromUnit) self.assertEqual(result2, size) def testConvertSize_007(self): """ Test with an invalid from unit (None). """ fromUnit = None toUnit = UNIT_BYTES size = 10 self.assertRaises(ValueError, convertSize, size, fromUnit, toUnit) def testConvertSize_008(self): """ Test with an invalid from unit. """ fromUnit = 333 toUnit = UNIT_BYTES size = 10 self.assertRaises(ValueError, convertSize, size, fromUnit, toUnit) def testConvertSize_009(self): """ Test with an invalid to unit (None) """ fromUnit = UNIT_BYTES toUnit = None size = 10 self.assertRaises(ValueError, convertSize, size, fromUnit, toUnit) def testConvertSize_010(self): """ Test with an invalid to unit. """ fromUnit = UNIT_BYTES toUnit = "ken" size = 10 self.assertRaises(ValueError, convertSize, size, fromUnit, toUnit) def testConvertSize_011(self): """ Test with an invalid quantity (None) """ fromUnit = UNIT_BYTES toUnit = UNIT_BYTES size = None self.assertRaises(ValueError, convertSize, size, fromUnit, toUnit) def testConvertSize_012(self): """ Test with an invalid quantity (not a floating point). """ fromUnit = UNIT_BYTES toUnit = UNIT_BYTES size = "blech" self.assertRaises(ValueError, convertSize, size, fromUnit, toUnit) #################### # Test nullDevice() ##################### def testNullDevice_001(self): """ Test that the function behaves sensibly. 
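The convertSize() tests pin down the unit ratios exactly (sectors are 2048 bytes, kB/MB/GB scale by 1024, and bad units or sizes raise ValueError). A minimal sketch consistent with those assertions — the unit constants and sector size here are assumptions inferred from the tests, not Cedar Backup's actual definitions:

```python
# Hypothetical unit constants, inferred from the assertions above.
UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES, UNIT_SECTORS = range(5)

_BYTES_PER_UNIT = {
    UNIT_BYTES: 1.0,
    UNIT_KBYTES: 1024.0,
    UNIT_MBYTES: 1024.0 * 1024.0,
    UNIT_GBYTES: 1024.0 * 1024.0 * 1024.0,
    UNIT_SECTORS: 2048.0,  # sector size implied by the sectors<->bytes test
}

def convert_size(size, from_unit, to_unit):
    """Convert a size between units, raising ValueError on bad input."""
    if size is None:
        raise ValueError("Size must be a number.")
    try:
        value = float(size)
    except (TypeError, ValueError):
        raise ValueError("Size must be a number.")
    if from_unit not in _BYTES_PER_UNIT or to_unit not in _BYTES_PER_UNIT:
        raise ValueError("Unknown unit.")
    return value * _BYTES_PER_UNIT[from_unit] / _BYTES_PER_UNIT[to_unit]
```

Round-tripping through bytes and back is lossless for these power-of-two factors, which is why the tests can assert exact equality.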
""" device = nullDevice() self.assertEqual("/dev/null", device) ###################### # Test displayBytes() ###################### def testDisplayBytes_001(self): """ Test display for a positive value < 1 KB """ bytes = 12 # pylint: disable=W0622 result = displayBytes(bytes) self.assertEqual("12 bytes", result) result = displayBytes(bytes, 3) self.assertEqual("12 bytes", result) def testDisplayBytes_002(self): """ Test display for a negative value < 1 KB """ bytes = -12 # pylint: disable=W0622 result = displayBytes(bytes) self.assertEqual("-12 bytes", result) result = displayBytes(bytes, 3) self.assertEqual("-12 bytes", result) def testDisplayBytes_003(self): """ Test display for a positive value = 1kB """ bytes = 1024 # pylint: disable=W0622 result = displayBytes(bytes) self.assertEqual("1.00 kB", result) result = displayBytes(bytes, 3) self.assertEqual("1.000 kB", result) def testDisplayBytes_004(self): """ Test display for a positive value >= 1kB """ bytes = 5678 # pylint: disable=W0622 result = displayBytes(bytes) self.assertEqual("5.54 kB", result) result = displayBytes(bytes, 3) self.assertEqual("5.545 kB", result) def testDisplayBytes_005(self): """ Test display for a negative value >= 1kB """ bytes = -5678 # pylint: disable=W0622 result = displayBytes(bytes) self.assertEqual("-5.54 kB", result) result = displayBytes(bytes, 3) self.assertEqual("-5.545 kB", result) def testDisplayBytes_006(self): """ Test display for a positive value = 1MB """ bytes = 1024.0 * 1024.0 # pylint: disable=W0622 result = displayBytes(bytes) self.assertEqual("1.00 MB", result) result = displayBytes(bytes, 3) self.assertEqual("1.000 MB", result) def testDisplayBytes_007(self): """ Test display for a positive value >= 1MB """ bytes = 72372224 # pylint: disable=W0622 result = displayBytes(bytes) self.assertEqual("69.02 MB", result) result = displayBytes(bytes, 3) self.assertEqual("69.020 MB", result) def testDisplayBytes_008(self): """ Test display for a negative value >= 1MB """ 
bytes = -72372224.0 # pylint: disable=W0622 result = displayBytes(bytes) self.assertEqual("-69.02 MB", result) result = displayBytes(bytes, 3) self.assertEqual("-69.020 MB", result) def testDisplayBytes_009(self): """ Test display for a positive value = 1GB """ bytes = 1024.0 * 1024.0 * 1024.0 # pylint: disable=W0622 result = displayBytes(bytes) self.assertEqual("1.00 GB", result) result = displayBytes(bytes, 3) self.assertEqual("1.000 GB", result) def testDisplayBytes_010(self): """ Test display for a positive value >= 1GB """ bytes = 4.4 * 1024.0 * 1024.0 * 1024.0 # pylint: disable=W0622 result = displayBytes(bytes) self.assertEqual("4.40 GB", result) result = displayBytes(bytes, 3) self.assertEqual("4.400 GB", result) def testDisplayBytes_011(self): """ Test display for a negative value >= 1GB """ bytes = -1234567891011 # pylint: disable=W0622 result = displayBytes(bytes) self.assertEqual("-1149.78 GB", result) result = displayBytes(bytes, 3) self.assertEqual("-1149.781 GB", result) def testDisplayBytes_012(self): """ Test display with an invalid quantity (None). """ bytes = None # pylint: disable=W0622 self.assertRaises(ValueError, displayBytes, bytes) def testDisplayBytes_013(self): """ Test display with an invalid quantity (not a floating point). """ bytes = "ken" # pylint: disable=W0622 self.assertRaises(ValueError, displayBytes, bytes) ######################### # Test deriveDayOfWeek() ######################### def testDeriveDayOfWeek_001(self): """ Test for valid day names. """ self.assertEqual(0, deriveDayOfWeek("monday")) self.assertEqual(1, deriveDayOfWeek("tuesday")) self.assertEqual(2, deriveDayOfWeek("wednesday")) self.assertEqual(3, deriveDayOfWeek("thursday")) self.assertEqual(4, deriveDayOfWeek("friday")) self.assertEqual(5, deriveDayOfWeek("saturday")) self.assertEqual(6, deriveDayOfWeek("sunday")) def testDeriveDayOfWeek_002(self): """ Test for invalid day names. 
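The displayBytes() assertions above fully determine the formatting rules: quantities under 1 kB print as whole bytes, larger magnitudes scale through kB and MB, and the scale tops out at GB (note "-1149.78 GB" rather than a TB unit). A hedged sketch of that contract — an illustration inferred from the tests, not the library's implementation:

```python
def display_bytes(quantity, digits=2):
    # Hypothetical sketch of the formatting the displayBytes() tests assert:
    # under 1 kB -> "%d bytes"; otherwise scale through kB/MB, capped at GB.
    if quantity is None:
        raise ValueError("Quantity must be a number.")
    try:
        value = float(quantity)
    except (TypeError, ValueError):
        raise ValueError("Quantity must be a number.")
    if abs(value) < 1024.0:
        return "%d bytes" % int(value)
    for unit, factor in (("kB", 1024.0), ("MB", 1024.0 ** 2), ("GB", 1024.0 ** 3)):
        if abs(value) < factor * 1024.0 or unit == "GB":
            return "%.*f %s" % (digits, value / factor, unit)
```

The optional digits argument matches the two-argument calls in the tests, which request three decimal places instead of the default two.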
""" self.assertEqual(-1, deriveDayOfWeek("bogus")) ####################### # Test isStartOfWeek() ####################### def testIsStartOfWeek001(self): """ Test positive case. """ day = time.localtime().tm_wday if day == 0: result = isStartOfWeek("monday") elif day == 1: result = isStartOfWeek("tuesday") elif day == 2: result = isStartOfWeek("wednesday") elif day == 3: result = isStartOfWeek("thursday") elif day == 4: result = isStartOfWeek("friday") elif day == 5: result = isStartOfWeek("saturday") elif day == 6: result = isStartOfWeek("sunday") self.assertEqual(True, result) def testIsStartOfWeek002(self): """ Test negative case. """ day = time.localtime().tm_wday if day == 0: result = isStartOfWeek("friday") elif day == 1: result = isStartOfWeek("saturday") elif day == 2: result = isStartOfWeek("sunday") elif day == 3: result = isStartOfWeek("monday") elif day == 4: result = isStartOfWeek("tuesday") elif day == 5: result = isStartOfWeek("wednesday") elif day == 6: result = isStartOfWeek("thursday") self.assertEqual(False, result) ############################# # Test buildNormalizedPath() ############################# def testBuildNormalizedPath001(self): """ Test for a None path. """ self.assertRaises(ValueError, buildNormalizedPath, None) def testBuildNormalizedPath002(self): """ Test for an empty path. """ path = "" expected = "" actual = buildNormalizedPath(path) self.assertEqual(expected, actual) def testBuildNormalizedPath003(self): """ Test for "." """ path = "." expected = "_" actual = buildNormalizedPath(path) self.assertEqual(expected, actual) def testBuildNormalizedPath004(self): """ Test for ".." """ path = ".." expected = "_." actual = buildNormalizedPath(path) self.assertEqual(expected, actual) def testBuildNormalizedPath005(self): """ Test for "..........." """ path = ".........." expected = "_........." 
actual = buildNormalizedPath(path) self.assertEqual(expected, actual) def testBuildNormalizedPath006(self): """ Test for "/" """ path = "/" expected = "-" actual = buildNormalizedPath(path) self.assertEqual(expected, actual) def testBuildNormalizedPath007(self): """ Test for "\\" """ path = "\\" expected = "-" actual = buildNormalizedPath(path) self.assertEqual(expected, actual) def testBuildNormalizedPath008(self): """ Test for "/." """ path = "/." expected = "_" actual = buildNormalizedPath(path) self.assertEqual(expected, actual) def testBuildNormalizedPath009(self): """ Test for "/.." """ path = "/.." expected = "_." actual = buildNormalizedPath(path) self.assertEqual(expected, actual) def testBuildNormalizedPath010(self): """ Test for "/..." """ path = "/..." expected = "_.." actual = buildNormalizedPath(path) self.assertEqual(expected, actual) def testBuildNormalizedPath011(self): r""" Test for "\." """ path = r"\." expected = "_" actual = buildNormalizedPath(path) self.assertEqual(expected, actual) def testBuildNormalizedPath012(self): r""" Test for "\.." """ path = r"\.." expected = "_." actual = buildNormalizedPath(path) self.assertEqual(expected, actual) def testBuildNormalizedPath013(self): r""" Test for "\..." """ path = r"\..." expected = "_.." 
actual = buildNormalizedPath(path) self.assertEqual(expected, actual) def testBuildNormalizedPath014(self): """ Test for "/var/log/apache/httpd.log.1" """ path = "/var/log/apache/httpd.log.1" expected = "var-log-apache-httpd.log.1" actual = buildNormalizedPath(path) self.assertEqual(expected, actual) def testBuildNormalizedPath015(self): """ Test for "var/log/apache/httpd.log.1" """ path = "var/log/apache/httpd.log.1" expected = "var-log-apache-httpd.log.1" actual = buildNormalizedPath(path) self.assertEqual(expected, actual) def testBuildNormalizedPath016(self): """ Test for "\\var/log/apache\\httpd.log.1" """ path = "\\var/log/apache\\httpd.log.1" expected = "var-log-apache-httpd.log.1" actual = buildNormalizedPath(path) self.assertEqual(expected, actual) def testBuildNormalizedPath017(self): """ Test for "/Big Nasty Base Path With Spaces/something/else/space s/file. log .2 ." """ path = "/Big Nasty Base Path With Spaces/something/else/space s/file. log .2 ." expected = "Big_Nasty_Base_Path_With_Spaces-something-else-space_s-file.__log___.2_." actual = buildNormalizedPath(path) self.assertEqual(expected, actual) ########################## # Test splitCommandLine() ########################## def testSplitCommandLine_001(self): """ Test for a None command line. """ commandLine = None self.assertRaises(ValueError, splitCommandLine, commandLine) def testSplitCommandLine_002(self): """ Test for an empty command line. """ commandLine = "" result = splitCommandLine(commandLine) self.assertEqual([], result) def testSplitCommandLine_003(self): """ Test for a command line with no quoted arguments. """ commandLine = "cback --verbose stage store purge" result = splitCommandLine(commandLine) self.assertEqual(["cback", "--verbose", "stage", "store", "purge", ], result) def testSplitCommandLine_004(self): """ Test for a command line with double-quoted arguments. 
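Taken together, the buildNormalizedPath() cases above describe a small rewriting scheme: a bare root becomes "-", one leading separator is dropped, remaining separators become dashes, spaces become underscores, and a leading dot is hidden as an underscore. A sketch inferred from those cases (the real implementation lives in CedarBackup3.util and may differ in details):

```python
def build_normalized_path(path):
    # Hypothetical reconstruction of the contract the tests above assert.
    if path is None:
        raise ValueError("Cannot normalize path None.")
    if path in ("/", "\\"):
        return "-"  # a bare root collapses to a single dash
    normalized = path
    if normalized.startswith(("/", "\\")):
        normalized = normalized[1:]  # drop one leading separator
    normalized = normalized.replace("/", "-").replace("\\", "-").replace(" ", "_")
    if normalized.startswith("."):
        normalized = "_" + normalized[1:]  # hide a leading dot
    return normalized
```

This reproduces the dotted-path cases too: "/..." strips to "...", whose leading dot is then rewritten, yielding "_..".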
""" commandLine = 'cback "this is a really long double-quoted argument"' result = splitCommandLine(commandLine) self.assertEqual(["cback", "this is a really long double-quoted argument", ], result) def testSplitCommandLine_005(self): """ Test for a command line with single-quoted arguments. """ commandLine = "cback 'this is a really long single-quoted argument'" result = splitCommandLine(commandLine) self.assertEqual(["cback", "'this", "is", "a", "really", "long", "single-quoted", "argument'", ], result) ######################### # Test dereferenceLink() ######################### def testDereferenceLink_001(self): """ Test for a path that is a link, absolute=false. """ self.extractTar("tree10") path = self.buildPath(["tree10", "link002"]) expected = "file002" actual = dereferenceLink(path, absolute=False) self.assertEqual(expected, actual) def testDereferenceLink_002(self): """ Test for a path that is a link, absolute=true. """ self.extractTar("tree10") path = self.buildPath(["tree10", "link002"]) expected = self.buildPath(["tree10", "file002"]) actual = dereferenceLink(path) self.assertEqual(expected, actual) actual = dereferenceLink(path, absolute=True) self.assertEqual(expected, actual) def testDereferenceLink_003(self): """ Test for a path that is a file (not a link), absolute=false. """ self.extractTar("tree10") path = self.buildPath(["tree10", "file001"]) expected = path actual = dereferenceLink(path, absolute=False) self.assertEqual(expected, actual) def testDereferenceLink_004(self): """ Test for a path that is a file (not a link), absolute=true. """ self.extractTar("tree10") path = self.buildPath(["tree10", "file001"]) expected = path actual = dereferenceLink(path) self.assertEqual(expected, actual) actual = dereferenceLink(path, absolute=True) self.assertEqual(expected, actual) def testDereferenceLink_005(self): """ Test for a path that is a directory (not a link), absolute=false. 
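The splitCommandLine() tests show an asymmetry worth noting: double quotes group words into one argument, while single quotes pass through literally (shlex.split would honor both, so it cannot be the mechanism). A regex-based sketch consistent with the assertions — an inference from the tests, not the util.py source:

```python
import re

def split_command_line(command_line):
    # Hypothetical sketch matching the splitCommandLine() tests: only double
    # quotes group words; single quotes are treated as ordinary characters.
    if command_line is None:
        raise ValueError("Cannot split command line None.")
    fields = re.findall(r'[^ "]+|"[^"]*"', command_line)
    return [field.replace('"', "") for field in fields]
```

An empty command line yields an empty list, matching the second test case.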
""" self.extractTar("tree10") path = self.buildPath(["tree10", "dir001"]) expected = path actual = dereferenceLink(path, absolute=False) self.assertEqual(expected, actual) def testDereferenceLink_006(self): """ Test for a path that is a directory (not a link), absolute=true. """ self.extractTar("tree10") path = self.buildPath(["tree10", "dir001"]) expected = path actual = dereferenceLink(path) self.assertEqual(expected, actual) actual = dereferenceLink(path, absolute=True) self.assertEqual(expected, actual) def testDereferenceLink_007(self): """ Test for a path that does not exist, absolute=false. """ self.extractTar("tree10") path = self.buildPath(["tree10", "blech"]) expected = path actual = dereferenceLink(path, absolute=False) self.assertEqual(expected, actual) def testDereferenceLink_008(self): """ Test for a path that does not exist, absolute=true. """ self.extractTar("tree10") path = self.buildPath(["tree10", "blech"]) expected = path actual = dereferenceLink(path) self.assertEqual(expected, actual) actual = dereferenceLink(path, absolute=True) self.assertEqual(expected, actual) ################################### # Test parseCommaSeparatedString() ################################### def testParseCommaSeparatedString_001(self): """ Test parseCommaSeparatedString() for a None string. """ actual = parseCommaSeparatedString(None) self.assertEqual(None, actual) def testParseCommaSeparatedString_002(self): """ Test parseCommaSeparatedString() for an empty string. """ actual = parseCommaSeparatedString("") self.assertEqual([], actual) def testParseCommaSeparatedString_003(self): """ Test parseCommaSeparatedString() for a string with one value. """ actual = parseCommaSeparatedString("ken") self.assertEqual(["ken", ], actual) def testParseCommaSeparatedString_004(self): """ Test parseCommaSeparatedString() for a string with multiple values, no spaces. 
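The parseCommaSeparatedString() worst-case test below captures the whole contract in one call: None passes through, tokens are whitespace-stripped, and empty tokens are dropped. A one-function sketch of that behavior (an illustration inferred from the tests, not the library's code):

```python
def parse_comma_separated_string(value):
    # Hypothetical sketch of the contract the tests assert: None passes
    # through, tokens are stripped, and empty tokens are discarded.
    if value is None:
        return None
    return [token.strip() for token in value.split(",") if token.strip()]
```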
""" actual = parseCommaSeparatedString("a,b,c") self.assertEqual(["a", "b", "c", ], actual) def testParseCommaSeparatedString_005(self): """ Test parseCommaSeparatedString() for a string with multiple values, with spaces. """ actual = parseCommaSeparatedString("a, b, c") self.assertEqual(["a", "b", "c", ], actual) def testParseCommaSeparatedString_006(self): """ Test parseCommaSeparatedString() for a string with multiple values, worst-case kind of value. """ actual = parseCommaSeparatedString(" one, two,three, four , five , six, seven,,eight ,") self.assertEqual(["one", "two", "three", "four", "five", "six", "seven", "eight", ], actual) ####################################################################### # Suite definition ####################################################################### def suite(): """Returns a suite containing all the test cases in this module.""" tests = [ ] tests.append(unittest.makeSuite(TestUnorderedList, 'test')) tests.append(unittest.makeSuite(TestAbsolutePathList, 'test')) tests.append(unittest.makeSuite(TestObjectTypeList, 'test')) tests.append(unittest.makeSuite(TestRestrictedContentList, 'test')) tests.append(unittest.makeSuite(TestRegexMatchList, 'test')) tests.append(unittest.makeSuite(TestRegexList, 'test')) tests.append(unittest.makeSuite(TestDirectedGraph, 'test')) tests.append(unittest.makeSuite(TestPathResolverSingleton, 'test')) tests.append(unittest.makeSuite(TestDiagnostics, 'test')) tests.append(unittest.makeSuite(TestFunctions, 'test')) return unittest.TestSuite(tests)
CedarBackup3-3.1.6/testcase/data/ — test fixture files. The XML markup of the configuration fixtures and the contents of the binary tar.gz archives were mangled in extraction; the recoverable values are summarized below rather than reproduced:
  cback.conf.7 — collect configuration: /opt/backup/collect, daily, tar, .ignore, /etc
  subversion.conf.7 — repositories /opt/public/svn/one (BDB), /opt/public/svn/two (weekly), /opt/public/svn/three (bzip2, FSFS), /opt/public/svn/four (incr, bzip2); defaults daily/gzip; exclusion patterns including .*software.* and .*database.*
  subversion.conf.3 — single repository /opt/public/svn/software, daily, gzip
  tree19.tar.gz — binary archive (contents omitted)
  (mbox configuration, filename lost in extraction) — incr/none and daily/gzip settings for /home/jimbo/mail/cedar-backup-users, /home/joebob/mail/cedar-backup-users, /home/frank/mail/cedar-backup-users, /home/jimbob/mail, /home/billiejoe/mail, /home/billybob/mail; exclusion patterns .*SPAM.* and .*JUNK.*
  capacity.conf.2 — 63.2
  tree8.tar.gz — binary archive (contents omitted)
  tree1.ini — ; Single-depth directory containing only small files [names] dirprefix = dir fileprefix = file linkprefix = link [sizes] maxdepth = 1 mindirs = 0 maxdirs = 0 minfiles = 1 maxfiles = 10 minlinks = 0 maxlinks = 0 minsize = 0 maxsize = 500
  cback.conf.22 — peer configuration: machine2, remote, /opt/backup/collect
  cback.conf.18 — values: index, example, something.whatever, example, 1
  mbox.conf.2 — daily, gzip, /home/joebob/mail/cedar-backup-users, /home/billiejoe/mail
  tree11.tar.gz — binary archive (contents omitted)
  cback.conf.19 — extension actions sysinfo, mysql, postgresql, subversion, mbox, encrypt, amazons3 mapped to CedarBackup3.extend.* executeAction, with dependency lists a,b,c,d and " one, two,three, four , five , six, seven,,eight ,"
  tree13.tar.gz — binary archive (contents omitted)
  tree4.ini — ; Higher-depth directory containing small files and directories [names] dirprefix = dir fileprefix = file linkprefix = link [sizes] maxdepth = 2 mindirs = 1 maxdirs = 10 minfiles = 1 maxfiles = 10 minlinks = 0 maxlinks = 0 minsize = 0 maxsize = 500
  subversion.conf.5 — daily, gzip, /opt/public/svn/software
  postgresql.conf.3 — user, gzip, N, database
  tree12.tar.gz — binary archive (contents omitted)
-(D*'x'6tEtx5GpFH TEl2z+4K;@神bBZ\է祜 oNK1N8*VP1k{~"#/bSV'Tr KUsF0YIn^ufhS4.PES"SW:˼hg./#z5.s>dI5zͬ6 -r^U S)v4Y[ԛ>7<H<+>\D//w_!ew6Bg2Eb#&hVﴱ%ljGSwÀ-rTL܈?F&+گ{ձ5Zm@j"8<é{iըWTwѵYU&("bUTrk #>ֺŠY9q ܟ`Cj >ڈXø43Cgx^ bՏ tUʞ?ГI!R>fXbb IIFz2!=oцF~hb|{7h(F7>vU1|Tv3#W:Z'.sǝo3\jXhB4, n|TEE)t\@¥S &z0#}6}S3vαvYDj/<(62.NJܕ(AwtP~ش15|b&*<$i/D+u+(z$$ub§8RazE^w5Iо97tR6lhY>c.5 w+z#4,cx~;k l<JrzۑCpJٕ_d+Qz˧И۪]; ֡N{ɫ2vm|%n>eЭ%R ʋo+!+@-.O}{ uo>&lwS[,m8A!& F\g'xTոr> . $~bJkiW˜m[I!YoMF;_xF*یH.%͛D OBb*).lB֎R!]2g[nCn B_Z!ȹhzHڊOؓ }"ZC^YXrnQ%gjjF0R^1lǿ|#Οs*R#~_OiibľeXw/X  L4*#>瀹c?Ͽ>_{#S>iIyk)n$ǗЩ'\_`?uy.Nsnd,2ۜ.| OOy V.wbźt~B:,+8RLJ[o?R28J"6}KDF uȔ>r#ᎁ̈SEy? |sV?L!ԗ/i9aMQ׿L|葙 CЀ|s mk[*<:#djw*p+#'?kݴ$~1cSsc\{1C ʪh)L+giTꜴKmXE*[4cТ-0XrԢ/H55(xtG׎J44F5rr ضE)ßv+('B=M,O-Fʄ"pdž>,)+*0Xrm޾;0xj *Bo9 y\yHE`k*'˘RgݐqZ k)+$0{f&p" :R0oa)WQAfu8 ^cG.(gh,$-zAkj\TRłXIJ? ᣕ|4yLJ^BhwRtyq {ߋi6wZߋRun ˯U4uE5q/XWy Jş芲-Ҿ8>K;c `l 0^;, c@0Â)Cnm A}ӣ,X㿛%~2xe$]PBŎqEhn&}?BmtYhHhbzHT܌*Ka _F8/yߣoFtDRԸRz[…; 4n<˺ŀN@h,bu+ #ڎjZkY4o3z7G'. :, .a# H^-(ܦ&@D!^%=齸n,μ jBYDGB12S#/vtDԢ3nrq&Cg XKhDDt_<*1iʂdʑ1AJ`;_0tc_s+g4mp&TѝX /aa93H.vC\̄D,!Q~L*boBg #ɜyp+%$YXK"Lr y,s$!%MG K ];cQbQ>[ԷgK9ril{1me߹yag! cO5rl ;k3AgohnYFF!CޞN q(F͖YX-R=@8Ce$x shF~KQi+B+c(%: Iٝu_SAl׸eͪr)ma2M6kG×v*9CTMX+~@c,(!J0t%nXI=55>mwG"5 :q9 ȋOʒT24Z'_l[O?TiޟU/V.Hrukv{G_٠]-^?1^8#H9Tyag?b*33hBczVSMǫɂ7p˼`||dʈ6fǜ1_)6!?- 'ms__#Z5IquIuuJ31[>p \h0?NeG(JFtkkѨjÆ"`@49aRhւR`6N,!*٫?ktVߎ@@θFsoA NC f[Qlr &Қ 5&<`VUHb 3r{sMCʷ n#QXٗfԴ mXlv[ޯj| +ŲoQL;RWs,#vO'F0e!+B1gE~-aL;-s Y% rm 9Ŧ!3U"zq3h !#ޯZvQoE!;v%vŘaS1g Bq63`Yb5:{f3|Eދx13XlؿY쉉(JBh’w13YxB5eχrɀ%s;YÐcaMPK`@I<'-H:ve܃t9,zIWx PnV1MW.4v?^4aҥ{kzR!xfĐ=,B+$-jƫ,"VY'Z֝ V~47.OҧTyUoa %Ox_P7SD$ Kvӏ/j \`C2?{dDp82L \b{ZK4@A''Ԯ/]i/e@! 
h ٜBSDǨ|ـP3ۍlA&,/Y/1tή`Eތ/E ]V;@rJG|V 6 ?/5 TS'._vvlP1?Z P7[X/j}"]3zA볪Q5n=h{MƀRHmհ |bX]&="h &`OB55?C'Ͷ6sԁ_@D׃4g ۑ\YZ$@ǡp&40(@:׃=YJ&WhO%x?e$wcb fōvxtE[b`TDmӜjSh~ `с<(OaSkl 8|k[(qM;+^E)2%k bFk9b`\VmAwa7y-BYmFW2ЀS-]kg?:P`#B%=+M;:TrGA|;`۽UxU%9oiV= )Mo?yƻ?r{@?[{C"l"_LTtPnKz#mN8V)Qݎt`lKVbX&$thե |։F` -$HViK>Z0aΈ8-Ҕ.)@=2b)ЕTTyq=tpʈ ɊYآVKҝA`T@k-N #K@`r'3 )8poJ=\~ENgjU4FL+ɣ7_G.E{ %q2( .#Ź%pߢx_@kխ 9PiO݊Y MIόD3fҺ/7grۧ/oZ慲a[` 7%γұyzʼn7)00/;}.ΑM;:|0`8=0R:xwJ0 7qsK뮢+=RD?RjiP}Mހu7 t^)2q|͞ lfr"E T2 rFByyX86 ԝ74V\D_-!^nȺW;XI]drADgD)Q4QX 6qk)vP~d05a`'<> Wm:{偤. *YRM}U\ LPJP !)w5EPa.ݶ92N%1h _rѓ RJUC.;GQ _7O$%\a8 2(ǻ0b/WoF5sZ7ZMq'G=K-@)?F宛kPFlDcift~k1EM~jwGxDp28h`u 0ϒ܋9tB{% )Xwvʎcߊ^~2 vyЁ+Qy2 >W,fB˗Y3[ ׽1..̎׿1 L@{uz|]*'}GmK 8˟NG{9(fVd@~36 RG0m NƠdОӓ,%t]8VR5 rfi"eK2f}':zx:Mk!qWE܈?i'&V'#d-O '^R u%@h4|BPx3JR?c&N@4+8]íBXV7LcyN1}\lmR~R_ ;DXLh'XሎíH+MC[0]|R׾ʣUA7 /^_ Ut+m'W\]ߍ+e }E6ɭʌB/g j?rM_mлAlQ^V&OAoy"Xj6!wcd,$9ÏAU ag5&`2YP:l:՘BD,%DP3LJ 74"_BRf==d䥿 ݲ빆?-5 ԧv:4T_i2]U ޒ[E|<-hxTW(1﮿)l҄ XzB,̫@Gj>X:pE53)j<$ Y N4LT^*3u08aEGXMq:2%OgEtd3@>;fS%K& #Fڗ}ۃXdҪ0U(pRi#{ʇԙ+D1ίy dw`:o!,;=2Kc9-+"b=Kr0Kt%i?^$#Е8`""AqD\˝7!ܳCE[Yc cÃfK#-_p3Ӭ[{-pCXD޺gidx ԍB`'dzaf?OH;#.֘J0}Ң')OBaV)mY2#乴 olUɫ+CcOz[QR&čEZ[R!s2]iP{刺O* uC,Ӏ>LT)Q'(P[ODڄxIB(^7)MYlEeai{FG[V=ase!1b9ԁ0~#0 2A8×~䋱r/1_7/`D0+0FPBB#?fIPómw70nc,&Fn%Kpu8iMpu_k%n5?s$v8-ׇܣӕx_OWØN燀^#aDvUz7_TO>NU3,EN喪KzUخ0ǛXF4N71MP(ﯮ\pu`MR>vgWkP!UWA4MJ+ [xT" fCeMp[lgvIOq-z4dp+ƥ3[ٔ ϵ$츯oyq}u׹zGq iY^tXER[/MD8]0 ;vkU5ޛTCl=gUꠏCVC0=H4i_Ǧ)_.V+5Uܝw]&Q^\J#&җv4\WomEP\v6(nmEA52$PɥAjN[L3lkD,r: KyS~5- #{FF|\³v;dZV%7Tn;/|GSi(O]y^te|/TlWT ɇ ]H81鋹oefʓ <mGcޤj':mMK`1h~]֋de-h/t-zh(H?uW*Vv shzh&`ğ,%AOPyO 9:h˸3ȋi'$^'ᲰX{u2reav(塜3cU|MD{YUfrSN+qdO%cGo`҃s]в"6(/~l%S>lJ趠 OAq`ճ6jcٙ6?Y[7*s@J. ݣ,}OnC ٱmQWSٖ@nOs 7p`~TvWN*qJȠt^(o9kw{\J)S8f+{sG {-# U2չG2!3rce Nl#YM_Q䦖v¦+x\]F%~{ %-zΫuZ]j !kB QQYl2h7Ddښ%‚,s>+dx"@๶\8x务N4˓xE>&_MB)[W>MaoOX z6lS5 U޾ȁ.M'o{OX׬x X#/[hX׼Δ~"MJ-1M`PNzno9ߜPOxe3? 
@>,`a X@$fmIc@L@R~BhєjP#˗HzÍ.h;d] C ֓ & va @;#a %!0a*Iq(+PMQbU :P=|Oa[ۖծ5h)ȷgǟ<?wsG#!^^ Nj.~BG:`6ٸ|M5tOnkAKӎ2Ѐ ơB!,7sW%g(H΀ kUA'o׫ B@s˜XMj %] h$Qmy`FwMyb]dpwp%WuHBqkX"_`*K5M~!UId VHS#cq 4A Wu PG (&#ou#1IUICPcoe-Ob >$mQOvZ" ɏ^͏O['TTa2RgO_&1 4¢ uLW&?ê淢i(@(n&uϊ#b'[}. h V8?ln>.@2*#Sа#.ĶR}A`ċY 7Ep@0&B1g`kgmłS`Dl[K9He`K{n'a/˪Bm-dY`ՌDiEɍ L ؜F 蜽Ȝ`5 LѭМɢj=KAJκ Sm,.́g',2&lM_byok넯ޠ7;Th+.VFO"Tt[%Sٻ^ܻdmǬ5`uR2/d fDF kKC~a-š8BzhD("m0pς!h2~F +5F@}h3O/ [Ă0*ۯkT@tv5W@gwbCq\lFCq ^>ٜSC38:r)P~Qh"ZI(iߐ;S-`l}C"Cr6~g.fR`kzll{9=}y0+LvtM !Wϔs hJѮ7ob:N NRh̰)JE~Ox7:9yЕT]Z$FjEi뙜88nNe'W1 1y8˞yhG%WsJ#[`f#h$lB%jǮeoo̻ ?p1  9hNE6̰m{6Nt1(9|mo+Hk0  h7bY\'.條$^ UT󂵧?̛9T ~Ȉoak} lKC6 0 H0$--(qv{v\F^veU!m0l\?.nI[蚉|5[3ireEmyR _ :A.og]W!FL9@M;@LM{9!FD6kS{ rZpFߡJY*W:bu!RȄ8S8yV ĸ z@f~r#'57ĉH^ MPÓ ۙGvk9:]M%@0*qzebb4%+(()S֦<4(.O!b4?3ȩT;.26Phz.7 wl$o*^B\|{ʁ e#~Kz G pWf-WA0@ʢiq7Ʌx&K{};c낰 yhPuKwM;.\646 ϾɾK15vM@ 2KsǠaL5Xk?p]?4^[P(Y+@{R&,|]Gv+)XiZߴ0PNՋ_jr_BᯇFGI;~?{[nZ舥#Pq!orzknN!׻ݸ+@A@(PAqcC >jGn5 Jzoܹq3Fj0 -Y[oh aYLK7f\ i|}:ГD*p olĢ G;㆟ &'{!VgA)W{0sѝUڔoIAg!IC:RX` M6!P?f)~OdƵ(`:>ۋEǹAJR< 4I08Zf,k;״|O# I4sÉ3{QT9"7@ƶ?]+˙"z[ V ~ w֑Q A!^DAΦu_ ȟs}_  XslR:L:Tb:(#A ikc'V:ĮYs|ccdY `c 9 \䲋 ?mr1H[CoMI'e ~^P!m5"#o#fS:;N\[gP1J/0lTMAδrv]*聢U:lfbO dd+uY쬚j/Hdl#ga+H}c4MݏSz?fXBxe|=e7\$ CTRQJ fwR: Wfwsq1]lȇsK7#$2m %'?oXLCsG\ z7 _ayϲoA3a53w)(2{6aO ͕C lֶl!'MOCq8dP :j151oߣ\TOLTMIǘ/ކ{ njZyN- `ܶ6*xwl册,E7}!`f6l toMt|~/wEO`=<2Co{L~]]¿}`.m#C:Sh72z xOռ=XU7\x= 9DBc9;P7\ \ h|Ld=<5 : EoE^{ƿq`q`se[NoQ)gXuǷΡqFvJuNVv.*SĹ:Q߯Y^r/rCnɷeoVbr/K 7M=d V{'x _ Iȝ*o ! 
(c~F=co[ `%@ džKM HT^4;iRh?*hí 2z# cL+jߡ⯜s5Ŧ>Bf~E[J@j݅9'ͿPJ*o)Jz7)BmqrV`_A}rD()(AHb 7z}g hh !ʅ=o$s ~a Hʞʼn^UJr>Mq)X砓ɕ)?R"RƏ-e}k' rճ41 ch"(FצFPq{]ypb_4ȿdtYRfK'"լ4ȥ|27Y+*/~ LM -GQT96d(f.jEC_jr))#A46X>8W_Ẃ l̟D/yho4ٌb<%X:dP*n3:S5nqn*#]Nh-]r'^# Z$WG32"TuD |$$&ClhÅ)u o'( | To'(QQ1Arsְ$ |Le1z +:qc¤Kbʲ&Dmf35Aw8ykn"yi掲5EP߀1TsBEhiGo83 -j/i Fe̐ ^,@LMDdjb]fy/IYt.SyxyU8=wI:LL<`лd؞BZA\4p]Et&> p)y9ܹK{-o7UPR]_K4ۍ>@$qz8Ê>\1 _?bL>HT1| $/̶%}c>|bPr(S>C*IJCl8E'9\=V2]ɴ(2eܿ2rw&A2+&^}$RtS"F1^@ʚ}%92!?F"!r籆mA  X#hQT؎ pG^4K̹ ^|*Ŕa'9 fW\f[0zEopH^1(r4&P@'oZ bC_{ p=c\%f` 2^+-&b6G& Jm@EO?~8[_ 㓙5DFWqp c^ߛPbIt70QL q]ɳz|5,6Z&ct@덆}vωRP ÁJ ^~sZ#%[O,,-2@5:tJs;Hr";49uήZܞϣ\ܬțˣxwJsi#;ꎙn;cU93OK4YfQ.sx?f!Ot)DdQecWOّz8x~+Ņ UV1BW:_+qƝ_/ 7TS_6)ұ>|]U ȣ?9 Ah;WYL*M>9 Fr+3cIǬV UpnWN#qHrd2gLo(/˺ 'K0T}=v!fDR_JH:/Z' }VNHNlX?h~;n>ʠ7gR.QI~=z-o?wT|9 B*,SIR!g+䪥tjD6y3lv4#tuvhQG^~$?Dێ@dc^+IV<2H37yRPm/ET8"0« 1jERc.r)] eي݄&R8$2tI`)h鴪KŒ߾'9 {R?'1=2v)-\!O0Ic >}447d\D)-LgJpv۫;%Io>XfFQieVr֏aG< 6׃ dmS\0QF&zǽ{J斓)F{S2sʣi<髕(Ӓ{tޘ<I#9 '5H{|т)9T~|x DSYocFMhx;bϝ~qt-Թ$82խ!ݴ=0''dest!5,AăorCuEQأ z5'FSGs;Uy@qbJlmr!O.H ;ldH?Hm,SDrri`W<-+,GClȔ̩͝E* y<9n94$2Dvj"r$ϭ;ZV.j GIrKtbyEMr?Y"tvl7  .{oQB9-dM.#'XL^",>S#aXT!;G>Cȉӎ~0\(ê u*ؘ "~"?8,pӦ-F93:0PnByfg2-F +rVc^ԅ|N&5JM F  ܑgnh}Fdo~Պp6ad=:1ƥ>B8_ gL-~Oe {z<&aS_M| gX]Ŀ=N=œ21cŏ},CMqoMR;o9=G׿oLa_xѤqNIGĨxkeNɪ-LvʌBY e)zZL'|*D3ʅQBx$zK"k?Q u9Eqșc0 _ (,+'7P n?gXz0#PJ@jlMOwP@  ,7h4 ݲ p1 D 8߿  Vz8b#š@[0e-bB )K& LC0+'! cBP+jdE)‚D0M2#T_ t9z{%8 A;iQ+Xso:i? OJ%t6vmtM'b*n%HE${FAx!ڇYr=3b~kjg-*=..]=T/> Q4`[p*ҶLMۢL"5mv$S+S<>|0ӹ$q ԍ&hפDT N,KW,ZSI q-&t+p&T4E#K_F0=XPҒB˹*Cڿu+G)ӜZHrWI0sxTM\Z ( 87{,av /PmyiՀ*JWԑJ'{7cᄣ:!s@%lU;c1fb1\0֙jPY%Z)x^:|a?ԑ1:_&5QQo6>^m1sS u^t\sxD[m(w+bh%.EɸΝ ofwtr~_wdEp-?fU_!{ۼ FʯKޘ~M: ',Y&… 9Wr.V8U@VdEpS.DT7d{l\0c*50BS]BEbN/-|rZ$6v%чm۶] FZ+c漕PpM:Tr&ٓs H 83́K:8X颹c*d%7+LLMv-eU%9L|RJ7uN[YA\ɲ)v916s@4 K.GWW Q%,!s|RVL)]*kebt h{ pl'CQ9/RUMIPg*{\9 vY܈cO,sؔ"E%djjb֥db|ޖ'EDuս5ݛtG9|j\@m\*{sY3}xt)iޯ>S=|}{Yd >0'z[lӋf0t@[dA71B2I^q|Ass-Dtw'2)8ʱs秳Ŵkas+MX%2ڵ&tLnyGOލ-Nl,fT(+'!q\jVh<. 
_:5*oqtway+ioZ?sp˾ο86|Q /6laݥWgluX},p>t`&ٴeSZ(8֟g`rmŔދ>j_oH:bv85;OHMi/N4K&39C[$epE Oȏ `r47OV5!^J>Q)e+p!ԼΥE5^כ+E㯕㿅LPJ.r%,o0&I*3۔&b^P92y$ӪF/4=&/,o]d_NxS*6NH^vIюγx{JM eoi'W%$!m3x* p [_Kv13}eSg?s\&UXsdVy<=*,3 Ow!PA)G^R~(ۧ)W~D'QkO\2<4{,G>ꩌ9XNE䃃Z(?/ct3|2~Ը1|4R2?CJ+oڃ^ps7{z/7Usv5{SV(e7B*%B'K#pdR0=. ˞z%ُb d3/w7~ߑB=NN|_R6(N3G8NgtnjӇYȠEo,R=E 2U EĆĩliP&Jo:nCN'͒A龜-9DBY`3jy40sȘ6A)y@QE\xHuQmb웼/2^](;mP\j"nA> *Ei{tZZ 8ųA> oS<f!lrJ"f1Jύծi{Υ6p]d&jw8gPgA4&o^׀Xe*9tlG,F}ps6t w4PMB(fz ͇v:#Q `$Dȷ䐈ig}VN(%c6ycE6ݯ}6ǿe&z(Շg5;gx&w?NKre;JlZې;D>﵇O@J!|x!{GXf}6S{AhY^J:vhn/RVUݒ#+?"]͹.5]@71—⬨/BE;(>m*(W\_U/u=>sIh=^% x*bn'8 lytc;: }}_S|*4Ɏff)'ȫkǼF 6 _ZY my˅ȥ]}uU]LvP1&?d A0]ZN=T= g\roK;^8 =DB6c&7#)W$4Пk6* =فFRn#d+1$f%2fyn{Z|V[2WBP9bP pFFMX0l+MbOH!﩯LFaȼkhlxz4"3Wc<#a=ac8,k¸hx9Af*}BM'_ΐ-պ{|āDx9qYH;qr"N5#f 8]":S8?Fms9ױm[ ^) 9u;LK>  T33LhM`C,M:1$t)Qn-)bF)[#*ӄ&L byC 1I=nDhqT>iiRmқ榀#)G^5MZɮG(Xlsǫ4 6lj*[4<У([,Z8D58FVŚ},mH.ӀoTObHцuQwTШ<A,FA~1֨¾.aFVh޴5.LˌY;w#G[1ߙ,jp2z̪NYJ_3A-'eʞ"(_Jbɓ4?-jX$=oipA z<2h?@2əYGI-HQ %(2tuP_5ONЏRz|ҟŲNטKL<=J%k`&?`Dz Rj^_.V[iʄVrE!h>g v*l# '.k)u 9gWXWG<jpeq$LS=(I#x:-˦Yg9??$E"X^6'パUx &" Gth-{lP  7}x#{s} }FqX<ȻFM!wzwU\!1e|YeDAۮ+`93hKfQ:{]MQ%4LpdDžٸ6r0EB]mԖVSG~t57'2j>"UQ.3x= iq]!hqw9ײ: Z:Mϣ12 nlD6iyRkkb D8'1خ$nE橼3Ȩ+TÙv^Kjd;`ۦtm;ꓶAAA_m OgMs /,!B}̉h o4Mj?c(h]({ٓ PECC[ۘWs0aڥ^;ިyes#$LбKUQ7m~䍄W3sHbj;JF>8UNT$5dɗRd?Zf,Z+kN~7;$!֥pҴT-V+iJ=g7!ۧ! 8өz  DAƈrQw)V>q{4;Wي%Q W9O8P4֮e9c$QBsaC- I4m=ß2011n pHCTi+uGMCBiW}`,Ћ|pB,a \b~NOTH#WXd]L80'! ]3Y>*xKx!*o(ϡ'PiS=+UZQl4gWӒ@Sl$G ϐ9h]鱀eO@1_XYayٚ3s]&,h5S,1{yo $;dWsnQ$Vا/>1d-8v凊,M 猈܇n&}0MA9sb!$A8Q!QK4Cu?4\6!&uΡWA4!lm9\"Y k^Oфvmi=5W7[]Q2T"2Ⱥjw=:6E6!1t.~QsVكmQ@;ӺK.!@a`_q{T˟_vX%%!':<5>`K,U4ddd`J7JьYJ/Ofc+;mm/ zpC9$hI] eBh+Ό('ˠ ҕXVCbn#$&7ZޱnšdWb}@ :4bZ$ !Z eI܍ *Mg P?_ { } gdCO=YNaw`ݪ`R힄t X,,Ȕ5±sc/sn%w IXq/w"9b8AӧNOGZndQD_{$.7Y {ߎ (ﮇ2IO-ͪE@/"%ɯ7pϿ u6: LNfZYa. < ~r}90ýu x@D_"B ϻ = h% tagDq\}/ˆ)]/n(|QZp? 
Uz:, Qǘ1$-xO$p.*sGD{ ꗿR;=l єi&Ef33Q+S<Omk{V2HDgjSRQ";yH+eN7dQF5D qTBZfxۚUf V{(yy'cPN"]2o!tǦIW sNdE}2@s!ȸ!O.Hc^%nůn*N=YL9kpA {ĥ3eqYvvQ_68)Y4.5y.гn(qe;RPx#fϗ--[YXD ;eVi8L>N"M= (za!4p2Zt%ȫvס:`ABCtf$Ba рSF.70wS'qócK3[bbY#?-V q0 ZDOQf9*GCU$z+⬙bxpGg]iL :#Ud4ΟBH*xLǁ_lA$@İeU56%B=pu]V/[6 kѥiXMɧnCഇ4^0c.Dcnnȡ W'G}fO0k7np|\xW]kUᄀKykԞDϾ\*ήΛD\`ԓ%9ћMCd^.bۿu2M (핗g\̓.%=l݂y`ki49޳"Ϭrfrun`뢉|ʴٿd Ǒ7i5XV B(_ 2hAu7 }'- <\Yi>a݆ 2M]D.x-gU:Qg rHu/"og=nmL?qtAY:\۫d0U3^0o4VRί-v7O|$7?iamW[W.68 SN@QJGNE񠎣cs#~͂;1NA[r]=c>/m E}HXMZ ـm&$*5o“"БZeB`8v"U7S6"u p~i#@ܲJVzl)P\0I`"$0Po鸘W&56]e3>|Z+ߜ`; snaA?G4BzŽx?eg 6z_Y_붶QtDaBmhBbJ#MeNf\mȀ4 ,g¸sM-rE3ʊ&ޔ໛&– <]8=O$.)r8{6ph{6ܢ "BRJ|IثQ&}쌏 Ȓi[h,(8ͻUwp;h~̥ Imh{tWACp΅p6.P╦KS3y]LhqnBF612^<H-wҪUNAHoóeGct\6ݔM I70Q QPH71qF>S ]0O7.g a)TMP8] =P !3.)1q7V&H^f4hSdrfr>`$E'NyO_`ΝہszB RؒRc93pU/@.ׂo=F`5iڏ>;_t%LpXA,Tfl|颜_I,UwH-39pa,5e(O`\֨-׫p=kFɾ:QZtՌjU ̅ԻNqʩ帚k^) 8u&X`|A>m2Fg;R qh;Sya,;DAv q5晧R-nd xǫN͖āIP4oۈЬe8 Aϐc[٫K =Ȗ9gYV HDDO!MHH &"R)o)I&JfC[0 BP_`4ð{HHq}7zI}7xIÀ)QؘPJf]䲳2k=Zv"ev CO@6LKe{@;|*~,%|@jXx6X&Z1ۇ| ^HP{Zd3T+z3(ioAh "8B@7BHgdSwG8Dь|JzaC +rA ,DV'K|PZ-D9&J"piTnDj֯jn{5AZbM&z^ hG`:{x,E$A1\MƼ\=+;JTk7<{4{ 7;=Dj!^Z )@skt| t \䷖gGxWwWu3!u@Ͷ ;L֩w;+38]LX79U/v濁<+M%&a^,ǙZa. 
:¸zSN,5P' _*2Pl5^_9@RM p[G`dV $zOʂ_Pabm9VkHTy@tkk,D8k1waM|6E@MS?tě[ۑ-s?o.<[$€ې~7oIie%6, xxEi6 x9wɳhu7m9^mk=@hsLW;T]lg{YXo7[|cP J<8g-s|7)ͽy5MޅuN|/uB]oX¬Q mv$Kk 5s-o}Ot,vt^U8oԶն Jw 4!)l2J 0~rma6xlW2WI׉h+O0t{N`.r>N,BL7w"":GdeiLSNyX"fCx哈E1&Ƶ0CjtΦ<_,*l;'tO:3p"毿e}]N.axt7!w9ή:XB1g3AB* 8[0,0`࣪or@7 O^FMyCa?ga B'w 1-"3MuY=qM&4pFH8k3CN&?IEdMs p8 Pҥhޱǖֽ+:Bޝd '9pOG⯒m_L2q I N"I |nG>x 8 Zղ Mj\D:>|l@STU6e\,' tF:,ZL7Jڎ7 ;Uoo2 LWބ!j$9Occ)OtqVhh1't#e8DT5w6A5 2LxEr +g5b]mg dPND-I-CZ$8nɿCT7[>dnݝ3mB}R♪PeJ>رd ݣKcBWntmWH[:(\ϝZ=hFg>o__;d*>o^-[ßW*>IaǷH=AA<-~wШ}a+0,bJy20nɦR32ʦ}E@&s2a 2VHcBjɌ My:`n N2և:i #WQYWV=pWrEm]kf-gFZE,K]ёK\-d8$RtZPdZDݝU{b?N괮`(6kӹ\a"[sgF)h _#ڜ'y3cz_fAXnm3 (1Ps-Zs7u!x-(ƃ`qflsoѵRV \~XIl҃>paCc{*6;e%29aKwv?lv eyT U8h4OE vEZJq EMJ&{35_'gneM8˦,X <0pqjtCWq:YB U< ]-y\F%z2Q5{ͨc?8H"^N2nSS \NYXru۝(֪X\܊gǷ Q7KwAVE݁j7Kw&Z$ЩP6:x ֊{D;7E0df?5ȥ[NaR3"}ev,mwQ-j1i'(rVdKh N[c-ݥ9p¯9Mj;s:`xW(+mJwt\9{P?*(-0=X1uQ`,E'S}npNi+Dَ)-npl`>ݧр+-xlaD,ǒݤSHiUzO W6QׯnRr$)ry4.uҶU8`R^N"<HAc] aIN<9 nUv>Ӌs#[[Ipm`:: X(] M&w$,h8CS>٣$fEA=AIP^(3 kn[&9uv$0=O߈rЎq)<ɻ;Ii6fONpDŽUb,/;CxHy,Yjj@@`)3y}kk"-iRM[̼Oヺ=KR7AAso Ȅy?RɃ*;-F3-tOUhetn*Ocu 5,ODn 4}1.m 1f riKJmEsp-KU׶ZWΰrc,Z4ZEvGyjN$uS-)VLz[ UR&DЭ3"#1f ~7n3M1%C.'(\9@^.[1hq6M&:Ŧjבo nߘ$hHwtS#ݛ-+o1wSHo߽qEƢ%__hйm LU)+?E03tyB Y֚,0)'ѧMrD12tfگl,vO.7&nڮ,)O]dibX)EYCD~-1n`ʧj [=?K;EqR Wc35ÈvDٳ-+ඇ.y@mMm2E/HHĞh_û95YP&0w[Eٙm#cͱnGrz4npI96d*vFKU!EE!u?^dG=s;CK{L؎{s4@6=aYOD!~#Q>(Z4`OljWdO M gWnJBq%+,i"gx_3{p8?_}DC 0.nr~oǵ?bfs9Sqto;p~\rL[mCt9lGUV $Bús96q8c\-Dޕ?.8oϭ"ZA,rھq-N ΤշZZV2Kc,MMζȴjⵎ?rUG0M}dyNo[oxǒY4J~۵ΦƔPU&wٝfUDpy'-}vtV(wlVA)9m5?tbڨ/^Sእ-An$m#i&d-sNm;$FttJ B+"!Kgzy ;CfpOQ v11Dc$7l?9} SOU'䃡?Ѻ5' ֜b(c5'&6d@dl=Ӭ_ݏhqyRd PnߏkqL:.0j9\siz[r 0?D_u,5d$HrCV@@.d2׊&rPLg'K̩EXZ$QfZ R2ӶY1oFn4,!1&M%o!G!_{n^Ũ VUx$^:\SR@+beRV>@^b3iX3&$ MI e>x.fQ[s5}йssE+ӷjOOWCw^O%aN~L}.4tv" ج$<@ϛg.ѕ%y/p~Sx w>˞ LsORIy>~+n* )D~<*)Xx2#ZOT'ZVtv:bН"ZAr,(jsOP8&7qȇ:{RբtτW*7&àB`꼔 rbh؇ 닑QZ]ԻaAO篶&e^G? 
#̢axi ,u Uɾ {Z4?zT DTy ->y H=,EE$jul%e& Jăryjʏ٤5ﷻ[T)\I($|ݾ%oE[ΗKV{>h˕7]_c>*^ݷ`S~akC_/ .K3vN2 \s_RNb1ᅘ*@RM쪖{&*|Jñ{Ü~N}mxZ(v}OΌ-_+I%CKKO:q-TC jO_%uۧBǁP6䗽kY.EJUeE嚋zhRgQa:Mx~FWE4  5B'VU{2㿺-y!~g93~Xl.g9gD{lȃwDgѷK̜˒Ns/YzS/%yJ A5|(r2G?H E;dž",w{"M5 A(iyfH&@CXPJ;w*.k |'mj8q74 wϟ8G(?@ QhL 0l 45țL%ݧY+Ļ kT0/&Qz#斶 F"(#䆶 !MwB;ڼ gԿzHCq"BozMMI7t0 O$e_e:w6|'~cE`Y=!kS6viܨCaɒ4zi>;ٽF}/ǟL?ŝ+f]\E}E+c%cC6_}SguCIv$0-+oMMM =D 6|Y ϲb& m`Sn24i`SSԺK%lj]vl颦u},Xٍ'rW^]oGAL/:1D~cD@G[p !BئJun7&^s24n_ؒ4ǿE\tHrn`čiL|K˖r +sRLc ,i˅#Nϖ1z%T&cCbzxHp;' Y-f٪W!ߤ9U.]BT>_X:E5lAYC3lRD[&rZ̵QIV_+DVy-{a}V?eө#¸.D@i3fiGLXgZF} ;zZ]gďwǞ43+G8sdu[MWka$х <ب5KPR]gb b"2B!brv qB⟈b23Ab7v1bB1) wiHyGA$փ!3w2Ah9]*mmvw|k\ڮ?i5?J7>B̯^D(3;mKS/-O~7GV*=ptUv7 X!hXxSw#|̔( #,(?c_k>9M7H*7>Rjz_,$Y\q E 4:5WxC\_ LD)5i=\! "d[ŸL >mD>dLX0M;!q_2rtdѦV\W46bТ#UD nnvdta1\q {;ю-gAѾ@'Zt'[Y.kq3\;2A8d`XmLl%<[I- 6v60տY/`e~; vj&F/հQ׊ʵEg0ۭFBhkc#H`;a66JYR7(Axg 8'?M]qv>41xYd'~>Pk!pAh0T[Np5{Vb*]J(^GXل`X_ΕYC({mM%I&.nR/ g#ǤZYm T؋-+-[۶.\ߴm27rWvTg _"nυȶHhgdpPL1!(✔ [dWH828_[M9g Hw4-jd6|pMW|2b%R-qg7? ?D; IKB%-2l<"?MspP ?x;gyz!mLSMͶDՊ\vl*e/?܋$Y䨮^-`uZ !G76jr1btk6& &XT,\r {5 L wv]tWU*KU#WA `Л)T\'Κ\yY>T%zgW~$  +{cqw\ND˷OtGpK`' ƺ$p2G{geC|Lkhv,eݏZ1; 1eB&@2ENF _N9JjQvtNhPe͆ݼr-vbD(mиsx5uxwx:;mGEbO5\ަZަmn=aD/uɸHKIxMZav|@ɴĪxTCʵI~9OUPnV\r$b ޫ ,.,\Btݡ ͩ5wo﫶G8خ֕wa|Tib31Itu~wύSr`6? 
58i@d;Ɂ~LSIFL+P[DfР˫0˧\M@[JCÌo.cjP%@߀C%BYBɸyCa[9Fفh<.Lm{|ԟBDڿ&$2\JQBab\\L eDUateuSz:ي1c!L[9̑yVn9U> .ie,S"3oNj)|tzRY'S&Y?nBƇTtkL'T۴TkZf$w10+U#kQh*Taəۊ q'C3GmzCy$QN X}D.+R134"JUyMCP\z/5|Ή7&nDu)F9h3P\gGwfiED71^JG *аҌi񰿕s8HR3s^HWB@_nkIwmx^ B^QkmG;{՝F@D5w|);J;K!Yp jcsV-2Nst$_\Ќy|չw6]q3u$lghcuɲ>opFX㆑Ai7W%,-̳}q J#{~pY݂z/УFFQI?jWWlI3Q>DaFB "n,nF] o/t#bXF@dKL'N $'6یَ+2#3wٺ/1[{ :UkFJjK'己Gا ]W-k|*z0[Y 6Y s+cٟpJdɢ!Zg Zu'JcZfWnTw,) `7j{WN}u66M&W6^mD\:Κk/:gLln'<%"`1N\]*7RpjoR񰀖{j#;ꃵQݓuhewO3cJo0,c|_iϘ;ɦ׌;Wk}&5gQYL# s7΍ޠ\ bJo-N%bo\dpHc@F1A I1qKFsm0J +$#޺}HEwh=º2#^S]!IY~ ?M-'b;C ͕DI]@dv !tɻ*S m_(͡/dgٓp H&_`kʷh_dZ(ϏS:qW#o;|s=b&?#Bt*3"g?#||6>ZI'<"z$%w 刡uZHn8i> SBF3azxL|d I!Dܥ{w;oG`兓#pV|஼Sp 2b7C!&qv_㲤蓓!#CSd鏊觵@mGNj?O|VWYOo;/Ĕ{-ЩAw/ ilAo;N8b; W;1ҟwnN$q_Dz l}QZ_γRs'a^?ab=rs7>+: <6a},?ǑrEOZj+-n{}<O"Ro{> /?aq6m%:ݏ#brba3|;NL/YȉGOYHGUސ$ ԑ"uAeŧ1&GbXxU曎 d?~ C0)8d >_QJ.֞Q%z"NA_sp35M~'xJ/6dUpgȎ a#O4J,ܷڍM{֞- >ՠR`aL#T+2G*_ s!CA-wX,@<ְaTHW#덽]Mbb Y 5Q.C83% E+_s7EqONvT*CE:f6 hOȃUyº4B,]@RW/˯:0iʦeMzWX&%PnZ'zO2%Ӵ3E 7E_ϊ:󮉷3`N1g>mtD_*zеV&7a,@PI|-p8g qtsn>vZAA5DH[FrWySQ׃uyoÀA3W"K>cפ5Otb~:N,"8#40ugf5/ayC\paic_Z Bpޘpi -bX!廟 |wCkC'ySFK*ɲ˪s6lSf]0lN_ե}伥 ?0uACCpP'RsR|K; ZSq9b7u "ڑ5תSsl9"ϑ9+rY f>X:keg)me 0zѼdI@j6sDu1 a jdɊ> ⯋&[ [S%mPLoŚ5]iUPG-pf+&m@E(p]EIGfpU2Cm`PY<5YEbDl7 m|]%-~PWdƸ7x7z@KCV#jUUI+4ȋu,G)&7*|IjH6Li+i)&iY1N ZE;S8?>a>+CaP3ol'|f |-mzh2A͗֡}&sqr &0QU\Нe홃hzt "AaRFwCۋ+]@cb:".@L+ d3OB8^Jos=H7>Zݦ1Z-wXyFU)*[z"^hQ6=`Bd7'krjԝt7]F"A7&yK8zi`͋*x.^}#ǗFܝk3vya-?Uڠ~BhPm ioV!8op'.\CiO M ɜYtִ7qu޴NrJ{Gĺ*@]) n4ٞ39Qc;f * @ j>LgDQĆ6ZU(7jgΣVTğRr3qX9nX=UԘ8%\js&bK}tɱٺ63itKOSp\3Ft\noI]:xHZ'n5s؞Ll𢲞N$#=cc皴;ݦۋ^j>s╚q1lm*A0E۠2ӥtt߫뭼"'j H]*60,۞)ߞfKl'#.Yb& _yԫ^x& zW3}"?CjYGYTmu%~tw\~Gxv n_o]Gц S|9$qE_O@> ;tXDQn|P+eINYagg+rBO;^BWP/&yj#'Qps;=.ޤa_cj\gmHAsP0^ϹMB .XVmp)]^ieRW'is݅d'ҙ¾ҝP7j\s^?aaߋ7)UN:m|3B#}){G-tuGRZ@L ԙTk CdYK,fV)}WԷTm4ɔ}[_(%umwv\^= |7>i::ua^`? 
npq\4ʼ{Y9bof=æwkmPlwaq?l%Sx~!+.n[q/]kc׷.&b-kg~`dXmz$ʮv}a?`>ܰ&uy~(5)jM춚٠<1ZX,%ŒY_!Vݑ0ћ|g0ٚν[ď EG 7?כwy')+Yqdt8M/^u8x )OJ {:Q}*4l8@2^2LGRV^GGEE)-2tؗ(%CVޥʐ;e/;sN2xg4H|EpwZtU-y0e T%A բtwlq:vIҩJ[+b-ӨʶnQeT):SfrYUhR@%=\7^W+!&6N:M\㽢Lμ6~I{]caO|2^v h3V3JQO N97+DwDGֱʙJx.Rawm^?t\֑O{:VGA綠:b*@$wBV sOH/+ 3:Jfl'gp!l&(&0NwU!#Rk؃y-LFZIdsY?HuW_Q=_om $X0 7]xsEy:4ݍȎg$ce/܁8G÷AͦY؅n3|lc@;-d0x.YForNwR*9 LT+7 D]޽/-.\O=.Vu>'*p1 Ο4_)fR4ZE?c/Ps)f|8Jhvld \N/a 62Yoh/3]q@Oq.;=ϻ [ڇѶtj@pIp3Zk~'4ZՒDCvIj#XrA^4*!k-XPZ4~ lfrvlTF[v4>m>v﷋gxlhS6!O2]rCti?cw CI틹>œ2u"? +~ߛ'qS ;Z1ϻ=L3*.L3GB1L*SK O'1R˗v0,D,b .B(XާKj Y >G ^O嶛e+%#J#hIPullQ\`遜^CC3ZVD Dׯce{~g;TH3@>>䅺ِ)yP/&֒'=(6'ޠ-َ"V[A=?e!{Hז\/OƓy%c@Z;$@SpWB|o$oo@~G+n51 ob)okG#,6p"f _%&wO#slvkq'WQ]^OX:hi*Pru8/OV !ޞ{\\Mk684OT+X[\=^7]]6=TrGXZoĊ.0tKWCGwUtiFU}=df ?ںXU~[ΰ|`҆fJ8 ME,WQr嚛9=YRvji ;ɢY[6H㯤lV7k~@?#v@zjv6)F ܛ,)3 ς+2o3:5 vox`cB*R)Tc"uCVfWo#N'xX4B nxZ;MZJ Nѻ.^ѫ/tC@ /i{],GAU2FMdd"SȵOa`Xg*c+:_/'oXoyXX*čUO-H&DN܎@3%h)S_Pn8K@4pDUInx36p_Nhh'!]kdAd~8kc<$0:YeٻP:*~!#\AȀAKwKY:XT `M$\B&lYO}s ruQ*]ptZl5'#ÓgBqw;{)8p=b}npqhl/Oafച5m|uɊәЫ,˟Ů9]w]Ư/UTe+FTf?vw|rlw6(kk2 #sτޯwi Pu3"`Ix9|"&{lxIĢI #ޒdo>%m$Yn='[.@XUn)kOw݄LClp݆'`8~'* T׋e&2/2TqN{@5O51"h=,f$`3iLw ~7&:*ʀs9`~D-DͫW)-cb(ed~Ёv%=<X:dFz?4H%WH#pCC(ݥ\IXC@ Q-*\]+z@vFn{8 Eй.W|OsH6 =V:bzyKns|iYH |w#χ|\'! wV_ =U>B0DB'|?r^Y 7ݷzcYsp뼦Nί~ݥW u\_t/nm; pnΙ[[9Hw4VՠZSRgi0):Dy)-vzuHHa}%#$bSk~]@0./@nҁi+;{%ÑDǖĤD9΁ EEfV@pG<="%C 0z{(f ~a$3L*EQT}ݣ@.EW~[q? &cX f0Pb(^;J0Wd)Lve+ S~^(fWs4S9zLY2AQpNN2w.WQ WтMB䓨i'}飬j.n/]ߑLMD6!ûV5pn9ped)qCSQWDz7N`fllwA2 ]ÀG`eC>IR )e׉`rb.KyWʹ3h~DZT !ٖ 5FjGZ1$:YE7;\8@ 榍rǤ> B7fH ] EhfKCoaxDX1nl i]$oZ1"~4+xD. 
_jǯ*_C׿*ԡб AOȠ{v:2nۢ3AS_[roȱH17"IuC(pt7| qO.{t=!J(kߨC2 $ʁ#t;'e49Y=pJDV Tv|`O7Vs[^q-grCLGY^e3uYE~ofXz,s(eP%md_2Wc|J:I+(eodQVH$Cͱ.z/+>aG0 i^8O.2zRvN)dwgg7Uqjb0q)"p D+Ɇu+xĎơl{e&Nu5:"/qFIKgsT1QQkuvc{ˉSKvcH=q9lp.d;0T(N'Vc7s7m;,`) &db-D`3&O; 7~}DIC0mmO86uP;=,0 Bݝ7+l􀃞{g]0u}jm|-B1a%4k* Դ̩Hjq8W2'N OQe9 F=O $o$(PiV^FlK7ScO5 !;AB0-B>qoX#0-uv"DSCv+]dnhYHs/g!)9 *% `ghH@!# S{6L4ƮKj}&oILN}X_QtK9l쬭MgrB`Xn=@5-"D=#n5QRp*pDfS[ğw:n׫"Ϲ760ʻ)eʗuIO-\*KikՍ 6`=M-8G釋:rMDNIWOIEj/0uV]MDW֮*L"$~5QK׻ؗ^2"0wK͙kiZN6UX~\\N3X':RAx@V-iiX8s~(ttsX*1ӲhUzte2x5Ne~J{45b<%ˎ2CCxǤIfLڧNy#W8)hkA7>ÿ'm@ <(3@_ة6 vy7[-p.C0 bF#=RHGR'YƝZnBX[wvkG\B M#4KʆRQ@X+{xUR8O gyU@ Gt^oWY_m.okywbꀞg`Q֙d-XuBmmȖ? p8(ScBcS|֬E/ u&14OtHW?b!16A~" 5XTgc+|H]Cq .蘊|ס|ؐc+]3GΔSPE ¡tIUa`Lrm0b꾯=4UiMcVi C1s#rn5@ '*˞2} 킥yJ&S?IuZ۰YL(oߏX?1#'!/%({5*ttx)vǼ#"FؠM?!Zٳ2| qsRcUgc|qN?S0sxx RK`.hȃ=4,g 7.e~B4µ8bn8{D.D jSD}Ζ5"ߘJHŗخ+-VLBTRưӢ܃` .ߔOC;Qﶦg'pb>lq$2?O+(َE {)xB*VYRQ\'θZcrWgyiol/OVIMc؏C;<CbR9|X'|XsSv TR`9+.0$cqns+)Ǹ]!|yqP`T p Lbi@~1Av!D?R4XfV3]Q.7IrRHr.\Ii?$ T| "S; , ?/+ϭO7L(j50]߆o4.(GFI;67ٿȨR%|) N|7=G+R!r v/fPc X\g|20Y_f. Dଳu&ӝMNeuU l \EmYZh,Ϯ 񻨃\qb>PG*QyV7<!>j#2C'!d?%!J@gh ܨs+xtw`Ckƚ~FyF2~ܖ1y|E>A^`] Mm2iC)~^pd .^Qe))=bە/A"'wDr=MI*^\X;"L0 0DAJycpdP5 ^xW ,34 cО l-cKT$Rm:tl-tb@O1q6O>q6O9qW5y:?ӜNYSRBi| JB/Ye'^(5}N9Sӻ1ҒڰcﲙGSh)q۩%-i|Vϛ%4d^r_zMagnK+su~nH+]įOc`ћd!yO1wRڏO:_3pSLMb>s!G )pGV yVcE8Y.o6Ɏa%Y.{QZXn^WK'f&|BhUi[HG,tl(L[Y7-IXZzuQI$Ml"{ÚZBD$`T=m~Ш$=ݨ嗉R;LN;7| ™ G+܁ ?텪:#DŽS/`jgD u:bQ`tpwE7a|^@0Yu#SW@?{DT=PP Ma@/Csi? dP4Pz}j&POX?~JqE5QT%Gi:J } #ŐU6,VS9zlVF ᜵;x(vH:i*zk[9{N^P tr_C[,xܨJ'&GSg-S`ۋ<Ǣ> LG) DzTR63)()S [vwEw<5Xj"wڠ sWW+~>`ZC헀j>{NbIh3bM .9*GY{!?|`Qp7Ӻ⬠׸9mՎc.y4yTZvxV_%? 
{ܷi?t-1Boxh쨉:y>zg]()HU&:_>/!{E]-HF:FiWucuvn30/fo OYa|rV^aBEHZ=FIFz7ЈIv:c*r%pvC3!.ʼn%@O xgO<7xlWz &`vW<+ ÚVP֞<RjU!bX4|$=$Z Ed\8 `E ;珈VJOSӆ Gx Q#3_IƷ(wȬ4*P׫T:&ڝZbdWh0xD<&{7'&4 7i`4Yw~-Uη'9>)R䡷ඟ3E1Ғ@wv.r8'Q D\t%OUj`'ضfNKL0/f䅀:\t=w,?u*ɘPS+#ҿ>x+2Qf~ T0=/)B@rيљBB4a O':n׉IE8=Oq.c..Pq3t!%Oem,IYɋ ,^R9|n"Y'MԃxZ{V[}l͏9(O QCUzIgH- "n\[OJjKKfPэG?fG;j9k3-.NPa݉+Ӓ$Gz~f?f?(4X[$ZM>(:(ܮ_°u>9efl~KTA PxeoܛipZgoIbr^߲p-qv4%g+[[6k.,:(3OFZK,& w(k[r%ɇNTYd"A `NI3=|Ll~oyeYdu1$VcC9>0 p%%FQ X)WŧRvk !+k\˯ST'l#6vϡ7Ot/s&N[R%\3Ǚ R cs[xv{)MovB*ns0 mfLL^3D%C/1oLoȺ~`a ሤit<÷B {ظt1]*ɹF|'6Ş9~!gB.l^bݏI2HD*m1}YWv9?֞" e45X,-N]|9Tv-ïL~芼=ݡ)x4+5 = " ư=Brj,B%Ccxvݭ9BQ&e0d5Ƹ̎c4]e-ӟAjMc(3sј_]p+UhL<K d iI_#TTVX@2)D_꾂L,{EO([gEI铻E(;|o2]/TGTt7j*!0.>`_mY HNQy?| Dy">VV2kI_ |zc/Ӈ!.L6,Vߪ) g$ھOd.d#;EPi$3H\S7D'#Rbz\I3[Qnzr< a0h*ǵT-m*'1ӡS6|?wb&paK@ ێ*bvхx!U*H'E湞&y[zq0*}/  z Xd/BJQf ҟx0n0eBC[b\ "CKײ1Z~7ϫın"C[O@fumoわ`Aɕ>1yX VsA;)#~kqUP TwW;$< 1vކwQ[b.ri츞[{3׉?,HlN}8si5xh_)3ز~-z 959ηtېѷ( <(Kz/p ܦBn&}?v/+A7 P"V/P-p:$gyǢ%3H)ݺѰ}r18ai^!Aݫ6[1Һ$ qxWiNeGT*"Rтy<ؑMz,;)+J!X*K3v(Cc<g?Q_z2ŒK竦o60Q N2E UO<3G@2~-f\Is4^aqe԰F%.BszYS'IӇ(!w_#}_l}Eһٝlsl73|쩘bA-s&[6ƧK6+DrQ2H,1 4I{!5j؅ʨt;879Qn? 
n7dlNu.7}O`j Q i\+`pOq,Q; @oFL"~G&R:i.]1=XC"bσ=O4$<æ/u +Bv27[lOO1N78 ya\Qem nJR@&-c-`s>ܯ8R?ao>&5Nve7ӯˢ9+& I_Hn7R|Lac2(Z2 ~iQQYD?-ÈAfo+ \ݎffDS~|߃"kHGiA{.HciP8;f0^5Vv0 Blڧ&DEY!2QЎnV}c76Yb?V(dgL:?K&V~gMEF]W⵶uL7Z2إy)IE~#ce)Fә#ƔlPC.$rh&Z2679^Jv0%Or+OTg=0j eqXaط<=khKGqP !=x\+CcW/V9Zy@u~ш lBbK#N*24u¸KIJ+pHWl~N$PӦѽts݅ڗC{m(Ɖ:3挓 .A ^Tfbek(!B gH]] gg8"'E]$9& 8#HiIWEG&$Pבp=x6lrlX> xvd v8Jauu mbGsx& q5ƍ~0 eRqB)D<6by3J,h`ay4$m2 l+N2` Z<F("vR+st(Í6&2p};\=WQm8g#dBS%4Yp-/@.$uX.ID{3)9ܕ 1T$Or ibN8^n2}OJ3uoLREVdYz(A3'ɥA~к̽-QP~Iarޞ"z#K[t IK"%#)/vV,F QKH‚^Ü%/?."bPXcOG))JepAےCdV;8 S; 0P\Ɛg](W*_f\wAMZRW};e2*N|3O`j:Ee$V͕}LJt(G#`ʺ^p}{s<O0lLn_E_Dm(-a]:v]o$\+78eK?y>'wk^ Oix6aU-D/ʈdtK졀:^qrx5x>Blk~?[/;6ݮ=[] =Z1 RC=SH]8%D?$Fa,zHt P!#b9 .GEFir?>Ig& jZ{V9+OJB2Wu#kخԒF24xhs tgD ?JNFC$1v>?V]JWtD ,ĭr8N[05F]ҀY VB,@m2P8 H"SА;O-h;b%c9r>#/MRNHeIS/S]zhCbaseBΛm#3{;<!d7V~$x$e컕%_ Bc34g=vQ e7xXk\sE:5Q1KbrlY~y|l {Ӷ](*z=އҽ\߀2)?+56{?~ZBbiV}޶jʏA[s<sՙo+wR;b#A kxg J;r{Gx=ر7x| ث:5άwy*3F9AP\.2Qc 1.25 MB 0.6 MB CedarBackup3-3.1.6/testcase/data/tree20.tar.gz0000664000175000017500000000172012555004756022457 0ustar pronovicpronovic00000000000000GEn0Fy&gUu|!Wu]9"nXY?nW<>qBZNW)z`ʪƿ[1- L_uƿeW`ː }hgzYpv؅!7?w+ _wq^*Kvfxkfx]3<.5XZ$=ۿg0m_AU_H/aM#GבG ?_ØM0+9K{_yK{_yK'?g_ W`wY% $/7n c,??_oܦ{`w?_A5k\Ϙ)"/7k[ab|װ+0W0|_J_9KvO0G!S%X#ۿg0mO }K ^꿂$ C+9Ksۤ?_˜˿{O'O)?`wo&}&Yߘ^P $g'K_I_o&qs`w$z6=lۤۂO9%濄[w _JIs !zCedarBackup3-3.1.6/testcase/data/tree3.tar.gz0000664000175000017500000000113412555004756022377 0ustar pronovicpronovic00000000000000;AA@}܀nqd1y<'OjM%*>nÇTJ)4:ZjkoׅZ~JjxÏrߞ>}n?_o>ͿujI~c#Qoa?i9-㏁3#ew =k߁cAoao&6Coa?h9={P!gAHYP!gA +?-Ll'? ?,GCǂ ?,GC_Ko&6/ς̿HYP!gA_HXP!߂O~RYP!C/ς? 
?,KC/ς ?/<:xCedarBackup3-3.1.6/testcase/data/capacity.conf.10000664000175000017500000000007312555004756023032 0ustar pronovicpronovic00000000000000 CedarBackup3-3.1.6/testcase/data/mysql.conf.40000664000175000017500000000053212555004756022405 0ustar pronovicpronovic00000000000000 user password bzip2 N database1 database2 CedarBackup3-3.1.6/testcase/data/tree4.tar.gz0000664000175000017500000002164012555004756022404 0ustar pronovicpronovic00000000000000<AɲV=)A=+] ߼J+]|RhKZ1OA _AEy qAa ~ܷ,?c~-^6Nai9~: (+Av^?«&rc}8C ?[r|YZC8 DYqKCxcwlR16pi5p=Ʒ?*e0@3}~-h>BZ"!J9d0^<AAq5ʁvu.5cfSHz<XĔt͊@=<,2}g"hH6@ ׋gh5s?_7@^#LBS-ɌU{8JEeD8L̛P<`"/6߿+7«-90x(V/eQ)d#М Ϥ/{2:y끈r``Y},⎈Y@WO832T>p`O9<' L]B.5JL(ք=|B$@qm} ,0Z{0v݁pvF+-M-JpDCrKLHJ XO^[X0d4@*,k&X"g\ڤx2*oK;tLofI&Ƴ(]_ƫcffK~TED\kE;-4˔rGN Ӈo/a9d-PQn K+AMo3])8/5qZ/@bG=S; 职۸(3=)4.Dx*@y\i\6e5>c݀ h{.,99Vt#! tl2|$ބ$o7;BW{'21t \Ev%_*}>! E)'JPF<  \)1 цEb]+7 h]{w85j}n%T"7d+^r6Z?%u缎qI N5MQR6M`ŗYvdtoT:p|LFE< TI.%_NhGR @P\8%}hOIHD<ɟO~1_xX)?9:w!"*=B%/b%@3G| ސ#wCwO:C7*4B]>hJ\.M A;Ϻ܇1j+HK >&n<Ԡ̅|fsr Frl k(|Ӆhե&j/97~ o{Jϔ;_oUϑ;^$?wO9CO|«z _?`> = =E \&m̭fj0*Pf`Ig_,sbnK!Uє2qҪfޑtqPN_+40"!D(̰[f9XaGwvRnk 0zI7(BXeG! +EGL]f<8u|$| @^2kEȺa7ln˜:&4w-ZjXbU+??_g-x~: ngzG8z7M9ԩF,@QV!Y%t9-x"{v3l+0$eE0/l>??j|Rn;0qv!Ojޣ {wĈȸ}\sv4G_H%~9،ؚ*D%{yfikyĐTP,6<5  7}5E '%aWX1]7B-5z7 U iQ`Ų\53LLۙ}#"_%dY.Ze-ؗ5@Z֩Ε gY`Y"SՌ)* 1rgfA/qih,+5sK:,k9s83e<'# yj. #Yw۲J2k#3;U}YSf rq<\1֋/4РhUeUm WB{ب. }h?=}=$z.fߴ&8W~˳"LFjLt:tõ= bEzL^=X} 'Y\L. i0]޽slZXrPY·ByxYr+@8C{u;_IA$gLbY_ۀ~6zmpPUuv2( C%!1- x*u,~A7"i/i&Ǖp,Sf1PDvB EYWɭ3X͂$61\Q̕w-EG'_O!g-|!=?d3~K~~8l$=zVC$("R AP8jm9ӆo۠Ep8:J6haZH$7b> }VGMbGajEbh<8H9u Sa{> 6X&w-^A<-Dٷ{n=Ϛ *0{j֤?'O;I? pGQc =oɉr$XKEgW'X8Py$@;,hR6IӒtt |}~6Wb}7;3 Nx25{T8i['/M=T?Ϥഫ:(4gJ*G*VsgYN-[Thi:#m-A\-:/F 4SWzbM¾ Sp *m?߃ ?f1Uc%Tɳ=TiK9lvs J|$ LtALtr\b㋢XHn\SjAE)B!Ψ)3qs3lnKGQv0anD(H䛄-UyqTYкFOPtϭOrGL*c0ۃ% y2pO39TʥTf3Ǩ)(MC6:hְV%**^/>Q42d! dVn &.FJh @Pʒ4;:T._?ASi\&9> քGdA%BO=ԡ RL[(79jUG+?1?_hTd]gL$ظQ/o! 
hŒ__RW y9GFu|7jO@7M\O.@ӗ=ﵜ~DIrmZrdJCuVШΆqr:8ڑ'0)4hXC;u1a|qvQi]srQBtd&jF)0KLb^N!-W ֕uY9iԝT]{`ɖʒYSq*LtM QyUf&TkY^Nn͗^D^8UQ ?-~GhMHE:cb<uUs5-2yL#{פD/N4ot-"YAѾe_?o@0Ke0`)hrnڕ#8}V"6lЪh@TӮ7ON֞tsHShSLLֆɗ/ؐY6)+[]Ў0[Yf 8] MP %?37jrߛRO! ~g9U1H7Ur q= 8:Nal)he}I]!lŧ?kȏFvDa;Aؠ`fA5T*nE(ϦpEeMH!-tE?)}rA/dS[xxW2 =:&Bç.C Emr-igMui~h*֡ Z™NPh0BCԇk.+dO̬ v%[@0>)2}Wo!NjLɹӹ %4 ExɽƓ.`rϵD 8ZݫK*+1p6Q:"ORp}ɗ $( t]mgѮޣq$9̧*]02ZeKGIG͙1Aⲱ%z\L^"NY3mQ 2BNwU"x54YKԳdryWwie2;3YuN*1`$N; ȲB_R%0 ͒R|ʢ1ߟ CgkKT~Ψaʕj_U`hQ^yYsT zfa^Ryt"ʎ,q_Do5mETzSF\y$l ~Re!L1/(x]z:xhCPXiZuΑ{Ӗ8aН. t ^ yn C/8ewS8t3C8`+XIL,$jCrꧡUs-Ѳx65 AFhT M^7r`[xe?Q؛ kr@2é*0L7#nO'g;I7Qܭď(Oy~ƿÿ-PZ}Ȗ٭v+ L JacFP554xkmд@Q%ޑSwaTԎpQʵ;ܸ)^$vXy8 ~xU q6}?ڻ%Y+ {W%{_;L c:BRH|+,d&hXl)T@2pXvٯqvqnr-YY``d 8̤zhw切QX񥭯7:fxҴ^mfS;d(\A[3s -Kcћl"i%u(Q";|2˖ol2XG} 5+hځaCMMgWF ͳ)%*&G(ڟ*9K<[w@k~[C;?B #'D-4I|ZJUޚ&jHx+X]+bFxin: g9s ݙD\,e&O[ 43iIxN7Fz`MGg<,;(h_}>=DX/.X8#1z)Q}GƿO~ĄkىR&Jf{|N 6KPhf 0p/_=vټPşӯ]L*Ksÿ0xGtJ z2lZrO2pd;[ `{1/Hۂބ1TnU0\<+ff3n_/5 Y*Ҳ́_˲L(g0l3'7{L )f7w+'}oo#,JݡNx5͵vN??z)%$^aKE1ӱ"Hܹ!mw_CNhwlsFWXUZ3]7ƧVB$W aعnC%xڇm=;, sMI\a5RUϿc\ ; AFPhB nE6w㛤f=!zYKk6fث[?mǚZΗջWkN'w> y5I:by^Yf$!qհ ێƱ Lde]MW}2lJ>aşuO =*'A$'WݤU&yV$&;Lc6iݑ \@ҧ@p:Uv4"h 7[9:g_ LT) N5 y%5=1$1<,5NҒ-,>/~d"&P1'dUJ#ύLJ13] p>wI«=e%s 75F~N1{SR۞B]'Far<.lI'' F(ks8Dӣ>]|'xq8M5HOID-lfDq6W}nhGFD{|]e"/Y|_4,u<ڑ;[N"vDg, -d%#c mS)Vi҇E\&NOد~myN-CC K"Z52`?.4&s$D)̷q.!U .e^/WzۛS"GC=+@lPe1x83Fj\6= ҇aX1 !yynZnw? 
U"p3p`p >t8>RKB~;fvj"A,L3va=u2Z\P"䞎H_[ O']&x,t:Qǻyv3Ɲ{`6[J h *lU^ fX㮖zWBvnvnvnv7 ˳ACedarBackup3-3.1.6/testcase/data/cback.conf.90000664000175000017500000000053612555004756022314 0ustar pronovicpronovic00000000000000 /opt/backup/staging machine2 remote /opt/backup/collect CedarBackup3-3.1.6/testcase/data/mbox.conf.10000664000175000017500000000007312555004756022202 0ustar pronovicpronovic00000000000000 CedarBackup3-3.1.6/testcase/data/mysql.conf.10000664000175000017500000000007312555004756022402 0ustar pronovicpronovic00000000000000 CedarBackup3-3.1.6/testcase/data/cback.conf.110000664000175000017500000000044312555004756022362 0ustar pronovicpronovic00000000000000 /opt/backup/staging cdrw-74 /dev/cdrw CedarBackup3-3.1.6/testcase/data/cback.conf.170000664000175000017500000000061412555004756022370 0ustar pronovicpronovic00000000000000 /opt/backup/collect daily tar .ignore /etc CedarBackup3-3.1.6/testcase/data/tree7.tar.gz0000664000175000017500000000026212555004756022404 0ustar pronovicpronovic00000000000000AK0.;>z6 _@8ЄDH79{ڗJD\JqLMQ aRN꼱qk=ݺX56>}^Ke!sD?PZ`QD[x뿾4m??cGU!d?s}(CedarBackup3-3.1.6/testcase/data/capacity.conf.40000664000175000017500000000025412555004756023036 0ustar pronovicpronovic00000000000000 1.25 KB CedarBackup3-3.1.6/testcase/data/cback.conf.200000664000175000017500000001222712555004756022365 0ustar pronovicpronovic00000000000000 $Author: pronovic $ 1.3 Sample configuration Generated by hand. 
dependency example something.whatever example bogus module something a, b,c one tuesday /opt/backup/tmp backup group /usr/bin/scp -1 -B /usr/bin/ssh /usr/bin/cback collect, purge mkisofs /usr/bin/mkisofs svnlook /svnlook collect ls -l subversion mailx -S "hello" stage df -k /opt/backup/collect daily targz .cbignore /etc/cback.conf /etc/X11 .*tmp.* .*\.netscape\/.* /root /tmp 3 /ken 1 Y /var/log incr /etc incr tar .ignore /opt /opt/share large .*\.doc\.* backup .*\.xls\.* /opt/tmp /home/root/.profile /home/root/.kshrc weekly /home/root/.aliases daily tarbz2 /opt/backup/staging machine1-1 local /opt/backup/collect machine1-2 local /var/backup machine2 remote /backup/collect all machine3 remote someone scp -B /home/whatever/tmp /opt/backup/staging dvd+rw dvdwriter /dev/cdrw 1 Y Y Y Y weekly 1.3 /opt/backup/stage 5 /opt/backup/collect 0 /home/backup/tmp 12 CedarBackup3-3.1.6/testcase/data/postgresql.conf.10000664000175000017500000000007312555004756023440 0ustar pronovicpronovic00000000000000 CedarBackup3-3.1.6/testcase/data/encrypt.conf.10000664000175000017500000000007312555004756022721 0ustar pronovicpronovic00000000000000 CedarBackup3-3.1.6/testcase/data/tree22.tar.gz0000664000175000017500000001401612555004756022463 0ustar pronovicpronovic00000000000000sHǮȑ{ͧ Y{c{oO?ՒZ>-̜oQI GF$E 0A` /D _@"`α28MWWLkެMe(| +G˦/@1~??y?A!40* se8$v 馍 #DYR($`RM*NJPGĉ1a@u:fFMIE zCTa\8(Wf=b1>T=aKfJxpm\R9Pj('f?bSٚ9ےV"j=䏱?zb9!ese`z팔5֥4I٤[7s';_^f~u g.K,!tYL13Ï_!?{_Amʮg1J \ "ORvQAT!CsAؖÄ``:#p ZǔA#?9G? 
2.J9,D ɯ wq$V5-)lλUB\,NEqZJ1β뎊WǤ,7{|^@' \m*+^ή({7b ]t#R+l86##kӉ3#C'ZBk ;Ϸf{-$c ol7aKuH/uB./T͞Ti]ft?b{T!QDӦLȅw6Msn0 ]kaaȝxoIA>;hK8ʗku o`O ž<&_21I@]cw%.С*4jtuE(IvJ?Ɉع;J7@0yAo " Z1NJt %kj[Yp|O—;+jZ v毹4](pIdwTSO^T2z &DQv3Buzp$Y;v{n*N>ezIv|@`NW!rEc8#-{ 7&`Q~~3]V X?n}dV+x4' ܯS" R ʩ2T0~<#0y x5izq!~Kb!iJ:MM1W=Lhԭ8ivV78p~o+`~ki͖X5RX$-:lue: _NU:қG[y=Ri2c9ɥ `zXI_hh=zdI NG~w?A_|t_Ti~ȏ?%:O˹bThg 0L W 4բlQ+&,ʮ17X${2IπISY{k+n3X7l^b@Cz<Z.J=&\Oڗ;77mzD5 C6l,{n=4!˲H7Wڹ3ka^fED Xw]"wN"Q:r5jT6°V&kqҟ,3m NҒ;;c\eΈϱXBa3K(^8xWu˵/ĺ;O|άǎK`Y-ec $]`PAl{ gjy:;Dl&H~ʆ#&Czv,*FGDl^ٞ17bu\b,Y>d&ܷYՁg%gEF&"l/zQz+<n#ᨱgs3E{DD}ͨ1Dpt  %2)j @FꭚYK{ | >1҅^:m(T' +#,VH%>5Do0Zxe6rr/\OlOokDA2یّPau?P' pjԘ2}X(cRO&ӹ4>@r.jo!-'@Q# &1&esһZߵQ rWD͛2n.W"L[N- R괶4Q>S),\X;RINv<#W>‚ (M\^#sW?c^'1Y1ħD_3ǁ?m6gl SvCa5nݵn3j e+ġ7 Zqz+$7̴M[mN2#W@:7Xʹg`0%e#pdAFk:&AF$CeޘJiۘh f}CjɷD^xGo(] CJ9mh;Kzlj;Y>1#+yT}+j{S+yw@{ĕԵ _SfWtJ)X~z4>wIk. ˍ}_>W u7.&^@;)s*">e*jXi}ffbpD Qf^qcjNjV rS]}0 # M. bs z?/yY:pIs%SQ榬:01lM<.¶t蘆ÛϠ^Lr1r,itAtήkd(:3%c jħ2eW_ 3+RW65v:jlƄ&\^|z@2/!Q?c^ (Z# Yi؟×ݶ$,ZȮ:ie_e3tѥ 9;Ea0Zξ%sKx/VQDBZ6BvI1覄ת$}7߻.QnL@ie"w{g({=ĮKE c e>sC̪+/j&dHd 'dpw:Sd 2qSo)pd͔%YMJb$T4bAqR]𥳐X%r*8t7̮zml Gw l,!n\vE-01YUhvYga':-lOFLCP[e} qʢX?X\ĻBS gIcEHl3 Xo,L!+HB"@xis';c>]MgXI~YglZKfClEH\ w vm wc%Mx 唺ei]}ʥiR^ Y$cJY D9GL:sS&f,d] Eэ0&0cY[!Q!Wq IkVߘd4H2{5EV3 O%7,'-8V>]vq HEo^O :E!40B9Z!Ǐ?\V}i|+r"TCxоC"edU'V1[! D9z&8T34!;z$[uGݳtYMن_b3Q{^;~(?!tlڄ@"=z NQ!H N8 ^)Pژ!fϚPg3gs ]:5'WNx(RkQ[RM;_ 2Ύ"7̖3հ4fn&| f;X U_J5"⾣i&gM6U#Un饫Im${8f=*QG۔([\?:1VDQTeSRS!GvKI}???Y%Eel0`_BtsGS$D 6Aղno^%ɳ)FGzn^FJ3;{o}xƍ( P=of(:!jې2 O^,Ӕ]ɹb'Ge1ګEI뤾rϬ!L~qYӺC8K^WT•(!'-ǍE~ܧ)-R59՝SXBr7O04\ͦw^Z.tK \h pJ%*}5;ՔoE3OncFÐi[ĩI}ɖщcU 3C9+3ON/]_ylcSHT5ET2f̹H,H }pKI2CfD+5FG9]:]XZhTP zsR -ijH0ԶF$B?XD5,rh;{Ys\c ;爚|9!V?t1"#ZtdBpBpo0?ȩFwRt쉡3]Z)MWc}tVzTǺ{u6$b:y4R6m'\*;.lj56[?bƈ|E^Y_waԳ㪘,-CludvƤht~$")l+SJ۪H9vĴh6C<<Cx/PA?/?PPC?`P@ut'2&}7q_s/i^? ''^!O}?xv? 
>ċ$zo#y?@ByG`CO۬Zkk?f`Q?~_?|abCedarBackup3-3.1.6/testcase/data/cback.conf.80000664000175000017500000000372512555004756022316 0ustar pronovicpronovic00000000000000 /opt/backup/collect daily targz .cbignore /etc/cback.conf /etc/X11 .*tmp.* .*\.netscape\/.* /root 1 /tmp 3 /ken 1 Y /var/log incr /etc incr tar .ignore /opt /opt/share large .*\.doc\.* backup .*\.xls\.* /opt/tmp /home/root/.profile /home/root/.kshrc weekly /home/root/.aliases daily tarbz2 CedarBackup3-3.1.6/testcase/data/tree9.ini0000664000175000017500000000042012555004756021754 0ustar pronovicpronovic00000000000000; Huge directory containing many files, directories and links. [names] dirprefix = dir fileprefix = file linkprefix = link [sizes] maxdepth = 2 mindirs = 2 maxdirs = 2 minfiles = 2 maxfiles = 2 minlinks = 2 maxlinks = 4 minsize = 0 maxsize = 300 CedarBackup3-3.1.6/testcase/data/cback.conf.50000664000175000017500000000060212555004756022302 0ustar pronovicpronovic00000000000000 tuesday /opt/backup/tmp backup group /usr/bin/scp -1 -B CedarBackup3-3.1.6/testcase/data/tree17.tar.gz0000664000175000017500000000174512555004756022474 0ustar pronovicpronovic00000000000000GEjPE)E?' n n!_IGJTEebP\:8yS))8<f~^oөqy~rEAXnp"Cx߇c^?Ax l~]k+;]c<7eE Xz;;ց-؍ Eikliv6O?pmc?mL_0Ypa?d55`'?w#KL'k]`g>OOS+OZ_B?|?m!_6_?*GGpϹ¶_BXgs+C!Y??H?~  \!Xy9Zi@%p'Dֿ;۸iW;6Ook?9{BXU) 9?WZ uZ{rW`g-4`'ǿ?V0 a=uϚkT -?Gbs??oZi?@g_ؖ?!(C(CXgk7|HZFb\!Xy@lWOM!8ߒy#CedarBackup3-3.1.6/testcase/data/split.conf.20000664000175000017500000000025212555004756022370 0ustar pronovicpronovic00000000000000 12345 67890.0 CedarBackup3-3.1.6/testcase/data/lotsoflines.py0000664000175000017500000000111712560007330023125 0ustar pronovicpronovic00000000000000# Generates 100,000 lines of output (about 4 MB of data). # The first argument says where to put the lines. 
# "stdout" goes to stdout # "stderr" goes to stdrer # "both" duplicates the line to both stdout and stderr import sys where = "both" if len(sys.argv) > 1: where = sys.argv[1] for i in range(1, 100000+1): if where == "both": sys.stdout.write("This is line %d.\n" % i) sys.stderr.write("This is line %d.\n" % i) elif where == "stdout": sys.stdout.write("This is line %d.\n" % i) elif where == "stderr": sys.stderr.write("This is line %d.\n" % i) CedarBackup3-3.1.6/testcase/data/cback.conf.20000664000175000017500000000007312555004756022301 0ustar pronovicpronovic00000000000000 CedarBackup3-3.1.6/testcase/data/cback.conf.100000664000175000017500000000162312555004756022362 0ustar pronovicpronovic00000000000000 /opt/backup/staging machine1-1 local /opt/backup/collect machine1-2 local /var/backup machine2 remote /backup/collect all machine3 remote someone scp -B /home/whatever/tmp CedarBackup3-3.1.6/testcase/data/tree5.ini0000664000175000017500000000043212555004756021753 0ustar pronovicpronovic00000000000000; Higher-depth directory containing small files, directories and links [names] dirprefix = dir fileprefix = file linkprefix = link [sizes] maxdepth = 2 mindirs = 1 maxdirs = 10 minfiles = 1 maxfiles = 10 minlinks = 1 maxlinks = 2 minsize = 0 maxsize = 500 CedarBackup3-3.1.6/testcase/data/amazons3.conf.10000664000175000017500000000007312555004756022770 0ustar pronovicpronovic00000000000000 CedarBackup3-3.1.6/testcase/data/subversion.conf.20000664000175000017500000000046712555004756023444 0ustar pronovicpronovic00000000000000 daily gzip /opt/public/svn/software CedarBackup3-3.1.6/testcase/data/cback.conf.60000664000175000017500000000176212555004756022313 0ustar pronovicpronovic00000000000000 tuesday /opt/backup/tmp backup group /usr/bin/scp -1 -B /usr/bin/ssh /usr/bin/cback collect, purge mkisofs /usr/bin/mkisofs svnlook /svnlook collect ls -l stage df -k CedarBackup3-3.1.6/testcase/data/mysql.conf.20000664000175000017500000000040712555004756022404 0ustar 
pronovicpronovic00000000000000 user password none Y CedarBackup3-3.1.6/testcase/data/tree6.ini0000664000175000017500000000042212555004756021753 0ustar pronovicpronovic00000000000000; Huge directory containing many files, directories and links. [names] dirprefix = dir fileprefix = file linkprefix = link [sizes] maxdepth = 3 mindirs = 2 maxdirs = 3 minfiles = 1 maxfiles = 10 minlinks = 1 maxlinks = 5 minsize = 0 maxsize = 1000 CedarBackup3-3.1.6/testcase/data/cback.conf.120000664000175000017500000000136112555004756022363 0ustar pronovicpronovic00000000000000 /opt/backup/staging cdrw-74 cdwriter /dev/cdrw 0,0,0 4 Y Y Y Y 12 13 weekly 1.3 CedarBackup3-3.1.6/testcase/data/subversion.conf.10000664000175000017500000000007312555004756023434 0ustar pronovicpronovic00000000000000 CedarBackup3-3.1.6/testcase/data/mysql.conf.50000664000175000017500000000046312555004756022411 0ustar pronovicpronovic00000000000000 bzip2 N database1 database2 CedarBackup3-3.1.6/testcase/data/tree2.tar.gz0000664000175000017500000000034012555004756022374 0ustar pronovicpronovic00000000000000;AM081ʂ_QDM'c=yK2+W<9''qZ3&}8mKӸcù߽+,/?1rMRk|?2e W_Lg GS=k55TD-k7ٟR(CedarBackup3-3.1.6/testcase/data/amazons3.conf.30000664000175000017500000000046012560005300022750 0ustar pronovicpronovic00000000000000 Y mybucket encrypt 2.5 GB 600 MB CedarBackup3-3.1.6/testcase/data/cback.conf.160000664000175000017500000000052412555004756022367 0ustar pronovicpronovic00000000000000 example something.whatever example 1 CedarBackup3-3.1.6/testcase/data/tree3.ini0000664000175000017500000000041612555004756021753 0ustar pronovicpronovic00000000000000; Higher-depth directory containing only other directories. 
[names] dirprefix = dir fileprefix = file linkprefix = link [sizes] maxdepth = 2 mindirs = 1 maxdirs = 10 minfiles = 0 maxfiles = 0 minlinks = 0 maxlinks = 0 minsize = 0 maxsize = 500 CedarBackup3-3.1.6/testcase/data/tree1.tar.gz0000664000175000017500000000177512555004756022410 0ustar pronovicpronovic00000000000000;AɒHkSt&$RdVɌ̈>}Watm{cUW\pN|ٷq xy* iWQ(@8$^$ /s?/?/u[1\v?~O"~ɓ2AQ'hG&^^?0in/PS u|g]얼0va|0l戧8WӤ[AVxl 5x` y!o՞f}V%"t%;e4 վ|,2.85TI̛kװKɫgpSM@sEurSݴ&Ɔ HgM<8AWw2S_rQ)aHjkr=r067 #z?2[Gyr-0 6me阤8\2Ej"y't#VئlH2Klj)^I9F ^%n*ZkTX[{m&--8ig"Kǃǁ1I>JO<)8 ?g?Q5`\BPx'k0ɶwY0cN˅\-͡boSj:8ږa]!St3ޖs6|5K5cZ2DcSfh>ZFIN{+.B5+GOx?ĉ`oN<]eit#G%zÆxz8m4S$ Tv]@'ROs6;q[?9(MG2hlF @ڈK?T[=:)8,Teŕ-q[uC4fi|v)scxWkso9ʃD}i2U# }mك~5LrX_ 1 㳹zR1 Bi*[tL.(غеnI2$u Z>&jh1Z,xs?G<@A -Q+(CedarBackup3-3.1.6/testcase/data/mysql.conf.30000664000175000017500000000045712555004756022412 0ustar pronovicpronovic00000000000000 user password gzip N database CedarBackup3-3.1.6/testcase/data/tree6.tar.gz0000664000175000017500000007050612555004756022413 0ustar pronovicpronovic00000000000000AǖHsS |pZZk#LfN<'*;m k\:g__P  ڂ@ ]C@e~2C?Uy~_D䏚GǾ^ao3G{{q?~#?B??Uqaۇ 8t?pރX0G#UܧmSYfBdE6Ss=:"a؜)7nkQ?M\O+s}3YxhmK/*uN팣ȳYCVtPͅ_^$˼&0S$1ō`{cx ({-rѵyD=?{cbз>q)yzű\sv Z#~< aH$1Ba4f%7A^(8iІE&@jI3˷. 
D&>-"ܫ7mCWm"mOHRp802+6,3"K xt@1~1쁅F6(3!Ӣd%eGj_,#b'DjK1GޭG3rWwiYk2ù@inKJ3ۭֆ g &=87u,X (5 mH^e<ZCw'[< tn3=Hf;i[$LfcaCn$ v\èbjj!G,Q|3))hJ@FJR| eHB졪i$H_F B/Ւ ǽ&fRMy{.Xt, ]@er¦ yUS n^8,M=T—&U {A)`%[c`GU֮42.yPEZX/[+a&$[K]ķ)zLQ CSYK^KW݉2Ng٤ d 6؟ VQ(>D՘o +xIjv@[qA+J饺#c*1@s*Cf4[siFW_B ^x)r@\KPp鬉h-ާ12?0dHٳ^Lz qCSnwE% ҉z|V05Iь1TOh>1w }XePv(n6_}]qG߭&a3eS$'wC56<&: N|<8||hU`RcHr]IIMOfInҼ(&caoJښa2LX؝) 5M?ꋃp -@|,rJ,"N:9~Ln~:7; E/P9+qE5lK7pq; [GS*`M.̴[,9mzv^]9$yוb} QhFJHQY;;y%?(pG~Ԩ/Ǝ> gkFLC:5gsm9Lӳ~$݊"M6D͕vG|[ Y%Vu~X4[G< e9w@v,NY)T]M$_q-NCoň(C>!*Aw8?$98D1ܔX-+nR2|u9?[??n}0HAK0ڄFb@cFOUql3c{ff#[`eD qᆓ`1Y_H  YHP'Z4EBG@{*iŹ/3|~YGȷIy /C{M#PiZ&\j,Ҝ@xT4bv:)k64*6Yl.L!.MF]_-^WF[vɑBV"9d)ز&,l"z UESlTs0 uAuM쯇J/R::b&GXjp%y;BSOU/?L@lR(OwB<2Ϳ?D_=|~s~Y9 +eNI >'Ji\DP*Vؓuq>ө@tFȇhYaʴ`K.GZbX'[SȧvQެkSK m޹;} >urTxI'@zj'Z[4;hoBx(m{O>oɳ;!9MYǙNU>܅sշMd1]".c#q}p][<P*.!8IW΃"5 P Xuh4}M6-5*dg.ݠX#G7";O+VGjk \!BP.OV|ce5(E:+GMhǥ74>;Zuݒ8# asnlhMk2< ۷])U+RYUIlx-e $Raw&~̓L#S졣1ױ;w0Y -! ϐ%if|rK(IŠΫDC"m'#1_Ȃ-s-^v`WXMėL W^@s֩JIrp݇J#^@ e@jIsC{Gwëy['9YKKMk2P~ܠ /+l-$ͳ~ne#=Y9NkM xES*h'|DH?bj_oQSDW(éy)Y!{FbUyTq0ý;Zi'=NxqC+_̡8oP{"rUڥ43{ؽՈy{DJհ;8RBJZ";.Y>QarOlܓS tc=Ϗۈ~OnZ~ʷNʟ_^"+V6<ʉUEؙlyB wuxwd& 2 y'@@Zu& c?^𡕑Ex3Ǒ_Ddp ]V,SyqDL]Glɿvmm6Ue!r̈WҮ 첡Y>j m8]Wv9#x`˖3  Rf6:k )yZupiB:牗@ wKHْǢ0@| ]ا!J9Mܒ=|:2@W' Ru4yR)vnCN(;V[i:xk@TY[9m7K_iڊpV>KZ849_ KCk}DG3&h-3㨘q?o'w; B y6zATw YZK <#ۖe;zLԥe|4.@zjz rI!9%;+wc9yk9RB=&ڌzb?hpۈPN)hj*}Dwḡf!!`zyN5htP!;7 ;ϝ^ -9Tt@c<w2n[N[n 'KVS 0мC+AU.WJS(7ѱ)%o2V>Eh呭\EQDt˰)0yP(}ze =UQ\\LzV"|vom- ya)6V Gy[G)z 45<彍HՈ4`T&q\Hk9n4տ<~s7`ګ?ɭ0Ue MyWϱ\mPOop}F>?g??Ϸ)tu vƚ!E1uHKt@oοNʿáe/S/7~ ?=K||-麟K߮|/3 '>ytߑFi:>&Kn<>ZH=f<1Xhsa0&) ϳI'Б nţVFȋE5"3ю h̵_ )Ar # y1: 0o|ҢoŇNIBUoG r)}2bBB9viǖxȏ^ٽ(e&uFd߾ .F,ܯͲc!i#|t"VNR\rx9LbtSy;l&i:+wNqD0!rT`Q>?Ģnl"., ^d)'kQMjQR)8TBXjvR0G;'t< T{YŹ唓m|[[ֵqF`J/G2B%^w֔vI둮ͽvG侮kDO֖qO$wCT9O؍NثM;@W!Uy$*aK-t T0]?7R@DSYㇼRWQUcPn`O8gB4@5~lɓnEB#êmsf3wyf}>Wvq,IL}'Rp!o %p(i|m$|*Fjds-&psz ?"9 @| !_ ]E+&D;^G3ӰuDwj%A]Wt}Saglij0nH͸̕\17ZyDK$ҧnK\.)@fe@As|A3;)@ӡm÷C^"^ }3^2[c4 wp)ř 
ׄR(dxp?_w̏%̭j+zwbᚢa}SLO2/6u4N(>!5{a\"ؔ^'I6ĉsm)l{`L8%z'#_E_TDvK?2Şsjiֶ#;|ҭ.>}'YaGfgM2RL3%,Ё5 & C]@{*Zj تzKmܯpD~\Pbȧ $AʕdN<ѧW]:@',:+k? ߏ8E3o(g/WNjP_/UVV0tl|nsv{"0])J[CܰBñ hyn~]T 0 8%L|Yf#T]V+aR3I )G 1+}lݞsML; Ě=SV-NQPe3\8v4A1l%EN DV/P[;# 0Zv`?U,&{7xɩ Gxk.,C>1\n>s~dYڶ&YYfʧ͖L!Tolh(=\ !p[{NI2G/?q[>?i`*YqWP%{6@Bcs8^UŋRxuf wc1$FBW,<` Ŷ56d#GuXfM4hM6P k}oAȹ+HTV^7@5>aa\U&Cڀ4K"Ξ6eHC @T!8Mhqۓj{l(XMv!CȈhꍈ@p7]\yAlg' ]!m;w˜uz@Z[W75f3U ?;@ݶ`~}L$VȗI/aEX&<3/\qeۉd]/IvUs)ʬkX0F`)KfhXc\o]q.2" v2ޙÛfl$q$7{);+Y0'bbN~>A`/*n^jk uT`XE2p>֬<(CljvTι`~aIbV2+uM TF:ȏM\gq.86jbf 2wRf >$z$ 2rix~+?3?-*~?QdK}7Eg P Bh)b;+rkGބ-޼o^_o1?+??cv_k?.?+D ?{oF~|dOW?߂/[%v?wWs zbO[gc/ \-|^ɀQK%Y:WP"I|uZnHkdy7ƂOߊ.o2_B???=rTE`(c0 X %zX`{' 3+}oeEt<á+/*T"=1!a@(똜}dozyy_Cǐwp#%F.5QN8:ѸW;KTKNSK#"Q9镛X :J]VO0Ť5ky.IwNf$C:PK-q> у鋭.7Dp1" ,J7!GpNǞU:ŅBAE療?Su}Y݈L ʁiWrtQ ۛyطFtBO13b <ie@Mʙګvmavֹ9ބ{fL(S)AfSxA`{ ]gܫւ:[33N˞ y[kzC%Re3q改LP3cT!s ]pT{/S($"@̟OA_?h >i˨1X4(+";kqX- O`@/IkҸFCdo^=#Y݈Y_B  3P^!<-lBPF뚺o6u1OC?l8x^{:! 9QF5Ձ{ޚrM]xaThmOUmX_gFE{%+scnmyhmcJw=jJ'l% ʦB=*'a.6-BuEc&Qd>>{A$ock0 s'O7@^wӴftΦ |ZMQp]; TFWiVyVٍXuI ҵi}9;%.?5MrL rVT~P oU^9X)1]a+0Xt{44`vNDdL{p03Eb]@!e^;/3r$x>`bsXdI1YkG>HA%Fh9[_[ZH "tUNjxս8XrdFkFS'ִS={:"I_(5Fg<ląW \%<[7 WN%y? 
s}ňlT=>U: :,+]XRIrB,$c<3D294l&EzЏJAyz.?߰j(3>SG?w$`uR~EK} 1Līe͔2}Z6mV`&#EI'O]d l2y8GN+t&brD.2w~U<ډ5cS[g{ |<=WGG_GUߕeiYL,~}S=FȀHSYlPuX e8;nҞ746o=bccNUޯgV |k<>^(@-a~@~?=' I9ղurAύh}KReq9gɵ ÄqU[[L`Ol񹎋;Mn2=+MX%8Q1 3 h~tDeg݅ 5r" W@ցfOdŀwڔLl/ V0wdިc* hSA"uaǼu3@͊{xW~bTK;pHӊ zbץО,A;]dL1O fv(}L]-f]99TQϣ6td6ى Um"4.4/Bn"}9y ]rYأ ;Clm F_}Ĥ3bR^C_EW^~=QQmHw-)Wi>>>Emiax㗝+ eb.1'{xBP@+""%a׳:3#_uQ\U~!TI0ޡyӲ-+}LU_Q5 1wbuԏ'~2~5Ժnj_] [ᷩ RaFxfĉ.8!4Xjc 1[lZ|_k;j1AEH٤Y#[mF`g,W4q|:CVN#|gĢNa%ow$0"@%t5 Z@_hE]tWыUAʼ VOP.cf5w5X W+Z:ΝA gg&1c; ֝@Un0# hNM _|pex8.;_ةmpZgX8܁@WHhvCP EemCI '_m\a2Cuܹ'2z*g8lPk[o.73 zh#E"F\>?(_KNub-uJ }1>v46 W>$nџut-,p3@`wKۅxg }g\ jJ9ж^tT y-Vk;IVr7HRN3xf!ejUi x-H*hPPY@@8kj^J(0(~j4:w^?$Zq{ح[QIWFTG7͛6 l"ZҞD#_²V 3LtfJJI=psצ n4˶er!v"Àƨ=hsr|~ʰ:15}kvAjdMc!lV r/iTRd׉Ⳃ%Jsj͏eƘ1Tڕ|pʞp;yd2vF/FBtHo7/SwXvGb=YEAo8~]ߛ/9!'/[~]ߚy:&Yxϧ7Kz=w}L/+5Tެi[ #8 WV/wƯ?oGG?oO ?wuy3-"+Ci+" Cp& L[JBvk6:AQ-"hw =.> C1dbhpL z} ooFA@ub_|oȼ~#M)ʑ\fKkL3 +Y\M0.(zg%iNoCqiUԪ_4Qrs7dfQ2n t<ֶ;˺kԢzŮ>?h([@}u'Q{ !FZ1E&Vᵱ-G)&%>$xo9pĒ#DO%(8KmvlEt+;]/CpϬpI$VR 1#4`E$YDXo/Q,rcYq}Z-mm[s*A$ENؒe( c֋ܲˈb[\(-4Ж FD#C&bgj65U}u!l6M'ȯufhҳC%6WԖ24-2ca&e1۪]:[lؐBjU df5J$TD"v^N8Qzmڏa?Oh`x{*7\94q'ݸ6\kl)9r3=j{6:P\`-Ăo֢v4$rT*|㲼r2]![za 9|F zݨݓv0B1Zہ=7&'@d/u|CxJXGn }ֈON_Dwp +L^2ʭ)&!0O]0fnj%tЃBҊ{ELe!#ratbk%eB\ f4r8} vQjK[)=A%@&4đpT"u: =VD[y3-T 'z<9ѝǻgaLG)|g¡`uqO6;+CC >zhʩ>П~~ա2 DN[}}!LrU# 1Q:bma ?o}go"xmꌠSU_2~ O]/\ 2Wt)xN-w^6V0.Lj\(x1-E,~ s4K"1V^Wį;e$h/͈?Kםpኀp(ɱ׎3kߩPEn~F%RAkoDY$7t ƺ{F !'iMRZe8FV]RUX"qnuXN_'k{oEs3B:djt3r.`h| 5Xu` u0 RM(.wA `T~ j@Ys -@ WdLmveZiH@9;qG;Z1,)9n= בwRS~?~__ yÑ솤}ҡs wS(XdNB}PZަ_FÛԓs$KVKuwX;#vUZcYjYCNdsNAu?fX;Srm.^]{BX\AO;|ϬLf`&S8Q& [ו\Ŧr 9V%m |kf*䗌Q^{elZ[a2F'6p݅~ct:Q6^[*@RZ\S&8y}mepJp!ᷖҷRq}roQ允 CB],Kž3aZٵsB|#? gxF?{ kCAwʚa#szd.`7Xv'̌&kInk#փLXf<h"2s>EuSר6tj;^Ǒ, kE.=׏ ( z] O+A$(N8!}Go ;tE `5\ۖSJeu}3 ]D?3=?? 
׳WaPZ١iPg-_Y-_Þoy3[\Qr8ݴ^!* zmFAލ}ʨԺ%Ẃ%X3Ԝ k2\/}UF%~M͹TZĠwH^)fZ^^mnDMސhŏM%!g!Ź2R~tL]Ҁ-dwdVi)TWDb~( Q%F-wT gx!ElmNQDJ E~O:w!TvR4ƭs?L](l iLG׎d m]W yÇ sϏzL]Iv?$Sj]TN(dVF^Mg>颮 HMKEY{SSw8d=ԪcNz\$*H9z5Ȝ:& wKwe*O5%VA; Gk-:vAX4:JL9hจǪo V)(G$6 R#:XFIY3p$@5e{_F>nW4f"ޔGӴtvԱ|Ȣg1חE>OO E^Bټ14SV8ScD04@Jj$@O\*W| k҇`|/M M@^0ׄ_4#;,Y/g˥%}UVb 7zfN'%Kf$^tx>;=4`q2f۩OR{.DQ8 b k d鷾y3يکQ7k7JA]ٳ@H:G8-nܸ)- {C]f :%%uˆ(vZz=}zl/"?^dtV[ NlT&'d\ucJa Yw7iy,mrL|~ma7gn-З̕= ƅrE bUraȿ^*&4χ\=y*υQljw·swE!JӢ `i:D8L(%X'ݭؐ0^1:QY6D:nΧ$]&?-"Տ4Jx'ӁmQ$~)IN)B024^ oI")?N%+3-E|wa9Ū12OnMT6.#7^DBTݻ%,D[O/ '~ 7~`_rd}qp:)m\w:TZF,陽d=[%.+̻aҡݤ[8Vzav`gwQEſg#$9d>;6!l3$޼kQcˎUbrzEb7B%AM\E B8?q0RQj͡[ 74,vd N%EǞ n&pZ hvN8V ]~DyrWeDvd!E Ksl׸UhϮ#w9#IrP%xԚQkyfhL&i 0C<.eQE5>2jOVW˦ke8æyGŚcmGwjR6bʐg$CJzTnTnMf LK7gkh=a0R:*by#˫z?)6<(ՙƝ qTХWTd2+v@=UJOH_)Ejא%0qϡI }6#nwd8~=QW8xB:N?#>}u׮;3{MMTp;h78Hc)꫱'},)Q7Ő S7Y%ȞP1FN^eZJ\s]; 7ۑmbCG}r$OA#mcPi2G=z-]5&=ϸy!MR rRSUKi |2TK.Ot ܮU5mLԠezHigyzׯ =`)} [d ٷ]dRRtHadq&X e@ w6>pH~$s{Ia(]մخnİ;R&DϯgPΨJ@޳tNN'VTQ a=JH)JVcH<&zg-AwӖ13~/(OAzqnjŐKX)rx<i;nmX1 [%H˞#<߂.ePy*e-o,*7~SUm/,scS1!.H`ZtdTJYrpc6=
    䣛-@V稍!=$9kk>k"lh> tvpKU(&&3퟼7P.5ID(dtx}m_ΔedA{'YGx|0Ĕ=i)jc ߭rޜ _ZAx,7ݸBw|l$[sagoQGŏۙ}R ]W/}2"OWO?o俾ߝo/9N 6G; [L\Y JRrQ~p}#'#y>"u++?/~޽j HRSUc)(b+b{+!+F".s:K3L0Ѷ CX.1)~+; |xoAz,PKnEz#ՒCrcnѣJ}0T(D{ hN K:x X#v09V,:2cMV< E;|9u z׭~2㬲eMʘ^ [-g}x7eP&|;rc:gTŌ6Kmdjkrpvfӆu;67Wf"Hem9Q?8u!mDg M]ȷt&R$I=[ˆ_?8Yٵٖ!@Gk8c(hrPRH,!]E#תzju\א >\I 6z+>62C'&T90;I\{'J} Kx_r vO_܇y>VEz7QvOqʜsӬ}j#b.鄴Xz%8@gAAEUT%>"C*zk!6$u=:s3,y<{^a&]$l [e, v4l2tyF|YȨILWN#:l&5 ްRhh`92֨8d1~Fvpҿ3/-pG̍F?w\kQͽ)?4R5j K8v7_]Ǯȑk͒{u4鵴Qhn,pɈʬ1|' I~ЮaN/꿟߂xGyx'fpȢ>n%y&ПO3i{3vɿ˹#H=/p;e9H"J hg[gh3Mt"4&8qUφo)>JܞPM(!T76e(ngM1]eR b"^]Fdbcb+y肁h(PxJnL``J8XTip;@m)!G t`6/,o/l*`^uGji&P޶(&y 9m`a/%~}ClrzSH}+ oG߉|~?77m?I0 o<&_21I@]=gL1С*4jtuE(IvJ_dswn܁`x+ " Z1NJt %kj[Yp|O;+jZ v毹4](pIdwTSO^T2z &DQv3Buzp$Yfdެ}bOY^:+Pk4E|t\X7k *ȍ X?,afokڪp>+Kh%'_Iz@99%@ 7WNyw&/΀WӬQvϡSz AMSin6 բ'.Su<rlpؠ4qF_Q D>V;P󉊙T#=lzBE:-NI.eMROF3F nF#ԎOBgE?tq:km%߲_w-w@>1 3#u_Yg,ƧQq{h5do^Ô2u>)HE-آV6LX ӕ]=bnCH dOכFyh+n3o-#(Wxy +V\ zv1L7\oڗ;77mzD5 C6l,{nؽ4!˲H7Wڹ3ka^f<".CS=G\;չ0!`')F:p[BN&W0}3{,ցpPXL>` vK'oqZc#XP}R֙x ,Ely]cȶxsAf*wN)l*>h2[mɢbpDts)V'e+ƒC&i}]xXb|Vd`"fmr +s+  ,) 5x 1p!؁yx678^WDј]&nF!KE)8HQˬ5z`Tjd-tZ(E\w8j\*}kFC3ԦX *"WQ7z5=k.s`Fd8A.+?4o?.1u8DfO %sOǞR{I_q{z3ATnE0aV-W.0 E&u?G>1҅^9m(T' +#,VH%>5Do0Zxe6rr/\OlOokDA2یّPau?P' pjԘ2}~X(cR/&r-&)^m}\j7`In6[A5ݴ10a$BBt֘(K*ed A 0II 86~0Q,I4]0 ܪHdF1-B 1@d`ՒD^xGo(] CJ9mh;Gzlj;Y>1#+yT}+j{S+yS9:q.?4u(Hה٤R Mm7ٌ5z V?9%LH%Hj &X>@ѻA֔?/KCU'6 vp*9ܔZ>%_V5rxsԇI#2OD|vv]#KGAԙ),}X'P;Ю&2Svz:~xDʦƎ_U|`XӘ$AÖ44녯XV%GJſ Es-Ѐ5ѽ 2ޕI:|mJ2ˍ슩VU6>A*]Sf{XB9$rkaLDQ &S@T.:ݔ0Zd|{|C>ʍiH>l2TD2=Fl@Q{wNZ jV+`͟eg)إˊ>ڴ弐^EL8)o/lc䎝Lk_ s9ZkUnaG;)]C"Ku:-]=zukݳ1hp3Nq63pyHeax!xx!'K]RWwRfjJ2bpLITM?Hbo3VԻå$xhOeљTΦ^X[_ i0x$pUQl|kW۳‹&tz;cton\٦3I VM N ;]!p6s(Eʼn&1I뻦sYE.b5N؜#KRzQ MxeYqՂ2`LC0Dܱrҭ`۠tpT8S<{B҆K I&Z8H$O8<@v[I'&,E>{UV$**p 04Иhu=zKbikUx@\ИÏ$UA<\q!<އ#QMV#]JFdۚc3e;DƝ-BDi~I뙛=,]gmCn.'hH !uYGBRCzk/ ڸ3E&\ x~#/ 1 y%kyxigS**;"% U"ی® 1gք@`/~BSyzSag?:Țy(1 VM/@OP9&B+6K7wbrSDhFsHm)]M&oٻj*۩7nb$Jo'VKܢ6 TI&ڦD!|s~( 撥AW"Gx#?х?~Sީ_^}}>S?޽cwx UÔ}5ݏv߯/?;'}"Bj@S{S$lq 
~@8GA7U%(\+! a^Tiݪ\3M^}p/H-8 .\N{=?b%Y2ao ,OYDF'C,XC<ܨvĖ{pݐE2 jR|?*92Zb֕sRZljVktʭ4Rmc8y ]wV\nCKT +tjNAwD3I71 mo8fZ3O*k^42u d7l\("fv{Y;gnU\Ujc H uuy;'*R&UIsb#)SsMmܜhY1jXQ&gPֲo>|ȥ!5yLaAO!QJyCGjM^DplH ]yaWzѓ]6\Q bl;Lπe2S 2:=`~GoRZxb*N7V :L͡y֧t (ya瘎4Ø *ቋѭus-"Cy9bvHCXZǣduy,糏m+sJH9NČj7gߩ^/^xŋ?^CedarBackup3-3.1.6/testcase/data/mbox.conf.30000664000175000017500000000076012555004756022207 0ustar pronovicpronovic00000000000000 /home/joebob/mail/cedar-backup-users daily gzip /home/billiejoe/mail weekly bzip2 CedarBackup3-3.1.6/testcase/data/cback.conf.230000664000175000017500000000260012555004756022362 0ustar pronovicpronovic00000000000000 machine1-1 local /opt/backup/collect machine1-2 local /var/backup machine2 remote /backup/collect all machine3 remote someone scp -B /home/whatever/tmp machine4 remote someone scp -B ssh cback Y /aa machine5 remote N collect, purge /bb CedarBackup3-3.1.6/testcase/data/tree21.tar.gz0000664000175000017500000000674512555004756022474 0ustar pronovicpronovic00000000000000ENVa4s\O  w?;j[RhҦ<CN=nM6R^nJԕTEZUu˞8u8ϧi'?Gw"x.Oic_[?Wﱿ| omvy\j"z~77|ZrUw={Xx~8?n?n=о~a9 p߷|M4)ZϿ/:6p                                                                                                 OTa#!1LpnOwCD6..S{DۨRt})96W4U\=Mqq daily gzip /opt/public/svn/one BDB /opt/public/svn/two weekly /opt/public/svn/three bzip2 FSFS /opt/public/svn/four incr bzip2 CedarBackup3-3.1.6/testcase/data/tree9.tar.gz0000664000175000017500000000256212555004756022413 0ustar pronovicpronovic00000000000000^~ArJsS 64KP C "OdkL2{*85ݴVYWQpSP4M>MO#ķ;4I` .XCYy&U/2!IuQr \mq[>•8ɢs<]"~J=<rr1rn@ls.Li&ÍƗ,bkysRVr%zwxa335ALDlmLL5L8b]c.…I a#*IK2|! 
Xh;0M|Gcoh%)I*9Uפ&\d.)k*U&flgZe9;ZMMK~Ws*F??<ࢺNcFYز"!=&fWn WϒyrYEI^m6 }N]B6(zjM\ZQ`l "&Yh\٧~,.'xJtt6f*C{ax=?ߞw_=ǧ?o_?jP}5xZPI8&ӵ[1,#|Ds(X7H[Nqcb@ܑ 4nj[6Qd$x^cv"L .6(1UE XP F`zxp쬍(Yei34Ϗ @K>_Zؙu TϹ1MUKT{dq6b܏Fٛ(=lȶw—j{Qϕs9ҘC(õ=2MplcڵN)9יɮkmwٛ@O:ﲕyi4+!wu8]]+;?rXC @ @ |DQPCedarBackup3-3.1.6/testcase/data/tree15.tar.gz0000664000175000017500000000127412555004756022467 0ustar pronovicpronovic00000000000000ZEMo@q|~vf 妼~uv7ͦue I]:N0>!WhLGso459/cl/1y^(uZvϝvxoۙf;@\m|sL s`Dxs:XYq[퇀ҬRNkX8A6j;/b`7\mem(I.ĩɿ      D (CedarBackup3-3.1.6/testcase/data/split.conf.30000664000175000017500000000025212555004756022371 0ustar pronovicpronovic00000000000000 1.25 KB 0.6KB CedarBackup3-3.1.6/testcase/data/tree16.tar.gz0000664000175000017500000002141312555004756022465 0ustar pronovicpronovic00000000000000i2Eǖș{''#H;xAVwF:QS} 0F#r 0A`߮߮ ]PG@ ~+CK?vm?~_2N2\M~oC0o_c &:0@+s &x7j򼩪mZ~K(0E?ni5)>GZՙ#!@%SOǹ&ﷂS0fy^#gܳR;bL<<5{vf0755^\ o~sjʚv8o QuiT`$ಔVdx&*N(UBd_#ʕX ҅1+ -̤cl{a-ڱ\8g5RC Dݍŀ`IXp%4@x\m$nj}[ 7d[E^<@"[5C ˍŢ/|'G_iRFRtb4)"Lܡ; r^.E~2 ;)RMezpΔv#\m-QSkSL BhfTT#>G߳'3~_p'@ɞKժG/Ө)M=f6_c<,#y]lwgb XsQOgxI2/Xr|Ev;mHŚ9X…}n7@oʜȡi";񻰕*Xh7rhxIN%N:U?k{#%Db[!ߖ2"`65OvX&P 恣| rw-Qt|A,١MשrAج_}Μ0(Lܟ;=D_ 7 m1>qK:/*.f=fvDaz~hkl*4C)fd1Њ,+nptx\@1hhE1~ >k&*fm-s B`'ƔF.3+ M^XlOMGp!HB5f`YO(X}t;lȁrڹ*!xCIzFb#/wN?[9r.'_!ß܇O􇿮 Ƕ |5[r¬DY¬@r&PTN`O~[l4 ;{Oq/Q~PmR&%Y8TnyTW IE3*jadz+n!+#ul)4JK.mO'w"[c+%]tTc4窲 2 [w8ĻqpUӇprrs-_Y@O%и:Ӱ *m&BD,0H')q7g 5)ykعD''e11[Pv!{Ti*f2ɨrEq ,Xk>Za$B+ . 
,I&fW'^p$8n=j᭷?|PwKm'}/o#Da_fGe64$l[_܏m ƇO@FֲPSBF&[etXD ZNp¹gk(GB]l jO,樭f@X/4-~m.0J7c@|zu:llܚ~-P\?1X>5$ gZ}fK=4Ob)1Ԃp$eA=R,@Z/*饲">Yf`o|`{܏x6]7!:4WC#r<: ŒU*z)Xژs"i5/ç[//{-4ƵuN l~Ё&tjcqs&s*6w3*IlBq( B5c$=ZG( SxZNLx&R9{Iф!{ uFFYNx;^煼8B[ˬOs:l]8M؇j=0"}i@.뮥dsNT:ep^H@V5  umU4K -qk/=Ȩ^ CY~S<݂ʇ3.A\Y 7wFͿf^n_-P[*@#UqX$(Fim'Ud%+ =[% \1y-})1zw?6Z#Je  F~E]ݙ-iW$UdhPse$(n'6i*>z>FDs{5k ؛KHr4̬aWOߥ,# aSˮCo q.gsnZa A5|T ܟ̚/wsx?W.wEX?|\ U?|w7??꿐o;_w[4oO@T=NVpH+y/qcTnTѯT_lrxuaA{a0b^ǸB3 QKQ[`C9iu`d=rB4->iVGUau%ř7x`qFDR3KCypQS^}oNb: #uԀO峝V^=2t.;-#y pvW7܄RD/8~;#CǦO yH΀dq^u"6~=qd&GyУ;I_m!~o{ﶁB/`}LHߓLIw7t ŀVӹfD `-l7-ИpLK1["+ʭǣzr(fQiC4bGsfGsF'6`NIvkEZ 2`BajvxU|-ѽU,[MihRXn&v.c"dL:vk]i8֗3C|w$./z,x͕im-љкmR`RQx!kIhMUd"ڃ8;_;ho _oP'v΢qA B*x[|5!<בPS1ye"GW!WZkO ^]+|*JNr^(1b^, ɚu| ~߷`[4(Zw[$f&^4-ڀFږHEa 8Bj#>Ϣһuޥ R: D EimZiNB,ṅ&7bxuBqRͳތ諉N°e,*xΦ`sQN+.cy%YG< L)vMH(Wp w! ^BP(-{n%1GKG1Ё 1»$JhԏHZ1u+#r5wCm]E:A뿟/g[H ӻ$lcWC|Ո1VVW~zˁz%{(d7pWJTьi 諣$AQ >rؾ<ꀬRunӪmV[+Ү`x)B)8c,ad?3Q :w'J Ӱy;Pz5nӏmv_ ޓej֚Q^Ak&rgU<O̞e](F eZ~=,PiU@]`4k[OߵxWN0\xXqwG&/Fub5u9+Em__[C[ cJԟyP!g&|ȉ[3\Lx'Sb42˄} w' @~EJs$v'm;M- > Dž\* c4";j.nV:||Ԝt[g}z (5(ʘyŒ#NrK4[a6Nszy*yȼ|v.J105X/{Q;j[OloqaV[Zܶ{5cCh7r6]'TJO}!x(?/KSX(?/q;0rڱ$3|xˈމ37SQod/OqI9c`djBS;z.mOAguWrIf1ѡI!YpaA9Sז]ݟ~ـܒUiN!;=9JPW'mZanh{e`?/K<,U 窞ȍIׄ2)JGDVN}iq6Ҳ'~+1食N4jÕsH_/I"mK5naB`Y"Yv)3u~<OĿgdPݾCedarBackup3-3.1.6/testcase/data/subversion.conf.60000664000175000017500000000051312555004756023440 0ustar pronovicpronovic00000000000000 /opt/public/svn/software daily gzip CedarBackup3-3.1.6/testcase/data/tree18.tar.gz0000664000175000017500000000171012555004756022465 0ustar pronovicpronovic00000000000000GEn0Fy`_t(m6o-eAw#ZKz}`N>n$:Ʀ &u܄ 6jֿ_a<Oߏ;{\{8* -B=ۛ`Ttƿc-p >?n\,BJW|k'Ypv_D'7 }Ǘ'Y&l_ q_2k&]% /&]k+O `w,p+; C Xiwxy>Mr1/!_?WB2ۿoI_F$}{j C˿w$?˿w$꿄!n_\)~o'P/$a/?l*% wK?%fV;I$꿄!n W`d/fY?/!?[`T7?$߄} 9W߹/aȿ[/aȿ[[_=- l_;' /aȿ_$X?rs|ܘ??/R/$a/?l*% wI?%fV=o ƭ?ƭĿwS^>/!?ecLOxKGBl? 
} %dVnV-a/`[%9=J~O'CedarBackup3-3.1.6/testcase/data/cback.conf.150000664000175000017500000001206712555004756022373 0ustar pronovicpronovic00000000000000 $Author: pronovic $ 1.3 Sample configuration Generated by hand. index example something.whatever example 102 bogus module something 350 tuesday /opt/backup/tmp backup group /usr/bin/scp -1 -B /usr/bin/ssh /usr/bin/cback collect, purge mkisofs /usr/bin/mkisofs svnlook /svnlook collect ls -l subversion mailx -S "hello" stage df -k /opt/backup/collect daily targz .cbignore /etc/cback.conf /etc/X11 .*tmp.* .*\.netscape\/.* /root /tmp 3 /ken 1 Y /var/log incr /etc incr tar .ignore /opt /opt/share large .*\.doc\.* backup .*\.xls\.* /opt/tmp /home/root/.profile /home/root/.kshrc weekly /home/root/.aliases daily tarbz2 /opt/backup/staging machine1-1 local /opt/backup/collect machine1-2 local /var/backup machine2 remote /backup/collect all machine3 remote someone scp -B /home/whatever/tmp /opt/backup/staging cdrw-74 cdwriter /dev/cdrw 4 Y Y Y Y weekly 1.3 /opt/backup/stage 5 /opt/backup/collect 0 /home/backup/tmp 12 CedarBackup3-3.1.6/testcase/data/encrypt.conf.20000664000175000017500000000027412555004756022725 0ustar pronovicpronovic00000000000000 gpg Backup User CedarBackup3-3.1.6/testcase/data/cback.conf.210000664000175000017500000001333012555004756022362 0ustar pronovicpronovic00000000000000 $Author: pronovic $ 1.3 Sample configuration Generated by hand. 
dependency example something.whatever example bogus module something a, b,c one tuesday /opt/backup/tmp backup group /usr/bin/scp -1 -B /usr/bin/ssh /usr/bin/cback collect, purge mkisofs /usr/bin/mkisofs svnlook /svnlook collect ls -l subversion mailx -S "hello" stage df -k machine1-1 local /opt/backup/collect machine1-2 local /var/backup machine2 remote /backup/collect all machine3 remote someone scp -B /home/whatever/tmp machine4 remote someone scp -B ssh cback Y /aa machine5 remote N collect, purge /bb /opt/backup/collect daily targz .cbignore /etc/cback.conf /etc/X11 .*tmp.* .*\.netscape\/.* /root /tmp 3 /ken 1 Y /var/log incr /etc incr tar .ignore /opt /opt/share large .*\.doc\.* backup .*\.xls\.* /opt/tmp /home/root/.profile /home/root/.kshrc weekly /home/root/.aliases daily tarbz2 /opt/backup/staging /opt/backup/staging dvd+rw dvdwriter /dev/cdrw 1 Y Y Y Y weekly 1.3 /opt/backup/stage 5 /opt/backup/collect 0 /home/backup/tmp 12 CedarBackup3-3.1.6/testcase/data/postgresql.conf.20000664000175000017500000000036212555004756023442 0ustar pronovicpronovic00000000000000 user none Y CedarBackup3-3.1.6/testcase/data/postgresql.conf.50000664000175000017500000000046612555004756023452 0ustar pronovicpronovic00000000000000 bzip2 N database1 database2 CedarBackup3-3.1.6/testcase/data/split.conf.50000664000175000017500000000025312555004756022374 0ustar pronovicpronovic00000000000000 1.25 GB 0.6 GB CedarBackup3-3.1.6/testcase/data/cback.conf.10000664000175000017500000000400412555004756022276 0ustar pronovicpronovic00000000000000 $Author: pronovic $ 1.3 Sample configuration tuesday /opt/backup/tmp backup backup /usr/bin/scp -1 -B /opt/backup/collect targz .cbignore /etc daily /var/log incr /opt weekly /opt/large /opt/backup /opt/tmp /opt/backup/staging machine1 local /opt/backup/collect machine2 remote backup /opt/backup/collect /opt/backup/staging /dev/cdrw 0,0,0 4 cdrw-74 Y /opt/backup/stage 5 /opt/backup/collect 0 
CedarBackup3-3.1.6/testcase/data/cback.conf.130000664000175000017500000000041112555004756022357 0ustar pronovicpronovic00000000000000 /opt/backup/stage 5 CedarBackup3-3.1.6/testcase/data/amazons3.conf.20000664000175000017500000000052412555004756022772 0ustar pronovicpronovic00000000000000 Y mybucket encrypt 5368709120 2147483648 CedarBackup3-3.1.6/testcase/data/postgresql.conf.40000664000175000017500000000050512555004756023443 0ustar pronovicpronovic00000000000000 user bzip2 N database1 database2 CedarBackup3-3.1.6/testcase/data/capacity.conf.30000664000175000017500000000025212555004756023033 0ustar pronovicpronovic00000000000000 18 CedarBackup3-3.1.6/testcase/data/cback.conf.40000664000175000017500000000054212555004756022304 0ustar pronovicpronovic00000000000000 $Author: pronovic $ 1.3 Sample configuration Generated by hand. CedarBackup3-3.1.6/testcase/data/split.conf.10000664000175000017500000000007312555004756022370 0ustar pronovicpronovic00000000000000 CedarBackup3-3.1.6/testcase/data/tree2.ini0000664000175000017500000000041512555004756021751 0ustar pronovicpronovic00000000000000; Single-depth directory containing only other directories [names] dirprefix = dir fileprefix = file linkprefix = link [sizes] maxdepth = 1 mindirs = 1 maxdirs = 10 minfiles = 0 maxfiles = 0 minlinks = 0 maxlinks = 0 minsize = 0 maxsize = 500 CedarBackup3-3.1.6/testcase/data/tree5.tar.gz0000664000175000017500000003245312555004756022411 0ustar pronovicpronovic00000000000000AɖVE=+D; @@ߪ(+UJ%{ b6].t 0A`__ _ B A 㿀!/be}ï]M4FM}7%0+Q mvOtqAQyA5g} 󟟛qO#ק|?ZKfyzwP=ZFm 5ap"#˶$:DJ:xS,*aQyF\NoUjNKf!M>b NVꄊ/wKA g03ALC6w ?!3X|4Evrm#XN1 bU&{Vդ= nl^aR@8A804tusz['Põh聛ws;UZJšxG@f:Y# .b01R4f3vBڃ/a@ʓ(伂88.'B^fL`N74.Xm (tB6< ZoQlTWM|`2gwU?#G?56iyK1Fg"}M7q/] pTK\On+%1<2Ttl@P. 
PI ׼Ԥb.55ZFٰNIu,Kx@k0D9ՠxy/D+]I8HS1[G>{1g 74So1k1x|xW ϗ v?_{~m_?oCoW_OWE_{)<y_£E_)<y _?£OvO}|}3`j2GWCIuP%\54e^+@huTq_U۽"D9W|!Xg7}D3 +YIVi`x1P=ED6Wȍ աX]rA[77>_x2* /.Pʰ!S#\csU T9S!(!!G*O/ pχ(`K`y]NѵڂuT5$/%߹;3] \SnlYhiubPvbepޡYQdHYv-luT6xy9jŷbbO%d7 9'I[ 0'6/IQ_B3[*ڦ#Q/S gY }l`Ȝ$D$H!P!c4_c)PiqH{lҒb`> VW23V: `mLH#a̫$vgqO3_+3٣z&S!E:ѕS2KecdIddJt,Rqd52x'QsmθZ1sln D-4p0z8meL12\x:,W?>?ط>Aݢ矗.?AoUBoaɱ Q;i]&YIKpYRPїux~pap 4VQSPb a "x{" |ʄLYui%Er [no^Ylhg!Ngrŏ1It\iS'[-G:Ȉ}#XtE#S[# 8IKivDRU R7&-AS|":;JZ9??O7)p5Dd+G çww@[;w_}sx"O>w_ݽ΢C4qa6Z;ÊPRFN۝e=;*֮yH%Wxϧ7Kz=w%]ORnԷ%D"a\snT PYӘ20fB[ uã蹏vOfT7[ #}A F^-IrF v,{5)(hB˃lEWBUF"DЪ'i\(+ֲzIa)̭qpq`2gw؎FQx*~9F7pCn%{vraUoo(4f󑌞q*{pNkK=_G ?`{H3$k:OUN9ln橥#QMr BZm_?nޝvmQphfԿ2a~- hx]ά9n7~?YDtؑa/isZZ#EiRUNyog__oq?sS#KV`)E>P:Ӎy ,&13k±䘦PDy: D3i\㳈#+BUe̚m(kp eXwBHy7V Yq> L<8q#nήJͶ= ɾvW:]L:K-x@8۔$t'Bul$lT[i%!U=Mi`K~æ߲K@K `4ڄt HXѯ+-! ې ]h+6F#'@T'C'ޯ[yM6Zb0M z}qg/zpek$[VLD%z+Ty꒺t=5/S6Tt x8ol`'O9)M̂<޲z@"5#G$5yX=6geA!6N<H7v=+ O3NOڏ&np_ '}/}_>fGMϐmB1Ф&{˨$GWU23m_xa AnݯeXnb~C;hhZf#qQ<[C& |_}ݺk"jtDa9Nj4 Ϻdf 6pՙv aʕjܬ;d aRVt#@V"78n|">NI~ $*|/K3; 6Il!x^ NJ-9%/)Gs4a9ƤLJFEj\k)06#,bX8-^y %GH\&S⾤Pͭu=_Njr2ٲrqaGg?gÏ? yzgRv 9CaY7?]F!7~D让)8YѶu^f] t}|x"v'J(|׋ {a}?f 8kR!ɎŨFCBUR"4rKonVrqpkAo( 1:Bŗ`7s(h=U=ee}1(] qtR^sif]?ֿ?BocU*,wqhvdCoX ƍYs*CkQxE&.Y ]ax{UP7ɼŇ ̂3뉡͍yG"` D’ˁMq8"qd]25(kfv: LK,k?7 Y؛kɍj{2X u2]O8b8NN,kַC_CaC^R{7:|s+e 5J4I[$~=Ϲq9K n˃魳tmχ, 6iK$si_4%E|q;Z8! XйVsƽQWMoA뎕ulkդ.]Xwu$R6ӮdF:bųxB|]-oyJM&|aoUD&toG]B jo"Oʦ8].ON$FvgLm(Ay 0D gE|yR= -gҷ6tQmF5 \v:!ީ[[p%JUc„1cEY>hb3M}ڻ"R׷h-dlAt/<{9֚'o=pP5b q\ ⵅiDUܟ^QY PZ㪄ߺnR/H/ #R^N>yan|ՌĹUE sU⍎?qB?=";I"+ǩ=^;Dy\'q,SJs6q'Cc fQ.5:qqiqӉQ0IO\cʸqP6&GI >M/ v[H&~;i]pU 5R^FEl܏$s~V/0g~F]GlA8MlKlV6b)Њ jh?dz^]̍2t>~vjc{;Rk[XPvj$HZ6ӧ _qzUbp+pԙE[B rA*V}X_'6;Q>3p PtD̾!sٕs\,,EyWoyyJ#e]¤},7Hcnݱ' "Rp`CąllxgW,g}aBBjR!3Te mI] LNa n];&L~oyx4D`Sј$+v_cXKO[jU{Eh\~rҫHy<0I>?Gt)$E,$sNj4UlL殨w[.Xm5cH驠K~,̕g4H4"0FM]JoW8Dlb ~O*ܶ[M/4ػIL. 
s]\>2Vn ًQSbJqAlёIiWw"U '!L'`p$_lR XO+à^y+NOm*jM&Pb*3Q$Y&ݩI}z|օt@/?,^aQ|:)A- ȭќOض3Urbֿ~KdV16lBavJzu[ YrLOEV+xcP3yq+?Ƭ H8dl󡲛[A; iQo4+#Z .;S Ax>G/_?msG _s!!S#xC??»??N] CG?Cxן3}_>C_Eozذx?S#]Y"T *\ 1yZ.dVjP^MQ2|w.y;%Yv-CM'BlA4*@ˍ޳mddTg''Koc>X%voqQ^^8*3) gޕ[?2hX5BiNO"irnfu4㩡=R4Gr14=isįsï6jppV~O hoyOU2D1}L L*y?/|voV! ;h[{0ABKYffh=JAroOw3//äδ]!|h@`s ڠ;vI*9n_LIkE I.zW>ݢa݅*5?(^;o++He1F@H|Q {]V yL\;G02)~Y>ȌD;"8X{FN @C%ĝY eᕫ["oy> Ha ixjI\ CNҰAD(f\@#:)_EҾ MU+A%2RIx?kE  Xʵ$t4#ڦ&DnB4oZz_vvaZ#&!M2K}tβd(7ng[8Ȋh*8yOunvX~Uo߇P/G2c&n%8ZTll``#DF~)@mV#Rwf\gސ*0ْ2<`FsÜHK 'OYObFL'/G-_+|y}"92@DAumoW! w0k]ܦ'$NeD_mu,U3\o([շR#5ec R'uU \\n'ج-N$)npVgS)P*C3"@^UGQ 4 c("N vMzWf iJH8H}>?"B| c|=iPĒ#Y`.u:tdq _x<_hlrO֣n%pT-3M%DT,L"L RԮNО.͐g t4]۵RĶӣeuNKVbj*kgr$+WzI@Z R9Px*B ?G_8[GOr?;_N»;_!!3uH»;_!|W3񫐉B 55JY̥֮4\Lqcɒv;i:IuvlQhQTgxQH[]S/bo s?Ob!|gY=w?]CxCY=w?!CY=+iE8F LHUW)[眪[K^Mc ƌ/DK8faYKI5HG S[;e;d&+zKRP|X^A QeQG5_B+[sAR&w8cKW`/`]}R!#Sb$tҳZ dtna  S U ƲrEffiͪ2l]㙉Boa@jynӸ=mމr}gf~iMVn`S 5on,{U^8լmA`g YȷvXFaB.`7l5}\aV#m퍐W."]:1[38/q.00VUU8M r4T&{/צrHXZtK`c,_5GgC_Sg3Ktv翀0_s1m|@#1KjQy-:w)IҢDrVP&;P5r"`lmէښP 4s^˺R]KDՠXLlwb|+d $ԧ; Rx4^ |λf}_ wAf_ ˼3afS`d˚Ќ-ѮY>^,3)>&q@~\ EE)"Pfc$,: U`6q J|( Hg zXTa=+zYuQNʅ#)DkO-Z1cYۙmp[ky@-…ƪQN^2|,D`8&%Uk5^XAtF듯-sT";u> !rVGzl?ǒZ< A=unGVeљ:]|i6GtcdWULUwU 2yG+MuBmLD84P%FozI@>O 5^4"H,VB8g!IO+7vςsCl\pSbKQb1v |Rz?_?ϪTrIRa:\gL-q?o|kδqT{ѱ:-]=sk.QB̞ގc쇶,2g(ԍ1Ŧt,g.QICrZUXO1C4X :/#98<'SB AQZḱX%K.uC `-d?4F!һA.u7N^Rr;hQq,scxOn~2&>BP@{k$0;ag7H] f-~rᛓȀK<̓o%>2q}bΪmt_U=Cuh]sɂ l}~WG {?_mH??"Gc^O'9>JGSx;꿞[HQEVQw=cEV/wPTWno)NϘn'G$LWk%tb f2DFvpaACύVUfUNZFc*-#~@%w3KV偠[nK xx)k?x0A۰a/"TüQJ#xbf^ʾ\#1Yj(&gZ s K@]=zsbT-r0k~eݐ /opt/backup/stage 5 /opt/backup/collect 0 /home/backup/tmp 12 CedarBackup3-3.1.6/testcase/data/cback.conf.30000664000175000017500000000024612555004756022304 0ustar pronovicpronovic00000000000000 CedarBackup3-3.1.6/testcase/data/tree10.tar.gz0000664000175000017500000000060712555004756022461 0ustar pronovicpronovic00000000000000An@`>oй2V4Dmw Ehb.Ʀ|aA`8JJn BDޭXpԭ3_{F aQʅ^ 
kU/i\jy>YmyCQ/*oSB3mZ0I mW=ȿ?QU[hMOu%Č  }Nzò 9Ø}1`^W? w7q#y[iRecC9J|.J %bj۔yr)Ѝ,T;W0uJaI-M~&RfX|IL sݱϥLA5bc7.`Ks8iSGWSP(CedarBackup3-3.1.6/testcase/subversiontests.py0000664000175000017500000031314112560007330023140 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2005-2007,2010,2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Tests Subversion extension functionality. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup3/extend/subversion.py. Code Coverage ============= This module contains individual tests for many of the public functions and classes implemented in extend/subversion.py. There are also tests for several of the private methods.
Unfortunately, it's rather difficult to test this code in an automated fashion, even if you have access to Subversion, since the actual backup would need to have access to real Subversion repositories. Because of this, there aren't any tests below that actually back up repositories. As a compromise, I test some of the private methods in the implementation. Normally, I don't like to test private methods, but in this case, testing the private methods will help give us some reasonable confidence in the code even if we can't talk to Subversion successfully. This isn't perfect, but it's better than nothing. Naming Conventions ================== I prefer to avoid large unit tests which validate more than one piece of functionality, and I prefer to avoid using overly descriptive (read: long) test names, as well. Instead, I use lots of very small tests that each validate one specific thing. These small tests are then named with an index number, yielding something like C{testAddDir_001} or C{testValidate_010}. Each method has a docstring describing what it's supposed to accomplish. I feel that this makes it easier to judge how important a given failure is, and also makes it somewhat easier to diagnose and fix individual problems. Testing XML Extraction ====================== It's difficult to validate that generated XML is exactly "right", especially when dealing with pretty-printed XML. We can't just provide a constant string and say "the result must match this". Instead, what we do is extract a node, build some XML from it, and then feed that XML back into another object's constructor. If that parse process succeeds and the old object is equal to the new object, we assume that the extract was successful. It would arguably be better if we could do a completely independent check - but implementing that check would be equivalent to re-implementing all of the existing functionality that we're validating here!
   After all, the most important thing is that data can move seamlessly
   from object to XML document and back to object.

Full vs. Reduced Tests
======================

   All of the tests in this module are considered safe to be run in an
   average build environment.  There is no need to use a
   SUBVERSIONTESTS_FULL environment variable to provide a "reduced feature
   set" test suite as for some of the other test modules.

@author: Kenneth J. Pronovici
"""

########################################################################
# Import modules and do runtime validations
########################################################################

# System modules
import unittest

# Cedar Backup modules
from CedarBackup3.testutil import findResources, failUnlessAssignRaises
from CedarBackup3.xmlutil import createOutputDom, serializeDom
from CedarBackup3.extend.subversion import LocalConfig, SubversionConfig
from CedarBackup3.extend.subversion import Repository, RepositoryDir, BDBRepository, FSFSRepository


#######################################################################
# Module-wide configuration and constants
#######################################################################

DATA_DIRS = [ "./data", "./testcase/data", ]
RESOURCES = [ "subversion.conf.1", "subversion.conf.2", "subversion.conf.3",
              "subversion.conf.4", "subversion.conf.5", "subversion.conf.6",
              "subversion.conf.7", ]


#######################################################################
# Test Case Classes
#######################################################################

##########################
# TestBDBRepository class
##########################

class TestBDBRepository(unittest.TestCase):

   """
   Tests for the BDBRepository class.

   @note: This class is deprecated.  These tests are kept around to make
   sure that we don't accidentally break the interface.
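The failUnlessAssignRaises idiom that these tests rely on can be sketched as
follows.  The C{Widget} and C{Example} names are hypothetical; the real
helper lives in CedarBackup3.testutil, but the underlying mechanism is just
assertRaises applied to setattr, since property setters validate their
values:

```python
import unittest

class Widget:
    """Toy object whose property setter validates assigned values."""
    @property
    def mode(self):
        return self._mode
    @mode.setter
    def mode(self, value):
        if value not in ("daily", "weekly", "incr"):
            raise ValueError("Invalid mode: %s" % value)
        self._mode = value

class Example(unittest.TestCase):
    def testMode(self):
        widget = Widget()
        # An invalid assignment must raise; setattr lets assertRaises see it
        self.assertRaises(ValueError, setattr, widget, "mode", "monthly")
        widget.mode = "daily"   # valid values assign cleanly
```

Wrapping setattr this way is what lets a plain assertRaises-style check
exercise a property assignment, which is not otherwise a callable.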
""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = BDBRepository() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ repository = BDBRepository() self.assertEqual("BDB", repository.repositoryType) self.assertEqual(None, repository.repositoryPath) self.assertEqual(None, repository.collectMode) self.assertEqual(None, repository.compressMode) def testConstructor_002(self): """ Test constructor with all values filled in. """ repository = BDBRepository("/path/to/it", "daily", "gzip") self.assertEqual("BDB", repository.repositoryType) self.assertEqual("/path/to/it", repository.repositoryPath) self.assertEqual("daily", repository.collectMode) self.assertEqual("gzip", repository.compressMode) # Removed testConstructor_003 after BDBRepository was deprecated def testConstructor_004(self): """ Test assignment of repositoryPath attribute, None value. """ repository = BDBRepository(repositoryPath="/path/to/something") self.assertEqual("/path/to/something", repository.repositoryPath) repository.repositoryPath = None self.assertEqual(None, repository.repositoryPath) def testConstructor_005(self): """ Test assignment of repositoryPath attribute, valid value. 
""" repository = BDBRepository() self.assertEqual(None, repository.repositoryPath) repository.repositoryPath = "/path/to/whatever" self.assertEqual("/path/to/whatever", repository.repositoryPath) def testConstructor_006(self): """ Test assignment of repositoryPath attribute, invalid value (empty). """ repository = BDBRepository() self.assertEqual(None, repository.repositoryPath) self.failUnlessAssignRaises(ValueError, repository, "repositoryPath", "") self.assertEqual(None, repository.repositoryPath) def testConstructor_007(self): """ Test assignment of repositoryPath attribute, invalid value (not absolute). """ repository = BDBRepository() self.assertEqual(None, repository.repositoryPath) self.failUnlessAssignRaises(ValueError, repository, "repositoryPath", "relative/path") self.assertEqual(None, repository.repositoryPath) def testConstructor_008(self): """ Test assignment of collectMode attribute, None value. """ repository = BDBRepository(collectMode="daily") self.assertEqual("daily", repository.collectMode) repository.collectMode = None self.assertEqual(None, repository.collectMode) def testConstructor_009(self): """ Test assignment of collectMode attribute, valid value. """ repository = BDBRepository() self.assertEqual(None, repository.collectMode) repository.collectMode = "daily" self.assertEqual("daily", repository.collectMode) repository.collectMode = "weekly" self.assertEqual("weekly", repository.collectMode) repository.collectMode = "incr" self.assertEqual("incr", repository.collectMode) def testConstructor_010(self): """ Test assignment of collectMode attribute, invalid value (empty). """ repository = BDBRepository() self.assertEqual(None, repository.collectMode) self.failUnlessAssignRaises(ValueError, repository, "collectMode", "") self.assertEqual(None, repository.collectMode) def testConstructor_011(self): """ Test assignment of collectMode attribute, invalid value (not in list). 
""" repository = BDBRepository() self.assertEqual(None, repository.collectMode) self.failUnlessAssignRaises(ValueError, repository, "collectMode", "monthly") self.assertEqual(None, repository.collectMode) def testConstructor_012(self): """ Test assignment of compressMode attribute, None value. """ repository = BDBRepository(compressMode="gzip") self.assertEqual("gzip", repository.compressMode) repository.compressMode = None self.assertEqual(None, repository.compressMode) def testConstructor_013(self): """ Test assignment of compressMode attribute, valid value. """ repository = BDBRepository() self.assertEqual(None, repository.compressMode) repository.compressMode = "none" self.assertEqual("none", repository.compressMode) repository.compressMode = "bzip2" self.assertEqual("bzip2", repository.compressMode) repository.compressMode = "gzip" self.assertEqual("gzip", repository.compressMode) def testConstructor_014(self): """ Test assignment of compressMode attribute, invalid value (empty). """ repository = BDBRepository() self.assertEqual(None, repository.compressMode) self.failUnlessAssignRaises(ValueError, repository, "compressMode", "") self.assertEqual(None, repository.compressMode) def testConstructor_015(self): """ Test assignment of compressMode attribute, invalid value (not in list). """ repository = BDBRepository() self.assertEqual(None, repository.compressMode) self.failUnlessAssignRaises(ValueError, repository, "compressMode", "compress") self.assertEqual(None, repository.compressMode) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. 
""" repository1 = BDBRepository() repository2 = BDBRepository() self.assertEqual(repository1, repository2) self.assertTrue(repository1 == repository2) self.assertTrue(not repository1 < repository2) self.assertTrue(repository1 <= repository2) self.assertTrue(not repository1 > repository2) self.assertTrue(repository1 >= repository2) self.assertTrue(not repository1 != repository2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ repository1 = BDBRepository("/path", "daily", "gzip") repository2 = BDBRepository("/path", "daily", "gzip") self.assertEqual(repository1, repository2) self.assertTrue(repository1 == repository2) self.assertTrue(not repository1 < repository2) self.assertTrue(repository1 <= repository2) self.assertTrue(not repository1 > repository2) self.assertTrue(repository1 >= repository2) self.assertTrue(not repository1 != repository2) def testComparison_003(self): """ Test comparison of two differing objects, repositoryPath differs (one None). """ repository1 = BDBRepository() repository2 = BDBRepository(repositoryPath="/zippy") self.assertNotEqual(repository1, repository2) self.assertTrue(not repository1 == repository2) self.assertTrue(repository1 < repository2) self.assertTrue(repository1 <= repository2) self.assertTrue(not repository1 > repository2) self.assertTrue(not repository1 >= repository2) self.assertTrue(repository1 != repository2) def testComparison_004(self): """ Test comparison of two differing objects, repositoryPath differs. 
""" repository1 = BDBRepository("/path", "daily", "gzip") repository2 = BDBRepository("/zippy", "daily", "gzip") self.assertNotEqual(repository1, repository2) self.assertTrue(not repository1 == repository2) self.assertTrue(repository1 < repository2) self.assertTrue(repository1 <= repository2) self.assertTrue(not repository1 > repository2) self.assertTrue(not repository1 >= repository2) self.assertTrue(repository1 != repository2) def testComparison_005(self): """ Test comparison of two differing objects, collectMode differs (one None). """ repository1 = BDBRepository() repository2 = BDBRepository(collectMode="incr") self.assertNotEqual(repository1, repository2) self.assertTrue(not repository1 == repository2) self.assertTrue(repository1 < repository2) self.assertTrue(repository1 <= repository2) self.assertTrue(not repository1 > repository2) self.assertTrue(not repository1 >= repository2) self.assertTrue(repository1 != repository2) def testComparison_006(self): """ Test comparison of two differing objects, collectMode differs. """ repository1 = BDBRepository("/path", "daily", "gzip") repository2 = BDBRepository("/path", "incr", "gzip") self.assertNotEqual(repository1, repository2) self.assertTrue(not repository1 == repository2) self.assertTrue(repository1 < repository2) self.assertTrue(repository1 <= repository2) self.assertTrue(not repository1 > repository2) self.assertTrue(not repository1 >= repository2) self.assertTrue(repository1 != repository2) def testComparison_007(self): """ Test comparison of two differing objects, compressMode differs (one None). 
""" repository1 = BDBRepository() repository2 = BDBRepository(compressMode="gzip") self.assertNotEqual(repository1, repository2) self.assertTrue(not repository1 == repository2) self.assertTrue(repository1 < repository2) self.assertTrue(repository1 <= repository2) self.assertTrue(not repository1 > repository2) self.assertTrue(not repository1 >= repository2) self.assertTrue(repository1 != repository2) def testComparison_008(self): """ Test comparison of two differing objects, compressMode differs. """ repository1 = BDBRepository("/path", "daily", "bzip2") repository2 = BDBRepository("/path", "daily", "gzip") self.assertNotEqual(repository1, repository2) self.assertTrue(not repository1 == repository2) self.assertTrue(repository1 < repository2) self.assertTrue(repository1 <= repository2) self.assertTrue(not repository1 > repository2) self.assertTrue(not repository1 >= repository2) self.assertTrue(repository1 != repository2) ########################### # TestFSFSRepository class ########################### class TestFSFSRepository(unittest.TestCase): """ Tests for the FSFSRepository class. @note: This class is deprecated. These tests are kept around to make sure that we don't accidentally break the interface. """ ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = FSFSRepository() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. 
""" repository = FSFSRepository() self.assertEqual("FSFS", repository.repositoryType) self.assertEqual(None, repository.repositoryPath) self.assertEqual(None, repository.collectMode) self.assertEqual(None, repository.compressMode) def testConstructor_002(self): """ Test constructor with all values filled in. """ repository = FSFSRepository("/path/to/it", "daily", "gzip") self.assertEqual("FSFS", repository.repositoryType) self.assertEqual("/path/to/it", repository.repositoryPath) self.assertEqual("daily", repository.collectMode) self.assertEqual("gzip", repository.compressMode) # Removed testConstructor_003 after FSFSRepository was deprecated def testConstructor_004(self): """ Test assignment of repositoryPath attribute, None value. """ repository = FSFSRepository(repositoryPath="/path/to/something") self.assertEqual("/path/to/something", repository.repositoryPath) repository.repositoryPath = None self.assertEqual(None, repository.repositoryPath) def testConstructor_005(self): """ Test assignment of repositoryPath attribute, valid value. """ repository = FSFSRepository() self.assertEqual(None, repository.repositoryPath) repository.repositoryPath = "/path/to/whatever" self.assertEqual("/path/to/whatever", repository.repositoryPath) def testConstructor_006(self): """ Test assignment of repositoryPath attribute, invalid value (empty). """ repository = FSFSRepository() self.assertEqual(None, repository.repositoryPath) self.failUnlessAssignRaises(ValueError, repository, "repositoryPath", "") self.assertEqual(None, repository.repositoryPath) def testConstructor_007(self): """ Test assignment of repositoryPath attribute, invalid value (not absolute). """ repository = FSFSRepository() self.assertEqual(None, repository.repositoryPath) self.failUnlessAssignRaises(ValueError, repository, "repositoryPath", "relative/path") self.assertEqual(None, repository.repositoryPath) def testConstructor_008(self): """ Test assignment of collectMode attribute, None value. 
""" repository = FSFSRepository(collectMode="daily") self.assertEqual("daily", repository.collectMode) repository.collectMode = None self.assertEqual(None, repository.collectMode) def testConstructor_009(self): """ Test assignment of collectMode attribute, valid value. """ repository = FSFSRepository() self.assertEqual(None, repository.collectMode) repository.collectMode = "daily" self.assertEqual("daily", repository.collectMode) repository.collectMode = "weekly" self.assertEqual("weekly", repository.collectMode) repository.collectMode = "incr" self.assertEqual("incr", repository.collectMode) def testConstructor_010(self): """ Test assignment of collectMode attribute, invalid value (empty). """ repository = FSFSRepository() self.assertEqual(None, repository.collectMode) self.failUnlessAssignRaises(ValueError, repository, "collectMode", "") self.assertEqual(None, repository.collectMode) def testConstructor_011(self): """ Test assignment of collectMode attribute, invalid value (not in list). """ repository = FSFSRepository() self.assertEqual(None, repository.collectMode) self.failUnlessAssignRaises(ValueError, repository, "collectMode", "monthly") self.assertEqual(None, repository.collectMode) def testConstructor_012(self): """ Test assignment of compressMode attribute, None value. """ repository = FSFSRepository(compressMode="gzip") self.assertEqual("gzip", repository.compressMode) repository.compressMode = None self.assertEqual(None, repository.compressMode) def testConstructor_013(self): """ Test assignment of compressMode attribute, valid value. 
""" repository = FSFSRepository() self.assertEqual(None, repository.compressMode) repository.compressMode = "none" self.assertEqual("none", repository.compressMode) repository.compressMode = "bzip2" self.assertEqual("bzip2", repository.compressMode) repository.compressMode = "gzip" self.assertEqual("gzip", repository.compressMode) def testConstructor_014(self): """ Test assignment of compressMode attribute, invalid value (empty). """ repository = FSFSRepository() self.assertEqual(None, repository.compressMode) self.failUnlessAssignRaises(ValueError, repository, "compressMode", "") self.assertEqual(None, repository.compressMode) def testConstructor_015(self): """ Test assignment of compressMode attribute, invalid value (not in list). """ repository = FSFSRepository() self.assertEqual(None, repository.compressMode) self.failUnlessAssignRaises(ValueError, repository, "compressMode", "compress") self.assertEqual(None, repository.compressMode) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ repository1 = FSFSRepository() repository2 = FSFSRepository() self.assertEqual(repository1, repository2) self.assertTrue(repository1 == repository2) self.assertTrue(not repository1 < repository2) self.assertTrue(repository1 <= repository2) self.assertTrue(not repository1 > repository2) self.assertTrue(repository1 >= repository2) self.assertTrue(not repository1 != repository2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. 
""" repository1 = FSFSRepository("/path", "daily", "gzip") repository2 = FSFSRepository("/path", "daily", "gzip") self.assertEqual(repository1, repository2) self.assertTrue(repository1 == repository2) self.assertTrue(not repository1 < repository2) self.assertTrue(repository1 <= repository2) self.assertTrue(not repository1 > repository2) self.assertTrue(repository1 >= repository2) self.assertTrue(not repository1 != repository2) def testComparison_003(self): """ Test comparison of two differing objects, repositoryPath differs (one None). """ repository1 = FSFSRepository() repository2 = FSFSRepository(repositoryPath="/zippy") self.assertNotEqual(repository1, repository2) self.assertTrue(not repository1 == repository2) self.assertTrue(repository1 < repository2) self.assertTrue(repository1 <= repository2) self.assertTrue(not repository1 > repository2) self.assertTrue(not repository1 >= repository2) self.assertTrue(repository1 != repository2) def testComparison_004(self): """ Test comparison of two differing objects, repositoryPath differs. """ repository1 = FSFSRepository("/path", "daily", "gzip") repository2 = FSFSRepository("/zippy", "daily", "gzip") self.assertNotEqual(repository1, repository2) self.assertTrue(not repository1 == repository2) self.assertTrue(repository1 < repository2) self.assertTrue(repository1 <= repository2) self.assertTrue(not repository1 > repository2) self.assertTrue(not repository1 >= repository2) self.assertTrue(repository1 != repository2) def testComparison_005(self): """ Test comparison of two differing objects, collectMode differs (one None). 
""" repository1 = FSFSRepository() repository2 = FSFSRepository(collectMode="incr") self.assertNotEqual(repository1, repository2) self.assertTrue(not repository1 == repository2) self.assertTrue(repository1 < repository2) self.assertTrue(repository1 <= repository2) self.assertTrue(not repository1 > repository2) self.assertTrue(not repository1 >= repository2) self.assertTrue(repository1 != repository2) def testComparison_006(self): """ Test comparison of two differing objects, collectMode differs. """ repository1 = FSFSRepository("/path", "daily", "gzip") repository2 = FSFSRepository("/path", "incr", "gzip") self.assertNotEqual(repository1, repository2) self.assertTrue(not repository1 == repository2) self.assertTrue(repository1 < repository2) self.assertTrue(repository1 <= repository2) self.assertTrue(not repository1 > repository2) self.assertTrue(not repository1 >= repository2) self.assertTrue(repository1 != repository2) def testComparison_007(self): """ Test comparison of two differing objects, compressMode differs (one None). """ repository1 = FSFSRepository() repository2 = FSFSRepository(compressMode="gzip") self.assertNotEqual(repository1, repository2) self.assertTrue(not repository1 == repository2) self.assertTrue(repository1 < repository2) self.assertTrue(repository1 <= repository2) self.assertTrue(not repository1 > repository2) self.assertTrue(not repository1 >= repository2) self.assertTrue(repository1 != repository2) def testComparison_008(self): """ Test comparison of two differing objects, compressMode differs. 
""" repository1 = FSFSRepository("/path", "daily", "bzip2") repository2 = FSFSRepository("/path", "daily", "gzip") self.assertNotEqual(repository1, repository2) self.assertTrue(not repository1 == repository2) self.assertTrue(repository1 < repository2) self.assertTrue(repository1 <= repository2) self.assertTrue(not repository1 > repository2) self.assertTrue(not repository1 >= repository2) self.assertTrue(repository1 != repository2) ####################### # TestRepository class ####################### class TestRepository(unittest.TestCase): """Tests for the Repository class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = Repository() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ repository = Repository() self.assertEqual(None, repository.repositoryType) self.assertEqual(None, repository.repositoryPath) self.assertEqual(None, repository.collectMode) self.assertEqual(None, repository.compressMode) def testConstructor_002(self): """ Test constructor with all values filled in. """ repository = Repository("type", "/path/to/it", "daily", "gzip") self.assertEqual("type", repository.repositoryType) self.assertEqual("/path/to/it", repository.repositoryPath) self.assertEqual("daily", repository.collectMode) self.assertEqual("gzip", repository.compressMode) def testConstructor_003(self): """ Test assignment of repositoryType attribute, None value. 
""" repository = Repository(repositoryType="type") self.assertEqual("type", repository.repositoryType) repository.repositoryType = None self.assertEqual(None, repository.repositoryType) def testConstructor_004(self): """ Test assignment of repositoryType attribute, non-None value. """ repository = Repository() self.assertEqual(None, repository.repositoryType) repository.repositoryType = "" self.assertEqual("", repository.repositoryType) repository.repositoryType = "test" self.assertEqual("test", repository.repositoryType) def testConstructor_005(self): """ Test assignment of repositoryPath attribute, None value. """ repository = Repository(repositoryPath="/path/to/something") self.assertEqual("/path/to/something", repository.repositoryPath) repository.repositoryPath = None self.assertEqual(None, repository.repositoryPath) def testConstructor_006(self): """ Test assignment of repositoryPath attribute, valid value. """ repository = Repository() self.assertEqual(None, repository.repositoryPath) repository.repositoryPath = "/path/to/whatever" self.assertEqual("/path/to/whatever", repository.repositoryPath) def testConstructor_007(self): """ Test assignment of repositoryPath attribute, invalid value (empty). """ repository = Repository() self.assertEqual(None, repository.repositoryPath) self.failUnlessAssignRaises(ValueError, repository, "repositoryPath", "") self.assertEqual(None, repository.repositoryPath) def testConstructor_008(self): """ Test assignment of repositoryPath attribute, invalid value (not absolute). """ repository = Repository() self.assertEqual(None, repository.repositoryPath) self.failUnlessAssignRaises(ValueError, repository, "repositoryPath", "relative/path") self.assertEqual(None, repository.repositoryPath) def testConstructor_009(self): """ Test assignment of collectMode attribute, None value. 
""" repository = Repository(collectMode="daily") self.assertEqual("daily", repository.collectMode) repository.collectMode = None self.assertEqual(None, repository.collectMode) def testConstructor_010(self): """ Test assignment of collectMode attribute, valid value. """ repository = Repository() self.assertEqual(None, repository.collectMode) repository.collectMode = "daily" self.assertEqual("daily", repository.collectMode) repository.collectMode = "weekly" self.assertEqual("weekly", repository.collectMode) repository.collectMode = "incr" self.assertEqual("incr", repository.collectMode) def testConstructor_011(self): """ Test assignment of collectMode attribute, invalid value (empty). """ repository = Repository() self.assertEqual(None, repository.collectMode) self.failUnlessAssignRaises(ValueError, repository, "collectMode", "") self.assertEqual(None, repository.collectMode) def testConstructor_012(self): """ Test assignment of collectMode attribute, invalid value (not in list). """ repository = Repository() self.assertEqual(None, repository.collectMode) self.failUnlessAssignRaises(ValueError, repository, "collectMode", "monthly") self.assertEqual(None, repository.collectMode) def testConstructor_013(self): """ Test assignment of compressMode attribute, None value. """ repository = Repository(compressMode="gzip") self.assertEqual("gzip", repository.compressMode) repository.compressMode = None self.assertEqual(None, repository.compressMode) def testConstructor_014(self): """ Test assignment of compressMode attribute, valid value. """ repository = Repository() self.assertEqual(None, repository.compressMode) repository.compressMode = "none" self.assertEqual("none", repository.compressMode) repository.compressMode = "bzip2" self.assertEqual("bzip2", repository.compressMode) repository.compressMode = "gzip" self.assertEqual("gzip", repository.compressMode) def testConstructor_015(self): """ Test assignment of compressMode attribute, invalid value (empty). 
""" repository = Repository() self.assertEqual(None, repository.compressMode) self.failUnlessAssignRaises(ValueError, repository, "compressMode", "") self.assertEqual(None, repository.compressMode) def testConstructor_016(self): """ Test assignment of compressMode attribute, invalid value (not in list). """ repository = Repository() self.assertEqual(None, repository.compressMode) self.failUnlessAssignRaises(ValueError, repository, "compressMode", "compress") self.assertEqual(None, repository.compressMode) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ repository1 = Repository() repository2 = Repository() self.assertEqual(repository1, repository2) self.assertTrue(repository1 == repository2) self.assertTrue(not repository1 < repository2) self.assertTrue(repository1 <= repository2) self.assertTrue(not repository1 > repository2) self.assertTrue(repository1 >= repository2) self.assertTrue(not repository1 != repository2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ repository1 = Repository("type", "/path", "daily", "gzip") repository2 = Repository("type", "/path", "daily", "gzip") self.assertEqual(repository1, repository2) self.assertTrue(repository1 == repository2) self.assertTrue(not repository1 < repository2) self.assertTrue(repository1 <= repository2) self.assertTrue(not repository1 > repository2) self.assertTrue(repository1 >= repository2) self.assertTrue(not repository1 != repository2) def testComparison_003(self): """ Test comparison of two differing objects, repositoryType differs (one None). 
""" repository1 = Repository() repository2 = Repository(repositoryType="type") self.assertNotEqual(repository1, repository2) self.assertTrue(not repository1 == repository2) self.assertTrue(repository1 < repository2) self.assertTrue(repository1 <= repository2) self.assertTrue(not repository1 > repository2) self.assertTrue(not repository1 >= repository2) self.assertTrue(repository1 != repository2) def testComparison_004(self): """ Test comparison of two differing objects, repositoryType differs. """ repository1 = Repository("other", "/path", "daily", "gzip") repository2 = Repository("type", "/path", "daily", "gzip") self.assertNotEqual(repository1, repository2) self.assertTrue(not repository1 == repository2) self.assertTrue(repository1 < repository2) self.assertTrue(repository1 <= repository2) self.assertTrue(not repository1 > repository2) self.assertTrue(not repository1 >= repository2) self.assertTrue(repository1 != repository2) def testComparison_004a(self): """ Test comparison of two differing objects, repositoryPath differs (one None). """ repository1 = Repository() repository2 = Repository(repositoryPath="/zippy") self.assertNotEqual(repository1, repository2) self.assertTrue(not repository1 == repository2) self.assertTrue(repository1 < repository2) self.assertTrue(repository1 <= repository2) self.assertTrue(not repository1 > repository2) self.assertTrue(not repository1 >= repository2) self.assertTrue(repository1 != repository2) def testComparison_005(self): """ Test comparison of two differing objects, repositoryPath differs. 
""" repository1 = Repository("type", "/path", "daily", "gzip") repository2 = Repository("type", "/zippy", "daily", "gzip") self.assertNotEqual(repository1, repository2) self.assertTrue(not repository1 == repository2) self.assertTrue(repository1 < repository2) self.assertTrue(repository1 <= repository2) self.assertTrue(not repository1 > repository2) self.assertTrue(not repository1 >= repository2) self.assertTrue(repository1 != repository2) def testComparison_006(self): """ Test comparison of two differing objects, collectMode differs (one None). """ repository1 = Repository() repository2 = Repository(collectMode="incr") self.assertNotEqual(repository1, repository2) self.assertTrue(not repository1 == repository2) self.assertTrue(repository1 < repository2) self.assertTrue(repository1 <= repository2) self.assertTrue(not repository1 > repository2) self.assertTrue(not repository1 >= repository2) self.assertTrue(repository1 != repository2) def testComparison_007(self): """ Test comparison of two differing objects, collectMode differs. """ repository1 = Repository("type", "/path", "daily", "gzip") repository2 = Repository("type", "/path", "incr", "gzip") self.assertNotEqual(repository1, repository2) self.assertTrue(not repository1 == repository2) self.assertTrue(repository1 < repository2) self.assertTrue(repository1 <= repository2) self.assertTrue(not repository1 > repository2) self.assertTrue(not repository1 >= repository2) self.assertTrue(repository1 != repository2) def testComparison_008(self): """ Test comparison of two differing objects, compressMode differs (one None). 
""" repository1 = Repository() repository2 = Repository(compressMode="gzip") self.assertNotEqual(repository1, repository2) self.assertTrue(not repository1 == repository2) self.assertTrue(repository1 < repository2) self.assertTrue(repository1 <= repository2) self.assertTrue(not repository1 > repository2) self.assertTrue(not repository1 >= repository2) self.assertTrue(repository1 != repository2) def testComparison_009(self): """ Test comparison of two differing objects, compressMode differs. """ repository1 = Repository("type", "/path", "daily", "bzip2") repository2 = Repository("type", "/path", "daily", "gzip") self.assertNotEqual(repository1, repository2) self.assertTrue(not repository1 == repository2) self.assertTrue(repository1 < repository2) self.assertTrue(repository1 <= repository2) self.assertTrue(not repository1 > repository2) self.assertTrue(not repository1 >= repository2) self.assertTrue(repository1 != repository2) ########################## # TestRepositoryDir class ########################## class TestRepositoryDir(unittest.TestCase): """Tests for the RepositoryDir class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = RepositoryDir() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. 
""" repositoryDir = RepositoryDir() self.assertEqual(None, repositoryDir.repositoryType) self.assertEqual(None, repositoryDir.directoryPath) self.assertEqual(None, repositoryDir.collectMode) self.assertEqual(None, repositoryDir.compressMode) self.assertEqual(None, repositoryDir.relativeExcludePaths) self.assertEqual(None, repositoryDir.excludePatterns) def testConstructor_002(self): """ Test constructor with all values filled in. """ repositoryDir = RepositoryDir("type", "/path/to/it", "daily", "gzip", [ "whatever", ], [ ".*software.*", ]) self.assertEqual("type", repositoryDir.repositoryType) self.assertEqual("/path/to/it", repositoryDir.directoryPath) self.assertEqual("daily", repositoryDir.collectMode) self.assertEqual("gzip", repositoryDir.compressMode) self.assertEqual([ "whatever", ], repositoryDir.relativeExcludePaths) self.assertEqual([ ".*software.*", ], repositoryDir.excludePatterns) def testConstructor_003(self): """ Test assignment of repositoryType attribute, None value. """ repositoryDir = RepositoryDir(repositoryType="type") self.assertEqual("type", repositoryDir.repositoryType) repositoryDir.repositoryType = None self.assertEqual(None, repositoryDir.repositoryType) def testConstructor_004(self): """ Test assignment of repositoryType attribute, non-None value. """ repositoryDir = RepositoryDir() self.assertEqual(None, repositoryDir.repositoryType) repositoryDir.repositoryType = "" self.assertEqual("", repositoryDir.repositoryType) repositoryDir.repositoryType = "test" self.assertEqual("test", repositoryDir.repositoryType) def testConstructor_005(self): """ Test assignment of directoryPath attribute, None value. """ repositoryDir = RepositoryDir(directoryPath="/path/to/something") self.assertEqual("/path/to/something", repositoryDir.directoryPath) repositoryDir.directoryPath = None self.assertEqual(None, repositoryDir.directoryPath) def testConstructor_006(self): """ Test assignment of directoryPath attribute, valid value. 
""" repositoryDir = RepositoryDir() self.assertEqual(None, repositoryDir.directoryPath) repositoryDir.directoryPath = "/path/to/whatever" self.assertEqual("/path/to/whatever", repositoryDir.directoryPath) def testConstructor_007(self): """ Test assignment of directoryPath attribute, invalid value (empty). """ repositoryDir = RepositoryDir() self.assertEqual(None, repositoryDir.directoryPath) self.failUnlessAssignRaises(ValueError, repositoryDir, "directoryPath", "") self.assertEqual(None, repositoryDir.directoryPath) def testConstructor_008(self): """ Test assignment of directoryPath attribute, invalid value (not absolute). """ repositoryDir = RepositoryDir() self.assertEqual(None, repositoryDir.directoryPath) self.failUnlessAssignRaises(ValueError, repositoryDir, "directoryPath", "relative/path") self.assertEqual(None, repositoryDir.directoryPath) def testConstructor_009(self): """ Test assignment of collectMode attribute, None value. """ repositoryDir = RepositoryDir(collectMode="daily") self.assertEqual("daily", repositoryDir.collectMode) repositoryDir.collectMode = None self.assertEqual(None, repositoryDir.collectMode) def testConstructor_010(self): """ Test assignment of collectMode attribute, valid value. """ repositoryDir = RepositoryDir() self.assertEqual(None, repositoryDir.collectMode) repositoryDir.collectMode = "daily" self.assertEqual("daily", repositoryDir.collectMode) repositoryDir.collectMode = "weekly" self.assertEqual("weekly", repositoryDir.collectMode) repositoryDir.collectMode = "incr" self.assertEqual("incr", repositoryDir.collectMode) def testConstructor_011(self): """ Test assignment of collectMode attribute, invalid value (empty). """ repositoryDir = RepositoryDir() self.assertEqual(None, repositoryDir.collectMode) self.failUnlessAssignRaises(ValueError, repositoryDir, "collectMode", "") self.assertEqual(None, repositoryDir.collectMode) def testConstructor_012(self): """ Test assignment of collectMode attribute, invalid value (not in list). 
""" repositoryDir = RepositoryDir() self.assertEqual(None, repositoryDir.collectMode) self.failUnlessAssignRaises(ValueError, repositoryDir, "collectMode", "monthly") self.assertEqual(None, repositoryDir.collectMode) def testConstructor_013(self): """ Test assignment of compressMode attribute, None value. """ repositoryDir = RepositoryDir(compressMode="gzip") self.assertEqual("gzip", repositoryDir.compressMode) repositoryDir.compressMode = None self.assertEqual(None, repositoryDir.compressMode) def testConstructor_014(self): """ Test assignment of compressMode attribute, valid value. """ repositoryDir = RepositoryDir() self.assertEqual(None, repositoryDir.compressMode) repositoryDir.compressMode = "none" self.assertEqual("none", repositoryDir.compressMode) repositoryDir.compressMode = "bzip2" self.assertEqual("bzip2", repositoryDir.compressMode) repositoryDir.compressMode = "gzip" self.assertEqual("gzip", repositoryDir.compressMode) def testConstructor_015(self): """ Test assignment of compressMode attribute, invalid value (empty). """ repositoryDir = RepositoryDir() self.assertEqual(None, repositoryDir.compressMode) self.failUnlessAssignRaises(ValueError, repositoryDir, "compressMode", "") self.assertEqual(None, repositoryDir.compressMode) def testConstructor_016(self): """ Test assignment of compressMode attribute, invalid value (not in list). """ repositoryDir = RepositoryDir() self.assertEqual(None, repositoryDir.compressMode) self.failUnlessAssignRaises(ValueError, repositoryDir, "compressMode", "compress") self.assertEqual(None, repositoryDir.compressMode) def testConstructor_017(self): """ Test assignment of relativeExcludePaths attribute, None value. """ repositoryDir = RepositoryDir(relativeExcludePaths=[]) self.assertEqual([], repositoryDir.relativeExcludePaths) repositoryDir.relativeExcludePaths = None self.assertEqual(None, repositoryDir.relativeExcludePaths) def testConstructor_018(self): """ Test assignment of relativeExcludePaths attribute, [] value. 
""" repositoryDir = RepositoryDir() self.assertEqual(None, repositoryDir.relativeExcludePaths) repositoryDir.relativeExcludePaths = [] self.assertEqual([], repositoryDir.relativeExcludePaths) def testConstructor_019(self): """ Test assignment of relativeExcludePaths attribute, single valid entry. """ repositoryDir = RepositoryDir() self.assertEqual(None, repositoryDir.relativeExcludePaths) repositoryDir.relativeExcludePaths = ["stuff", ] self.assertEqual(["stuff", ], repositoryDir.relativeExcludePaths) repositoryDir.relativeExcludePaths.insert(0, "bogus") self.assertEqual(["bogus", "stuff", ], repositoryDir.relativeExcludePaths) def testConstructor_020(self): """ Test assignment of relativeExcludePaths attribute, multiple valid entries. """ repositoryDir = RepositoryDir() self.assertEqual(None, repositoryDir.relativeExcludePaths) repositoryDir.relativeExcludePaths = ["bogus", "stuff", ] self.assertEqual(["bogus", "stuff", ], repositoryDir.relativeExcludePaths) repositoryDir.relativeExcludePaths.append("more") self.assertEqual(["bogus", "stuff", "more", ], repositoryDir.relativeExcludePaths) def testConstructor_021(self): """ Test assignment of excludePatterns attribute, None value. """ repositoryDir = RepositoryDir(excludePatterns=[]) self.assertEqual([], repositoryDir.excludePatterns) repositoryDir.excludePatterns = None self.assertEqual(None, repositoryDir.excludePatterns) def testConstructor_022(self): """ Test assignment of excludePatterns attribute, [] value. """ repositoryDir = RepositoryDir() self.assertEqual(None, repositoryDir.excludePatterns) repositoryDir.excludePatterns = [] self.assertEqual([], repositoryDir.excludePatterns) def testConstructor_023(self): """ Test assignment of excludePatterns attribute, single valid entry. 
""" repositoryDir = RepositoryDir() self.assertEqual(None, repositoryDir.excludePatterns) repositoryDir.excludePatterns = ["valid", ] self.assertEqual(["valid", ], repositoryDir.excludePatterns) repositoryDir.excludePatterns.append("more") self.assertEqual(["valid", "more", ], repositoryDir.excludePatterns) def testConstructor_024(self): """ Test assignment of excludePatterns attribute, multiple valid entries. """ repositoryDir = RepositoryDir() self.assertEqual(None, repositoryDir.excludePatterns) repositoryDir.excludePatterns = ["valid", "more", ] self.assertEqual(["valid", "more", ], repositoryDir.excludePatterns) repositoryDir.excludePatterns.insert(1, "bogus") self.assertEqual(["valid", "bogus", "more", ], repositoryDir.excludePatterns) def testConstructor_025(self): """ Test assignment of excludePatterns attribute, single invalid entry. """ repositoryDir = RepositoryDir() self.assertEqual(None, repositoryDir.excludePatterns) self.failUnlessAssignRaises(ValueError, repositoryDir, "excludePatterns", ["*.jpg", ]) self.assertEqual(None, repositoryDir.excludePatterns) def testConstructor_026(self): """ Test assignment of excludePatterns attribute, multiple invalid entries. """ repositoryDir = RepositoryDir() self.assertEqual(None, repositoryDir.excludePatterns) self.failUnlessAssignRaises(ValueError, repositoryDir, "excludePatterns", ["*.jpg", "*" ]) self.assertEqual(None, repositoryDir.excludePatterns) def testConstructor_027(self): """ Test assignment of excludePatterns attribute, mixed valid and invalid entries. """ repositoryDir = RepositoryDir() self.assertEqual(None, repositoryDir.excludePatterns) self.failUnlessAssignRaises(ValueError, repositoryDir, "excludePatterns", ["*.jpg", "valid" ]) self.assertEqual(None, repositoryDir.excludePatterns) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. 
""" repositoryDir1 = RepositoryDir() repositoryDir2 = RepositoryDir() self.assertEqual(repositoryDir1, repositoryDir2) self.assertTrue(repositoryDir1 == repositoryDir2) self.assertTrue(not repositoryDir1 < repositoryDir2) self.assertTrue(repositoryDir1 <= repositoryDir2) self.assertTrue(not repositoryDir1 > repositoryDir2) self.assertTrue(repositoryDir1 >= repositoryDir2) self.assertTrue(not repositoryDir1 != repositoryDir2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ repositoryDir1 = RepositoryDir("type", "/path", "daily", "gzip") repositoryDir2 = RepositoryDir("type", "/path", "daily", "gzip") self.assertEqual(repositoryDir1, repositoryDir2) self.assertTrue(repositoryDir1 == repositoryDir2) self.assertTrue(not repositoryDir1 < repositoryDir2) self.assertTrue(repositoryDir1 <= repositoryDir2) self.assertTrue(not repositoryDir1 > repositoryDir2) self.assertTrue(repositoryDir1 >= repositoryDir2) self.assertTrue(not repositoryDir1 != repositoryDir2) def testComparison_003(self): """ Test comparison of two differing objects, repositoryType differs (one None). """ repositoryDir1 = RepositoryDir() repositoryDir2 = RepositoryDir(repositoryType="type") self.assertNotEqual(repositoryDir1, repositoryDir2) self.assertTrue(not repositoryDir1 == repositoryDir2) self.assertTrue(repositoryDir1 < repositoryDir2) self.assertTrue(repositoryDir1 <= repositoryDir2) self.assertTrue(not repositoryDir1 > repositoryDir2) self.assertTrue(not repositoryDir1 >= repositoryDir2) self.assertTrue(repositoryDir1 != repositoryDir2) def testComparison_004(self): """ Test comparison of two differing objects, repositoryType differs. 
""" repositoryDir1 = RepositoryDir("other", "/path", "daily", "gzip") repositoryDir2 = RepositoryDir("type", "/path", "daily", "gzip") self.assertNotEqual(repositoryDir1, repositoryDir2) self.assertTrue(not repositoryDir1 == repositoryDir2) self.assertTrue(repositoryDir1 < repositoryDir2) self.assertTrue(repositoryDir1 <= repositoryDir2) self.assertTrue(not repositoryDir1 > repositoryDir2) self.assertTrue(not repositoryDir1 >= repositoryDir2) self.assertTrue(repositoryDir1 != repositoryDir2) def testComparison_004a(self): """ Test comparison of two differing objects, directoryPath differs (one None). """ repositoryDir1 = RepositoryDir() repositoryDir2 = RepositoryDir(directoryPath="/zippy") self.assertNotEqual(repositoryDir1, repositoryDir2) self.assertTrue(not repositoryDir1 == repositoryDir2) self.assertTrue(repositoryDir1 < repositoryDir2) self.assertTrue(repositoryDir1 <= repositoryDir2) self.assertTrue(not repositoryDir1 > repositoryDir2) self.assertTrue(not repositoryDir1 >= repositoryDir2) self.assertTrue(repositoryDir1 != repositoryDir2) def testComparison_005(self): """ Test comparison of two differing objects, directoryPath differs. """ repositoryDir1 = RepositoryDir("type", "/path", "daily", "gzip") repositoryDir2 = RepositoryDir("type", "/zippy", "daily", "gzip") self.assertNotEqual(repositoryDir1, repositoryDir2) self.assertTrue(not repositoryDir1 == repositoryDir2) self.assertTrue(repositoryDir1 < repositoryDir2) self.assertTrue(repositoryDir1 <= repositoryDir2) self.assertTrue(not repositoryDir1 > repositoryDir2) self.assertTrue(not repositoryDir1 >= repositoryDir2) self.assertTrue(repositoryDir1 != repositoryDir2) def testComparison_006(self): """ Test comparison of two differing objects, collectMode differs (one None). 
""" repositoryDir1 = RepositoryDir() repositoryDir2 = RepositoryDir(collectMode="incr") self.assertNotEqual(repositoryDir1, repositoryDir2) self.assertTrue(not repositoryDir1 == repositoryDir2) self.assertTrue(repositoryDir1 < repositoryDir2) self.assertTrue(repositoryDir1 <= repositoryDir2) self.assertTrue(not repositoryDir1 > repositoryDir2) self.assertTrue(not repositoryDir1 >= repositoryDir2) self.assertTrue(repositoryDir1 != repositoryDir2) def testComparison_007(self): """ Test comparison of two differing objects, collectMode differs. """ repositoryDir1 = RepositoryDir("type", "/path", "daily", "gzip") repositoryDir2 = RepositoryDir("type", "/path", "incr", "gzip") self.assertNotEqual(repositoryDir1, repositoryDir2) self.assertTrue(not repositoryDir1 == repositoryDir2) self.assertTrue(repositoryDir1 < repositoryDir2) self.assertTrue(repositoryDir1 <= repositoryDir2) self.assertTrue(not repositoryDir1 > repositoryDir2) self.assertTrue(not repositoryDir1 >= repositoryDir2) self.assertTrue(repositoryDir1 != repositoryDir2) def testComparison_008(self): """ Test comparison of two differing objects, compressMode differs (one None). """ repositoryDir1 = RepositoryDir() repositoryDir2 = RepositoryDir(compressMode="gzip") self.assertNotEqual(repositoryDir1, repositoryDir2) self.assertTrue(not repositoryDir1 == repositoryDir2) self.assertTrue(repositoryDir1 < repositoryDir2) self.assertTrue(repositoryDir1 <= repositoryDir2) self.assertTrue(not repositoryDir1 > repositoryDir2) self.assertTrue(not repositoryDir1 >= repositoryDir2) self.assertTrue(repositoryDir1 != repositoryDir2) def testComparison_009(self): """ Test comparison of two differing objects, compressMode differs. 
""" repositoryDir1 = RepositoryDir("type", "/path", "daily", "bzip2") repositoryDir2 = RepositoryDir("type", "/path", "daily", "gzip") self.assertNotEqual(repositoryDir1, repositoryDir2) self.assertTrue(not repositoryDir1 == repositoryDir2) self.assertTrue(repositoryDir1 < repositoryDir2) self.assertTrue(repositoryDir1 <= repositoryDir2) self.assertTrue(not repositoryDir1 > repositoryDir2) self.assertTrue(not repositoryDir1 >= repositoryDir2) self.assertTrue(repositoryDir1 != repositoryDir2) ############################# # TestSubversionConfig class ############################# class TestSubversionConfig(unittest.TestCase): """Tests for the SubversionConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = SubversionConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ subversion = SubversionConfig() self.assertEqual(None, subversion.collectMode) self.assertEqual(None, subversion.compressMode) self.assertEqual(None, subversion.repositories) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values, repositories=None. 
""" subversion = SubversionConfig("daily", "gzip", None) self.assertEqual("daily", subversion.collectMode) self.assertEqual("gzip", subversion.compressMode) self.assertEqual(None, subversion.repositories) def testConstructor_003(self): """ Test constructor with all values filled in, with valid values, no repositories. """ subversion = SubversionConfig("daily", "gzip", []) self.assertEqual("daily", subversion.collectMode) self.assertEqual("gzip", subversion.compressMode) self.assertEqual([], subversion.repositories) def testConstructor_004(self): """ Test constructor with all values filled in, with valid values, with one repository. """ repositories = [ Repository(), ] subversion = SubversionConfig("daily", "gzip", repositories) self.assertEqual("daily", subversion.collectMode) self.assertEqual("gzip", subversion.compressMode) self.assertEqual(repositories, subversion.repositories) def testConstructor_005(self): """ Test constructor with all values filled in, with valid values, with multiple repositories. """ repositories = [ Repository(collectMode="daily"), Repository(collectMode="weekly"), ] subversion = SubversionConfig("daily", "gzip", repositories=repositories) self.assertEqual("daily", subversion.collectMode) self.assertEqual("gzip", subversion.compressMode) self.assertEqual(repositories, subversion.repositories) def testConstructor_006(self): """ Test assignment of collectMode attribute, None value. """ subversion = SubversionConfig(collectMode="daily") self.assertEqual("daily", subversion.collectMode) subversion.collectMode = None self.assertEqual(None, subversion.collectMode) def testConstructor_007(self): """ Test assignment of collectMode attribute, valid value. """ subversion = SubversionConfig() self.assertEqual(None, subversion.collectMode) subversion.collectMode = "weekly" self.assertEqual("weekly", subversion.collectMode) def testConstructor_008(self): """ Test assignment of collectMode attribute, invalid value (empty). 
""" subversion = SubversionConfig() self.assertEqual(None, subversion.collectMode) self.failUnlessAssignRaises(ValueError, subversion, "collectMode", "") self.assertEqual(None, subversion.collectMode) def testConstructor_009(self): """ Test assignment of compressMode attribute, None value. """ subversion = SubversionConfig(compressMode="gzip") self.assertEqual("gzip", subversion.compressMode) subversion.compressMode = None self.assertEqual(None, subversion.compressMode) def testConstructor_010(self): """ Test assignment of compressMode attribute, valid value. """ subversion = SubversionConfig() self.assertEqual(None, subversion.compressMode) subversion.compressMode = "bzip2" self.assertEqual("bzip2", subversion.compressMode) def testConstructor_011(self): """ Test assignment of compressMode attribute, invalid value (empty). """ subversion = SubversionConfig() self.assertEqual(None, subversion.compressMode) self.failUnlessAssignRaises(ValueError, subversion, "compressMode", "") self.assertEqual(None, subversion.compressMode) def testConstructor_012(self): """ Test assignment of repositories attribute, None value. """ subversion = SubversionConfig(repositories=[]) self.assertEqual([], subversion.repositories) subversion.repositories = None self.assertEqual(None, subversion.repositories) def testConstructor_013(self): """ Test assignment of repositories attribute, [] value. """ subversion = SubversionConfig() self.assertEqual(None, subversion.repositories) subversion.repositories = [] self.assertEqual([], subversion.repositories) def testConstructor_014(self): """ Test assignment of repositories attribute, single valid entry. 
""" subversion = SubversionConfig() self.assertEqual(None, subversion.repositories) subversion.repositories = [ Repository(), ] self.assertEqual([ Repository(), ], subversion.repositories) subversion.repositories.append(Repository(collectMode="daily")) self.assertEqual([ Repository(), Repository(collectMode="daily"), ], subversion.repositories) def testConstructor_015(self): """ Test assignment of repositories attribute, multiple valid entries. """ subversion = SubversionConfig() self.assertEqual(None, subversion.repositories) subversion.repositories = [ Repository(collectMode="daily"), Repository(collectMode="weekly"), ] self.assertEqual([ Repository(collectMode="daily"), Repository(collectMode="weekly"), ], subversion.repositories) subversion.repositories.append(Repository(collectMode="incr")) self.assertEqual([ Repository(collectMode="daily"), Repository(collectMode="weekly"), Repository(collectMode="incr"), ], subversion.repositories) def testConstructor_016(self): """ Test assignment of repositories attribute, single invalid entry (None). """ subversion = SubversionConfig() self.assertEqual(None, subversion.repositories) self.failUnlessAssignRaises(ValueError, subversion, "repositories", [None, ]) self.assertEqual(None, subversion.repositories) def testConstructor_017(self): """ Test assignment of repositories attribute, single invalid entry (wrong type). """ subversion = SubversionConfig() self.assertEqual(None, subversion.repositories) self.failUnlessAssignRaises(ValueError, subversion, "repositories", [SubversionConfig(), ]) self.assertEqual(None, subversion.repositories) def testConstructor_018(self): """ Test assignment of repositories attribute, mixed valid and invalid entries. 
""" subversion = SubversionConfig() self.assertEqual(None, subversion.repositories) self.failUnlessAssignRaises(ValueError, subversion, "repositories", [Repository(), SubversionConfig(), ]) self.assertEqual(None, subversion.repositories) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ subversion1 = SubversionConfig() subversion2 = SubversionConfig() self.assertEqual(subversion1, subversion2) self.assertTrue(subversion1 == subversion2) self.assertTrue(not subversion1 < subversion2) self.assertTrue(subversion1 <= subversion2) self.assertTrue(not subversion1 > subversion2) self.assertTrue(subversion1 >= subversion2) self.assertTrue(not subversion1 != subversion2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None, list None. """ subversion1 = SubversionConfig("daily", "gzip", None) subversion2 = SubversionConfig("daily", "gzip", None) self.assertEqual(subversion1, subversion2) self.assertTrue(subversion1 == subversion2) self.assertTrue(not subversion1 < subversion2) self.assertTrue(subversion1 <= subversion2) self.assertTrue(not subversion1 > subversion2) self.assertTrue(subversion1 >= subversion2) self.assertTrue(not subversion1 != subversion2) def testComparison_003(self): """ Test comparison of two identical objects, all attributes non-None, list empty. 
""" subversion1 = SubversionConfig("daily", "gzip", []) subversion2 = SubversionConfig("daily", "gzip", []) self.assertEqual(subversion1, subversion2) self.assertTrue(subversion1 == subversion2) self.assertTrue(not subversion1 < subversion2) self.assertTrue(subversion1 <= subversion2) self.assertTrue(not subversion1 > subversion2) self.assertTrue(subversion1 >= subversion2) self.assertTrue(not subversion1 != subversion2) def testComparison_004(self): """ Test comparison of two identical objects, all attributes non-None, list non-empty. """ subversion1 = SubversionConfig("daily", "gzip", [ Repository(), ]) subversion2 = SubversionConfig("daily", "gzip", [ Repository(), ]) self.assertEqual(subversion1, subversion2) self.assertTrue(subversion1 == subversion2) self.assertTrue(not subversion1 < subversion2) self.assertTrue(subversion1 <= subversion2) self.assertTrue(not subversion1 > subversion2) self.assertTrue(subversion1 >= subversion2) self.assertTrue(not subversion1 != subversion2) def testComparison_005(self): """ Test comparison of two differing objects, collectMode differs (one None). """ subversion1 = SubversionConfig() subversion2 = SubversionConfig(collectMode="daily") self.assertNotEqual(subversion1, subversion2) self.assertTrue(not subversion1 == subversion2) self.assertTrue(subversion1 < subversion2) self.assertTrue(subversion1 <= subversion2) self.assertTrue(not subversion1 > subversion2) self.assertTrue(not subversion1 >= subversion2) self.assertTrue(subversion1 != subversion2) def testComparison_006(self): """ Test comparison of two differing objects, collectMode differs. 
""" subversion1 = SubversionConfig("daily", "gzip", [ Repository(), ]) subversion2 = SubversionConfig("weekly", "gzip", [ Repository(), ]) self.assertNotEqual(subversion1, subversion2) self.assertTrue(not subversion1 == subversion2) self.assertTrue(subversion1 < subversion2) self.assertTrue(subversion1 <= subversion2) self.assertTrue(not subversion1 > subversion2) self.assertTrue(not subversion1 >= subversion2) self.assertTrue(subversion1 != subversion2) def testComparison_007(self): """ Test comparison of two differing objects, compressMode differs (one None). """ subversion1 = SubversionConfig() subversion2 = SubversionConfig(compressMode="bzip2") self.assertNotEqual(subversion1, subversion2) self.assertTrue(not subversion1 == subversion2) self.assertTrue(subversion1 < subversion2) self.assertTrue(subversion1 <= subversion2) self.assertTrue(not subversion1 > subversion2) self.assertTrue(not subversion1 >= subversion2) self.assertTrue(subversion1 != subversion2) def testComparison_008(self): """ Test comparison of two differing objects, compressMode differs. """ subversion1 = SubversionConfig("daily", "bzip2", [ Repository(), ]) subversion2 = SubversionConfig("daily", "gzip", [ Repository(), ]) self.assertNotEqual(subversion1, subversion2) self.assertTrue(not subversion1 == subversion2) self.assertTrue(subversion1 < subversion2) self.assertTrue(subversion1 <= subversion2) self.assertTrue(not subversion1 > subversion2) self.assertTrue(not subversion1 >= subversion2) self.assertTrue(subversion1 != subversion2) def testComparison_009(self): """ Test comparison of two differing objects, repositories differs (one None, one empty). 
""" subversion1 = SubversionConfig() subversion2 = SubversionConfig(repositories=[]) self.assertNotEqual(subversion1, subversion2) self.assertTrue(not subversion1 == subversion2) self.assertTrue(subversion1 < subversion2) self.assertTrue(subversion1 <= subversion2) self.assertTrue(not subversion1 > subversion2) self.assertTrue(not subversion1 >= subversion2) self.assertTrue(subversion1 != subversion2) def testComparison_010(self): """ Test comparison of two differing objects, repositories differs (one None, one not empty). """ subversion1 = SubversionConfig() subversion2 = SubversionConfig(repositories=[Repository(), ]) self.assertNotEqual(subversion1, subversion2) self.assertTrue(not subversion1 == subversion2) self.assertTrue(subversion1 < subversion2) self.assertTrue(subversion1 <= subversion2) self.assertTrue(not subversion1 > subversion2) self.assertTrue(not subversion1 >= subversion2) self.assertTrue(subversion1 != subversion2) def testComparison_011(self): """ Test comparison of two differing objects, repositories differs (one empty, one not empty). """ subversion1 = SubversionConfig("daily", "gzip", [ ]) subversion2 = SubversionConfig("daily", "gzip", [ Repository(), ]) self.assertNotEqual(subversion1, subversion2) self.assertTrue(not subversion1 == subversion2) self.assertTrue(subversion1 < subversion2) self.assertTrue(subversion1 <= subversion2) self.assertTrue(not subversion1 > subversion2) self.assertTrue(not subversion1 >= subversion2) self.assertTrue(subversion1 != subversion2) def testComparison_012(self): """ Test comparison of two differing objects, repositories differs (both not empty). 
""" subversion1 = SubversionConfig("daily", "gzip", [ Repository(), ]) subversion2 = SubversionConfig("daily", "gzip", [ Repository(), Repository(), ]) self.assertNotEqual(subversion1, subversion2) self.assertTrue(not subversion1 == subversion2) self.assertTrue(subversion1 < subversion2) self.assertTrue(subversion1 <= subversion2) self.assertTrue(not subversion1 > subversion2) self.assertTrue(not subversion1 >= subversion2) self.assertTrue(subversion1 != subversion2) def testComparison_013(self): """ Test comparison of two differing objects, repositories differs (both not empty). """ subversion1 = SubversionConfig("daily", "gzip", [ Repository(repositoryType="other"), ]) subversion2 = SubversionConfig("daily", "gzip", [ Repository(repositoryType="type"), ]) self.assertNotEqual(subversion1, subversion2) self.assertTrue(not subversion1 == subversion2) self.assertTrue(subversion1 < subversion2) self.assertTrue(subversion1 <= subversion2) self.assertTrue(not subversion1 > subversion2) self.assertTrue(not subversion1 >= subversion2) self.assertTrue(subversion1 != subversion2) ######################## # TestLocalConfig class ######################## class TestLocalConfig(unittest.TestCase): """Tests for the LocalConfig class.""" ################ # Setup methods ################ def setUp(self): try: self.resources = findResources(RESOURCES, DATA_DIRS) except Exception as e: self.fail(e) def tearDown(self): pass ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) def validateAddConfig(self, origConfig): """ Validates that document dumped from C{LocalConfig.addConfig} results in identical object. 
        We dump a document containing just the subversion configuration, and then
        make sure that if we push that document back into the C{LocalConfig}
        object, the resulting object matches the original.  The C{assertEqual}
        method is used for the validation, so if the method call returns
        normally, everything is OK.

        @param origConfig: Original configuration.
        """
        (xmlDom, parentNode) = createOutputDom()
        origConfig.addConfig(xmlDom, parentNode)
        xmlData = serializeDom(xmlDom)
        newConfig = LocalConfig(xmlData=xmlData, validate=False)
        self.assertEqual(origConfig, newConfig)

    ############################
    # Test __repr__ and __str__
    ############################

    def testStringFuncs_001(self):
        """
        Just make sure that the string functions don't have errors (i.e. bad variable names).
        """
        obj = LocalConfig()
        obj.__repr__()
        obj.__str__()

    #####################################################
    # Test basic constructor and attribute functionality
    #####################################################

    def testConstructor_001(self):
        """
        Test empty constructor, validate=False.
        """
        config = LocalConfig(validate=False)
        self.assertEqual(None, config.subversion)

    def testConstructor_002(self):
        """
        Test empty constructor, validate=True.
        """
        config = LocalConfig(validate=True)
        self.assertEqual(None, config.subversion)

    def testConstructor_003(self):
        """
        Test with empty config document as both data and file, validate=False.
        """
        path = self.resources["subversion.conf.1"]
        with open(path) as f:
            contents = f.read()
        self.assertRaises(ValueError, LocalConfig, xmlData=contents, xmlPath=path, validate=False)

    def testConstructor_004(self):
        """
        Test assignment of subversion attribute, None value.
        """
        config = LocalConfig()
        config.subversion = None
        self.assertEqual(None, config.subversion)

    def testConstructor_005(self):
        """
        Test assignment of subversion attribute, valid value.
""" config = LocalConfig() config.subversion = SubversionConfig() self.assertEqual(SubversionConfig(), config.subversion) def testConstructor_006(self): """ Test assignment of subversion attribute, invalid value (not SubversionConfig). """ config = LocalConfig() self.failUnlessAssignRaises(ValueError, config, "subversion", "STRING!") ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ config1 = LocalConfig() config2 = LocalConfig() self.assertEqual(config1, config2) self.assertTrue(config1 == config2) self.assertTrue(not config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(config1 >= config2) self.assertTrue(not config1 != config2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ config1 = LocalConfig() config1.subversion = SubversionConfig() config2 = LocalConfig() config2.subversion = SubversionConfig() self.assertEqual(config1, config2) self.assertTrue(config1 == config2) self.assertTrue(not config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(config1 >= config2) self.assertTrue(not config1 != config2) def testComparison_003(self): """ Test comparison of two differing objects, subversion differs (one None). """ config1 = LocalConfig() config2 = LocalConfig() config2.subversion = SubversionConfig() self.assertNotEqual(config1, config2) self.assertTrue(not config1 == config2) self.assertTrue(config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(not config1 >= config2) self.assertTrue(config1 != config2) def testComparison_004(self): """ Test comparison of two differing objects, subversion differs. 
""" config1 = LocalConfig() config1.subversion = SubversionConfig(collectMode="daily") config2 = LocalConfig() config2.subversion = SubversionConfig(collectMode="weekly") self.assertNotEqual(config1, config2) self.assertTrue(not config1 == config2) self.assertTrue(config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(not config1 >= config2) self.assertTrue(config1 != config2) ###################### # Test validate logic ###################### def testValidate_001(self): """ Test validate on a None subversion section. """ config = LocalConfig() config.subversion = None self.assertRaises(ValueError, config.validate) def testValidate_002(self): """ Test validate on an empty subversion section. """ config = LocalConfig() config.subversion = SubversionConfig() self.assertRaises(ValueError, config.validate) def testValidate_003(self): """ Test validate on a non-empty subversion section, repositories=None. """ config = LocalConfig() config.subversion = SubversionConfig("weekly", "gzip", None) self.assertRaises(ValueError, config.validate) def testValidate_004(self): """ Test validate on a non-empty subversion section, repositories=[]. """ config = LocalConfig() config.subversion = SubversionConfig("weekly", "gzip", []) self.assertRaises(ValueError, config.validate) def testValidate_005(self): """ Test validate on a non-empty subversion section, non-empty repositories, defaults set, no values on repositories. """ repositories = [ Repository(repositoryPath="/one"), Repository(repositoryPath="/two") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.collectMode = "daily" config.subversion.compressMode = "gzip" config.subversion.repositories = repositories config.validate() def testValidate_006(self): """ Test validate on a non-empty subversion section, non-empty repositories, no defaults set, no values on repositiories. 
""" repositories = [ Repository(repositoryPath="/one"), Repository(repositoryPath="/two") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.repositories = repositories self.assertRaises(ValueError, config.validate) def testValidate_007(self): """ Test validate on a non-empty subversion section, non-empty repositories, no defaults set, both values on repositories. """ repositories = [ Repository(repositoryPath="/two", collectMode="weekly", compressMode="gzip") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.repositories = repositories config.validate() def testValidate_008(self): """ Test validate on a non-empty subversion section, non-empty repositories, collectMode only on repositories. """ repositories = [ Repository(repositoryPath="/two", collectMode="weekly") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.compressMode = "gzip" config.subversion.repositories = repositories config.validate() def testValidate_009(self): """ Test validate on a non-empty subversion section, non-empty repositories, compressMode only on repositories. """ repositories = [ Repository(repositoryPath="/two", compressMode="bzip2") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.collectMode = "weekly" config.subversion.repositories = repositories config.validate() def testValidate_010(self): """ Test validate on a non-empty subversion section, non-empty repositories, compressMode default and on repository. """ repositories = [ Repository(repositoryPath="/two", compressMode="bzip2") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.collectMode = "daily" config.subversion.compressMode = "gzip" config.subversion.repositories = repositories config.validate() def testValidate_011(self): """ Test validate on a non-empty subversion section, non-empty repositories, collectMode default and on repository. 
""" repositories = [ Repository(repositoryPath="/two", collectMode="daily") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.collectMode = "daily" config.subversion.compressMode = "gzip" config.subversion.repositories = repositories config.validate() def testValidate_012(self): """ Test validate on a non-empty subversion section, non-empty repositories, collectMode and compressMode default and on repository. """ repositories = [ Repository(repositoryPath="/two", collectMode="daily", compressMode="bzip2") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.collectMode = "daily" config.subversion.compressMode = "gzip" config.subversion.repositories = repositories config.validate() def testValidate_013(self): """ Test validate on a non-empty subversion section, repositoryDirs=None. """ config = LocalConfig() config.subversion = SubversionConfig("weekly", "gzip", repositoryDirs=None) self.assertRaises(ValueError, config.validate) def testValidate_014(self): """ Test validate on a non-empty subversion section, repositoryDirs=[]. """ config = LocalConfig() config.subversion = SubversionConfig("weekly", "gzip", repositoryDirs=[]) self.assertRaises(ValueError, config.validate) def testValidate_015(self): """ Test validate on a non-empty subversion section, non-empty repositoryDirs, defaults set, no values on repositoryDirs. """ repositoryDirs = [ RepositoryDir(directoryPath="/one"), RepositoryDir(directoryPath="/two") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.collectMode = "daily" config.subversion.compressMode = "gzip" config.subversion.repositoryDirs = repositoryDirs config.validate() def testValidate_016(self): """ Test validate on a non-empty subversion section, non-empty repositoryDirs, no defaults set, no values on repositiories. 
""" repositoryDirs = [ RepositoryDir(directoryPath="/one"), RepositoryDir(directoryPath="/two") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.repositoryDirs = repositoryDirs self.assertRaises(ValueError, config.validate) def testValidate_017(self): """ Test validate on a non-empty subversion section, non-empty repositoryDirs, no defaults set, both values on repositoryDirs. """ repositoryDirs = [ RepositoryDir(directoryPath="/two", collectMode="weekly", compressMode="gzip") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.repositoryDirs = repositoryDirs config.validate() def testValidate_018(self): """ Test validate on a non-empty subversion section, non-empty repositoryDirs, collectMode only on repositoryDirs. """ repositoryDirs = [ RepositoryDir(directoryPath="/two", collectMode="weekly") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.compressMode = "gzip" config.subversion.repositoryDirs = repositoryDirs config.validate() def testValidate_019(self): """ Test validate on a non-empty subversion section, non-empty repositoryDirs, compressMode only on repositoryDirs. """ repositoryDirs = [ RepositoryDir(directoryPath="/two", compressMode="bzip2") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.collectMode = "weekly" config.subversion.repositoryDirs = repositoryDirs config.validate() def testValidate_020(self): """ Test validate on a non-empty subversion section, non-empty repositoryDirs, compressMode default and on repository. 
""" repositoryDirs = [ RepositoryDir(directoryPath="/two", compressMode="bzip2") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.collectMode = "daily" config.subversion.compressMode = "gzip" config.subversion.repositoryDirs = repositoryDirs config.validate() def testValidate_021(self): """ Test validate on a non-empty subversion section, non-empty repositoryDirs, collectMode default and on repository. """ repositoryDirs = [ RepositoryDir(directoryPath="/two", collectMode="daily") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.collectMode = "daily" config.subversion.compressMode = "gzip" config.subversion.repositoryDirs = repositoryDirs config.validate() def testValidate_022(self): """ Test validate on a non-empty subversion section, non-empty repositoryDirs, collectMode and compressMode default and on repository. """ repositoryDirs = [ RepositoryDir(directoryPath="/two", collectMode="daily", compressMode="bzip2") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.collectMode = "daily" config.subversion.compressMode = "gzip" config.subversion.repositoryDirs = repositoryDirs config.validate() ############################ # Test parsing of documents ############################ def testParse_001(self): """ Parse empty config document. """ path = self.resources["subversion.conf.1"] with open(path) as f: contents = f.read() self.assertRaises(ValueError, LocalConfig, xmlPath=path, validate=True) self.assertRaises(ValueError, LocalConfig, xmlData=contents, validate=True) config = LocalConfig(xmlPath=path, validate=False) self.assertEqual(None, config.subversion) config = LocalConfig(xmlData=contents, validate=False) self.assertEqual(None, config.subversion) def testParse_002(self): """ Parse config document with default modes, one repository. 
""" repositories = [ Repository(repositoryPath="/opt/public/svn/software"), ] path = self.resources["subversion.conf.2"] with open(path) as f: contents = f.read() config = LocalConfig(xmlPath=path, validate=False) self.assertNotEqual(None, config.subversion) self.assertEqual("daily", config.subversion.collectMode) self.assertEqual("gzip", config.subversion.compressMode) self.assertEqual(repositories, config.subversion.repositories) self.assertEqual(None, config.subversion.repositoryDirs) config = LocalConfig(xmlData=contents, validate=False) self.assertNotEqual(None, config.subversion) self.assertEqual("daily", config.subversion.collectMode) self.assertEqual("gzip", config.subversion.compressMode) self.assertEqual(repositories, config.subversion.repositories) self.assertEqual(None, config.subversion.repositoryDirs) def testParse_003(self): """ Parse config document with no default modes, one repository """ repositories = [ Repository(repositoryPath="/opt/public/svn/software", collectMode="daily", compressMode="gzip"), ] path = self.resources["subversion.conf.3"] with open(path) as f: contents = f.read() config = LocalConfig(xmlPath=path, validate=False) self.assertNotEqual(None, config.subversion) self.assertEqual(None, config.subversion.collectMode) self.assertEqual(None, config.subversion.compressMode) self.assertEqual(repositories, config.subversion.repositories) self.assertEqual(None, config.subversion.repositoryDirs) config = LocalConfig(xmlData=contents, validate=False) self.assertNotEqual(None, config.subversion) self.assertEqual(None, config.subversion.collectMode) self.assertEqual(None, config.subversion.compressMode) self.assertEqual(repositories, config.subversion.repositories) self.assertEqual(None, config.subversion.repositoryDirs) def testParse_004(self): """ Parse config document with default modes, several repositories with various overrides. 
""" repositories = [] repositories.append(Repository(repositoryPath="/opt/public/svn/one")) repositories.append(Repository(repositoryType="BDB", repositoryPath="/opt/public/svn/two", collectMode="weekly")) repositories.append(Repository(repositoryPath="/opt/public/svn/three", compressMode="bzip2")) repositories.append(Repository(repositoryType="FSFS", repositoryPath="/opt/public/svn/four", collectMode="incr", compressMode="bzip2")) path = self.resources["subversion.conf.4"] with open(path) as f: contents = f.read() config = LocalConfig(xmlPath=path, validate=False) self.assertNotEqual(None, config.subversion) self.assertEqual("daily", config.subversion.collectMode) self.assertEqual("gzip", config.subversion.compressMode) self.assertEqual(repositories, config.subversion.repositories) self.assertEqual(None, config.subversion.repositoryDirs) config = LocalConfig(xmlData=contents, validate=False) self.assertNotEqual(None, config.subversion) self.assertEqual("daily", config.subversion.collectMode) self.assertEqual("gzip", config.subversion.compressMode) self.assertEqual(repositories, config.subversion.repositories) self.assertEqual(None, config.subversion.repositoryDirs) def testParse_005(self): """ Parse config document with default modes, one repository. 
""" repositoryDirs = [ RepositoryDir(directoryPath="/opt/public/svn/software"), ] path = self.resources["subversion.conf.5"] with open(path) as f: contents = f.read() config = LocalConfig(xmlPath=path, validate=False) self.assertNotEqual(None, config.subversion) self.assertEqual("daily", config.subversion.collectMode) self.assertEqual("gzip", config.subversion.compressMode) self.assertEqual(None, config.subversion.repositories) self.assertEqual(repositoryDirs, config.subversion.repositoryDirs) config = LocalConfig(xmlData=contents, validate=False) self.assertNotEqual(None, config.subversion) self.assertEqual("daily", config.subversion.collectMode) self.assertEqual("gzip", config.subversion.compressMode) self.assertEqual(None, config.subversion.repositories) self.assertEqual(repositoryDirs, config.subversion.repositoryDirs) def testParse_006(self): """ Parse config document with no default modes, one repository """ repositoryDirs = [ RepositoryDir(directoryPath="/opt/public/svn/software", collectMode="daily", compressMode="gzip"), ] path = self.resources["subversion.conf.6"] with open(path) as f: contents = f.read() config = LocalConfig(xmlPath=path, validate=False) self.assertNotEqual(None, config.subversion) self.assertEqual(None, config.subversion.collectMode) self.assertEqual(None, config.subversion.compressMode) self.assertEqual(None, config.subversion.repositories) self.assertEqual(repositoryDirs, config.subversion.repositoryDirs) config = LocalConfig(xmlData=contents, validate=False) self.assertNotEqual(None, config.subversion) self.assertEqual(None, config.subversion.collectMode) self.assertEqual(None, config.subversion.compressMode) self.assertEqual(None, config.subversion.repositories) self.assertEqual(repositoryDirs, config.subversion.repositoryDirs) def testParse_007(self): """ Parse config document with default modes, several repositoryDirs with various overrides. 
""" repositoryDirs = [] repositoryDirs.append(RepositoryDir(directoryPath="/opt/public/svn/one")) repositoryDirs.append(RepositoryDir(repositoryType="BDB", directoryPath="/opt/public/svn/two", collectMode="weekly", relativeExcludePaths=["software", ])) repositoryDirs.append(RepositoryDir(directoryPath="/opt/public/svn/three", compressMode="bzip2", excludePatterns=[".*software.*", ])) repositoryDirs.append(RepositoryDir(repositoryType="FSFS", directoryPath="/opt/public/svn/four", collectMode="incr", compressMode="bzip2", relativeExcludePaths=["cedar", "banner", ], excludePatterns=[".*software.*", ".*database.*", ])) path = self.resources["subversion.conf.7"] with open(path) as f: contents = f.read() config = LocalConfig(xmlPath=path, validate=False) self.assertNotEqual(None, config.subversion) self.assertEqual("daily", config.subversion.collectMode) self.assertEqual("gzip", config.subversion.compressMode) self.assertEqual(None, config.subversion.repositories) self.assertEqual(repositoryDirs, config.subversion.repositoryDirs) config = LocalConfig(xmlData=contents, validate=False) self.assertNotEqual(None, config.subversion) self.assertEqual("daily", config.subversion.collectMode) self.assertEqual("gzip", config.subversion.compressMode) self.assertEqual(None, config.subversion.repositories) self.assertEqual(repositoryDirs, config.subversion.repositoryDirs) ################### # Test addConfig() ################### def testAddConfig_001(self): """ Test with empty config document. """ subversion = SubversionConfig() config = LocalConfig() config.subversion = subversion self.validateAddConfig(config) def testAddConfig_002(self): """ Test with defaults set, single repository with no optional values. 
""" repositories = [] repositories.append(Repository(repositoryPath="/path")) subversion = SubversionConfig(collectMode="daily", compressMode="gzip", repositories=repositories) config = LocalConfig() config.subversion = subversion self.validateAddConfig(config) def testAddConfig_003(self): """ Test with defaults set, single repository with collectMode set. """ repositories = [] repositories.append(Repository(repositoryPath="/path", collectMode="incr")) subversion = SubversionConfig(collectMode="daily", compressMode="gzip", repositories=repositories) config = LocalConfig() config.subversion = subversion self.validateAddConfig(config) def testAddConfig_004(self): """ Test with defaults set, single repository with compressMode set. """ repositories = [] repositories.append(Repository(repositoryPath="/path", compressMode="bzip2")) subversion = SubversionConfig(collectMode="daily", compressMode="gzip", repositories=repositories) config = LocalConfig() config.subversion = subversion self.validateAddConfig(config) def testAddConfig_005(self): """ Test with defaults set, single repository with collectMode and compressMode set. """ repositories = [] repositories.append(Repository(repositoryPath="/path", collectMode="weekly", compressMode="bzip2")) subversion = SubversionConfig(collectMode="daily", compressMode="gzip", repositories=repositories) config = LocalConfig() config.subversion = subversion self.validateAddConfig(config) def testAddConfig_006(self): """ Test with no defaults set, single repository with collectMode and compressMode set. """ repositories = [] repositories.append(Repository(repositoryPath="/path", collectMode="weekly", compressMode="bzip2")) subversion = SubversionConfig(repositories=repositories) config = LocalConfig() config.subversion = subversion self.validateAddConfig(config) def testAddConfig_007(self): """ Test with compressMode set, single repository with collectMode set. 
""" repositories = [] repositories.append(Repository(repositoryPath="/path", collectMode="weekly")) subversion = SubversionConfig(compressMode="gzip", repositories=repositories) config = LocalConfig() config.subversion = subversion self.validateAddConfig(config) def testAddConfig_008(self): """ Test with collectMode set, single repository with compressMode set. """ repositories = [] repositories.append(Repository(repositoryPath="/path", compressMode="gzip")) subversion = SubversionConfig(collectMode="weekly", repositories=repositories) config = LocalConfig() config.subversion = subversion self.validateAddConfig(config) def testAddConfig_009(self): """ Test with compressMode set, single repository with collectMode and compressMode set. """ repositories = [] repositories.append(Repository(repositoryPath="/path", collectMode="incr", compressMode="gzip")) subversion = SubversionConfig(compressMode="bzip2", repositories=repositories) config = LocalConfig() config.subversion = subversion self.validateAddConfig(config) def testAddConfig_010(self): """ Test with collectMode set, single repository with collectMode and compressMode set. """ repositories = [] repositories.append(Repository(repositoryPath="/path", collectMode="weekly", compressMode="gzip")) subversion = SubversionConfig(collectMode="incr", repositories=repositories) config = LocalConfig() config.subversion = subversion self.validateAddConfig(config) def testAddConfig_011(self): """ Test with defaults set, multiple repositories with collectMode and compressMode set. 
""" repositories = [] repositories.append(Repository(repositoryPath="/path1", collectMode="daily", compressMode="gzip")) repositories.append(Repository(repositoryPath="/path2", collectMode="weekly", compressMode="gzip")) repositories.append(Repository(repositoryPath="/path3", collectMode="incr", compressMode="gzip")) repositories.append(Repository(repositoryPath="/path1", collectMode="daily", compressMode="bzip2")) repositories.append(Repository(repositoryPath="/path2", collectMode="weekly", compressMode="bzip2")) repositories.append(Repository(repositoryPath="/path3", collectMode="incr", compressMode="bzip2")) subversion = SubversionConfig(collectMode="incr", compressMode="bzip2", repositories=repositories) config = LocalConfig() config.subversion = subversion self.validateAddConfig(config) ####################################################################### # Suite definition ####################################################################### def suite(): """Returns a suite containing all the test cases in this module.""" tests = [ ] tests.append(unittest.makeSuite(TestBDBRepository, 'test')) tests.append(unittest.makeSuite(TestFSFSRepository, 'test')) tests.append(unittest.makeSuite(TestRepository, 'test')) tests.append(unittest.makeSuite(TestRepositoryDir, 'test')) tests.append(unittest.makeSuite(TestSubversionConfig, 'test')) tests.append(unittest.makeSuite(TestLocalConfig, 'test')) return unittest.TestSuite(tests) CedarBackup3-3.1.6/testcase/mysqltests.py0000664000175000017500000011542612642032656022126 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2005-2006,2010,2015 Kenneth J. Pronovici. # All rights reserved. 
# # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Tests MySQL extension functionality. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup3/extend/mysql.py. Code Coverage ============= This module contains individual tests for many of the public functions and classes implemented in extend/mysql.py. There are also tests for several of the private methods. Unfortunately, it's rather difficult to test this code in an automated fashion, even if you have access to MySQL, since the actual dump would need to have access to a real database. Because of this, there aren't any tests below that actually talk to a database. As a compromise, I test some of the private methods in the implementation. Normally, I don't like to test private methods, but in this case, testing the private methods will help give us some reasonable confidence in the code even if we can't talk to a database. This isn't perfect, but it's better than nothing. Naming Conventions ================== I prefer to avoid large unit tests which validate more than one piece of functionality, and I prefer to avoid using overly descriptive (read: long) test names, as well. 
Instead, I use lots of very small tests that each validate one specific thing. These small tests are then named with an index number, yielding something like C{testAddDir_001} or C{testValidate_010}. Each method has a docstring describing what it's supposed to accomplish. I feel that this makes it easier to judge how important a given failure is, and also makes it somewhat easier to diagnose and fix individual problems. Testing XML Extraction ====================== It's difficult to validate that generated XML is exactly "right", especially when dealing with pretty-printed XML. We can't just provide a constant string and say "the result must match this". Instead, what we do is extract a node, build some XML from it, and then feed that XML back into another object's constructor. If that parse process succeeds and the old object is equal to the new object, we assume that the extraction was successful. It would arguably be better if we could do a completely independent check - but implementing that check would be equivalent to re-implementing all of the existing functionality that we're validating here! After all, the most important thing is that data can move seamlessly from object to XML document and back to object. Full vs. Reduced Tests ====================== All of the tests in this module are considered safe to be run in an average build environment. There is no need to use a MYSQLTESTS_FULL environment variable to provide a "reduced feature set" test suite as for some of the other test modules. @author Kenneth J. 
Pronovici """ ######################################################################## # Import modules and do runtime validations ######################################################################## # System modules import unittest # Cedar Backup modules from CedarBackup3.testutil import findResources, failUnlessAssignRaises from CedarBackup3.xmlutil import createOutputDom, serializeDom from CedarBackup3.extend.mysql import LocalConfig, MysqlConfig ####################################################################### # Module-wide configuration and constants ####################################################################### DATA_DIRS = [ "./data", "./testcase/data", ] RESOURCES = [ "mysql.conf.1", "mysql.conf.2", "mysql.conf.3", "mysql.conf.4", "mysql.conf.5", ] ####################################################################### # Test Case Classes ####################################################################### ######################## # TestMysqlConfig class ######################## class TestMysqlConfig(unittest.TestCase): """Tests for the MysqlConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = MysqlConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. 
""" mysql = MysqlConfig() self.assertEqual(None, mysql.user) self.assertEqual(None, mysql.password) self.assertEqual(None, mysql.compressMode) self.assertEqual(False, mysql.all) self.assertEqual(None, mysql.databases) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values, databases=None. """ mysql = MysqlConfig("user", "password", "none", False, None) self.assertEqual("user", mysql.user) self.assertEqual("password", mysql.password) self.assertEqual("none", mysql.compressMode) self.assertEqual(False, mysql.all) self.assertEqual(None, mysql.databases) def testConstructor_003(self): """ Test constructor with all values filled in, with valid values, no databases. """ mysql = MysqlConfig("user", "password", "none", True, []) self.assertEqual("user", mysql.user) self.assertEqual("password", mysql.password) self.assertEqual("none", mysql.compressMode) self.assertEqual(True, mysql.all) self.assertEqual([], mysql.databases) def testConstructor_004(self): """ Test constructor with all values filled in, with valid values, with one database. """ mysql = MysqlConfig("user", "password", "gzip", True, [ "one", ]) self.assertEqual("user", mysql.user) self.assertEqual("password", mysql.password) self.assertEqual("gzip", mysql.compressMode) self.assertEqual(True, mysql.all) self.assertEqual([ "one", ], mysql.databases) def testConstructor_005(self): """ Test constructor with all values filled in, with valid values, with multiple databases. """ mysql = MysqlConfig("user", "password", "bzip2", True, [ "one", "two", ]) self.assertEqual("user", mysql.user) self.assertEqual("password", mysql.password) self.assertEqual("bzip2", mysql.compressMode) self.assertEqual(True, mysql.all) self.assertEqual([ "one", "two", ], mysql.databases) def testConstructor_006(self): """ Test assignment of user attribute, None value. 
""" mysql = MysqlConfig(user="user") self.assertEqual("user", mysql.user) mysql.user = None self.assertEqual(None, mysql.user) def testConstructor_007(self): """ Test assignment of user attribute, valid value. """ mysql = MysqlConfig() self.assertEqual(None, mysql.user) mysql.user = "user" self.assertEqual("user", mysql.user) def testConstructor_008(self): """ Test assignment of user attribute, invalid value (empty). """ mysql = MysqlConfig() self.assertEqual(None, mysql.user) self.failUnlessAssignRaises(ValueError, mysql, "user", "") self.assertEqual(None, mysql.user) def testConstructor_009(self): """ Test assignment of password attribute, None value. """ mysql = MysqlConfig(password="password") self.assertEqual("password", mysql.password) mysql.password = None self.assertEqual(None, mysql.password) def testConstructor_010(self): """ Test assignment of password attribute, valid value. """ mysql = MysqlConfig() self.assertEqual(None, mysql.password) mysql.password = "password" self.assertEqual("password", mysql.password) def testConstructor_011(self): """ Test assignment of password attribute, invalid value (empty). """ mysql = MysqlConfig() self.assertEqual(None, mysql.password) self.failUnlessAssignRaises(ValueError, mysql, "password", "") self.assertEqual(None, mysql.password) def testConstructor_012(self): """ Test assignment of compressMode attribute, None value. """ mysql = MysqlConfig(compressMode="none") self.assertEqual("none", mysql.compressMode) mysql.compressMode = None self.assertEqual(None, mysql.compressMode) def testConstructor_013(self): """ Test assignment of compressMode attribute, valid value. 
""" mysql = MysqlConfig() self.assertEqual(None, mysql.compressMode) mysql.compressMode = "none" self.assertEqual("none", mysql.compressMode) mysql.compressMode = "gzip" self.assertEqual("gzip", mysql.compressMode) mysql.compressMode = "bzip2" self.assertEqual("bzip2", mysql.compressMode) def testConstructor_014(self): """ Test assignment of compressMode attribute, invalid value (empty). """ mysql = MysqlConfig() self.assertEqual(None, mysql.compressMode) self.failUnlessAssignRaises(ValueError, mysql, "compressMode", "") self.assertEqual(None, mysql.compressMode) def testConstructor_015(self): """ Test assignment of compressMode attribute, invalid value (not in list). """ mysql = MysqlConfig() self.assertEqual(None, mysql.compressMode) self.failUnlessAssignRaises(ValueError, mysql, "compressMode", "bogus") self.assertEqual(None, mysql.compressMode) def testConstructor_016(self): """ Test assignment of all attribute, None value. """ mysql = MysqlConfig(all=True) self.assertEqual(True, mysql.all) mysql.all = None self.assertEqual(False, mysql.all) def testConstructor_017(self): """ Test assignment of all attribute, valid value (real boolean). """ mysql = MysqlConfig() self.assertEqual(False, mysql.all) mysql.all = True self.assertEqual(True, mysql.all) mysql.all = False self.assertEqual(False, mysql.all) #pylint: disable=R0204 def testConstructor_018(self): """ Test assignment of all attribute, valid value (expression). """ mysql = MysqlConfig() self.assertEqual(False, mysql.all) mysql.all = 0 self.assertEqual(False, mysql.all) mysql.all = [] self.assertEqual(False, mysql.all) mysql.all = None self.assertEqual(False, mysql.all) mysql.all = ['a'] self.assertEqual(True, mysql.all) mysql.all = 3 self.assertEqual(True, mysql.all) def testConstructor_019(self): """ Test assignment of databases attribute, None value. 
""" mysql = MysqlConfig(databases=[]) self.assertEqual([], mysql.databases) mysql.databases = None self.assertEqual(None, mysql.databases) def testConstructor_020(self): """ Test assignment of databases attribute, [] value. """ mysql = MysqlConfig() self.assertEqual(None, mysql.databases) mysql.databases = [] self.assertEqual([], mysql.databases) def testConstructor_021(self): """ Test assignment of databases attribute, single valid entry. """ mysql = MysqlConfig() self.assertEqual(None, mysql.databases) mysql.databases = ["/whatever", ] self.assertEqual(["/whatever", ], mysql.databases) mysql.databases.append("/stuff") self.assertEqual(["/whatever", "/stuff", ], mysql.databases) def testConstructor_022(self): """ Test assignment of databases attribute, multiple valid entries. """ mysql = MysqlConfig() self.assertEqual(None, mysql.databases) mysql.databases = ["/whatever", "/stuff", ] self.assertEqual(["/whatever", "/stuff", ], mysql.databases) mysql.databases.append("/etc/X11") self.assertEqual(["/whatever", "/stuff", "/etc/X11", ], mysql.databases) def testConstructor_023(self): """ Test assignment of databases attribute, single invalid entry (empty). """ mysql = MysqlConfig() self.assertEqual(None, mysql.databases) self.failUnlessAssignRaises(ValueError, mysql, "databases", ["", ]) self.assertEqual(None, mysql.databases) def testConstructor_024(self): """ Test assignment of databases attribute, mixed valid and invalid entries. """ mysql = MysqlConfig() self.assertEqual(None, mysql.databases) self.failUnlessAssignRaises(ValueError, mysql, "databases", ["good", "", "alsogood", ]) self.assertEqual(None, mysql.databases) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. 
""" mysql1 = MysqlConfig() mysql2 = MysqlConfig() self.assertEqual(mysql1, mysql2) self.assertTrue(mysql1 == mysql2) self.assertTrue(not mysql1 < mysql2) self.assertTrue(mysql1 <= mysql2) self.assertTrue(not mysql1 > mysql2) self.assertTrue(mysql1 >= mysql2) self.assertTrue(not mysql1 != mysql2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None, list None. """ mysql1 = MysqlConfig("user", "password", "gzip", True, None) mysql2 = MysqlConfig("user", "password", "gzip", True, None) self.assertEqual(mysql1, mysql2) self.assertTrue(mysql1 == mysql2) self.assertTrue(not mysql1 < mysql2) self.assertTrue(mysql1 <= mysql2) self.assertTrue(not mysql1 > mysql2) self.assertTrue(mysql1 >= mysql2) self.assertTrue(not mysql1 != mysql2) def testComparison_003(self): """ Test comparison of two identical objects, all attributes non-None, list empty. """ mysql1 = MysqlConfig("user", "password", "bzip2", True, []) mysql2 = MysqlConfig("user", "password", "bzip2", True, []) self.assertEqual(mysql1, mysql2) self.assertTrue(mysql1 == mysql2) self.assertTrue(not mysql1 < mysql2) self.assertTrue(mysql1 <= mysql2) self.assertTrue(not mysql1 > mysql2) self.assertTrue(mysql1 >= mysql2) self.assertTrue(not mysql1 != mysql2) def testComparison_004(self): """ Test comparison of two identical objects, all attributes non-None, list non-empty. """ mysql1 = MysqlConfig("user", "password", "none", True, [ "whatever", ]) mysql2 = MysqlConfig("user", "password", "none", True, [ "whatever", ]) self.assertEqual(mysql1, mysql2) self.assertTrue(mysql1 == mysql2) self.assertTrue(not mysql1 < mysql2) self.assertTrue(mysql1 <= mysql2) self.assertTrue(not mysql1 > mysql2) self.assertTrue(mysql1 >= mysql2) self.assertTrue(not mysql1 != mysql2) def testComparison_005(self): """ Test comparison of two differing objects, user differs (one None). 
""" mysql1 = MysqlConfig() mysql2 = MysqlConfig(user="user") self.assertNotEqual(mysql1, mysql2) self.assertTrue(not mysql1 == mysql2) self.assertTrue(mysql1 < mysql2) self.assertTrue(mysql1 <= mysql2) self.assertTrue(not mysql1 > mysql2) self.assertTrue(not mysql1 >= mysql2) self.assertTrue(mysql1 != mysql2) def testComparison_006(self): """ Test comparison of two differing objects, user differs. """ mysql1 = MysqlConfig("user1", "password", "gzip", True, [ "whatever", ]) mysql2 = MysqlConfig("user2", "password", "gzip", True, [ "whatever", ]) self.assertNotEqual(mysql1, mysql2) self.assertTrue(not mysql1 == mysql2) self.assertTrue(mysql1 < mysql2) self.assertTrue(mysql1 <= mysql2) self.assertTrue(not mysql1 > mysql2) self.assertTrue(not mysql1 >= mysql2) self.assertTrue(mysql1 != mysql2) def testComparison_007(self): """ Test comparison of two differing objects, password differs (one None). """ mysql1 = MysqlConfig() mysql2 = MysqlConfig(password="password") self.assertNotEqual(mysql1, mysql2) self.assertTrue(not mysql1 == mysql2) self.assertTrue(mysql1 < mysql2) self.assertTrue(mysql1 <= mysql2) self.assertTrue(not mysql1 > mysql2) self.assertTrue(not mysql1 >= mysql2) self.assertTrue(mysql1 != mysql2) def testComparison_008(self): """ Test comparison of two differing objects, password differs. """ mysql1 = MysqlConfig("user", "password1", "gzip", True, [ "whatever", ]) mysql2 = MysqlConfig("user", "password2", "gzip", True, [ "whatever", ]) self.assertNotEqual(mysql1, mysql2) self.assertTrue(not mysql1 == mysql2) self.assertTrue(mysql1 < mysql2) self.assertTrue(mysql1 <= mysql2) self.assertTrue(not mysql1 > mysql2) self.assertTrue(not mysql1 >= mysql2) self.assertTrue(mysql1 != mysql2) def testComparison_009(self): """ Test comparison of two differing objects, compressMode differs (one None). 
""" mysql1 = MysqlConfig() mysql2 = MysqlConfig(compressMode="gzip") self.assertNotEqual(mysql1, mysql2) self.assertTrue(not mysql1 == mysql2) self.assertTrue(mysql1 < mysql2) self.assertTrue(mysql1 <= mysql2) self.assertTrue(not mysql1 > mysql2) self.assertTrue(not mysql1 >= mysql2) self.assertTrue(mysql1 != mysql2) def testComparison_010(self): """ Test comparison of two differing objects, compressMode differs. """ mysql1 = MysqlConfig("user", "password", "bzip2", True, [ "whatever", ]) mysql2 = MysqlConfig("user", "password", "gzip", True, [ "whatever", ]) self.assertNotEqual(mysql1, mysql2) self.assertTrue(not mysql1 == mysql2) self.assertTrue(mysql1 < mysql2) self.assertTrue(mysql1 <= mysql2) self.assertTrue(not mysql1 > mysql2) self.assertTrue(not mysql1 >= mysql2) self.assertTrue(mysql1 != mysql2) def testComparison_011(self): """ Test comparison of two differing objects, all differs (one None). """ mysql1 = MysqlConfig() mysql2 = MysqlConfig(all=True) self.assertNotEqual(mysql1, mysql2) self.assertTrue(not mysql1 == mysql2) self.assertTrue(mysql1 < mysql2) self.assertTrue(mysql1 <= mysql2) self.assertTrue(not mysql1 > mysql2) self.assertTrue(not mysql1 >= mysql2) self.assertTrue(mysql1 != mysql2) def testComparison_012(self): """ Test comparison of two differing objects, all differs. """ mysql1 = MysqlConfig("user", "password", "gzip", False, [ "whatever", ]) mysql2 = MysqlConfig("user", "password", "gzip", True, [ "whatever", ]) self.assertNotEqual(mysql1, mysql2) self.assertTrue(not mysql1 == mysql2) self.assertTrue(mysql1 < mysql2) self.assertTrue(mysql1 <= mysql2) self.assertTrue(not mysql1 > mysql2) self.assertTrue(not mysql1 >= mysql2) self.assertTrue(mysql1 != mysql2) def testComparison_013(self): """ Test comparison of two differing objects, databases differs (one None, one empty). 
""" mysql1 = MysqlConfig() mysql2 = MysqlConfig(databases=[]) self.assertNotEqual(mysql1, mysql2) self.assertTrue(not mysql1 == mysql2) self.assertTrue(mysql1 < mysql2) self.assertTrue(mysql1 <= mysql2) self.assertTrue(not mysql1 > mysql2) self.assertTrue(not mysql1 >= mysql2) self.assertTrue(mysql1 != mysql2) def testComparison_014(self): """ Test comparison of two differing objects, databases differs (one None, one not empty). """ mysql1 = MysqlConfig() mysql2 = MysqlConfig(databases=["whatever", ]) self.assertNotEqual(mysql1, mysql2) self.assertTrue(not mysql1 == mysql2) self.assertTrue(mysql1 < mysql2) self.assertTrue(mysql1 <= mysql2) self.assertTrue(not mysql1 > mysql2) self.assertTrue(not mysql1 >= mysql2) self.assertTrue(mysql1 != mysql2) def testComparison_015(self): """ Test comparison of two differing objects, databases differs (one empty, one not empty). """ mysql1 = MysqlConfig("user", "password", "gzip", True, [ ]) mysql2 = MysqlConfig("user", "password", "gzip", True, [ "whatever", ]) self.assertNotEqual(mysql1, mysql2) self.assertTrue(not mysql1 == mysql2) self.assertTrue(mysql1 < mysql2) self.assertTrue(mysql1 <= mysql2) self.assertTrue(not mysql1 > mysql2) self.assertTrue(not mysql1 >= mysql2) self.assertTrue(mysql1 != mysql2) def testComparison_016(self): """ Test comparison of two differing objects, databases differs (both not empty). 
""" mysql1 = MysqlConfig("user", "password", "gzip", True, [ "whatever", ]) mysql2 = MysqlConfig("user", "password", "gzip", True, [ "whatever", "bogus", ]) self.assertNotEqual(mysql1, mysql2) self.assertTrue(not mysql1 == mysql2) self.assertTrue(not mysql1 < mysql2) # note: different than standard due to unsorted list self.assertTrue(not mysql1 <= mysql2) # note: different than standard due to unsorted list self.assertTrue(mysql1 > mysql2) # note: different than standard due to unsorted list self.assertTrue(mysql1 >= mysql2) # note: different than standard due to unsorted list self.assertTrue(mysql1 != mysql2) ######################## # TestLocalConfig class ######################## class TestLocalConfig(unittest.TestCase): """Tests for the LocalConfig class.""" ################ # Setup methods ################ def setUp(self): try: self.resources = findResources(RESOURCES, DATA_DIRS) except Exception as e: self.fail(e) def tearDown(self): pass ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) def validateAddConfig(self, origConfig): """ Validates that document dumped from C{LocalConfig.addConfig} results in identical object. We dump a document containing just the mysql configuration, and then make sure that if we push that document back into the C{LocalConfig} object, that the resulting object matches the original. The C{self.failUnlessEqual} method is used for the validation, so if the method call returns normally, everything is OK. @param origConfig: Original configuration. 
""" (xmlDom, parentNode) = createOutputDom() origConfig.addConfig(xmlDom, parentNode) xmlData = serializeDom(xmlDom) newConfig = LocalConfig(xmlData=xmlData, validate=False) self.assertEqual(origConfig, newConfig) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = LocalConfig() obj.__repr__() obj.__str__() ##################################################### # Test basic constructor and attribute functionality ##################################################### def testConstructor_001(self): """ Test empty constructor, validate=False. """ config = LocalConfig(validate=False) self.assertEqual(None, config.mysql) def testConstructor_002(self): """ Test empty constructor, validate=True. """ config = LocalConfig(validate=True) self.assertEqual(None, config.mysql) def testConstructor_003(self): """ Test with empty config document as both data and file, validate=False. """ path = self.resources["mysql.conf.1"] with open(path) as f: contents = f.read() self.assertRaises(ValueError, LocalConfig, xmlData=contents, xmlPath=path, validate=False) def testConstructor_004(self): """ Test assignment of mysql attribute, None value. """ config = LocalConfig() config.mysql = None self.assertEqual(None, config.mysql) def testConstructor_005(self): """ Test assignment of mysql attribute, valid value. """ config = LocalConfig() config.mysql = MysqlConfig() self.assertEqual(MysqlConfig(), config.mysql) def testConstructor_006(self): """ Test assignment of mysql attribute, invalid value (not MysqlConfig). """ config = LocalConfig() self.failUnlessAssignRaises(ValueError, config, "mysql", "STRING!") ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. 
""" config1 = LocalConfig() config2 = LocalConfig() self.assertEqual(config1, config2) self.assertTrue(config1 == config2) self.assertTrue(not config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(config1 >= config2) self.assertTrue(not config1 != config2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ config1 = LocalConfig() config1.mysql = MysqlConfig() config2 = LocalConfig() config2.mysql = MysqlConfig() self.assertEqual(config1, config2) self.assertTrue(config1 == config2) self.assertTrue(not config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(config1 >= config2) self.assertTrue(not config1 != config2) def testComparison_003(self): """ Test comparison of two differing objects, mysql differs (one None). """ config1 = LocalConfig() config2 = LocalConfig() config2.mysql = MysqlConfig() self.assertNotEqual(config1, config2) self.assertTrue(not config1 == config2) self.assertTrue(config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(not config1 >= config2) self.assertTrue(config1 != config2) def testComparison_004(self): """ Test comparison of two differing objects, mysql differs. """ config1 = LocalConfig() config1.mysql = MysqlConfig(user="one") config2 = LocalConfig() config2.mysql = MysqlConfig(user="two") self.assertNotEqual(config1, config2) self.assertTrue(not config1 == config2) self.assertTrue(config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(not config1 >= config2) self.assertTrue(config1 != config2) ###################### # Test validate logic ###################### def testValidate_001(self): """ Test validate on a None mysql section. 
""" config = LocalConfig() config.mysql = None self.assertRaises(ValueError, config.validate) def testValidate_002(self): """ Test validate on an empty mysql section. """ config = LocalConfig() config.mysql = MysqlConfig() self.assertRaises(ValueError, config.validate) def testValidate_003(self): """ Test validate on a non-empty mysql section, all=True, databases=None. """ config = LocalConfig() config.mysql = MysqlConfig("user", "password", "gzip", True, None) config.validate() def testValidate_004(self): """ Test validate on a non-empty mysql section, all=True, empty databases. """ config = LocalConfig() config.mysql = MysqlConfig("user", "password", "none", True, []) config.validate() def testValidate_005(self): """ Test validate on a non-empty mysql section, all=True, non-empty databases. """ config = LocalConfig() config.mysql = MysqlConfig("user", "password", "bzip2", True, ["whatever", ]) self.assertRaises(ValueError, config.validate) def testValidate_006(self): """ Test validate on a non-empty mysql section, all=False, databases=None. """ config = LocalConfig() config.mysql = MysqlConfig("user", "password", "gzip", False, None) self.assertRaises(ValueError, config.validate) def testValidate_007(self): """ Test validate on a non-empty mysql section, all=False, empty databases. """ config = LocalConfig() config.mysql = MysqlConfig("user", "password", "bzip2", False, []) self.assertRaises(ValueError, config.validate) def testValidate_008(self): """ Test validate on a non-empty mysql section, all=False, non-empty databases. """ config = LocalConfig() config.mysql = MysqlConfig("user", "password", "gzip", False, ["whatever", ]) config.validate() def testValidate_009(self): """ Test validate on a non-empty mysql section, with user=None. """ config = LocalConfig() config.mysql = MysqlConfig(None, "password", "gzip", True, None) config.validate() def testValidate_010(self): """ Test validate on a non-empty mysql section, with password=None. 
""" config = LocalConfig() config.mysql = MysqlConfig("user", None, "gzip", True, None) config.validate() def testValidate_011(self): """ Test validate on a non-empty mysql section, with user=None and password=None. """ config = LocalConfig() config.mysql = MysqlConfig(None, None, "gzip", True, None) config.validate() ############################ # Test parsing of documents ############################ def testParse_001(self): """ Parse empty config document. """ path = self.resources["mysql.conf.1"] with open(path) as f: contents = f.read() self.assertRaises(ValueError, LocalConfig, xmlPath=path, validate=True) self.assertRaises(ValueError, LocalConfig, xmlData=contents, validate=True) config = LocalConfig(xmlPath=path, validate=False) self.assertEqual(None, config.mysql) config = LocalConfig(xmlData=contents, validate=False) self.assertEqual(None, config.mysql) def testParse_003(self): """ Parse config document containing only a mysql section, no databases, all=True. """ path = self.resources["mysql.conf.2"] with open(path) as f: contents = f.read() config = LocalConfig(xmlPath=path, validate=False) self.assertNotEqual(None, config.mysql) self.assertEqual("user", config.mysql.user) self.assertEqual("password", config.mysql.password) self.assertEqual("none", config.mysql.compressMode) self.assertEqual(True, config.mysql.all) self.assertEqual(None, config.mysql.databases) config = LocalConfig(xmlData=contents, validate=False) self.assertEqual("user", config.mysql.user) self.assertEqual("password", config.mysql.password) self.assertEqual("none", config.mysql.compressMode) self.assertNotEqual(None, config.mysql.password) self.assertEqual(True, config.mysql.all) self.assertEqual(None, config.mysql.databases) def testParse_004(self): """ Parse config document containing only a mysql section, single database, all=False. 
""" path = self.resources["mysql.conf.3"] with open(path) as f: contents = f.read() config = LocalConfig(xmlPath=path, validate=False) self.assertNotEqual(None, config.mysql) self.assertEqual("user", config.mysql.user) self.assertEqual("password", config.mysql.password) self.assertEqual("gzip", config.mysql.compressMode) self.assertEqual(False, config.mysql.all) self.assertEqual(["database", ], config.mysql.databases) config = LocalConfig(xmlData=contents, validate=False) self.assertNotEqual(None, config.mysql) self.assertEqual("user", config.mysql.user) self.assertEqual("password", config.mysql.password) self.assertEqual("gzip", config.mysql.compressMode) self.assertEqual(False, config.mysql.all) self.assertEqual(["database", ], config.mysql.databases) def testParse_005(self): """ Parse config document containing only a mysql section, multiple databases, all=False. """ path = self.resources["mysql.conf.4"] with open(path) as f: contents = f.read() config = LocalConfig(xmlPath=path, validate=False) self.assertNotEqual(None, config.mysql) self.assertEqual("user", config.mysql.user) self.assertEqual("password", config.mysql.password) self.assertEqual("bzip2", config.mysql.compressMode) self.assertEqual(False, config.mysql.all) self.assertEqual(["database1", "database2", ], config.mysql.databases) config = LocalConfig(xmlData=contents, validate=False) self.assertNotEqual(None, config.mysql) self.assertEqual("user", config.mysql.user) self.assertEqual("password", config.mysql.password) self.assertEqual("bzip2", config.mysql.compressMode) self.assertEqual(False, config.mysql.all) self.assertEqual(["database1", "database2", ], config.mysql.databases) def testParse_006(self): """ Parse config document containing only a mysql section, no user or password, multiple databases, all=False. 
""" path = self.resources["mysql.conf.5"] with open(path) as f: contents = f.read() config = LocalConfig(xmlPath=path, validate=False) self.assertNotEqual(None, config.mysql) self.assertEqual(None, config.mysql.user) self.assertEqual(None, config.mysql.password) self.assertEqual("bzip2", config.mysql.compressMode) self.assertEqual(False, config.mysql.all) self.assertEqual(["database1", "database2", ], config.mysql.databases) config = LocalConfig(xmlData=contents, validate=False) self.assertNotEqual(None, config.mysql) self.assertEqual(None, config.mysql.user) self.assertEqual(None, config.mysql.password) self.assertEqual("bzip2", config.mysql.compressMode) self.assertEqual(False, config.mysql.all) self.assertEqual(["database1", "database2", ], config.mysql.databases) ################### # Test addConfig() ################### def testAddConfig_001(self): """ Test with empty config document """ config = LocalConfig() self.validateAddConfig(config) def testAddConfig_003(self): """ Test with no databases, all other values filled in, all=True. """ config = LocalConfig() config.mysql = MysqlConfig("user", "password", "none", True, None) self.validateAddConfig(config) def testAddConfig_004(self): """ Test with no databases, all other values filled in, all=False. """ config = LocalConfig() config.mysql = MysqlConfig("user", "password", "gzip", False, None) self.validateAddConfig(config) def testAddConfig_005(self): """ Test with single database, all other values filled in, all=True. """ config = LocalConfig() config.mysql = MysqlConfig("user", "password", "bzip2", True, [ "database", ]) self.validateAddConfig(config) def testAddConfig_006(self): """ Test with single database, all other values filled in, all=False. """ config = LocalConfig() config.mysql = MysqlConfig("user", "password", "none", False, [ "database", ]) self.validateAddConfig(config) def testAddConfig_007(self): """ Test with multiple databases, all other values filled in, all=True. 
""" config = LocalConfig() config.mysql = MysqlConfig("user", "password", "bzip2", True, [ "database1", "database2", ]) self.validateAddConfig(config) def testAddConfig_008(self): """ Test with multiple databases, all other values filled in, all=False. """ config = LocalConfig() config.mysql = MysqlConfig("user", "password", "gzip", True, [ "database1", "database2", ]) self.validateAddConfig(config) def testAddConfig_009(self): """ Test with multiple databases, user=None but all other values filled in, all=False. """ config = LocalConfig() config.mysql = MysqlConfig(None, "password", "gzip", True, [ "database1", "database2", ]) self.validateAddConfig(config) def testAddConfig_010(self): """ Test with multiple databases, password=None but all other values filled in, all=False. """ config = LocalConfig() config.mysql = MysqlConfig("user", None, "gzip", True, [ "database1", "database2", ]) self.validateAddConfig(config) def testAddConfig_011(self): """ Test with multiple databases, user=None and password=None but all other values filled in, all=False. """ config = LocalConfig() config.mysql = MysqlConfig(None, None, "gzip", True, [ "database1", "database2", ]) self.validateAddConfig(config) ####################################################################### # Suite definition ####################################################################### def suite(): """Returns a suite containing all the test cases in this module.""" tests = [ ] tests.append(unittest.makeSuite(TestMysqlConfig, 'test')) tests.append(unittest.makeSuite(TestLocalConfig, 'test')) return unittest.TestSuite(tests) CedarBackup3-3.1.6/testcase/writersutiltests.py0000664000175000017500000020341312642032704023342 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." 
# S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2007,2010,2011,2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Tests writer utility functionality. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup3/writers/util.py. Code Coverage ============= This module contains individual tests for the public functions and classes implemented in writers/util.py. I usually prefer to test only the public interface to a class, because that way the regression tests don't depend on the internal implementation. In this case, I've decided to test some of the private methods, because their "privateness" is more a matter of presenting a clean external interface than anything else (most of the private methods are static). Being able to test these methods also makes it easier to gain some reasonable confidence in the code even if some tests are not run because WRITERSUTILTESTS_FULL is not set to "Y" in the environment (see below). 
Naming Conventions ================== I prefer to avoid large unit tests which validate more than one piece of functionality, and I prefer to avoid using overly descriptive (read: long) test names, as well. Instead, I use lots of very small tests that each validate one specific thing. These small tests are then named with an index number, yielding something like C{testAddDir_001} or C{testValidate_010}. Each method has a docstring describing what it's supposed to accomplish. I feel that this makes it easier to judge how important a given failure is, and also makes it somewhat easier to diagnose and fix individual problems. Full vs. Reduced Tests ====================== Some Cedar Backup regression tests require a specialized environment in order to run successfully. This environment won't necessarily be available on every build system out there (for instance, on a Debian autobuilder). Because of this, the default behavior is to run a "reduced feature set" test suite that has no surprising system, kernel or network requirements. If you want to run all of the tests, set WRITERSUTILTESTS_FULL to "Y" in the environment. In this module, there are three dependencies: the system must have C{mkisofs} installed, the kernel must allow ISO images to be mounted in-place via a loopback mechanism, and the current user must be allowed (via C{sudo}) to mount and unmount such loopback filesystems. See documentation by the L{TestIsoImage.mountImage} and L{TestIsoImage.unmountImage} methods for more information on what C{sudo} access is required. @author Kenneth J. 
Pronovici """ ######################################################################## # Import modules and do runtime validations ######################################################################## import os import unittest import tempfile import time from CedarBackup3.testutil import findResources, buildPath, removedir, extractTar from CedarBackup3.testutil import platformMacOsX from CedarBackup3.testutil import setupOverrides from CedarBackup3.filesystem import FilesystemList from CedarBackup3.writers.util import validateScsiId, validateDriveSpeed, IsoImage from CedarBackup3.util import executeCommand ####################################################################### # Module-wide configuration and constants ####################################################################### DATA_DIRS = [ "./data", "./testcase/data", ] RESOURCES = [ "tree9.tar.gz", ] SUDO_CMD = [ "sudo", ] HDIUTIL_CMD = [ "hdiutil", ] GCONF_CMD = [ "gconftool-2", ] INVALID_FILE = "bogus" # This file name should never exist ####################################################################### # Utility functions ####################################################################### def runAllTests(): """Returns true/false depending on whether the full test suite should be run.""" if "WRITERSUTILTESTS_FULL" in os.environ: return os.environ["WRITERSUTILTESTS_FULL"] == "Y" else: return False ####################################################################### # Test Case Classes ####################################################################### ###################### # TestFunctions class ###################### class TestFunctions(unittest.TestCase): """Tests for the various public functions.""" ################ # Setup methods ################ @classmethod def setUpClass(cls): # We absolutely need the overrides set properly for this test, since it # runs programs. 
Since other tests might mess with the overrides and/or # singletons, and we don't control the order of execution, we need to set # them up here. setupOverrides() def setUp(self): pass def tearDown(self): pass ######################## # Test validateScsiId() ######################## def testValidateScsiId_001(self): """ Test with simple scsibus,target,lun address. """ scsiId = "0,0,0" result = validateScsiId(scsiId) self.assertEqual(scsiId, result) def testValidateScsiId_002(self): """ Test with simple scsibus,target,lun address containing spaces. """ scsiId = " 0, 0, 0 " result = validateScsiId(scsiId) self.assertEqual(scsiId, result) def testValidateScsiId_003(self): """ Test with simple ATA address. """ scsiId = "ATA:3,2,1" result = validateScsiId(scsiId) self.assertEqual(scsiId, result) def testValidateScsiId_004(self): """ Test with simple ATA address containing spaces. """ scsiId = "ATA: 3, 2,1 " result = validateScsiId(scsiId) self.assertEqual(scsiId, result) def testValidateScsiId_005(self): """ Test with simple ATAPI address. """ scsiId = "ATAPI:1,2,3" result = validateScsiId(scsiId) self.assertEqual(scsiId, result) def testValidateScsiId_006(self): """ Test with simple ATAPI address containing spaces. """ scsiId = " ATAPI:1, 2, 3" result = validateScsiId(scsiId) self.assertEqual(scsiId, result) def testValidateScsiId_007(self): """ Test with default-device Mac address. """ scsiId = "IOCompactDiscServices" result = validateScsiId(scsiId) self.assertEqual(scsiId, result) def testValidateScsiId_008(self): """ Test with an alternate-device Mac address. """ scsiId = "IOCompactDiscServices/2" result = validateScsiId(scsiId) self.assertEqual(scsiId, result) def testValidateScsiId_009(self): """ Test with an alternate-device Mac address. """ scsiId = "IOCompactDiscServices/12" result = validateScsiId(scsiId) self.assertEqual(scsiId, result) def testValidateScsiId_010(self): """ Test with an invalid address with a missing field. 
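As the "Full vs. Reduced Tests" notes above explain, `runAllTests()` keys off the WRITERSUTILTESTS_FULL environment variable. One conventional way to wire such a flag into unittest is a `skipUnless` decorator; the sketch below is an assumption about usage, not how this module actually applies the flag.

```python
import os
import unittest

def fullTestsRequested():
    """True only when the environment-dependent suite is requested (mirrors runAllTests())."""
    return os.environ.get("WRITERSUTILTESTS_FULL") == "Y"

class _FullSuiteOnly(unittest.TestCase):
    @unittest.skipUnless(fullTestsRequested(), "set WRITERSUTILTESTS_FULL=Y to run")
    def testNeedsLoopbackMount(self):
        pass  # placeholder for a test requiring mkisofs and sudo loopback mounts
```

This keeps surprising system requirements (mkisofs, loopback mounts, sudo) out of default runs, e.g. on a Debian autobuilder.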
""" scsiId = "1,2" self.assertRaises(ValueError, validateScsiId, scsiId) def testValidateScsiId_011(self): """ Test with an invalid Mac-style address with a backslash. """ scsiId = "IOCompactDiscServices\\3" self.assertRaises(ValueError, validateScsiId, scsiId) def testValidateScsiId_012(self): """ Test with an invalid address with an invalid prefix separator. """ scsiId = "ATAPI;1,2,3" self.assertRaises(ValueError, validateScsiId, scsiId) def testValidateScsiId_013(self): """ Test with an invalid address with an invalid prefix separator. """ scsiId = "ATA-1,2,3" self.assertRaises(ValueError, validateScsiId, scsiId) def testValidateScsiId_014(self): """ Test with a None SCSI id. """ scsiId = None result = validateScsiId(scsiId) self.assertEqual(scsiId, result) ############################ # Test validateDriveSpeed() ############################ #pylint: disable=R0204 def testValidateDriveSpeed_001(self): """ Test for a valid drive speed. """ speed = 1 result = validateDriveSpeed(speed) self.assertEqual(result, speed) speed = 2 result = validateDriveSpeed(speed) self.assertEqual(result, speed) speed = 30 result = validateDriveSpeed(speed) self.assertEqual(result, speed) speed = 2.0 result = validateDriveSpeed(speed) self.assertEqual(result, speed) speed = 1.3 result = validateDriveSpeed(speed) self.assertEqual(result, 1) # truncated def testValidateDriveSpeed_002(self): """ Test for a None drive speed (special case). 
""" speed = None result = validateDriveSpeed(speed) self.assertEqual(result, speed) def testValidateDriveSpeed_003(self): """ Test for an invalid drive speed (zero) """ speed = 0 self.assertRaises(ValueError, validateDriveSpeed, speed) def testValidateDriveSpeed_004(self): """ Test for an invalid drive speed (negative) """ speed = -1 self.assertRaises(ValueError, validateDriveSpeed, speed) def testValidateDriveSpeed_005(self): """ Test for an invalid drive speed (not integer) """ speed = "ken" self.assertRaises(ValueError, validateDriveSpeed, speed) ##################### # TestIsoImage class ##################### class TestIsoImage(unittest.TestCase): """Tests for the IsoImage class.""" ################ # Setup methods ################ def setUp(self): try: self.disableGnomeAutomount() self.mounted = False self.tmpdir = tempfile.mkdtemp() self.resources = findResources(RESOURCES, DATA_DIRS) except Exception as e: self.fail(e) def tearDown(self): if self.mounted: self.unmountImage() removedir(self.tmpdir) self.enableGnomeAutomount() ################## # Utility methods ################## def extractTar(self, tarname): """Extracts a tarfile with a particular name.""" extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname]) def buildPath(self, components): """Builds a complete search path from a list of components.""" components.insert(0, self.tmpdir) return buildPath(components) def mountImage(self, imagePath): """ Mounts an ISO image at C{self.tmpdir/mnt} using loopback. This function chooses the correct operating system-specific function and calls it. If there is no operating-system-specific function, we fall back to the generic function, which uses 'sudo mount'. @return: Path the image is mounted at. @raise IOError: If the command cannot be executed. 
""" if platformMacOsX(): return self.mountImageDarwin(imagePath) else: return self.mountImageGeneric(imagePath) def mountImageDarwin(self, imagePath): """ Mounts an ISO image at C{self.tmpdir/mnt} using Darwin's C{hdiutil} program. Darwin (Mac OS X) uses the C{hdiutil} program to mount volumes. The mount command doesn't really exist (or rather, doesn't know what to do with ISO 9660 volumes). @note: According to the manpage, the mountpoint path can't be any longer than MNAMELEN characters (currently 90?) so you might have problems with this depending on how your test environment is set up. @return: Path the image is mounted at. @raise IOError: If the command cannot be executed. """ mountPath = self.buildPath([ "mnt", ]) os.mkdir(mountPath) args = [ "attach", "-mountpoint", mountPath, imagePath, ] (result, output) = executeCommand(HDIUTIL_CMD, args, returnOutput=True) if result != 0: raise IOError("Error (%d) executing command to mount image." % result) self.mounted = True return mountPath def mountImageGeneric(self, imagePath): """ Mounts an ISO image at C{self.tmpdir/mnt} using loopback. Note that this will fail unless the user has been granted permissions via sudo, using something like this: Cmnd_Alias LOOPMOUNT = /bin/mount -d -t iso9660 -o loop * * Keep in mind that this entry is a security hole, so you might not want to keep it in C{/etc/sudoers} all of the time. @return: Path the image is mounted at. @raise IOError: If the command cannot be executed. """ mountPath = self.buildPath([ "mnt", ]) os.mkdir(mountPath) args = [ "mount", "-t", "iso9660", "-o", "loop", imagePath, mountPath, ] (result, output) = executeCommand(SUDO_CMD, args, returnOutput=True) if result != 0: raise IOError("Error (%d) executing command to mount image." % result) self.mounted = True return mountPath def unmountImage(self): """ Unmounts an ISO image from C{self.tmpdir/mnt}. This function chooses the correct operating system-specific function and calls it. 
        If there is no operating-system-specific function, we fall back to the
        generic function, which uses 'sudo umount'.

        @raise IOError: If the command cannot be executed.
        """
        if platformMacOsX():
            self.unmountImageDarwin()
        else:
            self.unmountImageGeneric()

    def unmountImageDarwin(self):
        """
        Unmounts an ISO image from C{self.tmpdir/mnt} using Darwin's C{hdiutil} program.

        Darwin (Mac OS X) uses the C{hdiutil} program to mount volumes.  The
        mount command doesn't really exist (or rather, doesn't know what to do
        with ISO 9660 volumes).

        @note: According to the manpage, the mountpoint path can't be any
        longer than MNAMELEN characters (currently 90?) so you might have
        problems with this depending on how your test environment is set up.

        @raise IOError: If the command cannot be executed.
        """
        mountPath = self.buildPath([ "mnt", ])
        args = [ "detach", mountPath, ]
        (result, output) = executeCommand(HDIUTIL_CMD, args, returnOutput=True)
        if result != 0:
            raise IOError("Error (%d) executing command to unmount image." % result)
        self.mounted = False

    def unmountImageGeneric(self):
        """
        Unmounts an ISO image from C{self.tmpdir/mnt}.

        Sometimes, multiple tries are needed because the ISO filesystem is
        still in use.  We try twice with a 1-second pause between attempts.
        If this isn't successful, you may run out of loopback devices.  Check
        for leftover mounts using 'losetup -a' as root.  You can remove a
        leftover mount using something like 'losetup -d /dev/loop0'.

        Note that this will fail unless the user has been granted permissions
        via sudo, using something like this:

            Cmnd_Alias LOOPUNMOUNT = /bin/umount -d -t iso9660 *

        Keep in mind that this entry is a security hole, so you might not want
        to keep it in C{/etc/sudoers} all of the time.

        @raise IOError: If the command cannot be executed.
""" mountPath = self.buildPath([ "mnt", ]) args = [ "umount", "-d", "-t", "iso9660", mountPath, ] (result, output) = executeCommand(SUDO_CMD, args, returnOutput=True) if result != 0: time.sleep(1) (result, output) = executeCommand(SUDO_CMD, args, returnOutput=True) if result != 0: raise IOError("Error (%d) executing command to unmount image." % result) self.mounted = False def disableGnomeAutomount(self): """ Disables GNOME auto-mounting of ISO volumes when full tests are enabled. As of this writing (October 2011), recent versions of GNOME in Debian come pre-configured to auto-mount various kinds of media (like CDs and thumb drives). Besides auto-mounting the media, GNOME also often opens up a Nautilus browser window to explore the newly-mounted media. This causes lots of problems for these unit tests, which assume that they have complete control over the mounting and unmounting process. So, for these tests to work, we need to disable GNOME auto-mounting. """ self.origMediaAutomount = None self.origMediaAutomountOpen = None if runAllTests(): args = [ "--get", "/apps/nautilus/preferences/media_automount", ] (result, output) = executeCommand(GCONF_CMD, args, returnOutput=True) if result == 0: self.origMediaAutomount = output[0][:-1] # pylint: disable=W0201 if self.origMediaAutomount == "true": args = [ "--type", "bool", "--set", "/apps/nautilus/preferences/media_automount", "false", ] executeCommand(GCONF_CMD, args) args = [ "--get", "/apps/nautilus/preferences/media_automount_open", ] (result, output) = executeCommand(GCONF_CMD, args, returnOutput=True) if result == 0: self.origMediaAutomountOpen = output[0][:-1] # pylint: disable=W0201 if self.origMediaAutomountOpen == "true": args = [ "--type", "bool", "--set", "/apps/nautilus/preferences/media_automount_open", "false", ] executeCommand(GCONF_CMD, args) def enableGnomeAutomount(self): """ Resets GNOME auto-mounting options back to their state prior to disableGnomeAutomount(). 
""" if self.origMediaAutomount == "true": args = [ "--type", "bool", "--set", "/apps/nautilus/preferences/media_automount", "true", ] executeCommand(GCONF_CMD, args) if self.origMediaAutomountOpen == "true": args = [ "--type", "bool", "--set", "/apps/nautilus/preferences/media_automount_open", "true", ] executeCommand(GCONF_CMD, args) ################### # Test constructor ################### def testConstructor_001(self): """ Test the constructor using all default arguments. """ isoImage = IsoImage() self.assertEqual(None, isoImage.device) self.assertEqual(None, isoImage.boundaries) self.assertEqual(None, isoImage.graftPoint) self.assertEqual(True, isoImage.useRockRidge) self.assertEqual(None, isoImage.applicationId) self.assertEqual(None, isoImage.biblioFile) self.assertEqual(None, isoImage.publisherId) self.assertEqual(None, isoImage.preparerId) self.assertEqual(None, isoImage.volumeId) def testConstructor_002(self): """ Test the constructor using non-default arguments. """ isoImage = IsoImage("/dev/cdrw", boundaries=(1, 2), graftPoint="/france") self.assertEqual("/dev/cdrw", isoImage.device) self.assertEqual((1, 2), isoImage.boundaries) self.assertEqual("/france", isoImage.graftPoint) self.assertEqual(True, isoImage.useRockRidge) self.assertEqual(None, isoImage.applicationId) self.assertEqual(None, isoImage.biblioFile) self.assertEqual(None, isoImage.publisherId) self.assertEqual(None, isoImage.preparerId) self.assertEqual(None, isoImage.volumeId) ################################ # Test IsoImage utility methods ################################ def testUtilityMethods_001(self): """ Test _buildDirEntries() with an empty entries dictionary. """ entries = {} result = IsoImage._buildDirEntries(entries) self.assertEqual(0, len(result)) def testUtilityMethods_002(self): """ Test _buildDirEntries() with an entries dictionary that has no graft points. 
""" entries = {} entries["/one/two/three"] = None entries["/four/five/six"] = None entries["/seven/eight/nine"] = None result = IsoImage._buildDirEntries(entries) self.assertEqual(3, len(result)) self.assertTrue("/one/two/three" in result) self.assertTrue("/four/five/six" in result) self.assertTrue("/seven/eight/nine" in result) def testUtilityMethods_003(self): """ Test _buildDirEntries() with an entries dictionary that has all graft points. """ entries = {} entries["/one/two/three"] = "/backup1" entries["/four/five/six"] = "backup2" entries["/seven/eight/nine"] = "backup3" result = IsoImage._buildDirEntries(entries) self.assertEqual(3, len(result)) self.assertTrue("backup1/=/one/two/three" in result) self.assertTrue("backup2/=/four/five/six" in result) self.assertTrue("backup3/=/seven/eight/nine" in result) def testUtilityMethods_004(self): """ Test _buildDirEntries() with an entries dictionary that has mixed graft points and not. """ entries = {} entries["/one/two/three"] = "backup1" entries["/four/five/six"] = None entries["/seven/eight/nine"] = "/backup3" result = IsoImage._buildDirEntries(entries) self.assertEqual(3, len(result)) self.assertTrue("backup1/=/one/two/three" in result) self.assertTrue("/four/five/six" in result) self.assertTrue("backup3/=/seven/eight/nine" in result) def testUtilityMethods_005(self): """ Test _buildGeneralArgs() with all optional values as None. """ isoImage = IsoImage() result = isoImage._buildGeneralArgs() self.assertEqual(0, len(result)) def testUtilityMethods_006(self): """ Test _buildGeneralArgs() with applicationId set. """ isoImage = IsoImage() isoImage.applicationId = "one" result = isoImage._buildGeneralArgs() self.assertEqual(["-A", "one", ], result) def testUtilityMethods_007(self): """ Test _buildGeneralArgs() with biblioFile set. 
""" isoImage = IsoImage() isoImage.biblioFile = "two" result = isoImage._buildGeneralArgs() self.assertEqual(["-biblio", "two", ], result) def testUtilityMethods_008(self): """ Test _buildGeneralArgs() with publisherId set. """ isoImage = IsoImage() isoImage.publisherId = "three" result = isoImage._buildGeneralArgs() self.assertEqual(["-publisher", "three", ], result) def testUtilityMethods_009(self): """ Test _buildGeneralArgs() with preparerId set. """ isoImage = IsoImage() isoImage.preparerId = "four" result = isoImage._buildGeneralArgs() self.assertEqual(["-p", "four", ], result) def testUtilityMethods_010(self): """ Test _buildGeneralArgs() with volumeId set. """ isoImage = IsoImage() isoImage.volumeId = "five" result = isoImage._buildGeneralArgs() self.assertEqual(["-V", "five", ], result) def testUtilityMethods_011(self): """ Test _buildSizeArgs() with device and boundaries at defaults. """ entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage() result = isoImage._buildSizeArgs(entries) self.assertEqual(["-print-size", "-graft-points", "-r", "backup1/=/one/two/three", ], result) def testUtilityMethods_012(self): """ Test _buildSizeArgs() with useRockRidge set to True and device and boundaries at defaults. """ entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage() isoImage.useRockRidge = True result = isoImage._buildSizeArgs(entries) self.assertEqual(["-print-size", "-graft-points", "-r", "backup1/=/one/two/three", ], result) def testUtilityMethods_013(self): """ Test _buildSizeArgs() with useRockRidge set to False and device and boundaries at defaults. """ entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage() isoImage.useRockRidge = False result = isoImage._buildSizeArgs(entries) self.assertEqual(["-print-size", "-graft-points", "backup1/=/one/two/three", ], result) def testUtilityMethods_014(self): """ Test _buildSizeArgs() with device as None and boundaries as non-None. 
""" entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage(device=None, boundaries=(1, 2)) result = isoImage._buildSizeArgs(entries) self.assertEqual(["-print-size", "-graft-points", "-r", "backup1/=/one/two/three", ], result) def testUtilityMethods_015(self): """ Test _buildSizeArgs() with device as non-None and boundaries as None. """ entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage(device="/dev/cdrw", boundaries=None) result = isoImage._buildSizeArgs(entries) self.assertEqual(["-print-size", "-graft-points", "-r", "backup1/=/one/two/three", ], result) def testUtilityMethods_016(self): """ Test _buildSizeArgs() with device and boundaries as non-None. """ entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage(device="/dev/cdrw", boundaries=(1, 2)) result = isoImage._buildSizeArgs(entries) self.assertEqual(["-print-size", "-graft-points", "-r", "-C", "1,2", "-M", "/dev/cdrw", "backup1/=/one/two/three", ], result) def testUtilityMethods_017(self): """ Test _buildWriteArgs() with device and boundaries at defaults. """ entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage() result = isoImage._buildWriteArgs(entries, "/tmp/file.iso") self.assertEqual(["-graft-points", "-r", "-o", "/tmp/file.iso", "backup1/=/one/two/three", ], result) def testUtilityMethods_018(self): """ Test _buildWriteArgs() with useRockRidge set to True and device and boundaries at defaults. """ entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage() isoImage.useRockRidge = True result = isoImage._buildWriteArgs(entries, "/tmp/file.iso") self.assertEqual(["-graft-points", "-r", "-o", "/tmp/file.iso", "backup1/=/one/two/three", ], result) def testUtilityMethods_019(self): """ Test _buildWriteArgs() with useRockRidge set to False and device and boundaries at defaults. 
""" entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage() isoImage.useRockRidge = False result = isoImage._buildWriteArgs(entries, "/tmp/file.iso") self.assertEqual(["-graft-points", "-o", "/tmp/file.iso", "backup1/=/one/two/three", ], result) def testUtilityMethods_020(self): """ Test _buildWriteArgs() with device as None and boundaries as non-None. """ entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage(device=None, boundaries=(3, 4)) isoImage.useRockRidge = False result = isoImage._buildWriteArgs(entries, "/tmp/file.iso") self.assertEqual(["-graft-points", "-o", "/tmp/file.iso", "backup1/=/one/two/three", ], result) def testUtilityMethods_021(self): """ Test _buildWriteArgs() with device as non-None and boundaries as None. """ entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage(device="/dev/cdrw", boundaries=None) isoImage.useRockRidge = False result = isoImage._buildWriteArgs(entries, "/tmp/file.iso") self.assertEqual(["-graft-points", "-o", "/tmp/file.iso", "backup1/=/one/two/three", ], result) def testUtilityMethods_022(self): """ Test _buildWriteArgs() with device and boundaries as non-None. """ entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage(device="/dev/cdrw", boundaries=(3, 4)) isoImage.useRockRidge = False result = isoImage._buildWriteArgs(entries, "/tmp/file.iso") self.assertEqual(["-graft-points", "-o", "/tmp/file.iso", "-C", "3,4", "-M", "/dev/cdrw", "backup1/=/one/two/three", ], result) ################## # Test addEntry() ################## def testAddEntry_001(self): """ Attempt to add a non-existent entry. """ file1 = self.buildPath([ INVALID_FILE, ]) isoImage = IsoImage() self.assertRaises(ValueError, isoImage.addEntry, file1) def testAddEntry_002(self): """ Attempt to add a an entry that is a soft link to a file. 
""" self.extractTar("tree9") file1 = self.buildPath([ "tree9", "dir002", "link003", ]) isoImage = IsoImage() self.assertRaises(ValueError, isoImage.addEntry, file1) def testAddEntry_003(self): """ Attempt to add a an entry that is a soft link to a directory """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "link001", ]) isoImage = IsoImage() self.assertRaises(ValueError, isoImage.addEntry, file1) def testAddEntry_004(self): """ Attempt to add a file, no graft point set. """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage() self.assertEqual({}, isoImage.entries) isoImage.addEntry(file1) self.assertEqual({ file1:None, }, isoImage.entries) def testAddEntry_005(self): """ Attempt to add a file, graft point set on the object level. """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage(graftPoint="whatever") self.assertEqual({}, isoImage.entries) isoImage.addEntry(file1) self.assertEqual({ file1:"whatever", }, isoImage.entries) def testAddEntry_006(self): """ Attempt to add a file, graft point set on the method level. """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage() self.assertEqual({}, isoImage.entries) isoImage.addEntry(file1, graftPoint="stuff") self.assertEqual({ file1:"stuff", }, isoImage.entries) def testAddEntry_007(self): """ Attempt to add a file, graft point set on the object and method levels. """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage(graftPoint="whatever") self.assertEqual({}, isoImage.entries) isoImage.addEntry(file1, graftPoint="stuff") self.assertEqual({ file1:"stuff", }, isoImage.entries) def testAddEntry_008(self): """ Attempt to add a file, graft point set on the object and method levels, where method value is None (which can't be distinguished from the method value being unset). 
""" self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage(graftPoint="whatever") self.assertEqual({}, isoImage.entries) isoImage.addEntry(file1, graftPoint=None) self.assertEqual({ file1:"whatever", }, isoImage.entries) def testAddEntry_009(self): """ Attempt to add a directory, no graft point set. """ self.extractTar("tree9") dir1 = self.buildPath([ "tree9" ]) isoImage = IsoImage() self.assertEqual({}, isoImage.entries) isoImage.addEntry(dir1) self.assertEqual({ dir1:os.path.basename(dir1), }, isoImage.entries) def testAddEntry_010(self): """ Attempt to add a directory, graft point set on the object level. """ self.extractTar("tree9") dir1 = self.buildPath([ "tree9" ]) isoImage = IsoImage(graftPoint="p") self.assertEqual({}, isoImage.entries) isoImage.addEntry(dir1) self.assertEqual({ dir1:os.path.join("p", "tree9") }, isoImage.entries) def testAddEntry_011(self): """ Attempt to add a directory, graft point set on the method level. """ self.extractTar("tree9") dir1 = self.buildPath([ "tree9" ]) isoImage = IsoImage() self.assertEqual({}, isoImage.entries) isoImage.addEntry(dir1, graftPoint="s") self.assertEqual({ dir1:os.path.join("s", "tree9"), }, isoImage.entries) def testAddEntry_012(self): """ Attempt to add a file, no graft point set, contentsOnly=True. """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage() self.assertEqual({}, isoImage.entries) isoImage.addEntry(file1, contentsOnly=True) self.assertEqual({ file1:None, }, isoImage.entries) def testAddEntry_013(self): """ Attempt to add a file, graft point set on the object level, contentsOnly=True. 
""" self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage(graftPoint="whatever") self.assertEqual({}, isoImage.entries) isoImage.addEntry(file1, contentsOnly=True) self.assertEqual({ file1:"whatever", }, isoImage.entries) def testAddEntry_014(self): """ Attempt to add a file, graft point set on the method level, contentsOnly=True. """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage() self.assertEqual({}, isoImage.entries) isoImage.addEntry(file1, graftPoint="stuff", contentsOnly=True) self.assertEqual({ file1:"stuff", }, isoImage.entries) def testAddEntry_015(self): """ Attempt to add a file, graft point set on the object and method levels, contentsOnly=True. """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage(graftPoint="whatever") self.assertEqual({}, isoImage.entries) isoImage.addEntry(file1, graftPoint="stuff", contentsOnly=True) self.assertEqual({ file1:"stuff", }, isoImage.entries) def testAddEntry_016(self): """ Attempt to add a file, graft point set on the object and method levels, where method value is None (which can't be distinguished from the method value being unset), contentsOnly=True. """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage(graftPoint="whatever") self.assertEqual({}, isoImage.entries) isoImage.addEntry(file1, graftPoint=None, contentsOnly=True) self.assertEqual({ file1:"whatever", }, isoImage.entries) def testAddEntry_017(self): """ Attempt to add a directory, no graft point set, contentsOnly=True. """ self.extractTar("tree9") dir1 = self.buildPath([ "tree9" ]) isoImage = IsoImage() self.assertEqual({}, isoImage.entries) isoImage.addEntry(dir1, contentsOnly=True) self.assertEqual({ dir1:None, }, isoImage.entries) def testAddEntry_018(self): """ Attempt to add a directory, graft point set on the object level, contentsOnly=True. 
""" self.extractTar("tree9") dir1 = self.buildPath([ "tree9" ]) isoImage = IsoImage(graftPoint="p") self.assertEqual({}, isoImage.entries) isoImage.addEntry(dir1, contentsOnly=True) self.assertEqual({ dir1:"p" }, isoImage.entries) def testAddEntry_019(self): """ Attempt to add a directory, graft point set on the method level, contentsOnly=True. """ self.extractTar("tree9") dir1 = self.buildPath([ "tree9" ]) isoImage = IsoImage() self.assertEqual({}, isoImage.entries) isoImage.addEntry(dir1, graftPoint="s", contentsOnly=True) self.assertEqual({ dir1:"s", }, isoImage.entries) def testAddEntry_020(self): """ Attempt to add a directory, graft point set on the object and methods levels, contentsOnly=True. """ self.extractTar("tree9") dir1 = self.buildPath([ "tree9" ]) isoImage = IsoImage(graftPoint="p") self.assertEqual({}, isoImage.entries) isoImage.addEntry(dir1, graftPoint="s", contentsOnly=True) self.assertEqual({ dir1:"s", }, isoImage.entries) def testAddEntry_021(self): """ Attempt to add a directory, graft point set on the object and methods levels, contentsOnly=True. """ self.extractTar("tree9") dir1 = self.buildPath([ "tree9" ]) isoImage = IsoImage(graftPoint="p") self.assertEqual({}, isoImage.entries) isoImage.addEntry(dir1, graftPoint="s", contentsOnly=True) self.assertEqual({ dir1:"s", }, isoImage.entries) def testAddEntry_022(self): """ Attempt to add a file that has already been added, override=False. """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage() self.assertEqual({}, isoImage.entries) isoImage.addEntry(file1) self.assertEqual({ file1:None, }, isoImage.entries) self.assertRaises(ValueError, isoImage.addEntry, file1, override=False) self.assertEqual({ file1:None, }, isoImage.entries) def testAddEntry_023(self): """ Attempt to add a file that has already been added, override=True. 
""" self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage() self.assertEqual({}, isoImage.entries) isoImage.addEntry(file1) self.assertEqual({ file1:None, }, isoImage.entries) isoImage.addEntry(file1, override=True) self.assertEqual({ file1:None, }, isoImage.entries) def testAddEntry_024(self): """ Attempt to add a directory that has already been added, override=False, changing the graft point. """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage(graftPoint="whatever") self.assertEqual({}, isoImage.entries) isoImage.addEntry(file1, graftPoint="one") self.assertEqual({ file1:"one", }, isoImage.entries) self.assertRaises(ValueError, isoImage.addEntry, file1, graftPoint="two", override=False) self.assertEqual({ file1:"one", }, isoImage.entries) def testAddEntry_025(self): """ Attempt to add a directory that has already been added, override=True, changing the graft point. """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage(graftPoint="whatever") self.assertEqual({}, isoImage.entries) isoImage.addEntry(file1, graftPoint="one") self.assertEqual({ file1:"one", }, isoImage.entries) isoImage.addEntry(file1, graftPoint="two", override=True) self.assertEqual({ file1:"two", }, isoImage.entries) ########################## # Test getEstimatedSize() ########################## def testGetEstimatedSize_001(self): """ Test with an empty list. """ self.extractTar("tree9") isoImage = IsoImage() self.assertRaises(ValueError, isoImage.getEstimatedSize) def testGetEstimatedSize_002(self): """ Test with non-empty empty list. """ self.extractTar("tree9") dir1 = self.buildPath([ "tree9", ]) isoImage = IsoImage() isoImage.addEntry(dir1, graftPoint="base") result = isoImage.getEstimatedSize() self.assertTrue(result > 0) #################### # Test writeImage() #################### def testWriteImage_001(self): """ Attempt to write an image containing no entries. 
""" isoImage = IsoImage() imagePath = self.buildPath([ "image.iso", ]) self.assertRaises(ValueError, isoImage.writeImage, imagePath) def testWriteImage_002(self): """ Attempt to write an image containing only an empty directory, no graft point. """ self.extractTar("tree9") isoImage = IsoImage() dir1 = self.buildPath([ "tree9", "dir001", "dir002", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(dir1) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.assertEqual(2, len(fsList)) self.assertTrue(mountPath in fsList) self.assertTrue(os.path.join(mountPath, "dir002") in fsList) def testWriteImage_003(self): """ Attempt to write an image containing only an empty directory, with a graft point. """ self.extractTar("tree9") isoImage = IsoImage() dir1 = self.buildPath([ "tree9", "dir001", "dir002", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(dir1, graftPoint="base") isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.assertEqual(3, len(fsList)) self.assertTrue(mountPath in fsList) self.assertTrue(os.path.join(mountPath, "base") in fsList) self.assertTrue(os.path.join(mountPath, "base", "dir002") in fsList) def testWriteImage_004(self): """ Attempt to write an image containing only a non-empty directory, no graft point. 
""" self.extractTar("tree9") isoImage = IsoImage() dir1 = self.buildPath([ "tree9", "dir002" ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(dir1) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.assertEqual(10, len(fsList)) self.assertTrue(mountPath in fsList) self.assertTrue(os.path.join(mountPath, "dir002") in fsList) self.assertTrue(os.path.join(mountPath, "dir002", "file001", ) in fsList) self.assertTrue(os.path.join(mountPath, "dir002", "file002", ) in fsList) self.assertTrue(os.path.join(mountPath, "dir002", "link001", ) in fsList) self.assertTrue(os.path.join(mountPath, "dir002", "link002", ) in fsList) self.assertTrue(os.path.join(mountPath, "dir002", "link003", ) in fsList) self.assertTrue(os.path.join(mountPath, "dir002", "link004", ) in fsList) self.assertTrue(os.path.join(mountPath, "dir002", "dir001", ) in fsList) self.assertTrue(os.path.join(mountPath, "dir002", "dir002", ) in fsList) def testWriteImage_005(self): """ Attempt to write an image containing only a non-empty directory, with a graft point. 
""" self.extractTar("tree9") isoImage = IsoImage() dir1 = self.buildPath([ "tree9", "dir002" ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(dir1, graftPoint=os.path.join("something", "else")) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.assertEqual(12, len(fsList)) self.assertTrue(mountPath in fsList) self.assertTrue(os.path.join(mountPath, "something", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "else", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "else", "dir002") in fsList) self.assertTrue(os.path.join(mountPath, "something", "else", "dir002", "file001", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "else", "dir002", "file002", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "else", "dir002", "link001", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "else", "dir002", "link002", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "else", "dir002", "link003", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "else", "dir002", "link004", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "else", "dir002", "dir001", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "else", "dir002", "dir002", ) in fsList) def testWriteImage_006(self): """ Attempt to write an image containing only a file, no graft point. 
""" self.extractTar("tree9") isoImage = IsoImage() file1 = self.buildPath([ "tree9", "file001" ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.assertEqual(2, len(fsList)) self.assertTrue(mountPath in fsList) self.assertTrue(os.path.join(mountPath, "file001", ) in fsList) def testWriteImage_007(self): """ Attempt to write an image containing only a file, with a graft point. """ self.extractTar("tree9") isoImage = IsoImage() file1 = self.buildPath([ "tree9", "file001" ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1, graftPoint="point") isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.assertEqual(3, len(fsList)) self.assertTrue(mountPath in fsList) self.assertTrue(os.path.join(mountPath, "point", ) in fsList) self.assertTrue(os.path.join(mountPath, "point", "file001", ) in fsList) def testWriteImage_008(self): """ Attempt to write an image containing a file and an empty directory, no graft points. """ self.extractTar("tree9") isoImage = IsoImage() file1 = self.buildPath([ "tree9", "file001" ]) dir1 = self.buildPath([ "tree9", "dir001", "dir002", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1) isoImage.addEntry(dir1) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.assertEqual(3, len(fsList)) self.assertTrue(mountPath in fsList) self.assertTrue(os.path.join(mountPath, "file001", ) in fsList) self.assertTrue(os.path.join(mountPath, "dir002", ) in fsList) def testWriteImage_009(self): """ Attempt to write an image containing a file and an empty directory, with graft points. 
""" self.extractTar("tree9") isoImage = IsoImage(graftPoint="base") file1 = self.buildPath([ "tree9", "file001" ]) dir1 = self.buildPath([ "tree9", "dir001", "dir002", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1, graftPoint="other") isoImage.addEntry(dir1) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.assertEqual(5, len(fsList)) self.assertTrue(mountPath in fsList) self.assertTrue(os.path.join(mountPath, "other", ) in fsList) self.assertTrue(os.path.join(mountPath, "base", ) in fsList) self.assertTrue(os.path.join(mountPath, "other", "file001", ) in fsList) self.assertTrue(os.path.join(mountPath, "base", "dir002", ) in fsList) def testWriteImage_010(self): """ Attempt to write an image containing a file and a non-empty directory, mixed graft points. """ self.extractTar("tree9") isoImage = IsoImage(graftPoint="base") file1 = self.buildPath([ "tree9", "file001" ]) dir1 = self.buildPath([ "tree9", "dir001", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1, graftPoint=None) isoImage.addEntry(dir1) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.assertEqual(11, len(fsList)) self.assertTrue(mountPath in fsList) self.assertTrue(os.path.join(mountPath, "base", ) in fsList) self.assertTrue(os.path.join(mountPath, "base", "file001", ) in fsList) self.assertTrue(os.path.join(mountPath, "base", "dir001", ) in fsList) self.assertTrue(os.path.join(mountPath, "base", "dir001", "file001", ) in fsList) self.assertTrue(os.path.join(mountPath, "base", "dir001", "file002", ) in fsList) self.assertTrue(os.path.join(mountPath, "base", "dir001", "link001", ) in fsList) self.assertTrue(os.path.join(mountPath, "base", "dir001", "link002", ) in fsList) self.assertTrue(os.path.join(mountPath, "base", "dir001", "link003", ) in fsList) 
self.assertTrue(os.path.join(mountPath, "base", "dir001", "dir001", ) in fsList) self.assertTrue(os.path.join(mountPath, "base", "dir001", "dir002", ) in fsList) def testWriteImage_011(self): """ Attempt to write an image containing several files and a non-empty directory, mixed graft points. """ self.extractTar("tree9") isoImage = IsoImage() file1 = self.buildPath([ "tree9", "file001" ]) file2 = self.buildPath([ "tree9", "file002" ]) dir1 = self.buildPath([ "tree9", "dir001", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1) isoImage.addEntry(file2, graftPoint="other") isoImage.addEntry(dir1, graftPoint="base") isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.assertEqual(13, len(fsList)) self.assertTrue(mountPath in fsList) self.assertTrue(os.path.join(mountPath, "base", ) in fsList) self.assertTrue(os.path.join(mountPath, "other", ) in fsList) self.assertTrue(os.path.join(mountPath, "file001", ) in fsList) self.assertTrue(os.path.join(mountPath, "other", "file002", ) in fsList) self.assertTrue(os.path.join(mountPath, "base", "dir001", ) in fsList) self.assertTrue(os.path.join(mountPath, "base", "dir001", "file001", ) in fsList) self.assertTrue(os.path.join(mountPath, "base", "dir001", "file002", ) in fsList) self.assertTrue(os.path.join(mountPath, "base", "dir001", "link001", ) in fsList) self.assertTrue(os.path.join(mountPath, "base", "dir001", "link002", ) in fsList) self.assertTrue(os.path.join(mountPath, "base", "dir001", "link003", ) in fsList) self.assertTrue(os.path.join(mountPath, "base", "dir001", "dir001", ) in fsList) self.assertTrue(os.path.join(mountPath, "base", "dir001", "dir002", ) in fsList) def testWriteImage_012(self): """ Attempt to write an image containing a deeply-nested directory. 
""" self.extractTar("tree9") isoImage = IsoImage() dir1 = self.buildPath([ "tree9", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(dir1, graftPoint="something") isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.assertEqual(24, len(fsList)) self.assertTrue(mountPath in fsList) self.assertTrue(os.path.join(mountPath, "something", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "tree9", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "tree9", "file001", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "tree9", "file002", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "tree9", "link001", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "tree9", "link002", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "tree9", "dir001", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "tree9", "dir001", "file001", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "tree9", "dir001", "file002", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "tree9", "dir001", "link001", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "tree9", "dir001", "link002", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "tree9", "dir001", "link003", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "tree9", "dir001", "dir001", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "tree9", "dir001", "dir002", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "tree9", "dir002", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "tree9", "dir002", "file001", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "tree9", "dir002", "file002", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "tree9", "dir002", "link001", ) in fsList) 
self.assertTrue(os.path.join(mountPath, "something", "tree9", "dir002", "link002", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "tree9", "dir002", "link003", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "tree9", "dir002", "link004", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "tree9", "dir002", "dir001", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "tree9", "dir002", "dir002", ) in fsList) def testWriteImage_013(self): """ Attempt to write an image containing only an empty directory, no graft point, contentsOnly=True. """ self.extractTar("tree9") isoImage = IsoImage() dir1 = self.buildPath([ "tree9", "dir001", "dir002", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(dir1, contentsOnly=True) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.assertEqual(1, len(fsList)) self.assertTrue(mountPath in fsList) def testWriteImage_014(self): """ Attempt to write an image containing only an empty directory, with a graft point, contentsOnly=True. """ self.extractTar("tree9") isoImage = IsoImage() dir1 = self.buildPath([ "tree9", "dir001", "dir002", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(dir1, graftPoint="base", contentsOnly=True) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.assertEqual(2, len(fsList)) self.assertTrue(mountPath in fsList) self.assertTrue(os.path.join(mountPath, "base") in fsList) def testWriteImage_015(self): """ Attempt to write an image containing only a non-empty directory, no graft point, contentsOnly=True. 
""" self.extractTar("tree9") isoImage = IsoImage() dir1 = self.buildPath([ "tree9", "dir002" ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(dir1, contentsOnly=True) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.assertEqual(9, len(fsList)) self.assertTrue(mountPath in fsList) self.assertTrue(os.path.join(mountPath, "file001", ) in fsList) self.assertTrue(os.path.join(mountPath, "file002", ) in fsList) self.assertTrue(os.path.join(mountPath, "link001", ) in fsList) self.assertTrue(os.path.join(mountPath, "link002", ) in fsList) self.assertTrue(os.path.join(mountPath, "link003", ) in fsList) self.assertTrue(os.path.join(mountPath, "link004", ) in fsList) self.assertTrue(os.path.join(mountPath, "dir001", ) in fsList) self.assertTrue(os.path.join(mountPath, "dir002", ) in fsList) def testWriteImage_016(self): """ Attempt to write an image containing only a non-empty directory, with a graft point, contentsOnly=True. 
""" self.extractTar("tree9") isoImage = IsoImage() dir1 = self.buildPath([ "tree9", "dir002" ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(dir1, graftPoint=os.path.join("something", "else"), contentsOnly=True) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.assertEqual(11, len(fsList)) self.assertTrue(mountPath in fsList) self.assertTrue(os.path.join(mountPath, "something", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "else", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "else", "file001", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "else", "file002", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "else", "link001", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "else", "link002", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "else", "link003", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "else", "link004", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "else", "dir001", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "else", "dir002", ) in fsList) def testWriteImage_017(self): """ Attempt to write an image containing only a file, no graft point, contentsOnly=True. """ self.extractTar("tree9") isoImage = IsoImage() file1 = self.buildPath([ "tree9", "file001" ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1, contentsOnly=True) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.assertEqual(2, len(fsList)) self.assertTrue(mountPath in fsList) self.assertTrue(os.path.join(mountPath, "file001", ) in fsList) def testWriteImage_018(self): """ Attempt to write an image containing only a file, with a graft point, contentsOnly=True. 
""" self.extractTar("tree9") isoImage = IsoImage() file1 = self.buildPath([ "tree9", "file001" ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1, graftPoint="point", contentsOnly=True) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.assertEqual(3, len(fsList)) self.assertTrue(mountPath in fsList) self.assertTrue(os.path.join(mountPath, "point", ) in fsList) self.assertTrue(os.path.join(mountPath, "point", "file001", ) in fsList) def testWriteImage_019(self): """ Attempt to write an image containing a file and an empty directory, no graft points, contentsOnly=True. """ self.extractTar("tree9") isoImage = IsoImage() file1 = self.buildPath([ "tree9", "file001" ]) dir1 = self.buildPath([ "tree9", "dir001", "dir002", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1, contentsOnly=True) isoImage.addEntry(dir1, contentsOnly=True) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.assertEqual(2, len(fsList)) self.assertTrue(mountPath in fsList) self.assertTrue(os.path.join(mountPath, "file001", ) in fsList) def testWriteImage_020(self): """ Attempt to write an image containing a file and an empty directory, with graft points, contentsOnly=True. 
""" self.extractTar("tree9") isoImage = IsoImage(graftPoint="base") file1 = self.buildPath([ "tree9", "file001" ]) dir1 = self.buildPath([ "tree9", "dir001", "dir002", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1, graftPoint="other", contentsOnly=True) isoImage.addEntry(dir1, contentsOnly=True) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.assertEqual(4, len(fsList)) self.assertTrue(mountPath in fsList) self.assertTrue(os.path.join(mountPath, "other", ) in fsList) self.assertTrue(os.path.join(mountPath, "base", ) in fsList) self.assertTrue(os.path.join(mountPath, "other", "file001", ) in fsList) def testWriteImage_021(self): """ Attempt to write an image containing a file and a non-empty directory, mixed graft points, contentsOnly=True. """ self.extractTar("tree9") isoImage = IsoImage(graftPoint="base") file1 = self.buildPath([ "tree9", "file001" ]) dir1 = self.buildPath([ "tree9", "dir001", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1, graftPoint=None, contentsOnly=True) isoImage.addEntry(dir1, contentsOnly=True) self.assertRaises(IOError, isoImage.writeImage, imagePath) # ends up with a duplicate name def testWriteImage_022(self): """ Attempt to write an image containing several files and a non-empty directory, mixed graft points, contentsOnly=True. 
""" self.extractTar("tree9") isoImage = IsoImage() file1 = self.buildPath([ "tree9", "file001" ]) file2 = self.buildPath([ "tree9", "file002" ]) dir1 = self.buildPath([ "tree9", "dir001", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1, contentsOnly=True) isoImage.addEntry(file2, graftPoint="other", contentsOnly=True) isoImage.addEntry(dir1, graftPoint="base", contentsOnly=True) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.assertEqual(12, len(fsList)) self.assertTrue(mountPath in fsList) self.assertTrue(os.path.join(mountPath, "base", ) in fsList) self.assertTrue(os.path.join(mountPath, "other", ) in fsList) self.assertTrue(os.path.join(mountPath, "file001", ) in fsList) self.assertTrue(os.path.join(mountPath, "other", "file002", ) in fsList) self.assertTrue(os.path.join(mountPath, "base", "file001", ) in fsList) self.assertTrue(os.path.join(mountPath, "base", "file002", ) in fsList) self.assertTrue(os.path.join(mountPath, "base", "link001", ) in fsList) self.assertTrue(os.path.join(mountPath, "base", "link002", ) in fsList) self.assertTrue(os.path.join(mountPath, "base", "link003", ) in fsList) self.assertTrue(os.path.join(mountPath, "base", "dir001", ) in fsList) self.assertTrue(os.path.join(mountPath, "base", "dir002", ) in fsList) def testWriteImage_023(self): """ Attempt to write an image containing a deeply-nested directory, contentsOnly=True. 
""" self.extractTar("tree9") isoImage = IsoImage() dir1 = self.buildPath([ "tree9", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(dir1, graftPoint="something", contentsOnly=True) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.assertEqual(23, len(fsList)) self.assertTrue(mountPath in fsList) self.assertTrue(os.path.join(mountPath, "something", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "file001", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "file002", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "link001", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "link002", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "dir001", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "dir001", "file001", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "dir001", "file002", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "dir001", "link001", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "dir001", "link002", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "dir001", "link003", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "dir001", "dir001", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "dir001", "dir002", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "dir002", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "dir002", "file001", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "dir002", "file002", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "dir002", "link001", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "dir002", "link002", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "dir002", "link003", ) in fsList) 
self.assertTrue(os.path.join(mountPath, "something", "dir002", "link004", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "dir002", "dir001", ) in fsList) self.assertTrue(os.path.join(mountPath, "something", "dir002", "dir002", ) in fsList) ####################################################################### # Suite definition ####################################################################### def suite(): """Returns a suite containing all the test cases in this module.""" if runAllTests(): tests = [ ] tests.append(unittest.makeSuite(TestFunctions, 'test')) tests.append(unittest.makeSuite(TestIsoImage, 'test')) return unittest.TestSuite(tests) else: tests = [ ] tests.append(unittest.makeSuite(TestFunctions, 'test')) tests.append(unittest.makeSuite(TestIsoImage, 'testConstructor')) tests.append(unittest.makeSuite(TestIsoImage, 'testUtilityMethods')) tests.append(unittest.makeSuite(TestIsoImage, 'testAddEntry')) return unittest.TestSuite(tests) CedarBackup3-3.1.6/testcase/amazons3tests.py0000664000175000017500000010546712657664563022536 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2014-2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. 
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Tests amazons3 extension functionality. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup3/extend/amazons3.py. Code Coverage ============= This module contains individual tests for the the public classes implemented in extend/amazons3.py. There are also tests for some of the private functions. Naming Conventions ================== I prefer to avoid large unit tests which validate more than one piece of functionality, and I prefer to avoid using overly descriptive (read: long) test names, as well. Instead, I use lots of very small tests that each validate one specific thing. These small tests are then named with an index number, yielding something like C{testAddDir_001} or C{testValidate_010}. Each method has a docstring describing what it's supposed to accomplish. I feel that this makes it easier to judge how important a given failure is, and also makes it somewhat easier to diagnose and fix individual problems. Testing XML Extraction ====================== It's difficult to validated that generated XML is exactly "right", especially when dealing with pretty-printed XML. We can't just provide a constant string and say "the result must match this". Instead, what we do is extract a node, build some XML from it, and then feed that XML back into another object's constructor. If that parse process succeeds and the old object is equal to the new object, we assume that the extract was successful. 
It would arguably be better if we could do a completely independent check - but implementing that check would be equivalent to re-implementing all of the existing functionality that we're validating here! After all, the most important thing is that data can move seamlessly from object to XML document and back to object. @author Kenneth J. Pronovici """ ######################################################################## # Import modules and do runtime validations ######################################################################## # System modules import unittest import tempfile # Cedar Backup modules from CedarBackup3.util import UNIT_BYTES, UNIT_MBYTES, UNIT_GBYTES from CedarBackup3.config import ByteQuantity from CedarBackup3.testutil import findResources, buildPath, removedir, extractTar, failUnlessAssignRaises from CedarBackup3.xmlutil import createOutputDom, serializeDom from CedarBackup3.extend.amazons3 import LocalConfig, AmazonS3Config from CedarBackup3.tools.amazons3 import _buildSourceFiles, _checkSourceFiles ####################################################################### # Module-wide configuration and constants ####################################################################### DATA_DIRS = [ "./data", "./testcase/data", ] RESOURCES = [ "amazons3.conf.1", "amazons3.conf.2", "amazons3.conf.3", "tree1.tar.gz", "tree2.tar.gz", "tree4.tar.gz", "tree8.tar.gz", "tree13.tar.gz", "tree15.tar.gz", "tree16.tar.gz", "tree17.tar.gz", "tree18.tar.gz", "tree19.tar.gz", "tree20.tar.gz", ] ####################################################################### # Test Case Classes ####################################################################### ########################## # TestAmazonS3Config class ########################## class TestAmazonS3Config(unittest.TestCase): """Tests for the AmazonS3Config class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of 
L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = AmazonS3Config() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ amazons3 = AmazonS3Config() self.assertEqual(False, amazons3.warnMidnite) self.assertEqual(None, amazons3.s3Bucket) self.assertEqual(None, amazons3.encryptCommand) self.assertEqual(None, amazons3.fullBackupSizeLimit) self.assertEqual(None, amazons3.incrementalBackupSizeLimit) def testConstructor_002a(self): """ Test constructor with all values filled in, with valid values (integers). """ amazons3 = AmazonS3Config(True, "bucket", "encrypt", 1, 2) self.assertEqual(True, amazons3.warnMidnite) self.assertEqual("bucket", amazons3.s3Bucket) self.assertEqual("encrypt", amazons3.encryptCommand) self.assertEqual(1, amazons3.fullBackupSizeLimit) self.assertEqual(2, amazons3.incrementalBackupSizeLimit) self.assertEqual(ByteQuantity(1, UNIT_BYTES), amazons3.fullBackupSizeLimit) self.assertEqual(ByteQuantity(2, UNIT_BYTES), amazons3.incrementalBackupSizeLimit) def testConstructor_002b(self): """ Test constructor with all values filled in, with valid values (byte quantities). 
""" amazons3 = AmazonS3Config(True, "bucket", "encrypt", ByteQuantity(1, UNIT_BYTES), ByteQuantity(2, UNIT_BYTES)) self.assertEqual(True, amazons3.warnMidnite) self.assertEqual("bucket", amazons3.s3Bucket) self.assertEqual("encrypt", amazons3.encryptCommand) self.assertEqual(ByteQuantity(1, UNIT_BYTES), amazons3.fullBackupSizeLimit) self.assertEqual(ByteQuantity(2, UNIT_BYTES), amazons3.incrementalBackupSizeLimit) def testConstructor_003(self): """ Test assignment of warnMidnite attribute, valid value (real boolean). """ amazons3 = AmazonS3Config() self.assertEqual(False, amazons3.warnMidnite) amazons3.warnMidnite = True self.assertEqual(True, amazons3.warnMidnite) amazons3.warnMidnite = False self.assertEqual(False, amazons3.warnMidnite) #pylint: disable=R0204 def testConstructor_004(self): """ Test assignment of warnMidnite attribute, valid value (expression). """ amazons3 = AmazonS3Config() self.assertEqual(False, amazons3.warnMidnite) amazons3.warnMidnite = 0 self.assertEqual(False, amazons3.warnMidnite) amazons3.warnMidnite = [] self.assertEqual(False, amazons3.warnMidnite) amazons3.warnMidnite = None self.assertEqual(False, amazons3.warnMidnite) amazons3.warnMidnite = ['a'] self.assertEqual(True, amazons3.warnMidnite) amazons3.warnMidnite = 3 self.assertEqual(True, amazons3.warnMidnite) def testConstructor_005(self): """ Test assignment of s3Bucket attribute, None value. """ amazons3 = AmazonS3Config(s3Bucket="bucket") self.assertEqual("bucket", amazons3.s3Bucket) amazons3.s3Bucket = None self.assertEqual(None, amazons3.s3Bucket) def testConstructor_006(self): """ Test assignment of s3Bucket attribute, valid value. """ amazons3 = AmazonS3Config() self.assertEqual(None, amazons3.s3Bucket) amazons3.s3Bucket = "bucket" self.assertEqual("bucket", amazons3.s3Bucket) def testConstructor_007(self): """ Test assignment of s3Bucket attribute, invalid value (empty). 
""" amazons3 = AmazonS3Config() self.assertEqual(None, amazons3.s3Bucket) self.failUnlessAssignRaises(ValueError, amazons3, "s3Bucket", "") self.assertEqual(None, amazons3.s3Bucket) def testConstructor_008(self): """ Test assignment of encryptCommand attribute, None value. """ amazons3 = AmazonS3Config(encryptCommand="encrypt") self.assertEqual("encrypt", amazons3.encryptCommand) amazons3.encryptCommand = None self.assertEqual(None, amazons3.encryptCommand) def testConstructor_009(self): """ Test assignment of encryptCommand attribute, valid value. """ amazons3 = AmazonS3Config() self.assertEqual(None, amazons3.encryptCommand) amazons3.encryptCommand = "encrypt" self.assertEqual("encrypt", amazons3.encryptCommand) def testConstructor_010(self): """ Test assignment of encryptCommand attribute, invalid value (empty). """ amazons3 = AmazonS3Config() self.assertEqual(None, amazons3.encryptCommand) self.failUnlessAssignRaises(ValueError, amazons3, "encryptCommand", "") self.assertEqual(None, amazons3.encryptCommand) def testConstructor_011(self): """ Test assignment of fullBackupSizeLimit attribute, None value. """ amazons3 = AmazonS3Config(fullBackupSizeLimit=100) self.assertEqual(100, amazons3.fullBackupSizeLimit) amazons3.fullBackupSizeLimit = None self.assertEqual(None, amazons3.fullBackupSizeLimit) def testConstructor_012a(self): """ Test assignment of fullBackupSizeLimit attribute, valid int value. """ amazons3 = AmazonS3Config() self.assertEqual(None, amazons3.fullBackupSizeLimit) amazons3.fullBackupSizeLimit = 15 self.assertEqual(15, amazons3.fullBackupSizeLimit) self.assertEqual(ByteQuantity(15, UNIT_BYTES), amazons3.fullBackupSizeLimit) def testConstructor_012b(self): """ Test assignment of fullBackupSizeLimit attribute, valid long value. 
""" amazons3 = AmazonS3Config() self.assertEqual(None, amazons3.fullBackupSizeLimit) amazons3.fullBackupSizeLimit = 7516192768 self.assertEqual(7516192768, amazons3.fullBackupSizeLimit) self.assertEqual(ByteQuantity(7516192768, UNIT_BYTES), amazons3.fullBackupSizeLimit) def testConstructor_012c(self): """ Test assignment of fullBackupSizeLimit attribute, valid float value. """ amazons3 = AmazonS3Config() self.assertEqual(None, amazons3.fullBackupSizeLimit) amazons3.fullBackupSizeLimit = 7516192768.0 self.assertEqual(7516192768.0, amazons3.fullBackupSizeLimit) self.assertEqual(ByteQuantity(7516192768.0, UNIT_BYTES), amazons3.fullBackupSizeLimit) def testConstructor_012d(self): """ Test assignment of fullBackupSizeLimit attribute, valid string value. """ amazons3 = AmazonS3Config() self.assertEqual(None, amazons3.fullBackupSizeLimit) amazons3.fullBackupSizeLimit = "7516192768" self.assertEqual(7516192768, amazons3.fullBackupSizeLimit) self.assertEqual(ByteQuantity("7516192768", UNIT_BYTES), amazons3.fullBackupSizeLimit) def testConstructor_012e(self): """ Test assignment of fullBackupSizeLimit attribute, valid byte quantity value. """ amazons3 = AmazonS3Config() self.assertEqual(None, amazons3.fullBackupSizeLimit) amazons3.fullBackupSizeLimit = ByteQuantity(2.5, UNIT_GBYTES) self.assertEqual(ByteQuantity(2.5, UNIT_GBYTES), amazons3.fullBackupSizeLimit) self.assertEqual(2684354560.0, amazons3.fullBackupSizeLimit.bytes) def testConstructor_012f(self): """ Test assignment of fullBackupSizeLimit attribute, valid byte quantity value. """ amazons3 = AmazonS3Config() self.assertEqual(None, amazons3.fullBackupSizeLimit) amazons3.fullBackupSizeLimit = ByteQuantity(600, UNIT_MBYTES) self.assertEqual(ByteQuantity(600, UNIT_MBYTES), amazons3.fullBackupSizeLimit) self.assertEqual(629145600.0, amazons3.fullBackupSizeLimit.bytes) def testConstructor_013(self): """ Test assignment of fullBackupSizeLimit attribute, invalid value. 
""" amazons3 = AmazonS3Config() self.assertEqual(None, amazons3.fullBackupSizeLimit) self.failUnlessAssignRaises(ValueError, amazons3, "fullBackupSizeLimit", "xxx") self.assertEqual(None, amazons3.fullBackupSizeLimit) def testConstructor_014(self): """ Test assignment of incrementalBackupSizeLimit attribute, None value. """ amazons3 = AmazonS3Config(incrementalBackupSizeLimit=100) self.assertEqual(100, amazons3.incrementalBackupSizeLimit) amazons3.incrementalBackupSizeLimit = None self.assertEqual(None, amazons3.incrementalBackupSizeLimit) def testConstructor_015a(self): """ Test assignment of incrementalBackupSizeLimit attribute, valid int value. """ amazons3 = AmazonS3Config() self.assertEqual(None, amazons3.incrementalBackupSizeLimit) amazons3.incrementalBackupSizeLimit = 15 self.assertEqual(15, amazons3.incrementalBackupSizeLimit) self.assertEqual(ByteQuantity(15, UNIT_BYTES), amazons3.incrementalBackupSizeLimit) def testConstructor_015b(self): """ Test assignment of incrementalBackupSizeLimit attribute, valid long value. """ amazons3 = AmazonS3Config() self.assertEqual(None, amazons3.incrementalBackupSizeLimit) amazons3.incrementalBackupSizeLimit = 7516192768 self.assertEqual(7516192768, amazons3.incrementalBackupSizeLimit) self.assertEqual(ByteQuantity(7516192768, UNIT_BYTES), amazons3.incrementalBackupSizeLimit) def testConstructor_015c(self): """ Test assignment of incrementalBackupSizeLimit attribute, valid float value. """ amazons3 = AmazonS3Config() self.assertEqual(None, amazons3.incrementalBackupSizeLimit) amazons3.incrementalBackupSizeLimit = 7516192768.0 self.assertEqual(7516192768.0, amazons3.incrementalBackupSizeLimit) self.assertEqual(ByteQuantity(7516192768.0, UNIT_BYTES), amazons3.incrementalBackupSizeLimit) def testConstructor_015d(self): """ Test assignment of incrementalBackupSizeLimit attribute, valid string value. 
""" amazons3 = AmazonS3Config() self.assertEqual(None, amazons3.incrementalBackupSizeLimit) amazons3.incrementalBackupSizeLimit = "7516192768" self.assertEqual(7516192768, amazons3.incrementalBackupSizeLimit) self.assertEqual(ByteQuantity("7516192768", UNIT_BYTES), amazons3.incrementalBackupSizeLimit) def testConstructor_015e(self): """ Test assignment of incrementalBackupSizeLimit attribute, valid byte quantity value. """ amazons3 = AmazonS3Config() self.assertEqual(None, amazons3.incrementalBackupSizeLimit) amazons3.incrementalBackupSizeLimit = ByteQuantity(2.5, UNIT_GBYTES) self.assertEqual(ByteQuantity(2.5, UNIT_GBYTES), amazons3.incrementalBackupSizeLimit) self.assertEqual(2684354560.0, amazons3.incrementalBackupSizeLimit.bytes) def testConstructor_015f(self): """ Test assignment of incrementalBackupSizeLimit attribute, valid byte quantity value. """ amazons3 = AmazonS3Config() self.assertEqual(None, amazons3.incrementalBackupSizeLimit) amazons3.incrementalBackupSizeLimit = ByteQuantity(600, UNIT_MBYTES) self.assertEqual(ByteQuantity(600, UNIT_MBYTES), amazons3.incrementalBackupSizeLimit) self.assertEqual(629145600.0, amazons3.incrementalBackupSizeLimit.bytes) def testConstructor_016(self): """ Test assignment of incrementalBackupSizeLimit attribute, invalid value. """ amazons3 = AmazonS3Config() self.assertEqual(None, amazons3.incrementalBackupSizeLimit) self.failUnlessAssignRaises(ValueError, amazons3, "incrementalBackupSizeLimit", "xxx") self.assertEqual(None, amazons3.incrementalBackupSizeLimit) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. 
""" amazons31 = AmazonS3Config() amazons32 = AmazonS3Config() self.assertEqual(amazons31, amazons32) self.assertTrue(amazons31 == amazons32) self.assertTrue(not amazons31 < amazons32) self.assertTrue(amazons31 <= amazons32) self.assertTrue(not amazons31 > amazons32) self.assertTrue(amazons31 >= amazons32) self.assertTrue(not amazons31 != amazons32) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ amazons31 = AmazonS3Config(True, "bucket", "encrypt", 1, 2) amazons32 = AmazonS3Config(True, "bucket", "encrypt", 1, 2) self.assertEqual(amazons31, amazons32) self.assertTrue(amazons31 == amazons32) self.assertTrue(not amazons31 < amazons32) self.assertTrue(amazons31 <= amazons32) self.assertTrue(not amazons31 > amazons32) self.assertTrue(amazons31 >= amazons32) self.assertTrue(not amazons31 != amazons32) def testComparison_003(self): """ Test comparison of two differing objects, warnMidnite differs. """ amazons31 = AmazonS3Config(warnMidnite=False) amazons32 = AmazonS3Config(warnMidnite=True) self.assertNotEqual(amazons31, amazons32) self.assertTrue(not amazons31 == amazons32) self.assertTrue(amazons31 < amazons32) self.assertTrue(amazons31 <= amazons32) self.assertTrue(not amazons31 > amazons32) self.assertTrue(not amazons31 >= amazons32) self.assertTrue(amazons31 != amazons32) def testComparison_004(self): """ Test comparison of two differing objects, s3Bucket differs (one None). """ amazons31 = AmazonS3Config() amazons32 = AmazonS3Config(s3Bucket="bucket") self.assertNotEqual(amazons31, amazons32) self.assertTrue(not amazons31 == amazons32) self.assertTrue(amazons31 < amazons32) self.assertTrue(amazons31 <= amazons32) self.assertTrue(not amazons31 > amazons32) self.assertTrue(not amazons31 >= amazons32) self.assertTrue(amazons31 != amazons32) def testComparison_005(self): """ Test comparison of two differing objects, s3Bucket differs. 
""" amazons31 = AmazonS3Config(s3Bucket="bucket1") amazons32 = AmazonS3Config(s3Bucket="bucket2") self.assertNotEqual(amazons31, amazons32) self.assertTrue(not amazons31 == amazons32) self.assertTrue(amazons31 < amazons32) self.assertTrue(amazons31 <= amazons32) self.assertTrue(not amazons31 > amazons32) self.assertTrue(not amazons31 >= amazons32) self.assertTrue(amazons31 != amazons32) def testComparison_006(self): """ Test comparison of two differing objects, encryptCommand differs (one None). """ amazons31 = AmazonS3Config() amazons32 = AmazonS3Config(encryptCommand="encrypt") self.assertNotEqual(amazons31, amazons32) self.assertTrue(not amazons31 == amazons32) self.assertTrue(amazons31 < amazons32) self.assertTrue(amazons31 <= amazons32) self.assertTrue(not amazons31 > amazons32) self.assertTrue(not amazons31 >= amazons32) self.assertTrue(amazons31 != amazons32) def testComparison_007(self): """ Test comparison of two differing objects, encryptCommand differs. """ amazons31 = AmazonS3Config(encryptCommand="encrypt1") amazons32 = AmazonS3Config(encryptCommand="encrypt2") self.assertNotEqual(amazons31, amazons32) self.assertTrue(not amazons31 == amazons32) self.assertTrue(amazons31 < amazons32) self.assertTrue(amazons31 <= amazons32) self.assertTrue(not amazons31 > amazons32) self.assertTrue(not amazons31 >= amazons32) self.assertTrue(amazons31 != amazons32) def testComparison_008(self): """ Test comparison of two differing objects, fullBackupSizeLimit differs (one None). """ amazons31 = AmazonS3Config() amazons32 = AmazonS3Config(fullBackupSizeLimit=1) self.assertNotEqual(amazons31, amazons32) self.assertTrue(not amazons31 == amazons32) self.assertTrue(amazons31 < amazons32) self.assertTrue(amazons31 <= amazons32) self.assertTrue(not amazons31 > amazons32) self.assertTrue(not amazons31 >= amazons32) self.assertTrue(amazons31 != amazons32) def testComparison_009(self): """ Test comparison of two differing objects, fullBackupSizeLimit differs. 
""" amazons31 = AmazonS3Config(fullBackupSizeLimit=1) amazons32 = AmazonS3Config(fullBackupSizeLimit=2) self.assertNotEqual(amazons31, amazons32) self.assertTrue(not amazons31 == amazons32) self.assertTrue(amazons31 < amazons32) self.assertTrue(amazons31 <= amazons32) self.assertTrue(not amazons31 > amazons32) self.assertTrue(not amazons31 >= amazons32) self.assertTrue(amazons31 != amazons32) def testComparison_010(self): """ Test comparison of two differing objects, incrementalBackupSizeLimit differs (one None). """ amazons31 = AmazonS3Config() amazons32 = AmazonS3Config(incrementalBackupSizeLimit=1) self.assertNotEqual(amazons31, amazons32) self.assertTrue(not amazons31 == amazons32) self.assertTrue(amazons31 < amazons32) self.assertTrue(amazons31 <= amazons32) self.assertTrue(not amazons31 > amazons32) self.assertTrue(not amazons31 >= amazons32) self.assertTrue(amazons31 != amazons32) def testComparison_011(self): """ Test comparison of two differing objects, incrementalBackupSizeLimit differs. 
""" amazons31 = AmazonS3Config(incrementalBackupSizeLimit=1) amazons32 = AmazonS3Config(incrementalBackupSizeLimit=2) self.assertNotEqual(amazons31, amazons32) self.assertTrue(not amazons31 == amazons32) self.assertTrue(amazons31 < amazons32) self.assertTrue(amazons31 <= amazons32) self.assertTrue(not amazons31 > amazons32) self.assertTrue(not amazons31 >= amazons32) self.assertTrue(amazons31 != amazons32) ######################## # TestLocalConfig class ######################## class TestLocalConfig(unittest.TestCase): """Tests for the LocalConfig class.""" ################ # Setup methods ################ def setUp(self): try: self.resources = findResources(RESOURCES, DATA_DIRS) except Exception as e: self.fail(e) def tearDown(self): pass ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) def validateAddConfig(self, origConfig): """ Validates that document dumped from C{LocalConfig.addConfig} results in identical object. We dump a document containing just the amazons3 configuration, and then make sure that if we push that document back into the C{LocalConfig} object, that the resulting object matches the original. The C{self.failUnlessEqual} method is used for the validation, so if the method call returns normally, everything is OK. @param origConfig: Original configuration. """ (xmlDom, parentNode) = createOutputDom() origConfig.addConfig(xmlDom, parentNode) xmlData = serializeDom(xmlDom) newConfig = LocalConfig(xmlData=xmlData, validate=False) self.assertEqual(origConfig, newConfig) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). 
""" obj = LocalConfig() obj.__repr__() obj.__str__() ##################################################### # Test basic constructor and attribute functionality ##################################################### def testConstructor_001(self): """ Test empty constructor, validate=False. """ config = LocalConfig(validate=False) self.assertEqual(None, config.amazons3) def testConstructor_002(self): """ Test empty constructor, validate=True. """ config = LocalConfig(validate=True) self.assertEqual(None, config.amazons3) def testConstructor_003(self): """ Test with empty config document as both data and file, validate=False. """ path = self.resources["amazons3.conf.1"] with open(path) as f: contents = f.read() self.assertRaises(ValueError, LocalConfig, xmlData=contents, xmlPath=path, validate=False) def testConstructor_004(self): """ Test assignment of amazons3 attribute, None value. """ config = LocalConfig() config.amazons3 = None self.assertEqual(None, config.amazons3) def testConstructor_005(self): """ Test assignment of amazons3 attribute, valid value. """ config = LocalConfig() config.amazons3 = AmazonS3Config() self.assertEqual(AmazonS3Config(), config.amazons3) def testConstructor_006(self): """ Test assignment of amazons3 attribute, invalid value (not AmazonS3Config). """ config = LocalConfig() self.failUnlessAssignRaises(ValueError, config, "amazons3", "STRING!") ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ config1 = LocalConfig() config2 = LocalConfig() self.assertEqual(config1, config2) self.assertTrue(config1 == config2) self.assertTrue(not config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(config1 >= config2) self.assertTrue(not config1 != config2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. 
""" config1 = LocalConfig() config1.amazons3 = AmazonS3Config() config2 = LocalConfig() config2.amazons3 = AmazonS3Config() self.assertEqual(config1, config2) self.assertTrue(config1 == config2) self.assertTrue(not config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(config1 >= config2) self.assertTrue(not config1 != config2) def testComparison_003(self): """ Test comparison of two differing objects, amazons3 differs (one None). """ config1 = LocalConfig() config2 = LocalConfig() config2.amazons3 = AmazonS3Config() self.assertNotEqual(config1, config2) self.assertTrue(not config1 == config2) self.assertTrue(config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(not config1 >= config2) self.assertTrue(config1 != config2) def testComparison_004(self): """ Test comparison of two differing objects, s3Bucket differs. """ config1 = LocalConfig() config1.amazons3 = AmazonS3Config(True, "bucket1", "encrypt", 1, 2) config2 = LocalConfig() config2.amazons3 = AmazonS3Config(True, "bucket2", "encrypt", 1, 2) self.assertNotEqual(config1, config2) self.assertTrue(not config1 == config2) self.assertTrue(config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(not config1 >= config2) self.assertTrue(config1 != config2) ###################### # Test validate logic ###################### def testValidate_001(self): """ Test validate on a None amazons3 section. """ config = LocalConfig() config.amazons3 = None self.assertRaises(ValueError, config.validate) def testValidate_002(self): """ Test validate on an empty amazons3 section. """ config = LocalConfig() config.amazons3 = AmazonS3Config() self.assertRaises(ValueError, config.validate) def testValidate_003(self): """ Test validate on a non-empty amazons3 section with no values filled in. 
""" config = LocalConfig() config.amazons3 = AmazonS3Config(None) self.assertRaises(ValueError, config.validate) def testValidate_005(self): """ Test validate on a non-empty amazons3 section with valid values filled in. """ config = LocalConfig() config.amazons3 = AmazonS3Config(True, "bucket") config.validate() ############################ # Test parsing of documents ############################ def testParse_001(self): """ Parse empty config document. """ path = self.resources["amazons3.conf.1"] with open(path) as f: contents = f.read() self.assertRaises(ValueError, LocalConfig, xmlPath=path, validate=True) self.assertRaises(ValueError, LocalConfig, xmlData=contents, validate=True) config = LocalConfig(xmlPath=path, validate=False) self.assertEqual(None, config.amazons3) config = LocalConfig(xmlData=contents, validate=False) self.assertEqual(None, config.amazons3) def testParse_002(self): """ Parse config document with filled-in values. """ path = self.resources["amazons3.conf.2"] with open(path) as f: contents = f.read() config = LocalConfig(xmlPath=path, validate=False) self.assertNotEqual(None, config.amazons3) self.assertEqual(True, config.amazons3.warnMidnite) self.assertEqual("mybucket", config.amazons3.s3Bucket) self.assertEqual("encrypt", config.amazons3.encryptCommand) self.assertEqual(5368709120, config.amazons3.fullBackupSizeLimit) self.assertEqual(2147483648, config.amazons3.incrementalBackupSizeLimit) config = LocalConfig(xmlData=contents, validate=False) self.assertNotEqual(None, config.amazons3) self.assertEqual(True, config.amazons3.warnMidnite) self.assertEqual("mybucket", config.amazons3.s3Bucket) self.assertEqual("encrypt", config.amazons3.encryptCommand) self.assertEqual(5368709120, config.amazons3.fullBackupSizeLimit) self.assertEqual(2147483648, config.amazons3.incrementalBackupSizeLimit) def testParse_003(self): """ Parse config document with filled-in values. 
""" path = self.resources["amazons3.conf.3"] with open(path) as f: contents = f.read() config = LocalConfig(xmlPath=path, validate=False) self.assertNotEqual(None, config.amazons3) self.assertEqual(True, config.amazons3.warnMidnite) self.assertEqual("mybucket", config.amazons3.s3Bucket) self.assertEqual("encrypt", config.amazons3.encryptCommand) self.assertEqual(ByteQuantity(2.5, UNIT_GBYTES), config.amazons3.fullBackupSizeLimit) self.assertEqual(ByteQuantity(600, UNIT_MBYTES), config.amazons3.incrementalBackupSizeLimit) config = LocalConfig(xmlData=contents, validate=False) self.assertNotEqual(None, config.amazons3) self.assertEqual(True, config.amazons3.warnMidnite) self.assertEqual("mybucket", config.amazons3.s3Bucket) self.assertEqual("encrypt", config.amazons3.encryptCommand) self.assertEqual(ByteQuantity(2.5, UNIT_GBYTES), config.amazons3.fullBackupSizeLimit) self.assertEqual(ByteQuantity(600, UNIT_MBYTES), config.amazons3.incrementalBackupSizeLimit) ################### # Test addConfig() ################### def testAddConfig_001(self): """ Test with empty config document. """ amazons3 = AmazonS3Config() config = LocalConfig() config.amazons3 = amazons3 self.validateAddConfig(config) def testAddConfig_002(self): """ Test with values set. 
""" amazons3 = AmazonS3Config(True, "bucket", "encrypt", 1, 2) config = LocalConfig() config.amazons3 = amazons3 self.validateAddConfig(config) ################# # TestTool class ################# class TestTool(unittest.TestCase): ################ # Setup methods ################ def setUp(self): try: self.tmpdir = tempfile.mkdtemp() self.resources = findResources(RESOURCES, DATA_DIRS) except Exception as e: self.fail(e) def tearDown(self): try: removedir(self.tmpdir) except: pass ################## # Utility methods ################## def extractTar(self, tarname): """Extracts a tarfile with a particular name.""" extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname]) def buildPath(self, components): """Builds a complete search path from a list of components.""" components.insert(0, self.tmpdir) return buildPath(components) ########################### # Test _checkSourceFiles() ########################### def testCheckSourceFiles_001(self): """ Test _checkSourceFiles() where some files have an invalid encoding. """ self.extractTar("tree13") sourceDir = self.buildPath(["tree13", ]) sourceFiles = _buildSourceFiles(sourceDir) self.assertRaises(ValueError, _checkSourceFiles, sourceDir=sourceDir, sourceFiles=sourceFiles) def testFileEncoding_002(self): """ Test _checkSourceFiles() where all files have a valid encoding. 
""" self.extractTar("tree4") sourceDir = self.buildPath(["tree4", "dir006", ]) sourceFiles = _buildSourceFiles(sourceDir) _checkSourceFiles(sourceDir=sourceDir, sourceFiles=sourceFiles) ####################################################################### # Suite definition ####################################################################### def suite(): """Returns a suite containing all the test cases in this module.""" tests = [ ] tests.append(unittest.makeSuite(TestAmazonS3Config, 'test')) tests.append(unittest.makeSuite(TestLocalConfig, 'test')) tests.append(unittest.makeSuite(TestTool, 'test')) return unittest.TestSuite(tests) CedarBackup3-3.1.6/testcase/customizetests.py0000664000175000017500000001722012560007330022762 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2010,2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Tests customization functionality. 
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup3/customize.py. @author Kenneth J. Pronovici """ ######################################################################## # Import modules and do runtime validations ######################################################################## import unittest from CedarBackup3.customize import PLATFORM, customizeOverrides from CedarBackup3.config import Config, OptionsConfig, CommandOverride ####################################################################### # Test Case Classes ####################################################################### ###################### # TestFunctions class ###################### class TestFunctions(unittest.TestCase): """Tests for the various public functions.""" ############################ # Test customizeOverrides() ############################ def testCustomizeOverrides_001(self): """ Test platform=standard, no existing overrides. """ config = Config() options = OptionsConfig() if PLATFORM == "standard": config.options = options customizeOverrides(config) self.assertEqual(None, options.overrides) config.options = options customizeOverrides(config, platform="standard") self.assertEqual(None, options.overrides) def testCustomizeOverrides_002(self): """ Test platform=standard, existing override for cdrecord. 
""" config = Config() options = OptionsConfig() options.overrides = [ CommandOverride("cdrecord", "/blech"), ] if PLATFORM == "standard": config.options = options customizeOverrides(config) self.assertEqual([ CommandOverride("cdrecord", "/blech"), ], options.overrides) config.options = options customizeOverrides(config, platform="standard") self.assertEqual([ CommandOverride("cdrecord", "/blech"), ], options.overrides) def testCustomizeOverrides_003(self): """ Test platform=standard, existing override for mkisofs. """ config = Config() options = OptionsConfig() options.overrides = [ CommandOverride("mkisofs", "/blech"), ] if PLATFORM == "standard": config.options = options customizeOverrides(config) self.assertEqual([ CommandOverride("mkisofs", "/blech"), ], options.overrides) config.options = options customizeOverrides(config, platform="standard") self.assertEqual([ CommandOverride("mkisofs", "/blech"), ], options.overrides) def testCustomizeOverrides_004(self): """ Test platform=standard, existing override for cdrecord and mkisofs. """ config = Config() options = OptionsConfig() options.overrides = [ CommandOverride("cdrecord", "/blech"), CommandOverride("mkisofs", "/blech2"), ] if PLATFORM == "standard": config.options = options customizeOverrides(config) self.assertEqual([ CommandOverride("cdrecord", "/blech"), CommandOverride("mkisofs", "/blech2"), ], options.overrides) config.options = options customizeOverrides(config, platform="standard") self.assertEqual([ CommandOverride("cdrecord", "/blech"), CommandOverride("mkisofs", "/blech2"), ], options.overrides) def testCustomizeOverrides_005(self): """ Test platform=debian, no existing overrides. 
""" config = Config() options = OptionsConfig() if PLATFORM == "debian": config.options = options customizeOverrides(config) self.assertEqual([ CommandOverride("cdrecord", "/usr/bin/wodim"), CommandOverride("mkisofs", "/usr/bin/genisoimage"), ], options.overrides) config.options = options customizeOverrides(config, platform="debian") self.assertEqual([ CommandOverride("cdrecord", "/usr/bin/wodim"), CommandOverride("mkisofs", "/usr/bin/genisoimage"), ], options.overrides) def testCustomizeOverrides_006(self): """ Test platform=debian, existing override for cdrecord. """ config = Config() options = OptionsConfig() options.overrides = [ CommandOverride("cdrecord", "/blech"), ] if PLATFORM == "debian": config.options = options customizeOverrides(config) self.assertEqual([ CommandOverride("cdrecord", "/blech"), CommandOverride("mkisofs", "/usr/bin/genisoimage"), ], options.overrides) config.options = options customizeOverrides(config, platform="debian") self.assertEqual([ CommandOverride("cdrecord", "/blech"), CommandOverride("mkisofs", "/usr/bin/genisoimage"), ], options.overrides) def testCustomizeOverrides_007(self): """ Test platform=debian, existing override for mkisofs. """ config = Config() options = OptionsConfig() options.overrides = [ CommandOverride("mkisofs", "/blech"), ] if PLATFORM == "debian": config.options = options customizeOverrides(config) self.assertEqual([ CommandOverride("cdrecord", "/usr/bin/wodim"), CommandOverride("mkisofs", "/blech"), ], options.overrides) config.options = options customizeOverrides(config, platform="debian") self.assertEqual([ CommandOverride("cdrecord", "/usr/bin/wodim"), CommandOverride("mkisofs", "/blech"), ], options.overrides) def testCustomizeOverrides_008(self): """ Test platform=debian, existing override for cdrecord and mkisofs. 
""" config = Config() options = OptionsConfig() options.overrides = [ CommandOverride("cdrecord", "/blech"), CommandOverride("mkisofs", "/blech2"), ] if PLATFORM == "debian": config.options = options customizeOverrides(config) self.assertEqual([ CommandOverride("cdrecord", "/blech"), CommandOverride("mkisofs", "/blech2"), ], options.overrides) config.options = options customizeOverrides(config, platform="debian") self.assertEqual([ CommandOverride("cdrecord", "/blech"), CommandOverride("mkisofs", "/blech2"), ], options.overrides) ####################################################################### # Suite definition ####################################################################### def suite(): """Returns a suite containing all the test cases in this module.""" tests = [ ] tests.append(unittest.makeSuite(TestFunctions, 'test')) return unittest.TestSuite(tests) CedarBackup3-3.1.6/testcase/filesystemtests.py0000664000175000017500000471125312642033003023134 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2008,2010,2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. 
Pronovici # Language : Python 3 (>= 3.4) # Project : Cedar Backup, release 3 # Purpose : Tests filesystem-related classes. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup3/filesystem.py. Test Notes ========== This module contains individual tests for each of the classes implemented in filesystem.py: FilesystemList, BackupFileList and PurgeItemList. The BackupFileList and PurgeItemList classes inherit from FilesystemList, and the FilesystemList class itself inherits from the standard Python list class. For the most part, I won't spend time testing inherited functionality, especially if it's already been tested. However, I do test some of the base list functionality just to ensure that the inheritence has been constructed properly and everything seems to work as expected. You may look at this code and ask, "Why all of the checks that XXX is in list YYY? Why not just compare what we got to a known list?" The answer is that the order of the list is not significant, only its contents. We can't be positive about the order in which we recurse a directory, but we do need to make sure that everything we expect is in the list and nothing more. We do this by checking the count if items and then making sure that exactly that many known items exist in the list. This file is ridiculously long, almost too long to be worked with easily. I really should split it up into smaller files, but I like having a 1:1 relationship between a module and its test. Naming Conventions ================== I prefer to avoid large unit tests which validate more than one piece of functionality. Instead, I create lots of very small tests that each validate one specific thing. 
These small tests are then named with an index number, yielding something like C{testAddDir_001} or C{testValidate_023}. Each method then has a docstring describing what it's supposed to accomplish. I feel that this makes it easier to judge the extent of a problem when one exists. Full vs. Reduced Tests ====================== All of the tests in this module are considered safe to be run in an average build environment. There is a no need to use a FILESYSTEMTESTS_FULL environment variable to provide a "reduced feature set" test suite as for some of the other test modules. @author Kenneth J. Pronovici """ ######################################################################## # Import modules and do runtime validations ######################################################################## import os import unittest import tempfile import tarfile import hashlib from CedarBackup3.testutil import findResources, buildPath, removedir, extractTar, changeFileAge, randomFilename from CedarBackup3.testutil import platformMacOsX from CedarBackup3.testutil import failUnlessAssignRaises from CedarBackup3.util import encodePath from CedarBackup3.filesystem import FilesystemList, BackupFileList, PurgeItemList, normalizeDir, compareContents ####################################################################### # Module-wide configuration and constants ####################################################################### DATA_DIRS = [ "./data", "./testcase/data" ] RESOURCES = [ "tree1.tar.gz", "tree2.tar.gz", "tree3.tar.gz", "tree4.tar.gz", "tree5.tar.gz", "tree6.tar.gz", "tree7.tar.gz", "tree8.tar.gz", "tree9.tar.gz", "tree10.tar.gz", "tree11.tar.gz", "tree12.tar.gz", "tree13.tar.gz", "tree22.tar.gz", ] INVALID_FILE = "bogus" # This file name should never exist NOMATCH_PATH = "/something" # This path should never match something we put in a file list NOMATCH_BASENAME = "something" # This basename should never match something we put in a file list NOMATCH_PATTERN = "pattern" # 
This pattern should never match something we put in a file list AGE_1_HOUR = 1*60*60 # in seconds AGE_2_HOURS = 2*60*60 # in seconds AGE_12_HOURS = 12*60*60 # in seconds AGE_23_HOURS = 23*60*60 # in seconds AGE_24_HOURS = 24*60*60 # in seconds AGE_25_HOURS = 25*60*60 # in seconds AGE_47_HOURS = 47*60*60 # in seconds AGE_48_HOURS = 48*60*60 # in seconds AGE_49_HOURS = 49*60*60 # in seconds ####################################################################### # Test Case Classes ####################################################################### ########################### # TestFilesystemList class ########################### class TestFilesystemList(unittest.TestCase): """Tests for the FilesystemList class.""" ################ # Setup methods ################ def setUp(self): try: self.tmpdir = tempfile.mkdtemp() self.resources = findResources(RESOURCES, DATA_DIRS) except Exception as e: self.fail(e) def tearDown(self): try: removedir(self.tmpdir) except: pass ################## # Utility methods ################## def extractTar(self, tarname): """Extracts a tarfile with a particular name.""" extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname]) def buildPath(self, components): """Builds a complete search path from a list of components.""" components.insert(0, self.tmpdir) return buildPath(components) def pathPattern(self, path): """Returns properly-escaped regular expression pattern matching the indicated path.""" return ".*%s.*" % path.replace("\\", "\\\\") def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test attribute assignment ############################ #pylint: disable=R0204 def testAssignment_001(self): """ Test assignment of excludeFiles attribute, true values. 
""" fsList = FilesystemList() self.assertEqual(False, fsList.excludeFiles) fsList.excludeFiles = True self.assertEqual(True, fsList.excludeFiles) fsList.excludeFiles = [ 1, ] self.assertEqual(True, fsList.excludeFiles) #pylint: disable=R0204 def testAssignment_002(self): """ Test assignment of excludeFiles attribute, false values. """ fsList = FilesystemList() self.assertEqual(False, fsList.excludeFiles) fsList.excludeFiles = False self.assertEqual(False, fsList.excludeFiles) fsList.excludeFiles = [ ] self.assertEqual(False, fsList.excludeFiles) #pylint: disable=R0204 def testAssignment_003(self): """ Test assignment of excludeLinks attribute, true values. """ fsList = FilesystemList() self.assertEqual(False, fsList.excludeLinks) fsList.excludeLinks = True self.assertEqual(True, fsList.excludeLinks) fsList.excludeLinks = [ 1, ] self.assertEqual(True, fsList.excludeLinks) #pylint: disable=R0204 def testAssignment_004(self): """ Test assignment of excludeLinks attribute, false values. """ fsList = FilesystemList() self.assertEqual(False, fsList.excludeLinks) fsList.excludeLinks = False self.assertEqual(False, fsList.excludeLinks) fsList.excludeLinks = [ ] self.assertEqual(False, fsList.excludeLinks) #pylint: disable=R0204 def testAssignment_005(self): """ Test assignment of excludeDirs attribute, true values. """ fsList = FilesystemList() self.assertEqual(False, fsList.excludeDirs) fsList.excludeDirs = True self.assertEqual(True, fsList.excludeDirs) fsList.excludeDirs = [ 1, ] self.assertEqual(True, fsList.excludeDirs) #pylint: disable=R0204 def testAssignment_006(self): """ Test assignment of excludeDirs attribute, false values. """ fsList = FilesystemList() self.assertEqual(False, fsList.excludeDirs) fsList.excludeDirs = False self.assertEqual(False, fsList.excludeDirs) fsList.excludeDirs = [ ] self.assertEqual(False, fsList.excludeDirs) def testAssignment_007(self): """ Test assignment of ignoreFile attribute. 
""" fsList = FilesystemList() self.assertEqual(None, fsList.ignoreFile) fsList.ignoreFile = "ken" self.assertEqual("ken", fsList.ignoreFile) fsList.ignoreFile = None self.assertEqual(None, fsList.ignoreFile) def testAssignment_008(self): """ Test assignment of excludePaths attribute. """ fsList = FilesystemList() self.assertEqual([], fsList.excludePaths) fsList.excludePaths = None self.assertEqual([], fsList.excludePaths) fsList.excludePaths = [ "/path/to/something/absolute", ] self.assertEqual([ "/path/to/something/absolute", ], fsList.excludePaths) fsList.excludePaths = [ "/path/to/something/absolute", "/path/to/something/else", ] self.assertEqual([ "/path/to/something/absolute", "/path/to/something/else", ], fsList.excludePaths) self.failUnlessAssignRaises(ValueError, fsList, "excludePaths", ["path/to/something/relative", ]) self.failUnlessAssignRaises(ValueError, fsList, "excludePaths", [ "/path/to/something/absolute", "path/to/something/relative", ]) fsList.excludePaths = [ "/path/to/something/absolute", ] self.assertEqual([ "/path/to/something/absolute", ], fsList.excludePaths) fsList.excludePaths.insert(0, "/ken") self.assertEqual([ "/ken", "/path/to/something/absolute", ], fsList.excludePaths) fsList.excludePaths.append("/file") self.assertEqual([ "/ken", "/path/to/something/absolute", "/file", ], fsList.excludePaths) fsList.excludePaths.extend(["/one", "/two", ]) self.assertEqual([ "/ken", "/path/to/something/absolute", "/file", "/one", "/two", ], fsList.excludePaths) fsList.excludePaths = [ "/path/to/something/absolute", ] self.assertRaises(ValueError, fsList.excludePaths.insert, 0, "path/to/something/relative") self.assertRaises(ValueError, fsList.excludePaths.append, "path/to/something/relative") self.assertRaises(ValueError, fsList.excludePaths.extend, ["path/to/something/relative", ]) def testAssignment_009(self): """ Test assignment of excludePatterns attribute. 
""" fsList = FilesystemList() self.assertEqual([], fsList.excludePatterns) fsList.excludePatterns = None self.assertEqual([], fsList.excludePatterns) fsList.excludePatterns = [ r".*\.jpg", ] self.assertEqual([ r".*\.jpg", ], fsList.excludePatterns) fsList.excludePatterns = [ r".*\.jpg", "[a-zA-Z0-9]*", ] self.assertEqual([ r".*\.jpg", "[a-zA-Z0-9]*", ], fsList.excludePatterns) self.failUnlessAssignRaises(ValueError, fsList, "excludePatterns", [ "*.jpg", ]) self.failUnlessAssignRaises(ValueError, fsList, "excludePatterns", [ "*.jpg", "[a-zA-Z0-9]*", ]) fsList.excludePatterns = [ r".*\.jpg", ] self.assertEqual([ r".*\.jpg", ], fsList.excludePatterns) fsList.excludePatterns.insert(0, "ken") self.assertEqual([ "ken", r".*\.jpg", ], fsList.excludePatterns) fsList.excludePatterns.append("pattern") self.assertEqual([ "ken", r".*\.jpg", "pattern", ], fsList.excludePatterns) fsList.excludePatterns.extend(["one", "two", ]) self.assertEqual([ "ken", r".*\.jpg", "pattern", "one", "two", ], fsList.excludePatterns) fsList.excludePatterns = [ r".*\.jpg", ] self.assertRaises(ValueError, fsList.excludePatterns.insert, 0, "*.jpg") self.assertEqual([ r".*\.jpg", ], fsList.excludePatterns) self.assertRaises(ValueError, fsList.excludePatterns.append, "*.jpg") self.assertEqual([ r".*\.jpg", ], fsList.excludePatterns) self.assertRaises(ValueError, fsList.excludePatterns.extend, ["*.jpg", ]) self.assertEqual([ r".*\.jpg", ], fsList.excludePatterns) def testAssignment_010(self): """ Test assignment of excludeBasenamePatterns attribute. 
""" fsList = FilesystemList() self.assertEqual([], fsList.excludeBasenamePatterns) fsList.excludeBasenamePatterns = None self.assertEqual([], fsList.excludeBasenamePatterns) fsList.excludeBasenamePatterns = [ r".*\.jpg", ] self.assertEqual([ r".*\.jpg", ], fsList.excludeBasenamePatterns) fsList.excludeBasenamePatterns = [ r".*\.jpg", "[a-zA-Z0-9]*", ] self.assertEqual([ r".*\.jpg", "[a-zA-Z0-9]*", ], fsList.excludeBasenamePatterns) self.failUnlessAssignRaises(ValueError, fsList, "excludeBasenamePatterns", [ "*.jpg", ]) self.failUnlessAssignRaises(ValueError, fsList, "excludeBasenamePatterns", [ "*.jpg", "[a-zA-Z0-9]*", ]) fsList.excludeBasenamePatterns = [ r".*\.jpg", ] self.assertEqual([ r".*\.jpg", ], fsList.excludeBasenamePatterns) fsList.excludeBasenamePatterns.insert(0, "ken") self.assertEqual([ "ken", r".*\.jpg", ], fsList.excludeBasenamePatterns) fsList.excludeBasenamePatterns.append("pattern") self.assertEqual([ "ken", r".*\.jpg", "pattern", ], fsList.excludeBasenamePatterns) fsList.excludeBasenamePatterns.extend(["one", "two", ]) self.assertEqual([ "ken", r".*\.jpg", "pattern", "one", "two", ], fsList.excludeBasenamePatterns) fsList.excludeBasenamePatterns = [ r".*\.jpg", ] self.assertRaises(ValueError, fsList.excludeBasenamePatterns.insert, 0, "*.jpg") self.assertEqual([ r".*\.jpg", ], fsList.excludeBasenamePatterns) self.assertRaises(ValueError, fsList.excludeBasenamePatterns.append, "*.jpg") self.assertEqual([ r".*\.jpg", ], fsList.excludeBasenamePatterns) self.assertRaises(ValueError, fsList.excludeBasenamePatterns.extend, ["*.jpg", ]) self.assertEqual([ r".*\.jpg", ], fsList.excludeBasenamePatterns) ################################ # Test basic list functionality ################################ def testBasic_001(self): """ Test the append() method. 
""" fsList = FilesystemList() self.assertEqual([], fsList) fsList.append('a') self.assertEqual(['a'], fsList) fsList.append('b') self.assertEqual(['a', 'b'], fsList) def testBasic_002(self): """ Test the insert() method. """ fsList = FilesystemList() self.assertEqual([], fsList) fsList.insert(0, 'a') self.assertEqual(['a'], fsList) fsList.insert(0, 'b') self.assertEqual(['b', 'a'], fsList) def testBasic_003(self): """ Test the remove() method. """ fsList = FilesystemList() self.assertEqual([], fsList) fsList.insert(0, 'a') fsList.insert(0, 'b') self.assertEqual(['b', 'a'], fsList) fsList.remove('a') self.assertEqual(['b'], fsList) fsList.remove('b') self.assertEqual([], fsList) def testBasic_004(self): """ Test the pop() method. """ fsList = FilesystemList() self.assertEqual([], fsList) fsList.append('a') fsList.append('b') fsList.append('c') fsList.append('d') fsList.append('e') self.assertEqual(['a', 'b', 'c', 'd', 'e'], fsList) self.assertEqual('e', fsList.pop()) self.assertEqual(['a', 'b', 'c', 'd'], fsList) self.assertEqual('d', fsList.pop()) self.assertEqual(['a', 'b', 'c'], fsList) self.assertEqual('c', fsList.pop()) self.assertEqual(['a', 'b'], fsList) self.assertEqual('b', fsList.pop()) self.assertEqual(['a'], fsList) self.assertEqual('a', fsList.pop()) self.assertEqual([], fsList) def testBasic_005(self): """ Test the count() method. """ fsList = FilesystemList() self.assertEqual([], fsList) fsList.append('a') fsList.append('b') fsList.append('c') fsList.append('d') fsList.append('e') self.assertEqual(['a', 'b', 'c', 'd', 'e'], fsList) self.assertEqual(1, fsList.count('a')) def testBasic_006(self): """ Test the index() method. """ fsList = FilesystemList() self.assertEqual([], fsList) fsList.append('a') fsList.append('b') fsList.append('c') fsList.append('d') fsList.append('e') self.assertEqual(['a', 'b', 'c', 'd', 'e'], fsList) self.assertEqual(2, fsList.index('c')) def testBasic_007(self): """ Test the reverse() method. 
""" fsList = FilesystemList() self.assertEqual([], fsList) fsList.append('a') fsList.append('b') fsList.append('c') fsList.append('d') fsList.append('e') self.assertEqual(['a', 'b', 'c', 'd', 'e'], fsList) fsList.reverse() self.assertEqual(['e', 'd', 'c', 'b', 'a'], fsList) fsList.reverse() self.assertEqual(['a', 'b', 'c', 'd', 'e'], fsList) def testBasic_008(self): """ Test the sort() method. """ fsList = FilesystemList() self.assertEqual([], fsList) fsList.append('e') fsList.append('d') fsList.append('c') fsList.append('b') fsList.append('a') self.assertEqual(['e', 'd', 'c', 'b', 'a'], fsList) fsList.sort() self.assertEqual(['a', 'b', 'c', 'd', 'e'], fsList) fsList.sort() self.assertEqual(['a', 'b', 'c', 'd', 'e'], fsList) def testBasic_009(self): """ Test slicing. """ fsList = FilesystemList() self.assertEqual([], fsList) fsList.append('e') fsList.append('d') fsList.append('c') fsList.append('b') fsList.append('a') self.assertEqual(['e', 'd', 'c', 'b', 'a'], fsList) self.assertEqual(['e', 'd', 'c', 'b', 'a'], fsList[:]) self.assertEqual(['e', 'd', 'c', 'b', 'a'], fsList[0:]) self.assertEqual('e', fsList[0]) self.assertEqual('a', fsList[4]) self.assertEqual(['d', 'c', 'b'], fsList[1:4]) ################# # Test addFile() ################# def testAddFile_001(self): """ Attempt to add a file that doesn't exist; no exclusions. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() self.assertRaises(ValueError, fsList.addFile, path) def testAddFile_002(self): """ Attempt to add a directory; no exclusions. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() self.assertRaises(ValueError, fsList.addFile, path) def testAddFile_003(self): """ Attempt to add a soft link; no exclusions. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() count = fsList.addFile(path) self.assertEqual(1, count) self.assertEqual([path], fsList) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() self.assertRaises(ValueError, fsList.addFile, path) def testAddFile_004(self): """ Attempt to add an existing file; no exclusions. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() count = fsList.addFile(path) self.assertEqual(1, count) self.assertEqual([path], fsList) def testAddFile_005(self): """ Attempt to add a file that doesn't exist; excludeFiles set. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeFiles = True self.assertRaises(ValueError, fsList.addFile, path) def testAddFile_006(self): """ Attempt to add a directory; excludeFiles set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeFiles = True self.assertRaises(ValueError, fsList.addFile, path) def testAddFile_007(self): """ Attempt to add a soft link; excludeFiles set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeFiles = True count = fsList.addFile(path) self.assertEqual(0, count) self.assertEqual([], fsList) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeFiles = True self.assertRaises(ValueError, fsList.addFile, path) def testAddFile_008(self): """ Attempt to add an existing file; excludeFiles set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeFiles = True count = fsList.addFile(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddFile_009(self): """ Attempt to add a file that doesn't exist; excludeDirs set. 
""" path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeDirs = True self.assertRaises(ValueError, fsList.addFile, path) def testAddFile_010(self): """ Attempt to add a directory; excludeDirs set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeDirs = True self.assertRaises(ValueError, fsList.addFile, path) def testAddFile_011(self): """ Attempt to add a soft link; excludeDirs set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeDirs = True count = fsList.addFile(path) self.assertEqual(1, count) self.assertEqual([path], fsList) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeDirs = True self.assertRaises(ValueError, fsList.addFile, path) def testAddFile_012(self): """ Attempt to add an existing file; excludeDirs set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeDirs = True count = fsList.addFile(path) self.assertEqual(1, count) self.assertEqual([path], fsList) def testAddFile_013(self): """ Attempt to add a file that doesn't exist; excludeLinks set. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeLinks = True self.assertRaises(ValueError, fsList.addFile, path) def testAddFile_014(self): """ Attempt to add a directory; excludeLinks set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeLinks = True self.assertRaises(ValueError, fsList.addFile, path) def testAddFile_015(self): """ Attempt to add a soft link; excludeLinks set. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeLinks = True count = fsList.addFile(path) self.assertEqual(0, count) self.assertEqual([], fsList) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeLinks = True self.assertRaises(ValueError, fsList.addFile, path) def testAddFile_016(self): """ Attempt to add an existing file; excludeLinks set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeLinks = True count = fsList.addFile(path) self.assertEqual(1, count) self.assertEqual([path], fsList) def testAddFile_017(self): """ Attempt to add a file that doesn't exist; with excludePaths including the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePaths = [ path ] self.assertRaises(ValueError, fsList.addFile, path) def testAddFile_018(self): """ Attempt to add a directory; with excludePaths including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePaths = [ path ] self.assertRaises(ValueError, fsList.addFile, path) def testAddFile_019(self): """ Attempt to add a soft link; with excludePaths including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePaths = [ path ] count = fsList.addFile(path) self.assertEqual(0, count) self.assertEqual([], fsList) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePaths = [ path ] self.assertRaises(ValueError, fsList.addFile, path) def testAddFile_020(self): """ Attempt to add an existing file; with excludePaths including the path. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePaths = [ path ] count = fsList.addFile(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddFile_021(self): """ Attempt to add a file that doesn't exist; with excludePaths not including the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] self.assertRaises(ValueError, fsList.addFile, path) def testAddFile_022(self): """ Attempt to add a directory; with excludePaths not including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] self.assertRaises(ValueError, fsList.addFile, path) def testAddFile_023(self): """ Attempt to add a soft link; with excludePaths not including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] count = fsList.addFile(path) self.assertEqual(1, count) self.assertEqual([path], fsList) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] self.assertRaises(ValueError, fsList.addFile, path) def testAddFile_024(self): """ Attempt to add an existing file; with excludePaths not including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] count = fsList.addFile(path) self.assertEqual(1, count) self.assertEqual([path], fsList) def testAddFile_025(self): """ Attempt to add a file that doesn't exist; with excludePatterns matching the path. 
""" path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] self.assertRaises(ValueError, fsList.addFile, path) def testAddFile_026(self): """ Attempt to add a directory; with excludePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] self.assertRaises(ValueError, fsList.addFile, path) def testAddFile_027(self): """ Attempt to add a soft link; with excludePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] count = fsList.addFile(path) self.assertEqual(0, count) self.assertEqual([], fsList) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] self.assertRaises(ValueError, fsList.addFile, path) def testAddFile_028(self): """ Attempt to add an existing file; with excludePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] count = fsList.addFile(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddFile_029(self): """ Attempt to add a file that doesn't exist; with excludePatterns not matching the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] self.assertRaises(ValueError, fsList.addFile, path) def testAddFile_030(self): """ Attempt to add a directory; with excludePatterns not matching the path. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] self.assertRaises(ValueError, fsList.addFile, path) def testAddFile_031(self): """ Attempt to add a soft link; with excludePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] count = fsList.addFile(path) self.assertEqual(1, count) self.assertEqual([path], fsList) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] self.assertRaises(ValueError, fsList.addFile, path) def testAddFile_032(self): """ Attempt to add an existing file; with excludePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] count = fsList.addFile(path) self.assertEqual(1, count) self.assertEqual([path], fsList) def testAddFile_033(self): """ Attempt to add an invalid link (i.e. a link that points to something that doesn't exist). """ self.extractTar("tree10") path = self.buildPath(["tree10", "link001"]) fsList = FilesystemList() self.assertRaises(ValueError, fsList.addFile, path) def testAddFile_034(self): """ Attempt to add a file that has spaces in its name. """ self.extractTar("tree11") path = self.buildPath(["tree11", "file with spaces"]) fsList = FilesystemList() count = fsList.addFile(path) self.assertEqual(1, count) self.assertEqual([path], fsList) def testAddFile_035(self): """ Attempt to add a UTF-8 file. 
""" self.extractTar("tree12") path = self.buildPath([ "tree12", "unicode", encodePath(b"\xe2\x99\xaa\xe2\x99\xac")]) fsList = FilesystemList() count = fsList.addFile(path) self.assertEqual(1, count) self.assertEqual([path], fsList) def testAddFile_036(self): """ Attempt to add a file that doesn't exist; with excludeBasenamePatterns matching the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ INVALID_FILE ] self.assertRaises(ValueError, fsList.addFile, path) def testAddFile_037(self): """ Attempt to add a directory; with excludeBasenamePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "dir001", ] self.assertRaises(ValueError, fsList.addFile, path) def testAddFile_038(self): """ Attempt to add a soft link; with excludeBasenamePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "link001", ] count = fsList.addFile(path) self.assertEqual(0, count) self.assertEqual([], fsList) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "link001", ] self.assertRaises(ValueError, fsList.addFile, path) def testAddFile_039(self): """ Attempt to add an existing file; with excludeBasenamePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "file001", ] count = fsList.addFile(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddFile_040(self): """ Attempt to add a file that doesn't exist; with excludeBasenamePatterns not matching the path. 
""" path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] self.assertRaises(ValueError, fsList.addFile, path) def testAddFile_041(self): """ Attempt to add a directory; with excludeBasenamePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] self.assertRaises(ValueError, fsList.addFile, path) def testAddFile_042(self): """ Attempt to add a soft link; with excludeBasenamePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeBasenamePaths = [ NOMATCH_BASENAME ] count = fsList.addFile(path) self.assertEqual(1, count) self.assertEqual([path], fsList) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeBasenamePaths = [ NOMATCH_BASENAME ] self.assertRaises(ValueError, fsList.addFile, path) def testAddFile_043(self): """ Attempt to add an existing file; with excludeBasenamePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] count = fsList.addFile(path) self.assertEqual(1, count) self.assertEqual([path], fsList) ################ # Test addDir() ################ def testAddDir_001(self): """ Attempt to add a directory that doesn't exist; no exclusions. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() self.assertRaises(ValueError, fsList.addDir, path) def testAddDir_002(self): """ Attempt to add a file; no exclusions. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() self.assertRaises(ValueError, fsList.addDir, path) def testAddDir_003(self): """ Attempt to add a soft link; no exclusions. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() self.assertRaises(ValueError, fsList.addDir, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() count = fsList.addDir(path) self.assertEqual(1, count) self.assertEqual([path], fsList) def testAddDir_004(self): """ Attempt to add an existing directory; no exclusions. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() count = fsList.addDir(path) self.assertEqual(1, count) self.assertEqual([path], fsList) def testAddDir_005(self): """ Attempt to add a directory that doesn't exist; excludeFiles set. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeFiles = True self.assertRaises(ValueError, fsList.addDir, path) def testAddDir_006(self): """ Attempt to add a file; excludeFiles set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeFiles = True self.assertRaises(ValueError, fsList.addDir, path) def testAddDir_007(self): """ Attempt to add a soft link; excludeFiles set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeFiles = True self.assertRaises(ValueError, fsList.addDir, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeFiles = True count = fsList.addDir(path) self.assertEqual(1, count) self.assertEqual([path], fsList) def testAddDir_008(self): """ Attempt to add an existing directory; excludeFiles set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeFiles = True count = fsList.addDir(path) self.assertEqual(1, count) self.assertEqual([path], fsList) def testAddDir_009(self): """ Attempt to add a directory that doesn't exist; excludeDirs set. 
""" path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeDirs = True self.assertRaises(ValueError, fsList.addDir, path) def testAddDir_010(self): """ Attempt to add a file; excludeDirs set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeDirs = True self.assertRaises(ValueError, fsList.addDir, path) def testAddDir_011(self): """ Attempt to add a soft link; excludeDirs set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeDirs = True self.assertRaises(ValueError, fsList.addDir, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeDirs = True count = fsList.addDir(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDir_012(self): """ Attempt to add an existing directory; excludeDirs set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeDirs = True count = fsList.addDir(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDir_013(self): """ Attempt to add a directory that doesn't exist; excludeLinks set. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeLinks = True self.assertRaises(ValueError, fsList.addDir, path) def testAddDir_014(self): """ Attempt to add a file; excludeLinks set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeLinks = True self.assertRaises(ValueError, fsList.addDir, path) def testAddDir_015(self): """ Attempt to add a soft link; excludeLinks set. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeLinks = True self.assertRaises(ValueError, fsList.addDir, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeLinks = True count = fsList.addDir(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDir_016(self): """ Attempt to add an existing directory; excludeLinks set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeLinks = True count = fsList.addDir(path) self.assertEqual(1, count) self.assertEqual([path], fsList) def testAddDir_017(self): """ Attempt to add a directory that doesn't exist; with excludePaths including the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePaths = [ path ] self.assertRaises(ValueError, fsList.addDir, path) def testAddDir_018(self): """ Attempt to add a file; with excludePaths including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePaths = [ path ] self.assertRaises(ValueError, fsList.addDir, path) def testAddDir_019(self): """ Attempt to add a soft link; with excludePaths including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePaths = [ path ] self.assertRaises(ValueError, fsList.addDir, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePaths = [ path ] count = fsList.addDir(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDir_020(self): """ Attempt to add an existing directory; with excludePaths including the path. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePaths = [ path ] count = fsList.addDir(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDir_021(self): """ Attempt to add a directory that doesn't exist; with excludePaths not including the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] self.assertRaises(ValueError, fsList.addDir, path) def testAddDir_022(self): """ Attempt to add a file; with excludePaths not including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] self.assertRaises(ValueError, fsList.addDir, path) def testAddDir_023(self): """ Attempt to add a soft link; with excludePaths not including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] self.assertRaises(ValueError, fsList.addDir, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] count = fsList.addDir(path) self.assertEqual(1, count) self.assertEqual([path], fsList) def testAddDir_024(self): """ Attempt to add an existing directory; with excludePaths not including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] count = fsList.addDir(path) self.assertEqual(1, count) self.assertEqual([path], fsList) def testAddDir_025(self): """ Attempt to add a directory that doesn't exist; with excludePatterns matching the path. 
""" path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] self.assertRaises(ValueError, fsList.addDir, path) def testAddDir_026(self): """ Attempt to add a file; with excludePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] self.assertRaises(ValueError, fsList.addDir, path) def testAddDir_027(self): """ Attempt to add a soft link; with excludePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] self.assertRaises(ValueError, fsList.addDir, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] count = fsList.addDir(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDir_028(self): """ Attempt to add an existing directory; with excludePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] count = fsList.addDir(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDir_029(self): """ Attempt to add a directory that doesn't exist; with excludePatterns not matching the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] self.assertRaises(ValueError, fsList.addDir, path) def testAddDir_030(self): """ Attempt to add a file; with excludePatterns not matching the path. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] self.assertRaises(ValueError, fsList.addDir, path) def testAddDir_031(self): """ Attempt to add a soft link; with excludePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] self.assertRaises(ValueError, fsList.addDir, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] count = fsList.addDir(path) self.assertEqual(1, count) self.assertEqual([path], fsList) def testAddDir_032(self): """ Attempt to add an existing directory; with excludePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] count = fsList.addDir(path) self.assertEqual(1, count) self.assertEqual([path], fsList) def testAddDir_033(self): """ Attempt to add an invalid link (i.e. a link that points to something that doesn't exist). """ self.extractTar("tree10") path = self.buildPath(["tree10", "link001"]) fsList = FilesystemList() self.assertRaises(ValueError, fsList.addDir, path) def testAddDir_034(self): """ Attempt to add a directory that has spaces in its name. """ self.extractTar("tree11") path = self.buildPath(["tree11", "dir with spaces"]) fsList = FilesystemList() count = fsList.addDir(path) self.assertEqual(1, count) self.assertEqual([path], fsList) def testAddDir_035(self): """ Attempt to add a directory that doesn't exist; with excludeBasenamePatterns matching the path. 
""" path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ INVALID_FILE ] self.assertRaises(ValueError, fsList.addDir, path) def testAddDir_036(self): """ Attempt to add a file; with excludeBasenamePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "file001", ] self.assertRaises(ValueError, fsList.addDir, path) def testAddDir_037(self): """ Attempt to add a soft link; with excludeBasenamePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "link001", ] self.assertRaises(ValueError, fsList.addDir, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "link001", ] count = fsList.addDir(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDir_038(self): """ Attempt to add an existing directory; with excludeBasenamePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "dir001", ] count = fsList.addDir(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDir_039(self): """ Attempt to add a directory that doesn't exist; with excludeBasenamePatterns not matching the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] self.assertRaises(ValueError, fsList.addDir, path) def testAddDir_040(self): """ Attempt to add a file; with excludeBasenamePatterns not matching the path. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] self.assertRaises(ValueError, fsList.addDir, path) def testAddDir_041(self): """ Attempt to add a soft link; with excludeBasenamePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] self.assertRaises(ValueError, fsList.addDir, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] count = fsList.addDir(path) self.assertEqual(1, count) self.assertEqual([path], fsList) def testAddDir_042(self): """ Attempt to add an existing directory; with excludeBasenamePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] count = fsList.addDir(path) self.assertEqual(1, count) self.assertEqual([path], fsList) ######################## # Test addDirContents() ######################## def testAddDirContents_001(self): """ Attempt to add a directory that doesn't exist; no exclusions. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() self.assertRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_002(self): """ Attempt to add a file; no exclusions. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() self.assertRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_003(self): """ Attempt to add a soft link; no exclusions. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() self.assertRaises(ValueError, fsList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() count = fsList.addDir(path) self.assertEqual(1, count) self.assertEqual([path], fsList) def testAddDirContents_004(self): """ Attempt to add an empty directory containing ignore file; no exclusions. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" count = fsList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDirContents_005(self): """ Attempt to add an empty directory; no exclusions. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(1, count) self.assertEqual([path], fsList) def testAddDirContents_006(self): """ Attempt to add an non-empty directory containing ignore file; no exclusions. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" count = fsList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDirContents_007(self): """ Attempt to add an non-empty directory; no exclusions. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(8, count) self.assertEqual(8, len(fsList)) self.assertTrue(self.buildPath(["tree5", "dir001", ]) in fsList) self.assertTrue(self.buildPath(["tree5", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath(["tree5", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath(["tree5", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath(["tree5", "dir001", "dir004", ]) in fsList) self.assertTrue(self.buildPath(["tree5", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath(["tree5", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath(["tree5", "dir001", "link001", ]) in fsList) def testAddDirContents_008(self): """ Attempt to add a directory that doesn't exist; excludeFiles set. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeFiles = True self.assertRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_009(self): """ Attempt to add a file; excludeFiles set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeFiles = True self.assertRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_010(self): """ Attempt to add a soft link; excludeFiles set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeFiles = True self.assertRaises(ValueError, fsList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeFiles = True count = fsList.addDirContents(path) self.assertEqual(1, count) self.assertEqual([path], fsList) def testAddDirContents_011(self): """ Attempt to add an empty directory containing ignore file; excludeFiles set. 
""" self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludeFiles = True count = fsList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDirContents_012(self): """ Attempt to add an empty directory; excludeFiles set. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) fsList = FilesystemList() fsList.excludeFiles = True count = fsList.addDirContents(path) self.assertEqual(1, count) self.assertEqual([path], fsList) def testAddDirContents_013(self): """ Attempt to add an non-empty directory containing ignore file; excludeFiles set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludeFiles = True count = fsList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDirContents_014(self): """ Attempt to add an non-empty directory; excludeFiles set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeFiles = True count = fsList.addDirContents(path) self.assertEqual(5, count) self.assertEqual(5, len(fsList)) self.assertTrue(self.buildPath(["tree5", "dir001", ]) in fsList) self.assertTrue(self.buildPath(["tree5", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath(["tree5", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath(["tree5", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath(["tree5", "dir001", "dir004", ]) in fsList) def testAddDirContents_015(self): """ Attempt to add a directory that doesn't exist; excludeDirs set. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeDirs = True self.assertRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_016(self): """ Attempt to add a file; excludeDirs set. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeDirs = True self.assertRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_017(self): """ Attempt to add a soft link; excludeDirs set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeDirs = True self.assertRaises(ValueError, fsList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeDirs = True count = fsList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDirContents_018(self): """ Attempt to add an empty directory containing ignore file; excludeDirs set. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludeDirs = True count = fsList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDirContents_019(self): """ Attempt to add an empty directory; excludeDirs set. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) fsList = FilesystemList() fsList.excludeDirs = True count = fsList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDirContents_020(self): """ Attempt to add an non-empty directory containing ignore file; excludeDirs set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludeDirs = True count = fsList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDirContents_021(self): """ Attempt to add an non-empty directory; excludeDirs set. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeDirs = True count = fsList.addDirContents(path) self.assertEqual(3, count) self.assertEqual(3, len(fsList)) self.assertTrue(self.buildPath(["tree5", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath(["tree5", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath(["tree5", "dir001", "link001", ]) in fsList) def testAddDirContents_023(self): """ Attempt to add a directory that doesn't exist; excludeLinks set. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeLinks = True self.assertRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_024(self): """ Attempt to add a file; excludeLinks set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeLinks = True self.assertRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_025(self): """ Attempt to add a soft link; excludeLinks set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeLinks = True self.assertRaises(ValueError, fsList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeLinks = True count = fsList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDirContents_026(self): """ Attempt to add an empty directory containing ignore file; excludeLinks set. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludeLinks = True count = fsList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDirContents_027(self): """ Attempt to add an empty directory; excludeLinks set. 
""" self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) fsList = FilesystemList() fsList.excludeLinks = True count = fsList.addDirContents(path) self.assertEqual(1, count) self.assertTrue(self.buildPath(["tree8", "dir001", ]) in fsList) def testAddDirContents_028(self): """ Attempt to add an non-empty directory containing ignore file; excludeLinks set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludeLinks = True count = fsList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDirContents_029(self): """ Attempt to add an non-empty directory; excludeLinks set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeLinks = True count = fsList.addDirContents(path) self.assertEqual(7, count) self.assertEqual(7, len(fsList)) self.assertTrue(self.buildPath(["tree5", "dir001", ]) in fsList) self.assertTrue(self.buildPath(["tree5", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath(["tree5", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath(["tree5", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath(["tree5", "dir001", "dir004", ]) in fsList) self.assertTrue(self.buildPath(["tree5", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath(["tree5", "dir001", "file002", ]) in fsList) def testAddDirContents_030(self): """ Attempt to add a directory that doesn't exist; with excludePaths including the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePaths = [ path ] self.assertRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_031(self): """ Attempt to add a file; with excludePaths including the path. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePaths = [ path ] self.assertRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_032(self): """ Attempt to add a soft link; with excludePaths including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePaths = [ path ] self.assertRaises(ValueError, fsList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePaths = [ path ] count = fsList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDirContents_033(self): """ Attempt to add an empty directory containing ignore file; with excludePaths including the path. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludePaths = [ path ] count = fsList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDirContents_034(self): """ Attempt to add an empty directory; with excludePaths including the path. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) fsList = FilesystemList() fsList.excludePaths = [ path ] count = fsList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDirContents_035(self): """ Attempt to add an non-empty directory containing ignore file; with excludePaths including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludePaths = [ path ] count = fsList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDirContents_036(self): """ Attempt to add an non-empty directory; with excludePaths including the main directory path. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePaths = [ path ] count = fsList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDirContents_037(self): """ Attempt to add a directory that doesn't exist; with excludePaths not including the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] self.assertRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_038(self): """ Attempt to add a file; with excludePaths not including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] self.assertRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_039(self): """ Attempt to add a soft link; with excludePaths not including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] self.assertRaises(ValueError, fsList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] count = fsList.addDirContents(path) self.assertEqual(1, count) self.assertEqual([path], fsList) def testAddDirContents_040(self): """ Attempt to add an empty directory containing ignore file; with excludePaths not including the path. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludePaths = [ NOMATCH_PATH ] count = fsList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDirContents_041(self): """ Attempt to add an empty directory; with excludePaths not including the path. 
""" self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] count = fsList.addDirContents(path) self.assertEqual(1, count) self.assertEqual([path], fsList) def testAddDirContents_042(self): """ Attempt to add an non-empty directory containing ignore file; with excludePaths not including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludePaths = [ NOMATCH_PATH ] count = fsList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDirContents_043(self): """ Attempt to add an non-empty directory; with excludePaths not including the main directory path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] count = fsList.addDirContents(path) self.assertEqual(8, count) self.assertEqual(8, len(fsList)) self.assertTrue(self.buildPath(["tree5", "dir001", ]) in fsList) self.assertTrue(self.buildPath(["tree5", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath(["tree5", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath(["tree5", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath(["tree5", "dir001", "dir004", ]) in fsList) self.assertTrue(self.buildPath(["tree5", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath(["tree5", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath(["tree5", "dir001", "link001", ]) in fsList) def testAddDirContents_044(self): """ Attempt to add a directory that doesn't exist; with excludePatterns matching the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] self.assertRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_045(self): """ Attempt to add a file; with excludePatterns matching the path. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] self.assertRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_046(self): """ Attempt to add a soft link; with excludePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] self.assertRaises(ValueError, fsList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] count = fsList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDirContents_047(self): """ Attempt to add an empty directory containing ignore file; with excludePatterns matching the path. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludePatterns = [ self.pathPattern(path) ] count = fsList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDirContents_048(self): """ Attempt to add an empty directory; with excludePatterns matching the path. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] count = fsList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDirContents_049(self): """ Attempt to add an non-empty directory containing ignore file; with excludePatterns matching the path. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludePatterns = [ self.pathPattern(path) ] count = fsList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDirContents_050(self): """ Attempt to add an non-empty directory; with excludePatterns matching the main directory path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] count = fsList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDirContents_051(self): """ Attempt to add a directory that doesn't exist; with excludePatterns not matching the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] self.assertRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_052(self): """ Attempt to add a file; with excludePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] self.assertRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_053(self): """ Attempt to add a soft link; with excludePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] self.assertRaises(ValueError, fsList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] count = fsList.addDirContents(path) self.assertEqual(1, count) self.assertEqual([path], fsList) def testAddDirContents_054(self): """ Attempt to add an empty directory containing ignore file; with excludePatterns not matching the path. 
""" self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludePatterns = [ NOMATCH_PATH ] count = fsList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDirContents_055(self): """ Attempt to add an empty directory; with excludePatterns not matching the path. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] count = fsList.addDirContents(path) self.assertEqual(1, count) self.assertEqual([path], fsList) def testAddDirContents_056(self): """ Attempt to add an non-empty directory containing ignore file; with excludePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludePatterns = [ NOMATCH_PATH ] count = fsList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDirContents_057(self): """ Attempt to add an non-empty directory; with excludePatterns not matching the main directory path. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] count = fsList.addDirContents(path) self.assertEqual(8, count) self.assertEqual(8, len(fsList)) self.assertTrue(self.buildPath(["tree5", "dir001", ]) in fsList) self.assertTrue(self.buildPath(["tree5", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath(["tree5", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath(["tree5", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath(["tree5", "dir001", "dir004", ]) in fsList) self.assertTrue(self.buildPath(["tree5", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath(["tree5", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath(["tree5", "dir001", "link001", ]) in fsList) def testAddDirContents_058(self): """ Attempt to add a large tree with no exclusions. """ self.extractTar("tree6") path = self.buildPath(["tree6"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(136, count) self.assertEqual(136, len(fsList)) self.assertTrue(self.buildPath([ "tree6", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", 
"dir001", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in 
fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "link002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "link005", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "link001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "link004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "link001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "link002", ]) in fsList)

    def testAddDirContents_059(self):
        """
        Attempt to add a large tree, with excludeFiles set.
        """
        self.extractTar("tree6")
        path = self.buildPath(["tree6"])
        fsList = FilesystemList()
        fsList.excludeFiles = True
        count = fsList.addDirContents(path)
        self.assertEqual(42, count)
        self.assertEqual(42, len(fsList))
        self.assertTrue(self.buildPath([ "tree6", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "link001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "link002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "link005", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "link001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "link004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "link002", ]) in fsList)

    def testAddDirContents_060(self):
        """
        Attempt to add a large tree, with excludeDirs set.
        """
        self.extractTar("tree6")
        path = self.buildPath(["tree6"])
        fsList = FilesystemList()
        fsList.excludeDirs = True
        count = fsList.addDirContents(path)
        self.assertEqual(94, count)
        self.assertEqual(94, len(fsList))
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "link001", ]) in fsList)

    def testAddDirContents_061(self):
        """
        Attempt to add a large tree, with excludeLinks set.
        """
        self.extractTar("tree6")
        path = self.buildPath(["tree6"])
        fsList = FilesystemList()
        fsList.excludeLinks = True
        count = fsList.addDirContents(path)
        self.assertEqual(96, count)
        self.assertEqual(96, len(fsList))
        self.assertTrue(self.buildPath([ "tree6", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "file002", ]) in fsList)

    def testAddDirContents_062(self):
        """
        Attempt to add a large tree, with excludePaths set to exclude some entries.
        """
        self.extractTar("tree6")
        path = self.buildPath(["tree6"])
        fsList = FilesystemList()
        fsList.excludePaths = [
            self.buildPath([ "tree6", "dir001", "dir002", ]),
            self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]),
            self.buildPath([ "tree6", "dir003", "dir002", "file001", ]),
            self.buildPath([ "tree6", "dir003", "dir002", "file002", ]),
        ]
        count = fsList.addDirContents(path)
        self.assertEqual(125, count)
        self.assertEqual(125, len(fsList))
        self.assertTrue(self.buildPath([ "tree6", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "link001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "link002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "link005", ]) in fsList)
self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "link002", ]) in fsList) def testAddDirContents_063(self): """ Attempt to add a large tree, with excludePatterns set to exclude some entries. 
      """
      self.extractTar("tree6")
      path = self.buildPath(["tree6"])
      fsList = FilesystemList()
      fsList.excludePatterns = [ ".*file001.*", r".*tree6\/dir002\/dir001.*" ]
      count = fsList.addDirContents(path)
      self.assertEqual(108, count)
      self.assertEqual(108, len(fsList))
      self.assertTrue(self.buildPath([ "tree6", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "link005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "link004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "link002", ]) in fsList)

   def testAddDirContents_064(self):
      """
      Attempt to add a large tree, with ignoreFile set to exclude some
      directories.
      """
      self.extractTar("tree6")
      path = self.buildPath(["tree6"])
      fsList = FilesystemList()
      fsList.ignoreFile = "ignore"
      count = fsList.addDirContents(path)
      self.assertEqual(79, count)
      self.assertEqual(79, len(fsList))
      self.assertTrue(self.buildPath([ "tree6", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "link005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "link002", ]) in fsList)

   def testAddDirContents_065(self):
      """
      Attempt to add a link to a file.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9", "dir002", "link003", ])
      fsList = FilesystemList()
      self.assertRaises(ValueError, fsList.addDirContents, path)

   def testAddDirContents_066(self):
      """
      Attempt to add a link to a directory (which should add its contents).
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9", "link002" ])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(9, count)
      self.assertEqual(9, len(fsList))
      self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "link002", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "link002", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "link002", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "link002", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "link002", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "link002", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "link002", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "link002", "link004", ]) in fsList)

   def testAddDirContents_067(self):
      """
      Attempt to add an invalid link (i.e. a link that points to something
      that doesn't exist).
      """
      self.extractTar("tree10")
      path = self.buildPath(["tree10", "link001"])
      fsList = FilesystemList()
      self.assertRaises(ValueError, fsList.addDirContents, path)

   def testAddDirContents_068(self):
      """
      Attempt to add a directory containing an invalid link (i.e. a link
      that points to something that doesn't exist).
      """
      self.extractTar("tree10")
      path = self.buildPath(["tree10"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(3, count)
      self.assertEqual(3, len(fsList))
      self.assertTrue(self.buildPath([ "tree10", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree10", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree10", "dir002", ]) in fsList)

   def testAddDirContents_069(self):
      """
      Attempt to add a directory containing items with spaces.
      """
      self.extractTar("tree11")
      path = self.buildPath(["tree11", ])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(16, count)
      self.assertEqual(16, len(fsList))
      self.assertTrue(self.buildPath([ "tree11", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)

   def testAddDirContents_070(self):
      """
      Attempt to add a directory which has a name containing spaces.
      """
      self.extractTar("tree11")
      path = self.buildPath(["tree11", "dir with spaces", ])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(5, count)
      self.assertEqual(5, len(fsList))
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)

   def testAddDirContents_071(self):
      """
      Attempt to add a directory which has a UTF-8 filename in it.
      """
      self.extractTar("tree12")
      path = self.buildPath(["tree12", "unicode", ])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(6, count)
      self.assertEqual(6, len(fsList))
      self.assertTrue(self.buildPath([ "tree12", "unicode", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree12", "unicode", "README.strange-name", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree12", "unicode", "utflist.long.gz", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree12", "unicode", "utflist.cp437.gz", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree12", "unicode", "utflist.short.gz", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree12", "unicode", encodePath(b"\xe2\x99\xaa\xe2\x99\xac"), ]) in fsList)

   def testAddDirContents_072(self):
      """
      Attempt to add a directory which has several UTF-8 filenames in it.

      This test data was taken from Rick Lowe's problems around the release
      of v1.10.  I don't run the test for Darwin (Mac OS X) because the
      tarball isn't valid on that platform.

      All of the tests with unicode paths were incredibly painful to get
      working with Python 3, but these tests in particular were difficult,
      because character 0x82 is not a valid UTF-8 character.  The key was to
      get the filename into the same encoding used by methods like
      os.listdir(), which uses a "surrogateescape" fallback for encoding
      filenames.  Once I switched encodePath to do the same thing, this test
      started passing.  There's apparently no other way to represent
      filenames like this.
      """
      if not platformMacOsX():
         self.extractTar("tree13")
         path = self.buildPath(["tree13", ])
         fsList = FilesystemList()
         count = fsList.addDirContents(path)
         self.assertEqual(11, count)
         self.assertEqual(11, len(fsList))
         self.assertTrue(self.buildPath([ "tree13", ]) in fsList)
         self.assertTrue(self.buildPath([ "tree13", encodePath(b"Les mouvements de r\x82forme.doc"), ]) in fsList)
         self.assertTrue(self.buildPath([ "tree13", encodePath(b"l'\x82nonc\x82.sxw"), ]) in fsList)
         self.assertTrue(self.buildPath([ "tree13", encodePath(b"l\x82onard - renvois et bibliographie.sxw"), ]) in fsList)
         self.assertTrue(self.buildPath([ "tree13", encodePath(b"l\x82onard copie finale.sxw"), ]) in fsList)
         self.assertTrue(self.buildPath([ "tree13", encodePath(b"l\x82onard de vinci - page titre.sxw"), ]) in fsList)
         self.assertTrue(self.buildPath([ "tree13", encodePath(b"l\x82onard de vinci.sxw"), ]) in fsList)
         self.assertTrue(self.buildPath([ "tree13", encodePath(b"Rammstein - B\x81ck Dich.mp3"), ]) in fsList)
         self.assertTrue(self.buildPath([ "tree13", encodePath(b"megaherz - Glas Und Tr\x84nen.mp3"), ]) in fsList)
         self.assertTrue(self.buildPath([ "tree13", encodePath(b"Megaherz - Mistst\x81ck.MP3"), ]) in fsList)
         self.assertTrue(self.buildPath([ "tree13", encodePath(b"Rammstein - Mutter - B\x94se.mp3"), ]) in fsList)

   def testAddDirContents_073(self):
      """
      Attempt to add a large tree with recursive=False.
      """
      self.extractTar("tree6")
      path = self.buildPath(["tree6"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path, recursive=False)
      self.assertEqual(8, count)
      self.assertEqual(8, len(fsList))
      self.assertTrue(self.buildPath([ "tree6", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "link002", ]) in fsList)

   def testAddDirContents_074(self):
      """
      Attempt to add a directory that doesn't exist; with
      excludeBasenamePatterns matching the path.
      """
      path = self.buildPath([INVALID_FILE])
      fsList = FilesystemList()
      fsList.excludeBasenamePatterns = [ INVALID_FILE ]
      self.assertRaises(ValueError, fsList.addDirContents, path)

   def testAddDirContents_075(self):
      """
      Attempt to add a file; with excludeBasenamePatterns matching the path.
      """
      self.extractTar("tree5")
      path = self.buildPath(["tree5", "file001"])
      fsList = FilesystemList()
      fsList.excludeBasenamePatterns = [ "file001", ]
      self.assertRaises(ValueError, fsList.addDirContents, path)

   def testAddDirContents_076(self):
      """
      Attempt to add a soft link; with excludeBasenamePatterns matching the
      path.
      """
      self.extractTar("tree5")
      path = self.buildPath(["tree5", "link001"])     # link to a file
      fsList = FilesystemList()
      fsList.excludeBasenamePatterns = [ "link001", ]
      self.assertRaises(ValueError, fsList.addDirContents, path)
      path = self.buildPath(["tree5", "dir002", "link001"])  # link to a dir
      fsList = FilesystemList()
      fsList.excludeBasenamePatterns = [ "link001", ]
      count = fsList.addDirContents(path)
      self.assertEqual(0, count)
      self.assertEqual([], fsList)

   def testAddDirContents_077(self):
      """
      Attempt to add an empty directory containing ignore file; with
      excludeBasenamePatterns matching the path.
      """
      self.extractTar("tree7")
      path = self.buildPath(["tree7", "dir001"])
      fsList = FilesystemList()
      fsList.ignoreFile = "ignore"
      fsList.excludeBasenamePatterns = [ "dir001", ]
      count = fsList.addDirContents(path)
      self.assertEqual(0, count)
      self.assertEqual([], fsList)

   def testAddDirContents_078(self):
      """
      Attempt to add an empty directory; with excludeBasenamePatterns
      matching the path.
      """
      self.extractTar("tree8")
      path = self.buildPath(["tree8", "dir001"])
      fsList = FilesystemList()
      fsList.excludeBasenamePatterns = [ "dir001", ]
      count = fsList.addDirContents(path)
      self.assertEqual(0, count)
      self.assertEqual([], fsList)

   def testAddDirContents_079(self):
      """
      Attempt to add a non-empty directory containing ignore file; with
      excludeBasenamePatterns matching the path.
      """
      self.extractTar("tree5")
      path = self.buildPath(["tree5", "dir008"])
      fsList = FilesystemList()
      fsList.ignoreFile = "ignore"
      fsList.excludeBasenamePatterns = [ "dir008", ]
      count = fsList.addDirContents(path)
      self.assertEqual(0, count)
      self.assertEqual([], fsList)

   def testAddDirContents_080(self):
      """
      Attempt to add a non-empty directory; with excludeBasenamePatterns
      matching the main directory path.
      """
      self.extractTar("tree5")
      path = self.buildPath(["tree5", "dir001"])
      fsList = FilesystemList()
      fsList.excludeBasenamePatterns = [ "dir001", ]
      count = fsList.addDirContents(path)
      self.assertEqual(0, count)
      self.assertEqual([], fsList)

   def testAddDirContents_081(self):
      """
      Attempt to add a directory that doesn't exist; with
      excludeBasenamePatterns not matching the path.
      """
      path = self.buildPath([INVALID_FILE])
      fsList = FilesystemList()
      fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ]
      self.assertRaises(ValueError, fsList.addDirContents, path)

   def testAddDirContents_082(self):
      """
      Attempt to add a file; with excludeBasenamePatterns not matching the
      path.
      """
      self.extractTar("tree5")
      path = self.buildPath(["tree5", "file001"])
      fsList = FilesystemList()
      fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ]
      self.assertRaises(ValueError, fsList.addDirContents, path)

   def testAddDirContents_083(self):
      """
      Attempt to add a soft link; with excludeBasenamePatterns not matching
      the path.
      """
      self.extractTar("tree5")
      path = self.buildPath(["tree5", "link001"])     # link to a file
      fsList = FilesystemList()
      fsList.excludeBasenamePatterns = [ NOMATCH_PATH ]
      self.assertRaises(ValueError, fsList.addDirContents, path)
      path = self.buildPath(["tree5", "dir002", "link001"])  # link to a dir
      fsList = FilesystemList()
      fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ]
      count = fsList.addDirContents(path)
      self.assertEqual(1, count)
      self.assertEqual([path], fsList)

   def testAddDirContents_084(self):
      """
      Attempt to add an empty directory containing ignore file; with
      excludeBasenamePatterns not matching the path.
      """
      self.extractTar("tree7")
      path = self.buildPath(["tree7", "dir001"])
      fsList = FilesystemList()
      fsList.ignoreFile = "ignore"
      fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ]
      count = fsList.addDirContents(path)
      self.assertEqual(0, count)
      self.assertEqual([], fsList)

   def testAddDirContents_085(self):
      """
      Attempt to add an empty directory; with excludeBasenamePatterns not
      matching the path.
      """
      self.extractTar("tree8")
      path = self.buildPath(["tree8", "dir001"])
      fsList = FilesystemList()
      fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ]
      count = fsList.addDirContents(path)
      self.assertEqual(1, count)
      self.assertEqual([path], fsList)

   def testAddDirContents_086(self):
      """
      Attempt to add a non-empty directory containing ignore file; with
      excludeBasenamePatterns not matching the path.
      """
      self.extractTar("tree5")
      path = self.buildPath(["tree5", "dir008"])
      fsList = FilesystemList()
      fsList.ignoreFile = "ignore"
      fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ]
      count = fsList.addDirContents(path)
      self.assertEqual(0, count)
      self.assertEqual([], fsList)

   def testAddDirContents_087(self):
      """
      Attempt to add a non-empty directory; with excludeBasenamePatterns not
      matching the main directory path.
      """
      self.extractTar("tree5")
      path = self.buildPath(["tree5", "dir001"])
      fsList = FilesystemList()
      fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ]
      count = fsList.addDirContents(path)
      self.assertEqual(8, count)
      self.assertEqual(8, len(fsList))
      self.assertTrue(self.buildPath(["tree5", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath(["tree5", "dir001", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath(["tree5", "dir001", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath(["tree5", "dir001", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath(["tree5", "dir001", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath(["tree5", "dir001", "file001", ]) in fsList)
      self.assertTrue(self.buildPath(["tree5", "dir001", "file002", ]) in fsList)
      self.assertTrue(self.buildPath(["tree5", "dir001", "link001", ]) in fsList)

   def testAddDirContents_088(self):
      """
      Attempt to add a large tree, with excludeBasenamePatterns set to
      exclude some entries.
      """
      self.extractTar("tree6")
      path = self.buildPath(["tree6"])
      fsList = FilesystemList()
      fsList.excludeBasenamePatterns = [ "file001", "dir001" ]
      count = fsList.addDirContents(path)
      self.assertEqual(64, count)
      self.assertEqual(64, len(fsList))
      self.assertTrue(self.buildPath([ "tree6", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "link005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "link002", ]) in
fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "link004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "link002", ]) in fsList)

   def testAddDirContents_089(self):
      """
      Attempt to add a large tree with no exclusions, addSelf=True.
      """
      self.extractTar("tree6")
      path = self.buildPath(["tree6"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path, addSelf=True)
      self.assertEqual(136, count)
      self.assertEqual(136, len(fsList))
      self.assertTrue(self.buildPath([ "tree6", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in
fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "link001", 
]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "link005", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ 
"tree6", "dir003", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "link001", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree6", "link002", ]) in fsList) def testAddDirContents_090(self): """ Attempt to add a large tree with no exclusions, addSelf=False. """ self.extractTar("tree6") path = self.buildPath(["tree6"]) fsList = FilesystemList() count = fsList.addDirContents(path, addSelf=False) self.assertEqual(135, count) self.assertEqual(135, len(fsList)) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in 
fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir002", "link005", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "link002", ]) in fsList) def testAddDirContents_091(self): """ Attempt to add a directory with linkDepth=1. 
""" self.extractTar("tree6") path = self.buildPath(["tree6"]) fsList = FilesystemList() count = fsList.addDirContents(path, linkDepth=1) self.assertEqual(165, count) self.assertEqual(165, len(fsList)) self.assertTrue(self.buildPath([ "tree6", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList) 
        for parts in [
            ["dir001", "dir002"], ["dir001", "dir002", "link001"], ["dir001", "dir002", "link002"],
            ["dir001", "file001"], ["dir001", "file002"], ["dir001", "file003"], ["dir001", "file004"],
            ["dir001"], ["dir001", "link001"], ["dir001", "link002"], ["dir001", "link003"],
            ["dir002", "dir001", "dir001"], ["dir002", "dir001", "dir002"], ["dir002", "dir001", "dir003"],
            ["dir002", "dir001", "file001"], ["dir002", "dir001", "file002"], ["dir002", "dir001", "file003"],
            ["dir002", "dir001", "file004"], ["dir002", "dir001", "file005"], ["dir002", "dir001", "file006"],
            ["dir002", "dir001", "file007"], ["dir002", "dir001", "file008"], ["dir002", "dir001", "file009"],
            ["dir002", "dir001"],
            ["dir002", "dir001", "link001"], ["dir002", "dir001", "link002"], ["dir002", "dir001", "link003"],
            ["dir002", "dir001", "link004"], ["dir002", "dir001", "link005"],
            ["dir002", "dir002", "dir001"], ["dir002", "dir002", "dir002"], ["dir002", "dir002", "dir003"],
            ["dir002", "dir002", "file001"], ["dir002", "dir002", "file002"], ["dir002", "dir002", "file003"],
            ["dir002", "dir002", "file004"], ["dir002", "dir002", "file005"], ["dir002", "dir002", "file006"],
            ["dir002", "dir002", "file007"], ["dir002", "dir002", "file008"],
            ["dir002", "dir002"],
            ["dir002", "dir002", "link001"], ["dir002", "dir002", "link002"], ["dir002", "dir002", "link003"],
            ["dir002", "dir002", "link004"], ["dir002", "dir002", "link005"],
            ["dir002", "dir003", "dir001"], ["dir002", "dir003", "dir002"],
            ["dir002", "dir003", "file001"], ["dir002", "dir003", "file002"], ["dir002", "dir003", "file003"],
            ["dir002", "dir003", "file004"], ["dir002", "dir003", "file005"], ["dir002", "dir003", "file006"],
            ["dir002", "dir003", "file007"],
            ["dir002", "dir003"],
            ["dir002", "dir003", "link001"], ["dir002", "dir003", "link002"], ["dir002", "dir003", "link003"],
            ["dir002", "dir003", "link004"],
            ["dir002", "file001"], ["dir002", "file002"], ["dir002", "file003"],
            ["dir002"],
            ["dir002", "link001"], ["dir002", "link002"], ["dir002", "link003"], ["dir002", "link004"],
            ["dir002", "link005"],
            ["dir003", "dir001", "dir001"], ["dir003", "dir001", "dir002"],
            ["dir003", "dir001", "file001"], ["dir003", "dir001", "file002"], ["dir003", "dir001", "file003"],
            ["dir003", "dir001", "file004"], ["dir003", "dir001", "file005"], ["dir003", "dir001", "file006"],
            ["dir003", "dir001", "file007"], ["dir003", "dir001", "file008"], ["dir003", "dir001", "file009"],
            ["dir003", "dir001"], ["dir003", "dir001", "link001"], ["dir003", "dir001", "link002"],
            ["dir003", "dir002", "dir001"], ["dir003", "dir002", "dir002"],
            ["dir003", "dir002", "file001"], ["dir003", "dir002", "file002"], ["dir003", "dir002", "file003"],
            ["dir003", "dir002", "file004"], ["dir003", "dir002", "file005"],
            ["dir003", "dir002"],
            ["dir003", "dir002", "link001"], ["dir003", "dir002", "link002"], ["dir003", "dir002", "link003"],
            ["dir003", "dir002", "link004"],
            ["dir003", "file001"], ["dir003", "file002"], ["dir003", "file003"], ["dir003", "file004"],
            ["dir003", "file005"], ["dir003", "file006"], ["dir003", "file007"], ["dir003", "file008"],
            ["dir003", "file009"], ["dir003", "ignore"],
            ["dir003"],
            ["dir003", "link001"], ["dir003", "link002"], ["dir003", "link003"], ["dir003", "link004"],
            ["dir003", "link005"],
            ["link002", "dir001"],
            ["link002", "dir001", "dir001"], ["link002", "dir001", "dir002"], ["link002", "dir001", "dir003"],
            ["link002", "dir001", "file001"], ["link002", "dir001", "file002"], ["link002", "dir001", "file003"],
            ["link002", "dir001", "file004"], ["link002", "dir001", "file005"], ["link002", "dir001", "file006"],
            ["link002", "dir001", "file007"], ["link002", "dir001", "ignore"],
            ["link002", "dir001", "link001"], ["link002", "dir001", "link002"], ["link002", "dir001", "link003"],
            ["link002", "dir002"],
            ["link002", "dir002", "dir001"], ["link002", "dir002", "dir002"],
            ["link002", "dir002", "file001"], ["link002", "dir002", "file002"], ["link002", "dir002", "file003"],
            ["link002", "dir002", "link001"], ["link002", "dir002", "link002"],
            ["link002", "file001"], ["link002", "file002"], ["link002", "file003"], ["link002", "file004"],
            ["link002", "link001"], ["link002", "link002"], ["link002", "link003"],
            ["file001"], ["file002"],
        ]:
            self.assertTrue(self.buildPath(["tree6"] + parts) in fsList)
        self.assertTrue(self.buildPath(["tree6", "link001"]) in fsList)

    def testAddDirContents_092(self):
        """
        Attempt to add a directory with linkDepth=2.
        """
        self.extractTar("tree6")
        path = self.buildPath(["tree6"])
        fsList = FilesystemList()
        count = fsList.addDirContents(path, linkDepth=2)
        self.assertEqual(241, count)
        self.assertEqual(241, len(fsList))
        self.assertTrue(self.buildPath(["tree6"]) in fsList)
        for parts in [
            ["dir001", "dir001", "dir001"], ["dir001", "dir001", "dir002"], ["dir001", "dir001", "dir003"],
            ["dir001", "dir001", "file001"], ["dir001", "dir001", "file002"], ["dir001", "dir001", "file003"],
            ["dir001", "dir001", "file004"], ["dir001", "dir001", "file005"], ["dir001", "dir001", "file006"],
            ["dir001", "dir001", "file007"], ["dir001", "dir001", "ignore"],
            ["dir001", "dir001"],
            ["dir001", "dir001", "link001"], ["dir001", "dir001", "link002"], ["dir001", "dir001", "link003"],
            ["dir001", "dir002", "dir001"], ["dir001", "dir002", "dir002"],
            ["dir001", "dir002", "file001"], ["dir001", "dir002", "file002"], ["dir001", "dir002", "file003"],
            ["dir001", "dir002"],
            ["dir001", "dir002", "link001"], ["dir001", "dir002", "link002"],
            ["dir001", "file001"], ["dir001", "file002"], ["dir001", "file003"], ["dir001", "file004"],
            ["dir001"], ["dir001", "link002"], ["dir001", "link003"],
            ["dir001", "link001", "dir001"], ["dir001", "link001", "dir002"], ["dir001", "link001", "dir003"],
            ["dir001", "link001", "file001"], ["dir001", "link001", "file002"], ["dir001", "link001", "file003"],
            ["dir001", "link001", "file004"], ["dir001", "link001", "file005"], ["dir001", "link001", "file006"],
            ["dir001", "link001", "file007"], ["dir001", "link001", "ignore"],
            ["dir001", "link001", "link001"], ["dir001", "link001", "link002"], ["dir001", "link001", "link003"],
            ["dir002", "dir001", "dir001"], ["dir002", "dir001", "dir002"], ["dir002", "dir001", "dir003"],
            ["dir002", "dir001", "file001"], ["dir002", "dir001", "file002"], ["dir002", "dir001", "file003"],
            ["dir002", "dir001", "file004"], ["dir002", "dir001", "file005"], ["dir002", "dir001", "file006"],
            ["dir002", "dir001", "file007"], ["dir002", "dir001", "file008"], ["dir002", "dir001", "file009"],
            ["dir002", "dir001"],
            ["dir002", "dir001", "link001"], ["dir002", "dir001", "link002"], ["dir002", "dir001", "link003"],
            ["dir002", "dir001", "link004"], ["dir002", "dir001", "link005"],
            ["dir002", "dir002", "dir001"], ["dir002", "dir002", "dir002"], ["dir002", "dir002", "dir003"],
            ["dir002", "dir002", "file001"], ["dir002", "dir002", "file002"], ["dir002", "dir002", "file003"],
            ["dir002", "dir002", "file004"], ["dir002", "dir002", "file005"], ["dir002", "dir002", "file006"],
            ["dir002", "dir002", "file007"], ["dir002", "dir002", "file008"],
            ["dir002", "dir002"],
            ["dir002", "dir002", "link001"], ["dir002", "dir002", "link002"], ["dir002", "dir002", "link003"],
            ["dir002", "dir002", "link004"], ["dir002", "dir002", "link005"],
            ["dir002", "dir003", "dir001"], ["dir002", "dir003", "dir002"],
            ["dir002", "dir003", "file001"], ["dir002", "dir003", "file002"], ["dir002", "dir003", "file003"],
            ["dir002", "dir003", "file004"], ["dir002", "dir003", "file005"], ["dir002", "dir003", "file006"],
            ["dir002", "dir003", "file007"],
            ["dir002", "dir003"],
            ["dir002", "dir003", "link001"], ["dir002", "dir003", "link002"], ["dir002", "dir003", "link003"],
            ["dir002", "dir003", "link004"],
            ["dir002", "file001"], ["dir002", "file002"], ["dir002", "file003"],
            ["dir002"],
            ["dir002", "link001"], ["dir002", "link003"], ["dir002", "link004"],
            ["dir002", "link002", "dir001"], ["dir002", "link002", "dir002"], ["dir002", "link002", "dir003"],
            ["dir002", "link002", "file001"], ["dir002", "link002", "file002"], ["dir002", "link002", "file003"],
            ["dir002", "link002", "file004"], ["dir002", "link002", "file005"], ["dir002", "link002", "file006"],
            ["dir002", "link002", "file007"], ["dir002", "link002", "file008"], ["dir002", "link002", "file009"],
            ["dir002", "link002", "link001"], ["dir002", "link002", "link002"], ["dir002", "link002", "link003"],
            ["dir002", "link002", "link004"], ["dir002", "link002", "link005"],
            ["dir002", "link005", "dir001"], ["dir002", "link005", "dir002"],
            ["dir002", "link005", "file001"], ["dir002", "link005", "file002"], ["dir002", "link005", "file003"],
            ["dir002", "link005", "file004"], ["dir002", "link005", "file005"], ["dir002", "link005", "file006"],
            ["dir002", "link005", "file007"],
            ["dir002", "link005", "link001"], ["dir002", "link005", "link002"], ["dir002", "link005", "link003"],
            ["dir002", "link005", "link004"],
            ["dir003", "dir001", "dir001"], ["dir003", "dir001", "dir002"],
            ["dir003", "dir001", "file001"], ["dir003", "dir001", "file002"], ["dir003", "dir001", "file003"],
            ["dir003", "dir001", "file004"], ["dir003", "dir001", "file005"], ["dir003", "dir001", "file006"],
            ["dir003", "dir001", "file007"], ["dir003", "dir001", "file008"], ["dir003", "dir001", "file009"],
            ["dir003", "dir001"], ["dir003", "dir001", "link001"], ["dir003", "dir001", "link002"],
            ["dir003", "dir002", "dir001"], ["dir003", "dir002", "dir002"],
            ["dir003", "dir002", "file001"], ["dir003", "dir002", "file002"], ["dir003", "dir002", "file003"],
            ["dir003", "dir002", "file004"], ["dir003", "dir002", "file005"],
            ["dir003", "dir002"],
            ["dir003", "dir002", "link001"], ["dir003", "dir002", "link002"], ["dir003", "dir002", "link003"],
            ["dir003", "dir002", "link004"],
            ["dir003", "file001"], ["dir003", "file002"], ["dir003", "file003"], ["dir003", "file004"],
            ["dir003", "file005"], ["dir003", "file006"], ["dir003", "file007"], ["dir003", "file008"],
            ["dir003", "file009"], ["dir003", "ignore"],
            ["dir003"],
            ["dir003", "link002"], ["dir003", "link003"], ["dir003", "link005"],
            ["dir003", "link001", "dir001"], ["dir003", "link001", "dir002"],
            ["dir003", "link001", "file001"], ["dir003", "link001", "file002"], ["dir003", "link001", "file003"],
            ["dir003", "link001", "file004"], ["dir003", "link001", "file005"],
            ["dir003", "link001", "link001"], ["dir003", "link001", "link002"], ["dir003", "link001", "link003"],
            ["dir003", "link001", "link004"],
            ["dir003", "link004", "dir001"], ["dir003", "link004", "dir002"],
            ["dir003", "link004", "file001"], ["dir003", "link004", "file002"], ["dir003", "link004", "file003"],
            ["dir003", "link004", "file004"], ["dir003", "link004", "file005"], ["dir003", "link004", "file006"],
            ["dir003", "link004", "file007"], ["dir003", "link004", "file008"], ["dir003", "link004", "file009"],
            ["dir003", "link004", "link001"], ["dir003", "link004", "link002"],
            ["link002", "dir001", "dir001"], ["link002", "dir001", "dir002"], ["link002", "dir001", "dir003"],
            ["link002", "dir001", "file001"], ["link002", "dir001", "file002"], ["link002", "dir001", "file003"],
            ["link002", "dir001", "file004"], ["link002", "dir001", "file005"], ["link002", "dir001", "file006"],
            ["link002", "dir001", "file007"], ["link002", "dir001", "ignore"],
            ["link002", "dir001"],
            ["link002", "dir001", "link001"], ["link002", "dir001", "link002"], ["link002", "dir001", "link003"],
            ["link002", "dir002", "dir001"], ["link002", "dir002", "dir002"],
            ["link002", "dir002", "file001"], ["link002", "dir002", "file002"], ["link002", "dir002", "file003"],
            ["link002", "dir002"],
            ["link002", "dir002", "link001"], ["link002", "dir002", "link002"],
            ["link002", "file001"], ["link002", "file002"], ["link002", "file003"], ["link002", "file004"],
            ["link002", "link002"], ["link002", "link003"],
            ["link002", "link001", "dir001"], ["link002", "link001", "dir002"], ["link002", "link001", "dir003"],
            ["link002", "link001", "file001"], ["link002", "link001", "file002"], ["link002", "link001", "file003"],
            ["link002", "link001", "file004"], ["link002", "link001", "file005"], ["link002", "link001", "file006"],
            ["link002", "link001", "file007"], ["link002", "link001", "ignore"],
            ["link002", "link001", "link001"], ["link002", "link001", "link002"], ["link002", "link001", "link003"],
            ["file001"], ["file002"], ["link001"],
        ]:
            self.assertTrue(self.buildPath(["tree6"] + parts) in fsList)

    def testAddDirContents_093(self):
        """
        Attempt to add a directory with linkDepth=0, dereference=False.
        """
        self.extractTar("tree22")
        path = self.buildPath(["tree22", "dir003"])
        fsList = FilesystemList()
        count = fsList.addDirContents(path, linkDepth=0, dereference=False)
        self.assertEqual(12, count)
        self.assertEqual(12, len(fsList))
        for parts in [
            [], ["dir001"],
            ["dir001", "file001"], ["dir001", "file002"], ["dir001", "file003"],
            ["dir001", "link001"], ["dir001", "link002"], ["dir001", "link003"], ["dir001", "link004"],
            ["link001"], ["link002"], ["link003"],
        ]:
            self.assertTrue(self.buildPath(["tree22", "dir003"] + parts) in fsList)

    def testAddDirContents_094(self):
        """
        Attempt to add a directory with linkDepth=1, dereference=False.
        """
        self.extractTar("tree22")
        path = self.buildPath(["tree22", "dir003"])
        fsList = FilesystemList()
        count = fsList.addDirContents(path, linkDepth=1, dereference=False)
        self.assertEqual(16, count)
        self.assertEqual(16, len(fsList))
        for parts in [
            [], ["dir001"],
            ["dir001", "file001"], ["dir001", "file002"], ["dir001", "file003"],
            ["dir001", "link001"], ["dir001", "link002"], ["dir001", "link003"], ["dir001", "link004"],
            ["link001"], ["link002"],
            ["link003", "file001"], ["link003", "file002"], ["link003", "file003"],
            ["link003", "link001"], ["link003", "link002"],
        ]:
            self.assertTrue(self.buildPath(["tree22", "dir003"] + parts) in fsList)

    def testAddDirContents_095(self):
        """
        Attempt to add a directory with linkDepth=2, dereference=False.
        """
        self.extractTar("tree22")
        path = self.buildPath(["tree22", "dir003"])
        fsList = FilesystemList()
        count = fsList.addDirContents(path, linkDepth=2, dereference=False)
        self.assertEqual(20, count)
        self.assertEqual(20, len(fsList))
        for parts in [
            [], ["dir001"],
            ["dir001", "file001"], ["dir001", "file002"], ["dir001", "file003"],
            ["dir001", "link001"], ["dir001", "link002"], ["dir001", "link004"],
            ["dir001", "link003", "file001"], ["dir001", "link003", "file002"], ["dir001", "link003", "file003"],
            ["link001"], ["link002"],
            ["link003", "file001"], ["link003", "file002"], ["link003", "file003"],
            ["link003", "link001"],
            ["link003", "link002", "file001"], ["link003", "link002", "link001"], ["link003", "link002", "link002"],
        ]:
            self.assertTrue(self.buildPath(["tree22", "dir003"] + parts) in fsList)

    def testAddDirContents_096(self):
        """
        Attempt to add a directory with linkDepth=3, dereference=False.
        """
        self.extractTar("tree22")
        path = self.buildPath(["tree22", "dir003"])
        fsList = FilesystemList()
        count = fsList.addDirContents(path, linkDepth=3, dereference=False)
        self.assertEqual(20, count)
        self.assertEqual(20, len(fsList))
        for parts in [
            [], ["dir001"],
            ["dir001", "file001"], ["dir001", "file002"], ["dir001", "file003"],
            ["dir001", "link001"], ["dir001", "link002"], ["dir001", "link004"],
            ["dir001", "link003", "file001"], ["dir001", "link003", "file002"], ["dir001", "link003", "file003"],
            ["link001"], ["link002"],
            ["link003", "file001"], ["link003", "file002"], ["link003", "file003"],
            ["link003", "link001"],
            ["link003", "link002", "file001"], ["link003", "link002", "link001", "file001"],
            ["link003", "link002", "link002"],
        ]:
            self.assertTrue(self.buildPath(["tree22", "dir003"] + parts) in fsList)

    def testAddDirContents_097(self):
        """
        Attempt to add a directory with linkDepth=0, dereference=True.
        """
        self.extractTar("tree22")
        path = self.buildPath(["tree22", "dir003"])
        fsList = FilesystemList()
        count = fsList.addDirContents(path, linkDepth=0, dereference=True)
        self.assertEqual(12, count)
        self.assertEqual(12, len(fsList))
        for parts in [
            [], ["dir001"],
            ["dir001", "file001"], ["dir001", "file002"], ["dir001", "file003"],
            ["dir001", "link001"], ["dir001", "link002"], ["dir001", "link003"], ["dir001", "link004"],
            ["link001"], ["link002"], ["link003"],
        ]:
            self.assertTrue(self.buildPath(["tree22", "dir003"] + parts) in fsList)

    def testAddDirContents_098(self):
        """
        Attempt to add a directory with linkDepth=1, dereference=True.
""" self.extractTar("tree22") path = self.buildPath(["tree22", "dir003", ]) fsList = FilesystemList() count = fsList.addDirContents(path, linkDepth=1, dereference=True) self.assertEqual(20, count) self.assertEqual(20, len(fsList)) self.assertTrue(self.buildPath(["tree22", "dir003", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir003", "link001", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir003", "link002", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir003", "link003", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir005" ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir005", "file001", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir005", "file002", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir005", "file003", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir005", "link001", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir005", "link002", ]) in fsList) def testAddDirContents_099(self): """ Attempt to add a directory with linkDepth=2, dereference=True. 
""" self.extractTar("tree22") path = self.buildPath(["tree22", "dir003", ]) fsList = FilesystemList() count = fsList.addDirContents(path, linkDepth=2, dereference=True) self.assertEqual(32, count) self.assertEqual(32, len(fsList)) self.assertTrue(self.buildPath(["tree22", "dir003", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir003", "link001", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir003", "link002", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir003", "link003", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir002", "file005", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir002", "file009", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir004", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir004", "file001", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir004", "file002", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir004", "file003", ]) in fsList) 
self.assertTrue(self.buildPath(["tree22", "dir005", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir005", "file001", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir005", "file002", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir005", "file003", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir005", "link001", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir005", "link002", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir006", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir006", "file001", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir006", "link001", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir006", "link002", ]) in fsList) def testAddDirContents_100(self): """ Attempt to add a directory with linkDepth=3, dereference=True. """ self.extractTar("tree22") path = self.buildPath(["tree22", "dir003", ]) fsList = FilesystemList() count = fsList.addDirContents(path, linkDepth=3, dereference=True) self.assertEqual(35, count) self.assertEqual(35, len(fsList)) self.assertTrue(self.buildPath(["tree22", "dir003", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir003", "link001", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir003", "link002", ]) in fsList) 
self.assertTrue(self.buildPath(["tree22", "dir003", "link003", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir002", "file005", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir002", "file009", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir004", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir004", "file001", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir004", "file002", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir004", "file003", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir005", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir005", "file001", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir005", "file002", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir005", "file003", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir005", "link001", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir005", "link002", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir006", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir006", "file001", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir006", "link001", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir006", "link002", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir007", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir007", "file001", ]) in fsList) self.assertTrue(self.buildPath(["tree22", "dir008", "file001", ]) in fsList) def testAddDirContents_101(self): """ Attempt to add a soft link; excludeFiles and dereference set. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeFiles = True self.assertRaises(ValueError, fsList.addDirContents, path, True, True, 1, True) path = self.buildPath(["tree5", "dir002", "link001"]) # link to dir003 fsList = FilesystemList() fsList.excludeFiles = True count = fsList.addDirContents(path, True, True, 1, True) self.assertEqual(1, count) self.assertEqual([path], fsList) def testAddDirContents_102(self): """ Attempt to add a soft link; excludeDirs and dereference set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeDirs = True self.assertRaises(ValueError, fsList.addDirContents, path, True, True, 1, True) path = self.buildPath(["tree5", "dir002", "link001"]) # link to dir003 fsList = FilesystemList() fsList.excludeDirs = True count = fsList.addDirContents(path, True, True, 1, True) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDirContents_103(self): """ Attempt to add a soft link; excludeLinks and dereference set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeLinks = True self.assertRaises(ValueError, fsList.addDirContents, path, True, True, 1, True) path = self.buildPath(["tree5", "dir002", "link001"]) # link to dir003 fsList = FilesystemList() fsList.excludeLinks = True count = fsList.addDirContents(path, True, True, 1, True) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDirContents_104(self): """ Attempt to add a soft link; with excludePaths including the path, with dereference=True. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePaths = [ path ] self.assertRaises(ValueError, fsList.addDirContents, path, True, True, 1, True) path = self.buildPath(["tree5", "dir002", "link001"]) # link to dir003 fsList = FilesystemList() fsList.excludePaths = [ path ] count = fsList.addDirContents(path, True, True, 1, True) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDirContents_105(self): """ Attempt to add a soft link; with excludePatterns matching the path, with dereference=True. """ self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] self.assertRaises(ValueError, fsList.addDirContents, path, True, True, 1, True) path = self.buildPath(["tree5", "dir002", "link001"]) # link to dir003 fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] count = fsList.addDirContents(path, True, True, 1, True) self.assertEqual(0, count) self.assertEqual([], fsList) def testAddDirContents_106(self): """ Attempt to add a link to a file, with dereference=True. """ self.extractTar("tree9") path = self.buildPath(["tree9", "dir002", "link003", ]) fsList = FilesystemList() self.assertRaises(ValueError, fsList.addDirContents, path, True, True, 1, True) def testAddDirContents_107(self): """ Attempt to add a link to a directory (which should add its contents), with dereference=True. 
""" self.extractTar("tree9") path = self.buildPath(["tree9", "link002" ]) fsList = FilesystemList() count = fsList.addDirContents(path, True, True, 1, True) self.assertEqual(13, count) self.assertEqual(13, len(fsList)) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link002", "dir001", ]) in fsList) # duplicated self.assertTrue(self.buildPath([ "tree9", "link002", "dir002", ]) in fsList) # duplicated self.assertTrue(self.buildPath([ "tree9", "link002", "file001", ]) in fsList) # duplicated self.assertTrue(self.buildPath([ "tree9", "link002", "file002", ]) in fsList) # duplicated self.assertTrue(self.buildPath([ "tree9", "link002", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link002", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link002", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link002", "link004", ]) in fsList) def testAddDirContents_108(self): """ Attempt to add an invalid link (i.e. a link that points to something that doesn't exist), and dereference=True. """ self.extractTar("tree10") path = self.buildPath(["tree10", "link001"]) fsList = FilesystemList() self.assertRaises(ValueError, fsList.addDirContents, path, True, True, 1, True) def testAddDirContents_109(self): """ Attempt to add directory containing an invalid link (i.e. a link that points to something that doesn't exist), and dereference=True. """ self.extractTar("tree10") path = self.buildPath(["tree10"]) fsList = FilesystemList() count = fsList.addDirContents(path, True, True, 1, True) self.assertEqual(3, count) self.assertEqual(3, len(fsList)) self.assertTrue(self.buildPath([ "tree10", ]) in fsList) self.assertTrue(self.buildPath([ "tree10", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree10", "dir002", ]) in fsList) def testAddDirContents_110(self): """ Attempt to add a soft link; with excludeBasenamePatterns matching the path, and dereference=True. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "link001", ] self.assertRaises(ValueError, fsList.addDirContents, path, True, True, 1, True) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "link001", ] count = fsList.addDirContents(path, True, True, 1, True) self.assertEqual(0, count) self.assertEqual([], fsList) ##################### # Test removeFiles() ##################### def testRemoveFiles_001(self): """ Test with an empty list and a pattern of None. """ fsList = FilesystemList() count = fsList.removeFiles(pattern=None) self.assertEqual(0, count) def testRemoveFiles_002(self): """ Test with an empty list and a non-empty pattern. """ fsList = FilesystemList() count = fsList.removeFiles(pattern="pattern") self.assertEqual(0, count) self.assertRaises(ValueError, fsList.removeFiles, pattern="*.jpg") def testRemoveFiles_003(self): """ Test with a non-empty list (files only) and a pattern of None. 
""" self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(8, count) self.assertEqual(8, len(fsList)) self.assertTrue(self.buildPath([ "tree1", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file007", ]) in fsList) count = fsList.removeFiles(pattern=None) self.assertEqual(7, count) self.assertEqual(1, len(fsList)) self.assertTrue(self.buildPath([ "tree1", ]) in fsList) def testRemoveFiles_004(self): """ Test with a non-empty list (directories only) and a pattern of None. """ self.extractTar("tree2") path = self.buildPath(["tree2"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(11, count) self.assertEqual(11, len(fsList)) self.assertTrue(self.buildPath([ "tree2", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir009", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir010", ]) in fsList) count = fsList.removeFiles(pattern=None) self.assertEqual(0, count) self.assertEqual(11, len(fsList)) 
self.assertTrue(self.buildPath([ "tree2", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir009", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir010", ]) in fsList) def testRemoveFiles_005(self): """ Test with a non-empty list (files and directories) and a pattern of None. """ self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(81, count) self.assertEqual(81, len(fsList)) self.assertTrue(self.buildPath([ "tree4", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ 
"tree4", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", 
]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeFiles(pattern=None) self.assertEqual(44, count) self.assertEqual(37, len(fsList)) self.assertTrue(self.buildPath([ "tree4", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) def testRemoveFiles_006(self): """ Test with a 
non-empty list (files, directories and links) and a pattern of None.
        """
        self.extractTar("tree9")
        path = self.buildPath(["tree9"])
        fsList = FilesystemList()
        count = fsList.addDirContents(path)
        self.assertEqual(22, count)
        self.assertEqual(22, len(fsList))
        expected = [ [ "tree9", ],
                     [ "tree9", "dir001", ],
                     [ "tree9", "dir001", "dir001", ],
                     [ "tree9", "dir001", "dir002", ],
                     [ "tree9", "dir001", "file001", ],
                     [ "tree9", "dir001", "file002", ],
                     [ "tree9", "dir001", "link001", ],
                     [ "tree9", "dir001", "link002", ],
                     [ "tree9", "dir001", "link003", ],
                     [ "tree9", "dir002", ],
                     [ "tree9", "dir002", "dir001", ],
                     [ "tree9", "dir002", "dir002", ],
                     [ "tree9", "dir002", "file001", ],
                     [ "tree9", "dir002", "file002", ],
                     [ "tree9", "dir002", "link001", ],
                     [ "tree9", "dir002", "link002", ],
                     [ "tree9", "dir002", "link003", ],
                     [ "tree9", "dir002", "link004", ],
                     [ "tree9", "file001", ],
                     [ "tree9", "file002", ],
                     [ "tree9", "link001", ],
                     [ "tree9", "link002", ], ]
        for parts in expected:
            self.assertTrue(self.buildPath(parts) in fsList)
        count = fsList.removeFiles(pattern=None)
        self.assertEqual(10, count)
        self.assertEqual(12, len(fsList))
        remaining = [ [ "tree9", ],
                      [ "tree9", "dir001", ],
                      [ "tree9", "dir001", "dir001", ],
                      [ "tree9", "dir001", "dir002", ],
                      [ "tree9", "dir001", "link003", ],
                      [ "tree9", "dir002", ],
                      [ "tree9", "dir002", "dir001", ],
                      [ "tree9", "dir002", "dir002", ],
                      [ "tree9", "dir002", "link001", ],
                      [ "tree9", "dir002", "link002", ],
                      [ "tree9", "link001", ],
                      [ "tree9", "link002", ], ]
        for parts in remaining:
            self.assertTrue(self.buildPath(parts) in fsList)

    def testRemoveFiles_007(self):
        """
        Test with a non-empty list (files and directories, some nonexistent)
        and a pattern of None.
        """
        self.extractTar("tree4")
        path = self.buildPath(["tree4"])
        fsList = FilesystemList()
        count = fsList.addDirContents(path)
        self.assertEqual(81, count)
        fsList.append(self.buildPath([ "tree4", INVALID_FILE, ]))  # file won't exist on disk
        self.assertEqual(82, len(fsList))
        layout = [ ("dir001", 3, 4), ("dir002", 3, 8), ("dir003", 8, 7),
                   ("dir004", 3, 1), ("dir005", 8, 7), ("dir006", 5, 10), ]
        expected = [ [ "tree4", ], [ "tree4", INVALID_FILE, ], ]
        for (subdir, dirs, files) in layout:
            expected.append([ "tree4", subdir, ])
            expected += [ [ "tree4", subdir, "dir%03d" % i, ] for i in range(1, dirs + 1) ]
            expected += [ [ "tree4", subdir, "file%03d" % i, ] for i in range(1, files + 1) ]
        expected += [ [ "tree4", "file%03d" % i, ] for i in range(1, 8) ]
        for parts in expected:
            self.assertTrue(self.buildPath(parts) in fsList)
        count = fsList.removeFiles(pattern=None)
        self.assertEqual(44, count)
        self.assertEqual(38, len(fsList))
        remaining = [ [ "tree4", ], [ "tree4", INVALID_FILE, ], ]
        for (subdir, dirs, _) in layout:
            remaining.append([ "tree4", subdir, ])
            remaining += [ [ "tree4", subdir, "dir%03d" % i, ] for i in range(1, dirs + 1) ]
        for parts in remaining:
            self.assertTrue(self.buildPath(parts) in fsList)

    def testRemoveFiles_008(self):
        """
        Test with a non-empty list (spaces in path names) and a non-empty
        pattern that matches none of the files.
        """
        self.extractTar("tree11")
        path = self.buildPath(["tree11", ])
        fsList = FilesystemList()
        count = fsList.addDirContents(path)
        self.assertEqual(16, count)
        self.assertEqual(16, len(fsList))
        expected = [ [ "tree11", ],
                     [ "tree11", "file001", ],
                     [ "tree11", "file with spaces", ],
                     [ "tree11", "link001", ],
                     [ "tree11", "link002", ],
                     [ "tree11", "link003", ],
                     [ "tree11", "link with spaces", ],
                     [ "tree11", "dir002", ],
                     [ "tree11", "dir002", "file001", ],
                     [ "tree11", "dir002", "file002", ],
                     [ "tree11", "dir002", "file003", ],
                     [ "tree11", "dir with spaces", ],
                     [ "tree11", "dir with spaces", "file001", ],
                     [ "tree11", "dir with spaces", "file with spaces", ],
                     [ "tree11", "dir with spaces", "link002", ],
                     [ "tree11", "dir with spaces", "link with spaces", ], ]
        for parts in expected:
            self.assertTrue(self.buildPath(parts) in fsList)
        count = fsList.removeFiles(pattern=NOMATCH_PATTERN)
        self.assertEqual(0, count)
        self.assertEqual(16, len(fsList))
        for parts in expected:
            self.assertTrue(self.buildPath(parts) in fsList)

    def testRemoveFiles_009(self):
        """
        Test with a non-empty list (files only) and a non-empty pattern that
        matches none of the files.
""" self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(8, count) self.assertEqual(8, len(fsList)) self.assertTrue(self.buildPath([ "tree1", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file007", ]) in fsList) count = fsList.removeFiles(pattern=NOMATCH_PATTERN) self.assertEqual(0, count) self.assertEqual(8, len(fsList)) self.assertTrue(self.buildPath([ "tree1", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file007", ]) in fsList) def testRemoveFiles_010(self): """ Test with a non-empty list (directories only) and a non-empty pattern that matches none of the files. 
""" self.extractTar("tree2") path = self.buildPath(["tree2"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(11, count) self.assertEqual(11, len(fsList)) self.assertTrue(self.buildPath([ "tree2", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir009", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir010", ]) in fsList) count = fsList.removeFiles(pattern=NOMATCH_PATTERN) self.assertEqual(0, count) self.assertEqual(11, len(fsList)) self.assertTrue(self.buildPath([ "tree2", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir009", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir010", ]) in fsList) def testRemoveFiles_011(self): """ Test with a non-empty list (files and directories) and a non-empty pattern that matches none of the files. 
""" self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(81, count) self.assertEqual(81, len(fsList)) self.assertTrue(self.buildPath([ "tree4", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ 
"tree4", "dir005", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeFiles(pattern=NOMATCH_PATTERN) self.assertEqual(0, count) self.assertEqual(81, len(fsList)) self.assertTrue(self.buildPath([ "tree4", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", 
"dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList) def testRemoveFiles_012(self): """ Test with a non-empty list (files, directories and links) and a non-empty pattern that matches none of the files. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(22, count) self.assertEqual(22, len(fsList)) self.assertTrue(self.buildPath([ "tree9", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) 
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fsList)
      count = fsList.removeFiles(pattern=NOMATCH_PATTERN)
      self.assertEqual(0, count)
      self.assertEqual(22, len(fsList))
      expected = [ [ "tree9", ], [ "tree9", "dir001", ], [ "tree9", "dir002", ],
                   [ "tree9", "file001", ], [ "tree9", "file002", ],
                   [ "tree9", "link001", ], [ "tree9", "link002", ], ]
      expected += [ [ "tree9", "dir001", c ] for c in [ "dir001", "dir002", "file001", "file002", "link001", "link002", "link003", ] ]
      expected += [ [ "tree9", "dir002", c ] for c in [ "dir001", "dir002", "file001", "file002", "link001", "link002", "link003", "link004", ] ]
      self.assertEqual(22, len(expected))
      for pieces in expected:
         self.assertTrue(self.buildPath(pieces) in fsList)

   def testRemoveFiles_013(self):
      """
      Test with a non-empty list (files and directories, some nonexistent)
      and a non-empty pattern that matches none of the files.
      """
      self.extractTar("tree4")
      path = self.buildPath(["tree4"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(81, count)
      fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk
      self.assertEqual(82, len(fsList))
      expected = [ [ "tree4", ], [ "tree4", INVALID_FILE, ], ]
      expected += [ [ "tree4", "file%03d" % i ] for i in range(1, 8) ]
      layout = [ ("dir001", 3, 4), ("dir002", 3, 8), ("dir003", 8, 7),
                 ("dir004", 3, 1), ("dir005", 8, 7), ("dir006", 5, 10), ] # (name, subdirs, files)
      for name, dirs, files in layout:
         expected += [ [ "tree4", name, ] ]
         expected += [ [ "tree4", name, "dir%03d" % i ] for i in range(1, dirs+1) ]
         expected += [ [ "tree4", name, "file%03d" % i ] for i in range(1, files+1) ]
      self.assertEqual(82, len(expected))
      for pieces in expected:
         self.assertTrue(self.buildPath(pieces) in fsList)
      count = fsList.removeFiles(pattern=NOMATCH_PATTERN)
      self.assertEqual(0, count)
      self.assertEqual(82, len(fsList))
      for pieces in expected:
         self.assertTrue(self.buildPath(pieces) in fsList)

   def testRemoveFiles_014(self):
      """
      Test with a non-empty list (spaces in path names) and a non-empty
      pattern that matches none of the files.
      """
      self.extractTar("tree11")
      path = self.buildPath(["tree11", ])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(16, count)
      self.assertEqual(16, len(fsList))
      expected = [ [ "tree11", ], [ "tree11", "file001", ], [ "tree11", "file with spaces", ],
                   [ "tree11", "link001", ], [ "tree11", "link002", ], [ "tree11", "link003", ],
                   [ "tree11", "link with spaces", ],
                   [ "tree11", "dir002", ], [ "tree11", "dir002", "file001", ],
                   [ "tree11", "dir002", "file002", ], [ "tree11", "dir002", "file003", ],
                   [ "tree11", "dir with spaces", ], [ "tree11", "dir with spaces", "file001", ],
                   [ "tree11", "dir with spaces", "file with spaces", ],
                   [ "tree11", "dir with spaces", "link002", ],
                   [ "tree11", "dir with spaces", "link with spaces", ], ]
      self.assertEqual(16, len(expected))
      for pieces in expected:
         self.assertTrue(self.buildPath(pieces) in fsList)
      count = fsList.removeFiles(pattern=NOMATCH_PATTERN)
      self.assertEqual(0, count)
      self.assertEqual(16, len(fsList))
      for pieces in expected:
         self.assertTrue(self.buildPath(pieces) in fsList)

   def testRemoveFiles_015(self):
      """
      Test with a non-empty list (files only) and a non-empty pattern that
      matches some of the files.
      """
      self.extractTar("tree1")
      path = self.buildPath(["tree1"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(8, count)
      self.assertEqual(8, len(fsList))
      expected = [ [ "tree1", ], ] + [ [ "tree1", "file%03d" % i ] for i in range(1, 8) ]
      for pieces in expected:
         self.assertTrue(self.buildPath(pieces) in fsList)
      count = fsList.removeFiles(pattern=".*tree1.*file00[67]")
      self.assertEqual(2, count)
      self.assertEqual(6, len(fsList))
      expected = [ [ "tree1", ], ] + [ [ "tree1", "file%03d" % i ] for i in range(1, 6) ]
      for pieces in expected:
         self.assertTrue(self.buildPath(pieces) in fsList)

   def testRemoveFiles_016(self):
      """
      Test with a non-empty list (directories only) and a non-empty pattern
      that matches some of the files.
      """
      self.extractTar("tree2")
      path = self.buildPath(["tree2"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(11, count)
      self.assertEqual(11, len(fsList))
      expected = [ [ "tree2", ], ] + [ [ "tree2", "dir%03d" % i ] for i in range(1, 11) ]
      for pieces in expected:
         self.assertTrue(self.buildPath(pieces) in fsList)
      count = fsList.removeFiles(pattern=".*tree2.*")
      self.assertEqual(0, count) # pattern matches, but every entry is a directory, not a file
      self.assertEqual(11, len(fsList))
      for pieces in expected:
         self.assertTrue(self.buildPath(pieces) in fsList)
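   # The removeFiles tests above all exercise one contract: only entries that
   # both match the pattern and are plain files on disk get removed, while
   # directories and nonexistent entries survive even when the pattern matches
   # (testRemoveFiles_016 removes nothing with ".*tree2.*" because every entry
   # is a directory).  The sketch below is a hypothetical, simplified model of
   # that contract, not Cedar Backup code: remove_matching_files and the
   # is_file callback are invented names, and is_file stands in for
   # os.path.isfile so the model can be checked without touching the disk.

```python
import re

def remove_matching_files(entries, pattern, is_file):
    """Remove entries matching pattern for which is_file(entry) is True.

    Hypothetical model of FilesystemList.removeFiles() semantics: the
    pattern is applied with re.match() (anchored at the start, hence the
    leading ".*" in the tests' patterns), and non-files survive even when
    they match.  Returns the number of entries removed.
    """
    compiled = re.compile(pattern)
    removed = [entry for entry in entries if compiled.match(entry) and is_file(entry)]
    for entry in removed:
        entries.remove(entry)
    return len(removed)
```

   # For example, with a list holding one directory and two files, a pattern
   # matching both files removes exactly those two and reports a count of 2.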
   def testRemoveFiles_017(self):
      """
      Test with a non-empty list (files and directories) and a non-empty
      pattern that matches some of the files.
      """
      self.extractTar("tree4")
      path = self.buildPath(["tree4"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(81, count)
      self.assertEqual(81, len(fsList))
      expected = [ [ "tree4", ], ] + [ [ "tree4", "file%03d" % i ] for i in range(1, 8) ]
      layout = [ ("dir001", 3, 4), ("dir002", 3, 8), ("dir003", 8, 7),
                 ("dir004", 3, 1), ("dir005", 8, 7), ("dir006", 5, 10), ] # (name, subdirs, files)
      for name, dirs, files in layout:
         expected += [ [ "tree4", name, ] ]
         expected += [ [ "tree4", name, "dir%03d" % i ] for i in range(1, dirs+1) ]
         expected += [ [ "tree4", name, "file%03d" % i ] for i in range(1, files+1) ]
      self.assertEqual(81, len(expected))
      for pieces in expected:
         self.assertTrue(self.buildPath(pieces) in fsList)
      count = fsList.removeFiles(pattern=".*tree4.*dir006.*")
      self.assertEqual(10, count)
      self.assertEqual(71, len(fsList))
      expected = [ pieces for pieces in expected
                   if not (len(pieces) == 3 and pieces[1] == "dir006" and pieces[2].startswith("file")) ]
      self.assertEqual(71, len(expected))
      for pieces in expected:
         self.assertTrue(self.buildPath(pieces) in fsList)

   def testRemoveFiles_018(self):
      """
      Test with a non-empty list (files, directories and links) and a
      non-empty pattern that matches some of the files.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(22, count)
      self.assertEqual(22, len(fsList))
      expected = [ [ "tree9", ], [ "tree9", "dir001", ], [ "tree9", "dir002", ],
                   [ "tree9", "file001", ], [ "tree9", "file002", ],
                   [ "tree9", "link001", ], [ "tree9", "link002", ], ]
      expected += [ [ "tree9", "dir001", c ] for c in [ "dir001", "dir002", "file001", "file002", "link001", "link002", "link003", ] ]
      expected += [ [ "tree9", "dir002", c ] for c in [ "dir001", "dir002", "file001", "file002", "link001", "link002", "link003", "link004", ] ]
      self.assertEqual(22, len(expected))
      for pieces in expected:
         self.assertTrue(self.buildPath(pieces) in fsList)
      count = fsList.removeFiles(pattern=".*tree9.*dir002.*")
      self.assertEqual(4, count)
      self.assertEqual(18, len(fsList))
      removed = [ [ "tree9", "dir002", c ] for c in [ "file001", "file002", "link003", "link004", ] ]
      for pieces in [ p for p in expected if p not in removed ]:
         self.assertTrue(self.buildPath(pieces) in fsList)

   def testRemoveFiles_019(self):
      """
      Test with a non-empty list (files and directories, some nonexistent)
      and a non-empty pattern that matches some of the files.
""" self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(81, count) fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk self.assertEqual(82, len(fsList)) self.assertTrue(self.buildPath([ "tree4", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ 
"tree4", "dir005", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", 
"file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeFiles(pattern=".*dir001.*file002.*") self.assertEqual(1, count) self.assertEqual(81, len(fsList)) self.assertTrue(self.buildPath([ "tree4", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", 
"dir002", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList) def testRemoveFiles_020(self): """ Test with a non-empty list (spaces in path names) and a non-empty pattern that matches some of the files. """ self.extractTar("tree11") path = self.buildPath(["tree11", ]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(16, count) self.assertEqual(16, len(fsList)) self.assertTrue(self.buildPath([ "tree11", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) count = fsList.removeFiles(pattern=".*with spaces.*") self.assertEqual(6, count) self.assertEqual(10, len(fsList)) self.assertTrue(self.buildPath([ "tree11", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) def testRemoveFiles_021(self): """ Test with a non-empty list (files only) and a non-empty pattern that matches anything. 
""" self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(8, count) self.assertEqual(8, len(fsList)) self.assertTrue(self.buildPath([ "tree1", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file007", ]) in fsList) count = fsList.removeFiles(pattern=".*") self.assertEqual(7, count) self.assertEqual(1, len(fsList)) self.assertTrue(self.buildPath([ "tree1", ]) in fsList) def testRemoveFiles_022(self): """ Test with a non-empty list (directories only) and a non-empty pattern that matches anything. """ self.extractTar("tree2") path = self.buildPath(["tree2"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(11, count) self.assertEqual(11, len(fsList)) self.assertTrue(self.buildPath([ "tree2", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir009", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir010", ]) in fsList) count = fsList.removeFiles(pattern=".*") self.assertEqual(0, count) self.assertEqual(11, len(fsList)) 
self.assertTrue(self.buildPath([ "tree2", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir009", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir010", ]) in fsList) def testRemoveFiles_023(self): """ Test with a non-empty list (files and directories) and a non-empty pattern that matches anything. """ self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(81, count) self.assertEqual(81, len(fsList)) self.assertTrue(self.buildPath([ "tree4", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.assertTrue(self.buildPath([ 
"tree4", "dir006", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeFiles(pattern=".*") self.assertEqual(44, count) self.assertEqual(37, len(fsList)) self.assertTrue(self.buildPath([ "tree4", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", 
"dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) def 
testRemoveFiles_024(self): """ Test with a non-empty list (files, directories and links) and a non-empty pattern that matches all of the files. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(22, count) self.assertEqual(22, len(fsList)) self.assertTrue(self.buildPath([ "tree9", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fsList) count = fsList.removeFiles(pattern=".*") self.assertEqual(10, 
count) self.assertEqual(12, len(fsList)) self.assertTrue(self.buildPath([ "tree9", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fsList) def testRemoveFiles_025(self): """ Test with a non-empty list (files and directories, some nonexistent) and a non-empty pattern that matches all of the files. 
""" self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(81, count) fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk self.assertEqual(82, len(fsList)) self.assertTrue(self.buildPath([ "tree4", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ 
"tree4", "dir005", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", 
"file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeFiles(pattern=".*") self.assertEqual(44, count) self.assertEqual(38, len(fsList)) self.assertTrue(self.buildPath([ "tree4", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) def testRemoveFiles_026(self): """ Test with a non-empty list (spaces in path names) and a non-empty pattern that matches all of the files. 
""" self.extractTar("tree11") path = self.buildPath(["tree11", ]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(16, count) self.assertEqual(16, len(fsList)) self.assertTrue(self.buildPath([ "tree11", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) count = fsList.removeFiles(pattern=".*") self.assertEqual(11, count) self.assertEqual(5, len(fsList)) self.assertTrue(self.buildPath([ "tree11", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) #################### # Test removeDirs() #################### def testRemoveDirs_001(self): """ Test with an empty list and a pattern of None. 
""" fsList = FilesystemList() count = fsList.removeDirs(pattern=None) self.assertEqual(0, count) def testRemoveDirs_002(self): """ Test with an empty list and a non-empty pattern. """ fsList = FilesystemList() count = fsList.removeDirs(pattern="pattern") self.assertEqual(0, count) self.assertRaises(ValueError, fsList.removeDirs, pattern="*.jpg") def testRemoveDirs_003(self): """ Test with a non-empty list (files only) and a pattern of None. """ self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(8, count) self.assertEqual(8, len(fsList)) self.assertTrue(self.buildPath([ "tree1", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file007", ]) in fsList) count = fsList.removeDirs(pattern=None) self.assertEqual(1, count) self.assertEqual(7, len(fsList)) self.assertTrue(self.buildPath([ "tree1", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file007", ]) in fsList) def testRemoveDirs_004(self): """ Test with a non-empty list (directories only) and a pattern of None. 
""" self.extractTar("tree2") path = self.buildPath(["tree2"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(11, count) self.assertEqual(11, len(fsList)) self.assertTrue(self.buildPath([ "tree2", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir009", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir010", ]) in fsList) count = fsList.removeDirs(pattern=None) self.assertEqual(11, count) self.assertEqual(0, len(fsList)) def testRemoveDirs_005(self): """ Test with a non-empty list (files and directories) and a pattern of None. 
""" self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(81, count) self.assertEqual(81, len(fsList)) self.assertTrue(self.buildPath([ "tree4", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ 
"tree4", "dir005", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeDirs(pattern=None) self.assertEqual(37, count) self.assertEqual(44, len(fsList)) self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList) def testRemoveDirs_006(self): """ Test with a non-empty list (files, directories and links) and a pattern of None. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(22, count) self.assertEqual(22, len(fsList)) self.assertTrue(self.buildPath([ "tree9", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fsList) count = fsList.removeDirs(pattern=None) self.assertEqual(12, count) self.assertEqual(10, len(fsList)) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fsList) def testRemoveDirs_007(self): """ Test with a non-empty list (files and directories, some nonexistent) and a pattern of None. """ self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(81, count) fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk self.assertEqual(82, len(fsList)) self.assertTrue(self.buildPath([ "tree4", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", 
"dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList) 
      for i in range(1, 4):
         self.assertTrue(self.buildPath([ "tree4", "dir004", "dir%03d" % i, ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList)
      for i in range(1, 9):
         self.assertTrue(self.buildPath([ "tree4", "dir005", "dir%03d" % i, ]) in fsList)
      for i in range(1, 8):
         self.assertTrue(self.buildPath([ "tree4", "dir005", "file%03d" % i, ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList)
      for i in range(1, 6):
         self.assertTrue(self.buildPath([ "tree4", "dir006", "dir%03d" % i, ]) in fsList)
      for i in range(1, 11):
         self.assertTrue(self.buildPath([ "tree4", "dir006", "file%03d" % i, ]) in fsList)
      for i in range(1, 8):
         self.assertTrue(self.buildPath([ "tree4", "file%03d" % i, ]) in fsList)
      count = fsList.removeDirs(pattern=None)
      self.assertEqual(37, count)
      self.assertEqual(45, len(fsList))
      self.assertTrue(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
      for i in range(1, 5):
         self.assertTrue(self.buildPath([ "tree4", "dir001", "file%03d" % i, ]) in fsList)
      for i in range(1, 9):
         self.assertTrue(self.buildPath([ "tree4", "dir002", "file%03d" % i, ]) in fsList)
      for i in range(1, 8):
         self.assertTrue(self.buildPath([ "tree4", "dir003", "file%03d" % i, ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      for i in range(1, 8):
         self.assertTrue(self.buildPath([ "tree4", "dir005", "file%03d" % i, ]) in fsList)
      for i in range(1, 11):
         self.assertTrue(self.buildPath([ "tree4", "dir006", "file%03d" % i, ]) in fsList)
      for i in range(1, 8):
         self.assertTrue(self.buildPath([ "tree4", "file%03d" % i, ]) in fsList)

   def testRemoveDirs_008(self):
      """
      Test with a non-empty list (spaces in path names) and a pattern of None.
      """
      self.extractTar("tree11")
      path = self.buildPath(["tree11", ])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(16, count)
      self.assertEqual(16, len(fsList))
      self.assertTrue(self.buildPath([ "tree11", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
      count = fsList.removeDirs(pattern=None)
      self.assertEqual(5, count)
      self.assertEqual(11, len(fsList))
      self.assertTrue(self.buildPath([ "tree11", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)

   def testRemoveDirs_009(self):
      """
      Test with a non-empty list (files only) and a non-empty pattern that
      matches none of them.
      """
      self.extractTar("tree1")
      path = self.buildPath(["tree1"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(8, count)
      self.assertEqual(8, len(fsList))
      self.assertTrue(self.buildPath([ "tree1", ]) in fsList)
      for i in range(1, 8):
         self.assertTrue(self.buildPath([ "tree1", "file%03d" % i, ]) in fsList)
      count = fsList.removeDirs(pattern=NOMATCH_PATTERN)
      self.assertEqual(0, count)
      self.assertEqual(8, len(fsList))
      self.assertTrue(self.buildPath([ "tree1", ]) in fsList)
      for i in range(1, 8):
         self.assertTrue(self.buildPath([ "tree1", "file%03d" % i, ]) in fsList)

   def testRemoveDirs_010(self):
      """
      Test with a non-empty list (directories only) and a non-empty pattern
      that matches none of them.
      """
      self.extractTar("tree2")
      path = self.buildPath(["tree2"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(11, count)
      self.assertEqual(11, len(fsList))
      self.assertTrue(self.buildPath([ "tree2", ]) in fsList)
      for i in range(1, 11):
         self.assertTrue(self.buildPath([ "tree2", "dir%03d" % i, ]) in fsList)
      count = fsList.removeDirs(pattern=NOMATCH_PATTERN)
      self.assertEqual(0, count)
      self.assertEqual(11, len(fsList))
      self.assertTrue(self.buildPath([ "tree2", ]) in fsList)
      for i in range(1, 11):
         self.assertTrue(self.buildPath([ "tree2", "dir%03d" % i, ]) in fsList)

   def testRemoveDirs_011(self):
      """
      Test with a non-empty list (files and directories) and a non-empty
      pattern that matches none of them.
      """
      self.extractTar("tree4")
      path = self.buildPath(["tree4"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(81, count)
      self.assertEqual(81, len(fsList))
      layout = { "dir001": (3, 4), "dir002": (3, 8), "dir003": (8, 7),
                 "dir004": (3, 1), "dir005": (8, 7), "dir006": (5, 10), } # subdir -> (dir count, file count)
      expected = [ [ "tree4", ], ]
      expected += [ [ "tree4", "file%03d" % i, ] for i in range(1, 8) ]
      for subdir in sorted(layout.keys()):
         (dirs, files) = layout[subdir]
         expected.append([ "tree4", subdir, ])
         expected += [ [ "tree4", subdir, "dir%03d" % i, ] for i in range(1, dirs + 1) ]
         expected += [ [ "tree4", subdir, "file%03d" % i, ] for i in range(1, files + 1) ]
      for parts in expected:
         self.assertTrue(self.buildPath(parts) in fsList)
      count = fsList.removeDirs(pattern=NOMATCH_PATTERN)
      self.assertEqual(0, count)
      self.assertEqual(81, len(fsList))
      for parts in expected:
         self.assertTrue(self.buildPath(parts) in fsList)

   def testRemoveDirs_012(self):
      """
      Test with a non-empty list (files, directories and links) and a
      non-empty pattern that matches none of them.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(22, count)
      self.assertEqual(22, len(fsList))
      expected = [ [ "tree9", ], [ "tree9", "file001", ], [ "tree9", "file002", ],
                   [ "tree9", "link001", ], [ "tree9", "link002", ], ]
      for subdir in [ "dir001", "dir002", ]:
         expected.append([ "tree9", subdir, ])
         expected += [ [ "tree9", subdir, "dir%03d" % i, ] for i in range(1, 3) ]
         expected += [ [ "tree9", subdir, "file%03d" % i, ] for i in range(1, 3) ]
      expected += [ [ "tree9", "dir001", "link%03d" % i, ] for i in range(1, 4) ]
      expected += [ [ "tree9", "dir002", "link%03d" % i, ] for i in range(1, 5) ]
      for parts in expected:
         self.assertTrue(self.buildPath(parts) in fsList)
      count = fsList.removeDirs(pattern=NOMATCH_PATTERN)
      self.assertEqual(0, count)
      self.assertEqual(22, len(fsList))
      for parts in expected:
         self.assertTrue(self.buildPath(parts) in fsList)

   def testRemoveDirs_013(self):
      """
      Test with a non-empty list (files and directories, some nonexistent) and
      a non-empty pattern that matches none of them.
      """
      self.extractTar("tree4")
      path = self.buildPath(["tree4"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(81, count)
      fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk
      self.assertEqual(82, len(fsList))
      layout = { "dir001": (3, 4), "dir002": (3, 8), "dir003": (8, 7),
                 "dir004": (3, 1), "dir005": (8, 7), "dir006": (5, 10), } # subdir -> (dir count, file count)
      expected = [ [ "tree4", ], [ "tree4", INVALID_FILE, ], ]
      expected += [ [ "tree4", "file%03d" % i, ] for i in range(1, 8) ]
      for subdir in sorted(layout.keys()):
         (dirs, files) = layout[subdir]
         expected.append([ "tree4", subdir, ])
         expected += [ [ "tree4", subdir, "dir%03d" % i, ] for i in range(1, dirs + 1) ]
         expected += [ [ "tree4", subdir, "file%03d" % i, ] for i in range(1, files + 1) ]
      for parts in expected:
         self.assertTrue(self.buildPath(parts) in fsList)
      count = fsList.removeDirs(pattern=NOMATCH_PATTERN)
      self.assertEqual(0, count)
      self.assertEqual(82, len(fsList))
      self.assertTrue(self.buildPath([ "tree4", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList)
      for i in range(1, 4):
         self.assertTrue(self.buildPath([ "tree4", "dir001", "dir%03d" % i, ]) in fsList)
      for i in range(1, 5):
         self.assertTrue(self.buildPath([ "tree4", "dir001", "file%03d" % i, ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList)
      for i in range(1, 4):
         self.assertTrue(self.buildPath([ "tree4", "dir002", "dir%03d" % i, ]) in fsList)
      for i in range(1, 9):
         self.assertTrue(self.buildPath([ "tree4", "dir002", "file%03d" % i, ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList)
      for i in range(1, 9):
         self.assertTrue(self.buildPath([ "tree4", "dir003", "dir%03d" % i, ]) in fsList)
      for i in range(1, 8):
         self.assertTrue(self.buildPath([ "tree4", "dir003", "file%03d" % i, ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList)
      for i in range(1, 4):
         self.assertTrue(self.buildPath([ "tree4", "dir004", "dir%03d" % i, ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList)

   def testRemoveDirs_014(self):
      """
      Test with a non-empty list (spaces in path names) and a non-empty
      pattern that matches none of them.
      """
      self.extractTar("tree11")
      path = self.buildPath(["tree11", ])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(16, count)
      self.assertEqual(16, len(fsList))
      self.assertTrue(self.buildPath([ "tree11", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
      count = fsList.removeDirs(pattern=NOMATCH_PATTERN)
      self.assertEqual(0, count)
      self.assertEqual(16, len(fsList))
      self.assertTrue(self.buildPath([ "tree11", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)

   def testRemoveDirs_015(self):
      """
      Test with a non-empty list (files only) and a non-empty pattern that
      matches some of them.
      """
      self.extractTar("tree1")
      path = self.buildPath(["tree1"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(8, count)
      self.assertEqual(8, len(fsList))
      self.assertTrue(self.buildPath([ "tree1", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file007", ]) in fsList)
      count = fsList.removeDirs(pattern=".*tree1.file00[67]")
      self.assertEqual(0, count)
      self.assertEqual(8, len(fsList))
      self.assertTrue(self.buildPath([ "tree1", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file007", ]) in fsList)

   def testRemoveDirs_016(self):
      """
      Test with a non-empty list (directories only) and a non-empty pattern
      that matches some of them.
      """
      self.extractTar("tree2")
      path = self.buildPath(["tree2"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(11, count)
      self.assertEqual(11, len(fsList))
      self.assertTrue(self.buildPath([ "tree2", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir009", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir010", ]) in fsList)
      count = fsList.removeDirs(pattern=".*dir0[012]0")
      self.assertEqual(1, count)
      self.assertEqual(10, len(fsList))
      self.assertTrue(self.buildPath([ "tree2", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir009", ]) in fsList)

   def testRemoveDirs_017(self):
      """
      Test with a non-empty list (files and directories) and a non-empty
      pattern that matches some of them.
      """
      self.extractTar("tree4")
      path = self.buildPath(["tree4"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(81, count)
      self.assertEqual(81, len(fsList))
      self.assertTrue(self.buildPath([ "tree4", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList)
      count = fsList.removeDirs(pattern=".*dir001")
      self.assertEqual(9, count)
      self.assertEqual(72, len(fsList))
      self.assertTrue(self.buildPath([ "tree4", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList)

   def testRemoveDirs_018(self):
      """
      Test with a non-empty list (files, directories and links) and a
      non-empty pattern that matches some of them.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(22, count)
      self.assertEqual(22, len(fsList))
      self.assertTrue(self.buildPath([ "tree9", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fsList)
      count = fsList.removeDirs(pattern=".*tree9.*dir002.*")
      self.assertEqual(6, count)
      self.assertEqual(16, len(fsList))
      self.assertTrue(self.buildPath([ "tree9", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fsList)

   def testRemoveDirs_019(self):
      """
      Test with a non-empty list (files and directories, some nonexistent)
      and a non-empty pattern that matches some of them.
      """
      self.extractTar("tree4")
      path = self.buildPath(["tree4"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(81, count)
      fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk
      self.assertEqual(82, len(fsList))
      self.assertTrue(self.buildPath([ "tree4", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList)
      count = fsList.removeDirs(pattern=".*dir001")
      self.assertEqual(9, count)
      self.assertEqual(73, len(fsList))
      self.assertTrue(self.buildPath([ "tree4", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList) def 
testRemoveDirs_020(self): """ Test with a non-empty list (spaces in path names) and a non-empty pattern that matches some of them. """ self.extractTar("tree11") path = self.buildPath(["tree11", ]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(16, count) self.assertEqual(16, len(fsList)) self.assertTrue(self.buildPath([ "tree11", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) count = fsList.removeDirs(pattern=".*with spaces.*") self.assertEqual(1, count) self.assertEqual(15, len(fsList)) self.assertTrue(self.buildPath([ "tree11", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link002", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree11", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) def testRemoveDirs_021(self): """ Test with a non-empty list (files only) and a non-empty pattern that matches all of them. """ self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(8, count) self.assertEqual(8, len(fsList)) self.assertTrue(self.buildPath([ "tree1", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file007", ]) in fsList) count = fsList.removeDirs(pattern=".*") self.assertEqual(1, count) self.assertEqual(7, len(fsList)) self.assertTrue(self.buildPath([ "tree1", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file004", 
]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file007", ]) in fsList) def testRemoveDirs_022(self): """ Test with a non-empty list (directories only) and a non-empty pattern that matches all of them. """ self.extractTar("tree2") path = self.buildPath(["tree2"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(11, count) self.assertEqual(11, len(fsList)) self.assertTrue(self.buildPath([ "tree2", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir009", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir010", ]) in fsList) count = fsList.removeDirs(pattern=".*") self.assertEqual(11, count) self.assertEqual(0, len(fsList)) def testRemoveDirs_023(self): """ Test with a non-empty list (files and directories) and a non-empty pattern that matches all of them. 
""" self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(81, count) self.assertEqual(81, len(fsList)) self.assertTrue(self.buildPath([ "tree4", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ 
"tree4", "dir005", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeDirs(pattern=".*") self.assertEqual(37, count) self.assertEqual(44, len(fsList)) self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList) def testRemoveDirs_024(self): """ Test with a non-empty list (files, directories and links) and a non-empty pattern that matches all of them. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(22, count) self.assertEqual(22, len(fsList)) self.assertTrue(self.buildPath([ "tree9", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fsList) count = fsList.removeDirs(pattern=".*") self.assertEqual(12, count) self.assertEqual(10, len(fsList)) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fsList) def testRemoveDirs_025(self): """ Test with a non-empty list (files and directories, some nonexistent) and a non-empty pattern that matches all of them. """ self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(81, count) fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk self.assertEqual(82, len(fsList)) self.assertTrue(self.buildPath([ "tree4", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ 
"tree4", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", 
]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) 
        for i in range(1, 11):
            self.assertTrue(self.buildPath([ "tree4", "dir006", "file%03d" % i, ]) in fsList)
        for i in range(1, 8):
            self.assertTrue(self.buildPath([ "tree4", "file%03d" % i, ]) in fsList)
        count = fsList.removeDirs(pattern=".*")
        self.assertEqual(37, count)
        self.assertEqual(45, len(fsList))
        self.assertTrue(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
        for subdir, files in [ ("dir001", 4), ("dir002", 8), ("dir003", 7),
                               ("dir004", 1), ("dir005", 7), ("dir006", 10), ]:
            for i in range(1, files + 1):
                self.assertTrue(self.buildPath([ "tree4", subdir, "file%03d" % i, ]) in fsList)
        for i in range(1, 8):
            self.assertTrue(self.buildPath([ "tree4", "file%03d" % i, ]) in fsList)

    def testRemoveDirs_026(self):
        """
        Test with a non-empty list (spaces in path names) and a non-empty
        pattern that matches all of them.
        """
        self.extractTar("tree11")
        path = self.buildPath(["tree11", ])
        fsList = FilesystemList()
        count = fsList.addDirContents(path)
        self.assertEqual(16, count)
        self.assertEqual(16, len(fsList))
        self.assertTrue(self.buildPath([ "tree11", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "link001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "link002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "link003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
        count = fsList.removeDirs(pattern=".*")
        self.assertEqual(5, count)
        self.assertEqual(11, len(fsList))
        self.assertTrue(self.buildPath([ "tree11", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "link001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)

    #####################
    # Test removeLinks()
    #####################

    def testRemoveLinks_001(self):
        """
        Test with an empty list and a pattern of None.
        """
        fsList = FilesystemList()
        count = fsList.removeLinks(pattern=None)
        self.assertEqual(0, count)

    def testRemoveLinks_002(self):
        """
        Test with an empty list and a non-empty pattern.
        """
        fsList = FilesystemList()
        count = fsList.removeLinks(pattern="pattern")
        self.assertEqual(0, count)
        self.assertRaises(ValueError, fsList.removeLinks, pattern="*.jpg")  # "*.jpg" is not a valid regular expression

    def testRemoveLinks_003(self):
        """
        Test with a non-empty list (files only) and a pattern of None.
        """
        self.extractTar("tree1")
        path = self.buildPath(["tree1"])
        fsList = FilesystemList()
        count = fsList.addDirContents(path)
        self.assertEqual(8, count)
        self.assertEqual(8, len(fsList))
        self.assertTrue(self.buildPath([ "tree1", ]) in fsList)
        for i in range(1, 8):
            self.assertTrue(self.buildPath([ "tree1", "file%03d" % i, ]) in fsList)
        count = fsList.removeLinks(pattern=None)
        self.assertEqual(0, count)
        self.assertEqual(8, len(fsList))
        self.assertTrue(self.buildPath([ "tree1", ]) in fsList)
        for i in range(1, 8):
            self.assertTrue(self.buildPath([ "tree1", "file%03d" % i, ]) in fsList)

    def testRemoveLinks_004(self):
        """
        Test with a non-empty list (directories only) and a pattern of None.
        """
        self.extractTar("tree2")
        path = self.buildPath(["tree2"])
        fsList = FilesystemList()
        count = fsList.addDirContents(path)
        self.assertEqual(11, count)
        self.assertEqual(11, len(fsList))
        self.assertTrue(self.buildPath([ "tree2", ]) in fsList)
        for i in range(1, 11):
            self.assertTrue(self.buildPath([ "tree2", "dir%03d" % i, ]) in fsList)
        count = fsList.removeLinks(pattern=None)
        self.assertEqual(0, count)
        self.assertEqual(11, len(fsList))
        self.assertTrue(self.buildPath([ "tree2", ]) in fsList)
        for i in range(1, 11):
            self.assertTrue(self.buildPath([ "tree2", "dir%03d" % i, ]) in fsList)

    def testRemoveLinks_005(self):
        """
        Test with a non-empty list (files and directories) and a pattern of None.
        """
        self.extractTar("tree4")
        path = self.buildPath(["tree4"])
        fsList = FilesystemList()
        count = fsList.addDirContents(path)
        self.assertEqual(81, count)
        self.assertEqual(81, len(fsList))
        self.assertTrue(self.buildPath([ "tree4", ]) in fsList)
        for subdir, dirs, files in [ ("dir001", 3, 4), ("dir002", 3, 8), ("dir003", 8, 7),
                                     ("dir004", 3, 1), ("dir005", 8, 7), ("dir006", 5, 10), ]:
            self.assertTrue(self.buildPath([ "tree4", subdir, ]) in fsList)
            for i in range(1, dirs + 1):
                self.assertTrue(self.buildPath([ "tree4", subdir, "dir%03d" % i, ]) in fsList)
            for i in range(1, files + 1):
                self.assertTrue(self.buildPath([ "tree4", subdir, "file%03d" % i, ]) in fsList)
        for i in range(1, 8):
            self.assertTrue(self.buildPath([ "tree4", "file%03d" % i, ]) in fsList)
        count = fsList.removeLinks(pattern=None)
        self.assertEqual(0, count)
        self.assertEqual(81, len(fsList))
        self.assertTrue(self.buildPath([ "tree4", ]) in fsList)
        for subdir, dirs, files in [ ("dir001", 3, 4), ("dir002", 3, 8), ("dir003", 8, 7),
                                     ("dir004", 3, 1), ("dir005", 8, 7), ("dir006", 5, 10), ]:
            self.assertTrue(self.buildPath([ "tree4", subdir, ]) in fsList)
            for i in range(1, dirs + 1):
                self.assertTrue(self.buildPath([ "tree4", subdir, "dir%03d" % i, ]) in fsList)
            for i in range(1, files + 1):
                self.assertTrue(self.buildPath([ "tree4", subdir, "file%03d" % i, ]) in fsList)
        for i in range(1, 8):
            self.assertTrue(self.buildPath([ "tree4", "file%03d" % i, ]) in fsList)

    def testRemoveLinks_006(self):
        """
        Test with a non-empty list (files, directories and links) and a pattern
        of None.
        """
        self.extractTar("tree9")
        path = self.buildPath(["tree9"])
        fsList = FilesystemList()
        count = fsList.addDirContents(path)
        self.assertEqual(22, count)
        self.assertEqual(22, len(fsList))
        self.assertTrue(self.buildPath([ "tree9", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fsList)
        count = fsList.removeLinks(pattern=None)
        self.assertEqual(9, count)
        self.assertEqual(13, len(fsList))
        self.assertTrue(self.buildPath([ "tree9", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fsList)

    def testRemoveLinks_007(self):
        """
        Test with a non-empty list (files and directories, some nonexistent)
        and a pattern of None.
        """
        self.extractTar("tree4")
        path = self.buildPath(["tree4"])
        fsList = FilesystemList()
        count = fsList.addDirContents(path)
        self.assertEqual(81, count)
        fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk
        self.assertEqual(82, len(fsList))
        self.assertTrue(self.buildPath([ "tree4", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
        for subdir, dirs, files in [ ("dir001", 3, 4), ("dir002", 3, 8), ("dir003", 8, 7),
                                     ("dir004", 3, 1), ("dir005", 8, 7), ("dir006", 5, 10), ]:
            self.assertTrue(self.buildPath([ "tree4", subdir, ]) in fsList)
            for i in range(1, dirs + 1):
                self.assertTrue(self.buildPath([ "tree4", subdir, "dir%03d" % i, ]) in fsList)
            for i in range(1, files + 1):
                self.assertTrue(self.buildPath([ "tree4", subdir, "file%03d" % i, ]) in fsList)
        for i in range(1, 8):
            self.assertTrue(self.buildPath([ "tree4", "file%03d" % i, ]) in fsList)
        count = fsList.removeLinks(pattern=None)
        self.assertEqual(0, count)
        self.assertEqual(82, len(fsList))
        self.assertTrue(self.buildPath([ "tree4", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
        for subdir, dirs, files in [ ("dir001", 3, 4), ("dir002", 3, 8), ("dir003", 8, 7),
                                     ("dir004", 3, 1), ("dir005", 8, 7), ("dir006", 5, 10), ]:
            self.assertTrue(self.buildPath([ "tree4", subdir, ]) in fsList)
            for i in range(1, dirs + 1):
                self.assertTrue(self.buildPath([ "tree4", subdir, "dir%03d" % i, ]) in fsList)
            for i in range(1, files + 1):
                self.assertTrue(self.buildPath([ "tree4", subdir, "file%03d" % i, ]) in fsList)
        for i in range(1, 8):
            self.assertTrue(self.buildPath([ "tree4", "file%03d" % i, ]) in fsList)

    def testRemoveLinks_008(self):
        """
        Test with a non-empty list (spaces in path names) and a pattern of None.
        """
        self.extractTar("tree11")
        path = self.buildPath(["tree11", ])
        fsList = FilesystemList()
        count = fsList.addDirContents(path)
        self.assertEqual(16, count)
        self.assertEqual(16, len(fsList))
        self.assertTrue(self.buildPath([ "tree11", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "link001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "link002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "link003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir with 
spaces", "file with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) count = fsList.removeLinks(pattern=None) self.assertEqual(6, count) self.assertEqual(10, len(fsList)) self.assertTrue(self.buildPath([ "tree11", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) def testRemoveLinks_009(self): """ Test with a non-empty list (files only) and a non-empty pattern that matches none of them. 
""" self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(8, count) self.assertEqual(8, len(fsList)) self.assertTrue(self.buildPath([ "tree1", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file007", ]) in fsList) count = fsList.removeLinks(pattern=NOMATCH_PATTERN) self.assertEqual(0, count) self.assertEqual(8, len(fsList)) self.assertTrue(self.buildPath([ "tree1", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file007", ]) in fsList) def testRemoveLinks_010(self): """ Test with a non-empty list (directories only) and a non-empty pattern that matches none of them. 
""" self.extractTar("tree2") path = self.buildPath(["tree2"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(11, count) self.assertEqual(11, len(fsList)) self.assertTrue(self.buildPath([ "tree2", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir009", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir010", ]) in fsList) count = fsList.removeLinks(pattern=NOMATCH_PATTERN) self.assertEqual(0, count) self.assertEqual(11, len(fsList)) self.assertTrue(self.buildPath([ "tree2", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir009", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir010", ]) in fsList) def testRemoveLinks_011(self): """ Test with a non-empty list (files and directories) and a non-empty pattern that matches none of them. 
""" self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(81, count) self.assertEqual(81, len(fsList)) self.assertTrue(self.buildPath([ "tree4", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ 
"tree4", "dir005", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeLinks(pattern=NOMATCH_PATTERN) self.assertEqual(0, count) self.assertEqual(81, len(fsList)) self.assertTrue(self.buildPath([ "tree4", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", 
"dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList) def testRemoveLinks_012(self): """ Test with a non-empty list (files, directories and links) and a non-empty pattern that matches none of them. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(22, count) self.assertEqual(22, len(fsList)) self.assertTrue(self.buildPath([ "tree9", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fsList) count = fsList.removeLinks(pattern=NOMATCH_PATTERN) self.assertEqual(0, count) self.assertEqual(22, len(fsList)) self.assertTrue(self.buildPath([ "tree9", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in 
fsList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fsList) def testRemoveLinks_013(self): """ Test with a non-empty list (files and directories, some nonexistent) and a non-empty pattern that matches none of them. """ self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(81, count) fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk self.assertEqual(82, len(fsList)) self.assertTrue(self.buildPath([ "tree4", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", 
"dir005", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file007", 
]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeLinks(pattern=NOMATCH_PATTERN) self.assertEqual(0, count) self.assertEqual(82, len(fsList)) self.assertTrue(self.buildPath([ "tree4", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in 
fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.assertTrue(self.buildPath([ 
"tree4", "dir006", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList) def testRemoveLinks_014(self): """ Test with a non-empty list (spaces in path names) and a non-empty pattern that matches none of them. 
""" self.extractTar("tree11") path = self.buildPath(["tree11", ]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(16, count) self.assertEqual(16, len(fsList)) self.assertTrue(self.buildPath([ "tree11", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) count = fsList.removeLinks(pattern=NOMATCH_PATTERN) self.assertEqual(0, count) self.assertEqual(16, len(fsList)) self.assertTrue(self.buildPath([ "tree11", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link with spaces", 
]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) def testRemoveLinks_015(self): """ Test with a non-empty list (files only) and a non-empty pattern that matches some of them. """ self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(8, count) self.assertEqual(8, len(fsList)) self.assertTrue(self.buildPath([ "tree1", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file007", ]) in fsList) count = fsList.removeLinks(pattern=".*tree1.*file007") self.assertEqual(0, count) self.assertEqual(8, len(fsList)) self.assertTrue(self.buildPath([ "tree1", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file003", ]) in fsList) self.assertTrue(self.buildPath([ 
"tree1", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file007", ]) in fsList) def testRemoveLinks_016(self): """ Test with a non-empty list (directories only) and a non-empty pattern that matches some of them. """ self.extractTar("tree2") path = self.buildPath(["tree2"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(11, count) self.assertEqual(11, len(fsList)) self.assertTrue(self.buildPath([ "tree2", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir009", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir010", ]) in fsList) count = fsList.removeLinks(pattern=".*tree2.*") self.assertEqual(0, count) self.assertEqual(11, len(fsList)) self.assertTrue(self.buildPath([ "tree2", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ 
"tree2", "dir009", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir010", ]) in fsList) def testRemoveLinks_017(self): """ Test with a non-empty list (files and directories) and a non-empty pattern that matches some of them. """ self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(81, count) self.assertEqual(81, len(fsList)) self.assertTrue(self.buildPath([ "tree4", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", 
"dir005", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", 
"file010", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeLinks(pattern=".*tree4.*dir006.*") self.assertEqual(0, count) self.assertEqual(81, len(fsList)) self.assertTrue(self.buildPath([ "tree4", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", 
"file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList) def testRemoveLinks_018(self): """ Test with a non-empty list (files, directories and links) and a non-empty pattern that matches some of them. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(22, count) self.assertEqual(22, len(fsList)) self.assertTrue(self.buildPath([ "tree9", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fsList) count = fsList.removeLinks(pattern=".*tree9.*dir002.*") self.assertEqual(4, count) self.assertEqual(18, len(fsList)) self.assertTrue(self.buildPath([ "tree9", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in 
fsList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fsList) def testRemoveLinks_019(self): """ Test with a non-empty list (files and directories, some nonexistent) and a non-empty pattern that matches some of them. """ self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(81, count) fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk self.assertEqual(82, len(fsList)) self.assertTrue(self.buildPath([ "tree4", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) 
        self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
        for name, subdirs, files in [ ("dir003", 8, 7), ("dir004", 3, 1),
                                      ("dir005", 8, 7), ("dir006", 5, 10), ]:
            self.assertTrue(self.buildPath([ "tree4", name, ]) in fsList)
            for i in range(1, subdirs + 1):
                self.assertTrue(self.buildPath([ "tree4", name, "dir%03d" % i, ]) in fsList)
            for i in range(1, files + 1):
                self.assertTrue(self.buildPath([ "tree4", name, "file%03d" % i, ]) in fsList)
        for i in range(1, 8):
            self.assertTrue(self.buildPath([ "tree4", "file%03d" % i, ]) in fsList)
        count = fsList.removeLinks(pattern=".*tree4.*dir006.*")
        self.assertEqual(0, count)  # tree4 contains no links, so nothing is removed
        self.assertEqual(82, len(fsList))
        self.assertTrue(self.buildPath([ "tree4", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
        for name, subdirs, files in [ ("dir001", 3, 4), ("dir002", 3, 8),
                                      ("dir003", 8, 7), ("dir004", 3, 1),
                                      ("dir005", 8, 7), ("dir006", 5, 10), ]:
            self.assertTrue(self.buildPath([ "tree4", name, ]) in fsList)
            for i in range(1, subdirs + 1):
                self.assertTrue(self.buildPath([ "tree4", name, "dir%03d" % i, ]) in fsList)
            for i in range(1, files + 1):
                self.assertTrue(self.buildPath([ "tree4", name, "file%03d" % i, ]) in fsList)
        for i in range(1, 8):
            self.assertTrue(self.buildPath([ "tree4", "file%03d" % i, ]) in fsList)

    def testRemoveLinks_020(self):
        """
        Test with a non-empty list (spaces in path names) and a non-empty
        pattern that matches some of them.
        """
        self.extractTar("tree11")
        path = self.buildPath(["tree11", ])
        fsList = FilesystemList()
        count = fsList.addDirContents(path)
        self.assertEqual(16, count)
        self.assertEqual(16, len(fsList))
        self.assertTrue(self.buildPath([ "tree11", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "link001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "link002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "link003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
        count = fsList.removeLinks(pattern=".*with spaces.*")
        self.assertEqual(3, count)
        self.assertEqual(13, len(fsList))
        self.assertTrue(self.buildPath([ "tree11", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "link001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "link002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "link003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)

    def testRemoveLinks_021(self):
        """
        Test with a non-empty list (files only) and a non-empty pattern that
        matches all of them.
        """
        self.extractTar("tree1")
        path = self.buildPath(["tree1"])
        fsList = FilesystemList()
        count = fsList.addDirContents(path)
        self.assertEqual(8, count)
        self.assertEqual(8, len(fsList))
        self.assertTrue(self.buildPath([ "tree1", ]) in fsList)
        for i in range(1, 8):
            self.assertTrue(self.buildPath([ "tree1", "file%03d" % i, ]) in fsList)
        count = fsList.removeLinks(pattern=".*")
        self.assertEqual(0, count)
        self.assertEqual(8, len(fsList))
        self.assertTrue(self.buildPath([ "tree1", ]) in fsList)
        for i in range(1, 8):
            self.assertTrue(self.buildPath([ "tree1", "file%03d" % i, ]) in fsList)

    def testRemoveLinks_022(self):
        """
        Test with a non-empty list (directories only) and a non-empty pattern
        that matches all of them.
        """
        self.extractTar("tree2")
        path = self.buildPath(["tree2"])
        fsList = FilesystemList()
        count = fsList.addDirContents(path)
        self.assertEqual(11, count)
        self.assertEqual(11, len(fsList))
        self.assertTrue(self.buildPath([ "tree2", ]) in fsList)
        for i in range(1, 11):
            self.assertTrue(self.buildPath([ "tree2", "dir%03d" % i, ]) in fsList)
        count = fsList.removeLinks(pattern=".*")
        self.assertEqual(0, count)
        self.assertEqual(11, len(fsList))
        self.assertTrue(self.buildPath([ "tree2", ]) in fsList)
        for i in range(1, 11):
            self.assertTrue(self.buildPath([ "tree2", "dir%03d" % i, ]) in fsList)

    def testRemoveLinks_023(self):
        """
        Test with a non-empty list (files and directories) and a non-empty
        pattern that matches all of them.
        """
        self.extractTar("tree4")
        path = self.buildPath(["tree4"])
        fsList = FilesystemList()
        count = fsList.addDirContents(path)
        self.assertEqual(81, count)
        self.assertEqual(81, len(fsList))
        self.assertTrue(self.buildPath([ "tree4", ]) in fsList)
        for name, subdirs, files in [ ("dir001", 3, 4), ("dir002", 3, 8),
                                      ("dir003", 8, 7), ("dir004", 3, 1),
                                      ("dir005", 8, 7), ("dir006", 5, 10), ]:
            self.assertTrue(self.buildPath([ "tree4", name, ]) in fsList)
            for i in range(1, subdirs + 1):
                self.assertTrue(self.buildPath([ "tree4", name, "dir%03d" % i, ]) in fsList)
            for i in range(1, files + 1):
                self.assertTrue(self.buildPath([ "tree4", name, "file%03d" % i, ]) in fsList)
        for i in range(1, 8):
            self.assertTrue(self.buildPath([ "tree4", "file%03d" % i, ]) in fsList)
        count = fsList.removeLinks(pattern=".*")
        self.assertEqual(0, count)  # tree4 contains no links, so nothing is removed
        self.assertEqual(81, len(fsList))
        self.assertTrue(self.buildPath([ "tree4", ]) in fsList)
        for name, subdirs, files in [ ("dir001", 3, 4), ("dir002", 3, 8),
                                      ("dir003", 8, 7), ("dir004", 3, 1),
                                      ("dir005", 8, 7), ("dir006", 5, 10), ]:
            self.assertTrue(self.buildPath([ "tree4", name, ]) in fsList)
            for i in range(1, subdirs + 1):
                self.assertTrue(self.buildPath([ "tree4", name, "dir%03d" % i, ]) in fsList)
            for i in range(1, files + 1):
                self.assertTrue(self.buildPath([ "tree4", name, "file%03d" % i, ]) in fsList)
        for i in range(1, 8):
            self.assertTrue(self.buildPath([ "tree4", "file%03d" % i, ]) in fsList)

    def testRemoveLinks_024(self):
        """
        Test with a non-empty list (files, directories and links) and a
        non-empty pattern that matches all of them.
        """
        self.extractTar("tree9")
        path = self.buildPath(["tree9"])
        fsList = FilesystemList()
        count = fsList.addDirContents(path)
        self.assertEqual(22, count)
        self.assertEqual(22, len(fsList))
        self.assertTrue(self.buildPath([ "tree9", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fsList)
        count = fsList.removeLinks(pattern=".*")
        self.assertEqual(9, count)
        self.assertEqual(13, len(fsList))
        self.assertTrue(self.buildPath([ "tree9", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fsList)

    def testRemoveLinks_025(self):
        """
        Test with a non-empty list (files and directories, some nonexistent)
        and a non-empty pattern that matches all of them.
        """
        self.extractTar("tree4")
        path = self.buildPath(["tree4"])
        fsList = FilesystemList()
        count = fsList.addDirContents(path)
        self.assertEqual(81, count)
        fsList.append(self.buildPath([ "tree4", INVALID_FILE, ]))  # file won't exist on disk
        self.assertEqual(82, len(fsList))
        self.assertTrue(self.buildPath([ "tree4", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
        for name, subdirs, files in [ ("dir001", 3, 4), ("dir002", 3, 8),
                                      ("dir003", 8, 7), ("dir004", 3, 1),
                                      ("dir005", 8, 7), ("dir006", 5, 10), ]:
            self.assertTrue(self.buildPath([ "tree4", name, ]) in fsList)
            for i in range(1, subdirs + 1):
                self.assertTrue(self.buildPath([ "tree4", name, "dir%03d" % i, ]) in fsList)
            for i in range(1, files + 1):
                self.assertTrue(self.buildPath([ "tree4", name, "file%03d" % i, ]) in fsList)
        for i in range(1, 8):
            self.assertTrue(self.buildPath([ "tree4", "file%03d" % i, ]) in fsList)
        count = fsList.removeLinks(pattern=".*")
        self.assertEqual(0, count)
        self.assertEqual(82, len(fsList))
        self.assertTrue(self.buildPath([ "tree4", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
        self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
        self.assertTrue(self.buildPath([ "tree4", "dir002", "file006",
]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList) def testRemoveLinks_026(self): """ Test with a non-empty list (spaces in path names) and a non-empty pattern that matches all of them. """ self.extractTar("tree11") path = self.buildPath(["tree11", ]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(16, count) self.assertEqual(16, len(fsList)) self.assertTrue(self.buildPath([ "tree11", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) count = fsList.removeLinks(pattern=".*") self.assertEqual(6, count) self.assertEqual(10, len(fsList)) self.assertTrue(self.buildPath([ "tree11", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) ##################### # Test removeMatch() ##################### def testRemoveMatch_001(self): """ Test with an empty list and a pattern of None. """ fsList = FilesystemList() self.assertRaises(TypeError, fsList.removeMatch, pattern=None) def testRemoveMatch_002(self): """ Test with an empty list and a non-empty pattern. """ fsList = FilesystemList() count = fsList.removeMatch(pattern="pattern") self.assertEqual(0, count) self.assertRaises(ValueError, fsList.removeMatch, pattern="*.jpg") def testRemoveMatch_003(self): """ Test with a non-empty list (files only) and a non-empty pattern that matches none of them. 
""" self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(8, count) self.assertEqual(8, len(fsList)) self.assertTrue(self.buildPath([ "tree1", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file007", ]) in fsList) count = fsList.removeMatch(pattern=NOMATCH_PATTERN) self.assertEqual(0, count) self.assertEqual(8, len(fsList)) self.assertTrue(self.buildPath([ "tree1", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file007", ]) in fsList) def testRemoveMatch_004(self): """ Test with a non-empty list (directories only) and a non-empty pattern that matches none of them. 
""" self.extractTar("tree2") path = self.buildPath(["tree2"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(11, count) self.assertEqual(11, len(fsList)) self.assertTrue(self.buildPath([ "tree2", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir009", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir010", ]) in fsList) count = fsList.removeMatch(pattern=NOMATCH_PATTERN) self.assertEqual(0, count) self.assertEqual(11, len(fsList)) self.assertTrue(self.buildPath([ "tree2", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir009", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir010", ]) in fsList) def testRemoveMatch_005(self): """ Test with a non-empty list (files and directories) and a non-empty pattern that matches none of them. 
""" self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(81, count) self.assertEqual(81, len(fsList)) self.assertTrue(self.buildPath([ "tree4", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ 
"tree4", "dir005", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeMatch(pattern=NOMATCH_PATTERN) self.assertEqual(0, count) self.assertEqual(81, len(fsList)) self.assertTrue(self.buildPath([ "tree4", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", 
"dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList) def testRemoveMatch_006(self): """ Test with a non-empty list (files, directories and links) and a non-empty pattern that matches none of them. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(22, count) self.assertEqual(22, len(fsList)) self.assertTrue(self.buildPath([ "tree9", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fsList) count = fsList.removeMatch(pattern=NOMATCH_PATTERN) self.assertEqual(0, count) self.assertEqual(22, len(fsList)) self.assertTrue(self.buildPath([ "tree9", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in 
fsList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fsList) def testRemoveMatch_007(self): """ Test with a non-empty list (files and directories, some nonexistent) and a non-empty pattern that matches none of them. """ self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(81, count) fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk self.assertEqual(82, len(fsList)) self.assertTrue(self.buildPath([ "tree4", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) 
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList)
      count = fsList.removeMatch(pattern=NOMATCH_PATTERN)
      self.assertEqual(0, count)
      self.assertEqual(82, len(fsList))
      self.assertTrue(self.buildPath([ "tree4", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList)

   def testRemoveMatch_008(self):
      """
      Test with a non-empty list (spaces in path names) and a non-empty
      pattern that matches none of them.
      """
      self.extractTar("tree11")
      path = self.buildPath(["tree11", ])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(16, count)
      self.assertEqual(16, len(fsList))
      self.assertTrue(self.buildPath([ "tree11", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
      count = fsList.removeMatch(pattern=NOMATCH_PATTERN)
      self.assertEqual(0, count)
      self.assertEqual(16, len(fsList))
      self.assertTrue(self.buildPath([ "tree11", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)

   def testRemoveMatch_009(self):
      """
      Test with a non-empty list (files only) and a non-empty pattern that
      matches some of them.
      """
      self.extractTar("tree1")
      path = self.buildPath(["tree1"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(8, count)
      self.assertEqual(8, len(fsList))
      self.assertTrue(self.buildPath([ "tree1", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file007", ]) in fsList)
      count = fsList.removeMatch(pattern=".*file00[135].*")
      self.assertEqual(3, count)
      self.assertEqual(5, len(fsList))
      self.assertTrue(self.buildPath([ "tree1", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file007", ]) in fsList)

   def testRemoveMatch_010(self):
      """
      Test with a non-empty list (directories only) and a non-empty pattern
      that matches some of them.
      """
      self.extractTar("tree2")
      path = self.buildPath(["tree2"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(11, count)
      self.assertEqual(11, len(fsList))
      self.assertTrue(self.buildPath([ "tree2", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir009", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir010", ]) in fsList)
      count = fsList.removeMatch(pattern=".*dir00[2468].*")
      self.assertEqual(4, count)
      self.assertEqual(7, len(fsList))
      self.assertTrue(self.buildPath([ "tree2", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir009", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir010", ]) in fsList)

   def testRemoveMatch_011(self):
      """
      Test with a non-empty list (files and directories) and a non-empty
      pattern that matches some of them.
      """
      self.extractTar("tree4")
      path = self.buildPath(["tree4"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(81, count)
      self.assertEqual(81, len(fsList))
      self.assertTrue(self.buildPath([ "tree4", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList)
      count = fsList.removeMatch(pattern=".*tree4.*dir006")
      self.assertEqual(18, count)
      self.assertEqual(63, len(fsList))
      self.assertTrue(self.buildPath([ "tree4", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList)

   def testRemoveMatch_012(self):
      """
      Test with a non-empty list (files, directories and links) and a
      non-empty pattern that matches some of them.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(22, count)
      self.assertEqual(22, len(fsList))
      self.assertTrue(self.buildPath([ "tree9", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fsList)
      count = fsList.removeMatch(pattern=".*file001.*")
      self.assertEqual(3, count)
      self.assertEqual(19, len(fsList))
      self.assertTrue(self.buildPath([ "tree9", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fsList)

   def testRemoveMatch_013(self):
      """
      Test with a non-empty list (files and directories, some nonexistent)
      and a non-empty pattern that matches some of them.
      """
      self.extractTar("tree4")
      path = self.buildPath(["tree4"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(81, count)
      fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk
      self.assertEqual(82, len(fsList))
      self.assertTrue(self.buildPath([ "tree4", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList)
self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ 
"tree4", "dir005", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", 
"file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeMatch(pattern=".*dir00[46].*") self.assertEqual(25, count) self.assertEqual(57, len(fsList)) self.assertTrue(self.buildPath([ "tree4", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", 
"file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList) def testRemoveMatch_014(self): """ Test with a non-empty list (spaces in path names) and a non-empty pattern that matches some of them. """ self.extractTar("tree11") path = self.buildPath(["tree11", ]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(16, count) self.assertEqual(16, len(fsList)) self.assertTrue(self.buildPath([ "tree11", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.assertTrue(self.buildPath([ 
"tree11", "dir with spaces", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) count = fsList.removeMatch(pattern=".*with spaces.*") self.assertEqual(7, count) self.assertEqual(9, len(fsList)) self.assertTrue(self.buildPath([ "tree11", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) def testRemoveMatch_015(self): """ Test with a non-empty list (files only) and a non-empty pattern that matches all of them. 
      """
      self.extractTar("tree1")
      path = self.buildPath(["tree1"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(8, count)
      self.assertEqual(8, len(fsList))
      self.assertTrue(self.buildPath([ "tree1", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file007", ]) in fsList)
      count = fsList.removeMatch(pattern=".*")
      self.assertEqual(8, count)
      self.assertEqual(0, len(fsList))

   def testRemoveMatch_016(self):
      """
      Test with a non-empty list (directories only) and a non-empty pattern
      that matches all of them.
      """
      self.extractTar("tree2")
      path = self.buildPath(["tree2"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(11, count)
      self.assertEqual(11, len(fsList))
      self.assertTrue(self.buildPath([ "tree2", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir009", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree2", "dir010", ]) in fsList)
      count = fsList.removeMatch(pattern=".*")
      self.assertEqual(11, count)
      self.assertEqual(0, len(fsList))

   def testRemoveMatch_017(self):
      """
      Test with a
      non-empty list (files and directories) and a non-empty pattern that
      matches all of them.
      """
      self.extractTar("tree4")
      path = self.buildPath(["tree4"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(81, count)
      self.assertEqual(81, len(fsList))
      self.assertTrue(self.buildPath([ "tree4", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList)
      count = fsList.removeMatch(pattern=".*")
      self.assertEqual(81, count)
      self.assertEqual(0, len(fsList))

   def testRemoveMatch_019(self):
      """
      Test with a non-empty list (files, directories and links) and a
      non-empty pattern that matches all of them.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(22, count)
      self.assertEqual(22, len(fsList))
      self.assertTrue(self.buildPath([ "tree9", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([
            "tree9", "dir002", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fsList)
      count = fsList.removeMatch(pattern=".*")
      self.assertEqual(22, count)
      self.assertEqual(0, len(fsList))

   def testRemoveMatch_020(self):
      """
      Test with a non-empty list (files and directories, some nonexistent)
      and a non-empty pattern that matches all of them.
      """
      self.extractTar("tree4")
      path = self.buildPath(["tree4"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(81, count)
      fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk
      self.assertEqual(82, len(fsList))
      self.assertTrue(self.buildPath([ "tree4", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList)
      count = fsList.removeMatch(pattern=".*")
      self.assertEqual(82, count)
      self.assertEqual(0, len(fsList))

   def testRemoveMatch_021(self):
      """
      Test with a non-empty list (spaces in path names) and a non-empty
      pattern that matches all of them.
      """
      self.extractTar("tree11")
      path = self.buildPath(["tree11", ])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(16, count)
      self.assertEqual(16, len(fsList))
      self.assertTrue(self.buildPath([ "tree11", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "link001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "link003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
      count = fsList.removeMatch(pattern=".*")
      self.assertEqual(16, count)
      self.assertEqual(0, len(fsList))

   #######################
   # Test removeInvalid()
   #######################

   def testRemoveInvalid_001(self):
      """
      Test with an empty list.
      """
      fsList = FilesystemList()
      count = fsList.removeInvalid()
      self.assertEqual(0, count)

   def testRemoveInvalid_002(self):
      """
      Test with a non-empty list containing only invalid entries (some with
      spaces).
      """
      self.extractTar("tree9")
      fsList = FilesystemList()
      fsList.append(self.buildPath([ "tree9", "%s-1" % INVALID_FILE, ])) # file won't exist on disk
      fsList.append(self.buildPath([ "tree9", "%s-2" % INVALID_FILE, ])) # file won't exist on disk
      fsList.append(self.buildPath([ "tree9", "%s-3" % INVALID_FILE, ])) # file won't exist on disk
      fsList.append(self.buildPath([ "tree9", "%s-4" % INVALID_FILE, ])) # file won't exist on disk
      fsList.append(self.buildPath([ "tree9", " %s 5 " % INVALID_FILE, ])) # file won't exist on disk
      self.assertEqual(5, len(fsList))
      self.assertTrue(self.buildPath([ "tree9", "%s-1" % INVALID_FILE, ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "%s-2" % INVALID_FILE, ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "%s-3" % INVALID_FILE, ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", "%s-4" % INVALID_FILE, ]) in fsList)
      self.assertTrue(self.buildPath([ "tree9", " %s 5 " % INVALID_FILE, ]) in fsList)
      count = fsList.removeInvalid()
      self.assertEqual(5, count)
      self.assertEqual(0, len(fsList))

   def testRemoveInvalid_003(self):
      """
      Test with a non-empty list containing only valid entries (files only).
      """
      self.extractTar("tree1")
      path = self.buildPath(["tree1"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.assertEqual(8, count)
      self.assertEqual(8, len(fsList))
      self.assertTrue(self.buildPath([ "tree1", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file007", ]) in fsList)
      count = fsList.removeInvalid()
      self.assertEqual(0, count)
      self.assertEqual(8, len(fsList))
      self.assertTrue(self.buildPath([ "tree1", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file001", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file002", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file003", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file004", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file005", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file006", ]) in fsList)
      self.assertTrue(self.buildPath([ "tree1", "file007", ]) in fsList)

   def testRemoveInvalid_004(self):
      """
      Test with a non-empty list containing only valid entries (directories
      only).
""" self.extractTar("tree2") path = self.buildPath(["tree2"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(11, count) self.assertEqual(11, len(fsList)) self.assertTrue(self.buildPath([ "tree2", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir009", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir010", ]) in fsList) count = fsList.removeInvalid() self.assertEqual(0, count) self.assertEqual(11, len(fsList)) self.assertTrue(self.buildPath([ "tree2", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir009", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir010", ]) in fsList) def testRemoveInvalid_005(self): """ Test with a non-empty list containing only valid entries (files and directories). 
""" self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(81, count) self.assertEqual(81, len(fsList)) self.assertTrue(self.buildPath([ "tree4", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ 
"tree4", "dir005", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeInvalid() self.assertEqual(0, count) self.assertEqual(81, len(fsList)) self.assertTrue(self.buildPath([ "tree4", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ 
"tree4", "dir005", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", 
"file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList) def testRemoveInvalid_006(self): """ Test with a non-empty list containing only valid entries (files, directories and links). """ self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(22, count) self.assertEqual(22, len(fsList)) self.assertTrue(self.buildPath([ "tree9", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in 
fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fsList) count = fsList.removeInvalid() self.assertEqual(0, count) self.assertEqual(22, len(fsList)) self.assertTrue(self.buildPath([ "tree9", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fsList) def testRemoveInvalid_007(self): """ Test with a non-empty list containing valid and invalid entries (files, directories and links). """ self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(22, count) self.assertEqual(22, len(fsList)) fsList.append(self.buildPath([ "tree9", "%s-1" % INVALID_FILE, ])) # file won't exist on disk fsList.append(self.buildPath([ "tree9", "%s-2" % INVALID_FILE, ])) # file won't exist on disk fsList.append(self.buildPath([ "tree9", "%s-3" % INVALID_FILE, ])) # file won't exist on disk fsList.append(self.buildPath([ "tree9", "%s-4" % INVALID_FILE, ])) # file won't exist on disk self.assertEqual(26, len(fsList)) self.assertTrue(self.buildPath([ "tree9", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "%s-1" % INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "%s-2" % INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "%s-3" % INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "%s-4" % INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fsList) count = fsList.removeInvalid() self.assertEqual(4, count) self.assertEqual(22, len(fsList)) self.assertTrue(self.buildPath([ "tree9", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in 
fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fsList) def testRemoveInvalid_008(self): """ Test with a non-empty list containing only valid entries (files, directories and links, some with spaces). """ self.extractTar("tree11") path = self.buildPath(["tree11", ]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(16, count) self.assertEqual(16, len(fsList)) self.assertTrue(self.buildPath([ "tree11", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) count = 
fsList.removeInvalid() self.assertEqual(0, count) self.assertEqual(16, len(fsList)) self.assertTrue(self.buildPath([ "tree11", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) ################### # Test normalize() ################### def testNormalize_001(self): """ Test with an empty list. """ fsList = FilesystemList() self.assertEqual(0, len(fsList)) fsList.normalize() self.assertEqual(0, len(fsList)) def testNormalize_002(self): """ Test with a list containing one entry. """ fsList = FilesystemList() fsList.append("one") self.assertEqual(1, len(fsList)) fsList.normalize() self.assertEqual(1, len(fsList)) self.assertTrue("one" in fsList) def testNormalize_003(self): """ Test with a list containing two entries, no duplicates. 
""" fsList = FilesystemList() fsList.append("one") fsList.append("two") self.assertEqual(2, len(fsList)) fsList.normalize() self.assertEqual(2, len(fsList)) self.assertTrue("one" in fsList) self.assertTrue("two" in fsList) def testNormalize_004(self): """ Test with a list containing two entries, with duplicates. """ fsList = FilesystemList() fsList.append("one") fsList.append("one") self.assertEqual(2, len(fsList)) fsList.normalize() self.assertEqual(1, len(fsList)) self.assertTrue("one" in fsList) def testNormalize_005(self): """ Test with a list containing many entries, no duplicates. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(22, count) self.assertEqual(22, len(fsList)) self.assertTrue(self.buildPath([ "tree9", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.assertTrue(self.buildPath([ 
"tree9", "dir002", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fsList) fsList.normalize() self.assertEqual(22, len(fsList)) self.assertTrue(self.buildPath([ "tree9", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fsList) def testNormalize_006(self): """ Test with a list containing many entries, with duplicates. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(22, count) self.assertEqual(22, len(fsList)) count = fsList.addDirContents(path) self.assertEqual(22, count) self.assertEqual(44, len(fsList)) self.assertTrue(self.buildPath([ "tree9", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fsList) fsList.normalize() self.assertEqual(22, len(fsList)) self.assertTrue(self.buildPath([ "tree9", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fsList) ################ # Test verify() ################ def testVerify_001(self): """ Test with an empty list. """ fsList = FilesystemList() ok = fsList.verify() self.assertEqual(True, ok) def testVerify_002(self): """ Test with a non-empty list containing only invalid entries. 
""" self.extractTar("tree9") fsList = FilesystemList() fsList.append(self.buildPath([ "tree9", "%s-1" % INVALID_FILE, ])) # file won't exist on disk fsList.append(self.buildPath([ "tree9", "%s-2" % INVALID_FILE, ])) # file won't exist on disk fsList.append(self.buildPath([ "tree9", "%s-3" % INVALID_FILE, ])) # file won't exist on disk fsList.append(self.buildPath([ "tree9", "%s-4" % INVALID_FILE, ])) # file won't exist on disk self.assertEqual(4, len(fsList)) self.assertTrue(self.buildPath([ "tree9", "%s-1" % INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "%s-2" % INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "%s-3" % INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "%s-4" % INVALID_FILE, ]) in fsList) ok = fsList.verify() self.assertEqual(False, ok) self.assertEqual(4, len(fsList)) self.assertTrue(self.buildPath([ "tree9", "%s-1" % INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "%s-2" % INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "%s-3" % INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "%s-4" % INVALID_FILE, ]) in fsList) def testVerify_003(self): """ Test with a non-empty list containing only valid entries (files only). 
""" self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(8, count) self.assertEqual(8, len(fsList)) self.assertTrue(self.buildPath([ "tree1", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file007", ]) in fsList) ok = fsList.verify() self.assertEqual(True, ok) self.assertEqual(8, len(fsList)) self.assertTrue(self.buildPath([ "tree1", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file007", ]) in fsList) def testVerify_004(self): """ Test with a non-empty list containing only valid entries (directories only). 
""" self.extractTar("tree2") path = self.buildPath(["tree2"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(11, count) self.assertEqual(11, len(fsList)) self.assertTrue(self.buildPath([ "tree2", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir009", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir010", ]) in fsList) ok = fsList.verify() self.assertEqual(True, ok) self.assertEqual(11, len(fsList)) self.assertTrue(self.buildPath([ "tree2", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir009", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir010", ]) in fsList) def testVerify_005(self): """ Test with a non-empty list containing only valid entries (files and directories). 
""" self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(81, count) self.assertEqual(81, len(fsList)) self.assertTrue(self.buildPath([ "tree4", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ 
"tree4", "dir005", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList) ok = fsList.verify() self.assertEqual(True, ok) self.assertEqual(81, len(fsList)) self.assertTrue(self.buildPath([ "tree4", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ 
"tree4", "dir005", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", 
"file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree4", "file007", ]) in fsList) def testVerify_006(self): """ Test with a non-empty list containing only valid entries (files, directories and links). """ self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(22, count) self.assertEqual(22, len(fsList)) self.assertTrue(self.buildPath([ "tree9", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fsList) ok = fsList.verify() self.assertEqual(True, ok) self.assertEqual(22, len(fsList)) self.assertTrue(self.buildPath([ "tree9", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fsList) self.assertTrue(self.buildPath([ 
"tree9", "link002", ]) in fsList) def testVerify_007(self): """ Test with a non-empty list containing valid and invalid entries (files, directories and links). """ self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(22, count) self.assertEqual(22, len(fsList)) fsList.append(self.buildPath([ "tree9", "%s-1" % INVALID_FILE, ])) # file won't exist on disk fsList.append(self.buildPath([ "tree9", "%s-2" % INVALID_FILE, ])) # file won't exist on disk fsList.append(self.buildPath([ "tree9", "%s-3" % INVALID_FILE, ])) # file won't exist on disk fsList.append(self.buildPath([ "tree9", "%s-4" % INVALID_FILE, ])) # file won't exist on disk self.assertEqual(26, len(fsList)) self.assertTrue(self.buildPath([ "tree9", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "%s-1" % INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "%s-2" % INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "%s-3" % INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "%s-4" % INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", 
"dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fsList) ok = fsList.verify() self.assertEqual(False, ok) self.assertEqual(26, len(fsList)) self.assertTrue(self.buildPath([ "tree9", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "%s-1" % INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "%s-2" % INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "%s-3" % INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "%s-4" % INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ 
"tree9", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fsList) def testVerify_008(self): """ Test with a non-empty list containing valid and invalid entries (some containing spaces). """ self.extractTar("tree11") path = self.buildPath(["tree11", ]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(16, count) self.assertEqual(16, len(fsList)) fsList.append(self.buildPath([ "tree11", "dir with spaces", "%s-1" % INVALID_FILE, ])) # file won't exist on disk fsList.append(self.buildPath([ "tree11", "dir with spaces", "%s-2" % INVALID_FILE, ])) # file won't exist on disk fsList.append(self.buildPath([ "tree11", "dir with spaces", "%s-3" % INVALID_FILE, ])) # file won't exist on disk fsList.append(self.buildPath([ "tree11", "dir with spaces", "%s-4" % INVALID_FILE, ])) # file won't exist on disk self.assertEqual(20, len(fsList)) self.assertTrue(self.buildPath([ "tree11", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link with spaces", ]) in fsList) 
self.assertTrue(self.buildPath([ "tree11", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "%s-1" % INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "%s-2" % INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "%s-3" % INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "%s-4" % INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) ok = fsList.verify() self.assertEqual(False, ok) self.assertEqual(20, len(fsList)) self.assertTrue(self.buildPath([ "tree11", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in 
fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "%s-1" % INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "%s-2" % INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "%s-3" % INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "%s-4" % INVALID_FILE, ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) ########################### # TestBackupFileList class ########################### class TestBackupFileList(unittest.TestCase): """Tests for the BackupFileList class.""" ################ # Setup methods ################ def setUp(self): try: self.tmpdir = tempfile.mkdtemp() self.resources = findResources(RESOURCES, DATA_DIRS) except Exception as e: self.fail(e) def tearDown(self): try: removedir(self.tmpdir) except Exception: pass ################## # Utility methods ################## def extractTar(self, tarname): """Extracts a tarfile with a particular name.""" extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname]) def buildPath(self, components): """Builds a complete search path from a list of components.""" components.insert(0, self.tmpdir) return buildPath(components) def tarPath(self, components): """Builds a complete search path from a list of components, compatible with Python tar output.""" result = self.buildPath(components) if result[0:1] == os.path.sep: return result[1:] return result def buildRandomPath(self, maxlength, extension): """Builds a complete, randomly-named search path.""" maxlength -= len(self.tmpdir) maxlength 
-= len(extension) components = [ self.tmpdir, randomFilename(maxlength, suffix=extension), ] return buildPath(components) ################ # Test addDir() ################ def testAddDir_001(self): """ Test that function is overridden, no exclusions. Since this function calls the superclass by definition, we can skimp a bit on validation and only ensure that it seems to be overridden properly. """ self.extractTar("tree5") backupList = BackupFileList() dirPath = self.buildPath(["tree5", "dir001"]) count = backupList.addDir(dirPath) self.assertEqual(0, count) self.assertEqual(0, len(backupList)) dirPath = self.buildPath(["tree5", "dir002", "link001", ]) count = backupList.addDir(dirPath) self.assertEqual(1, count) self.assertEqual([dirPath], backupList) def testAddDir_002(self): """ Test that function is overridden, excludeFiles set. Since this function calls the superclass by definition, we can skimp a bit on validation and only ensure that it seems to be overridden properly. """ self.extractTar("tree5") backupList = BackupFileList() backupList.excludeFiles = True dirPath = self.buildPath(["tree5", "dir001"]) count = backupList.addDir(dirPath) self.assertEqual(0, count) self.assertEqual(0, len(backupList)) dirPath = self.buildPath(["tree5", "dir002", "link001", ]) count = backupList.addDir(dirPath) self.assertEqual(1, count) self.assertEqual([dirPath], backupList) def testAddDir_003(self): """ Test that function is overridden, excludeDirs set. Since this function calls the superclass by definition, we can skimp a bit on validation and only ensure that it seems to be overridden properly. 
""" self.extractTar("tree5") backupList = BackupFileList() backupList.excludeDirs = True dirPath = self.buildPath(["tree5", "dir001"]) count = backupList.addDir(dirPath) self.assertEqual(0, count) self.assertEqual(0, len(backupList)) dirPath = self.buildPath(["tree5", "dir002", "link001", ]) count = backupList.addDir(dirPath) self.assertEqual(0, count) self.assertEqual(0, len(backupList)) def testAddDir_004(self): """ Test that function is overridden, excludeLinks set. Since this function calls the superclass by definition, we can skimp a bit on validation and only ensure that it seems to be overridden properly. """ self.extractTar("tree5") backupList = BackupFileList() backupList.excludeLinks = True dirPath = self.buildPath(["tree5", "dir001"]) count = backupList.addDir(dirPath) self.assertEqual(0, count) self.assertEqual(0, len(backupList)) dirPath = self.buildPath(["tree5", "dir002", "link001", ]) count = backupList.addDir(dirPath) self.assertEqual(0, count) self.assertEqual(0, len(backupList)) def testAddDir_005(self): """ Test that function is overridden, excludePaths set. Since this function calls the superclass by definition, we can skimp a bit on validation and only ensure that it seems to be overridden properly. """ self.extractTar("tree5") backupList = BackupFileList() backupList.excludePaths = [ NOMATCH_PATH ] dirPath = self.buildPath(["tree5", "dir001"]) count = backupList.addDir(dirPath) self.assertEqual(0, count) self.assertEqual(0, len(backupList)) dirPath = self.buildPath(["tree5", "dir002", "link001", ]) count = backupList.addDir(dirPath) self.assertEqual(1, count) self.assertEqual([dirPath], backupList) def testAddDir_006(self): """ Test that function is overridden, excludePatterns set. Since this function calls the superclass by definition, we can skimp a bit on validation and only ensure that it seems to be overridden properly. 
""" self.extractTar("tree5") backupList = BackupFileList() backupList.excludePatterns = [ NOMATCH_PATH ] dirPath = self.buildPath(["tree5", "dir001"]) count = backupList.addDir(dirPath) self.assertEqual(0, count) self.assertEqual(0, len(backupList)) dirPath = self.buildPath(["tree5", "dir002", "link001", ]) count = backupList.addDir(dirPath) self.assertEqual(1, count) self.assertEqual([dirPath], backupList) ################### # Test totalSize() ################### def testTotalSize_001(self): """ Test on an empty list. """ backupList = BackupFileList() size = backupList.totalSize() self.assertEqual(0, size) def testTotalSize_002(self): """ Test on a non-empty list containing only valid entries. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(15, count) self.assertEqual(15, len(backupList)) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in 
backupList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList) size = backupList.totalSize() self.assertEqual(1116, size) def testTotalSize_004(self): """ Test on a non-empty list (some containing spaces). """ self.extractTar("tree11") path = self.buildPath(["tree11", ]) backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(13, count) self.assertEqual(13, len(backupList)) self.assertTrue(self.buildPath([ "tree11", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "link with spaces", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in backupList) size = backupList.totalSize() self.assertEqual(1085, size) def testTotalSize_005(self): """ Test on a non-empty list containing a directory (which shouldn't be possible). 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(15, count) self.assertEqual(15, len(backupList)) backupList.append(self.buildPath([ "tree9", "dir001", ])) # back-door around addDir() self.assertEqual(16, len(backupList)) self.assertTrue(self.buildPath([ "tree9", "dir001" ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList) size = backupList.totalSize() self.assertEqual(1116, size) def testTotalSize_006(self): """ Test on a non-empty list containing a non-existent file. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(15, count) self.assertEqual(15, len(backupList)) backupList.append(self.buildPath([ "tree9", INVALID_FILE, ])) # file won't exist on disk self.assertEqual(16, len(backupList)) self.assertTrue(self.buildPath([ "tree9", INVALID_FILE ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList) size = backupList.totalSize() self.assertEqual(1116, size) ######################### # Test generateSizeMap() ######################### def testGenerateSizeMap_001(self): """ Test on an empty list. """ backupList = BackupFileList() sizeMap = backupList.generateSizeMap() self.assertEqual(0, len(sizeMap)) def testGenerateSizeMap_002(self): """ Test on a non-empty list containing only valid entries. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(15, count) self.assertEqual(15, len(backupList)) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList) sizeMap = backupList.generateSizeMap() self.assertEqual(15, len(sizeMap)) self.assertEqual(243, sizeMap[self.buildPath([ "tree9", "dir001", "file001", ]) ]) self.assertEqual(268, sizeMap[self.buildPath([ "tree9", "dir001", "file002", ]) ]) self.assertEqual(0, sizeMap[self.buildPath([ "tree9", "dir001", "link001", ]) ]) self.assertEqual(0, sizeMap[self.buildPath([ "tree9", "dir001", "link002", ]) ]) self.assertEqual(0, sizeMap[self.buildPath([ "tree9", "dir001", "link003", ]) ]) self.assertEqual(134, sizeMap[self.buildPath([ "tree9", "dir002", "file001", ]) ]) self.assertEqual(74, sizeMap[self.buildPath([ "tree9", 
"dir002", "file002", ]) ]) self.assertEqual(0, sizeMap[self.buildPath([ "tree9", "dir002", "link001", ]) ]) self.assertEqual(0, sizeMap[self.buildPath([ "tree9", "dir002", "link002", ]) ]) self.assertEqual(0, sizeMap[self.buildPath([ "tree9", "dir002", "link003", ]) ]) self.assertEqual(0, sizeMap[self.buildPath([ "tree9", "dir002", "link004", ]) ]) self.assertEqual(155, sizeMap[self.buildPath([ "tree9", "file001", ]) ]) self.assertEqual(242, sizeMap[self.buildPath([ "tree9", "file002", ]) ]) self.assertEqual(0, sizeMap[self.buildPath([ "tree9", "link001", ]) ]) self.assertEqual(0, sizeMap[self.buildPath([ "tree9", "link002", ]) ]) def testGenerateSizeMap_004(self): """ Test on a non-empty list (some containing spaces). """ self.extractTar("tree11") path = self.buildPath(["tree11", ]) backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(13, count) self.assertEqual(13, len(backupList)) self.assertTrue(self.buildPath([ "tree11", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "link with spaces", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in 
backupList) sizeMap = backupList.generateSizeMap() self.assertEqual(13, len(sizeMap)) self.assertEqual(155, sizeMap[self.buildPath([ "tree11", "file001", ])]) self.assertEqual(155, sizeMap[self.buildPath([ "tree11", "file with spaces", ])]) self.assertEqual(0, sizeMap[self.buildPath([ "tree11", "link001", ])]) self.assertEqual(0, sizeMap[self.buildPath([ "tree11", "link002", ])]) self.assertEqual(0, sizeMap[self.buildPath([ "tree11", "link003", ])]) self.assertEqual(0, sizeMap[self.buildPath([ "tree11", "link with spaces", ])]) self.assertEqual(155, sizeMap[self.buildPath([ "tree11", "dir002", "file001", ])]) self.assertEqual(155, sizeMap[self.buildPath([ "tree11", "dir002", "file002", ])]) self.assertEqual(155, sizeMap[self.buildPath([ "tree11", "dir002", "file003", ])]) self.assertEqual(155, sizeMap[self.buildPath([ "tree11", "dir with spaces", "file001", ])]) self.assertEqual(155, sizeMap[self.buildPath([ "tree11", "dir with spaces", "file with spaces", ])]) self.assertEqual(0, sizeMap[self.buildPath([ "tree11", "dir with spaces", "link002", ])]) self.assertEqual(0, sizeMap[self.buildPath([ "tree11", "dir with spaces", "link with spaces", ])]) def testGenerateSizeMap_005(self): """ Test on a non-empty list containing a directory (which shouldn't be possible). 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(15, count) self.assertEqual(15, len(backupList)) backupList.append(self.buildPath([ "tree9", "dir001", ])) # back-door around addDir() self.assertEqual(16, len(backupList)) self.assertTrue(self.buildPath([ "tree9", "dir001" ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList) sizeMap = backupList.generateSizeMap() self.assertEqual(15, len(sizeMap)) self.assertEqual(243, sizeMap[self.buildPath([ "tree9", "dir001", "file001", ]) ]) self.assertEqual(268, sizeMap[self.buildPath([ "tree9", "dir001", "file002", ]) ]) self.assertEqual(0, sizeMap[self.buildPath([ "tree9", "dir001", "link001", ]) ]) self.assertEqual(0, sizeMap[self.buildPath([ "tree9", "dir001", "link002", ]) ]) self.assertEqual(0, 
sizeMap[self.buildPath([ "tree9", "dir001", "link003", ]) ]) self.assertEqual(134, sizeMap[self.buildPath([ "tree9", "dir002", "file001", ]) ]) self.assertEqual(74, sizeMap[self.buildPath([ "tree9", "dir002", "file002", ]) ]) self.assertEqual(0, sizeMap[self.buildPath([ "tree9", "dir002", "link001", ]) ]) self.assertEqual(0, sizeMap[self.buildPath([ "tree9", "dir002", "link002", ]) ]) self.assertEqual(0, sizeMap[self.buildPath([ "tree9", "dir002", "link003", ]) ]) self.assertEqual(0, sizeMap[self.buildPath([ "tree9", "dir002", "link004", ]) ]) self.assertEqual(155, sizeMap[self.buildPath([ "tree9", "file001", ]) ]) self.assertEqual(242, sizeMap[self.buildPath([ "tree9", "file002", ]) ]) self.assertEqual(0, sizeMap[self.buildPath([ "tree9", "link001", ]) ]) self.assertEqual(0, sizeMap[self.buildPath([ "tree9", "link002", ]) ]) def testGenerateSizeMap_006(self): """ Test on a non-empty list containing a non-existent file. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(15, count) self.assertEqual(15, len(backupList)) backupList.append(self.buildPath([ "tree9", INVALID_FILE, ])) # file won't exist on disk self.assertEqual(16, len(backupList)) self.assertTrue(self.buildPath([ "tree9", INVALID_FILE ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) 
self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList) sizeMap = backupList.generateSizeMap() self.assertEqual(15, len(sizeMap)) self.assertEqual(243, sizeMap[self.buildPath([ "tree9", "dir001", "file001", ]) ]) self.assertEqual(268, sizeMap[self.buildPath([ "tree9", "dir001", "file002", ]) ]) self.assertEqual(0, sizeMap[self.buildPath([ "tree9", "dir001", "link001", ]) ]) self.assertEqual(0, sizeMap[self.buildPath([ "tree9", "dir001", "link002", ]) ]) self.assertEqual(0, sizeMap[self.buildPath([ "tree9", "dir001", "link003", ]) ]) self.assertEqual(134, sizeMap[self.buildPath([ "tree9", "dir002", "file001", ]) ]) self.assertEqual(74, sizeMap[self.buildPath([ "tree9", "dir002", "file002", ]) ]) self.assertEqual(0, sizeMap[self.buildPath([ "tree9", "dir002", "link001", ]) ]) self.assertEqual(0, sizeMap[self.buildPath([ "tree9", "dir002", "link002", ]) ]) self.assertEqual(0, sizeMap[self.buildPath([ "tree9", "dir002", "link003", ]) ]) self.assertEqual(0, sizeMap[self.buildPath([ "tree9", "dir002", "link004", ]) ]) self.assertEqual(155, sizeMap[self.buildPath([ "tree9", "file001", ]) ]) self.assertEqual(242, sizeMap[self.buildPath([ "tree9", "file002", ]) ]) self.assertEqual(0, sizeMap[self.buildPath([ "tree9", "link001", ]) ]) self.assertEqual(0, sizeMap[self.buildPath([ "tree9", "link002", ]) ]) ########################### # Test generateDigestMap() ########################### def testGenerateDigestMap_001(self): """ Test on an empty list. 
""" backupList = BackupFileList() digestMap = backupList.generateDigestMap() self.assertEqual(0, len(digestMap)) def testGenerateDigestMap_002(self): """ Test on a non-empty list containing only valid entries. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(15, count) self.assertEqual(15, len(backupList)) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList) digestMap = backupList.generateDigestMap() self.assertEqual(6, len(digestMap)) self.assertEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", digestMap[self.buildPath([ "tree9", "dir001", "file001", ])]) self.assertEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", digestMap[self.buildPath([ "tree9", "dir001", "file002", ])]) self.assertEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", digestMap[self.buildPath([ 
"tree9", "dir002", "file001", ])]) self.assertEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", digestMap[self.buildPath([ "tree9", "dir002", "file002", ])]) self.assertEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath([ "tree9", "file001", ])]) self.assertEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", digestMap[self.buildPath([ "tree9", "file002", ])]) def testGenerateDigestMap_003(self): """ Test on a non-empty list containing only valid entries (some containing spaces). """ self.extractTar("tree11") path = self.buildPath(["tree11", ]) backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(13, count) self.assertEqual(13, len(backupList)) self.assertTrue(self.buildPath([ "tree11", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "link with spaces", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in backupList) digestMap = backupList.generateDigestMap() self.assertEqual(7, len(digestMap)) self.assertEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath([ "tree11", "file001", ])]) 
self.assertEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath([ "tree11", "file with spaces", ])]) self.assertEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath([ "tree11", "dir002", "file001", ])]) self.assertEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath([ "tree11", "dir002", "file002", ])]) self.assertEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath([ "tree11", "dir002", "file003", ])]) self.assertEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath([ "tree11", "dir with spaces", "file001", ])]) self.assertEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath([ "tree11", "dir with spaces", "file with spaces", ])]) def testGenerateDigestMap_004(self): """ Test on a non-empty list containing a directory (which shouldn't be possible). """ self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(15, count) self.assertEqual(15, len(backupList)) backupList.append(self.buildPath([ "tree9", "dir001", ])) # back-door around addDir() self.assertEqual(16, len(backupList)) self.assertTrue(self.buildPath([ "tree9", "dir001" ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in 
backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList) digestMap = backupList.generateDigestMap() self.assertEqual(6, len(digestMap)) self.assertEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", digestMap[self.buildPath([ "tree9", "dir001", "file001", ])]) self.assertEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", digestMap[self.buildPath([ "tree9", "dir001", "file002", ])]) self.assertEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", digestMap[self.buildPath([ "tree9", "dir002", "file001", ])]) self.assertEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", digestMap[self.buildPath([ "tree9", "dir002", "file002", ])]) self.assertEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath([ "tree9", "file001", ])]) self.assertEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", digestMap[self.buildPath([ "tree9", "file002", ])]) def testGenerateDigestMap_005(self): """ Test on a non-empty list containing a non-existent file. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(15, count) self.assertEqual(15, len(backupList)) backupList.append(self.buildPath([ "tree9", INVALID_FILE, ])) # file won't exist on disk self.assertEqual(16, len(backupList)) self.assertTrue(self.buildPath([ "tree9", INVALID_FILE ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList) digestMap = backupList.generateDigestMap() self.assertEqual(6, len(digestMap)) self.assertEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", digestMap[self.buildPath([ "tree9", "dir001", "file001", ])]) self.assertEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", digestMap[self.buildPath([ "tree9", "dir001", "file002", ])]) self.assertEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", digestMap[self.buildPath([ "tree9", 
"dir002", "file001", ])]) self.assertEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", digestMap[self.buildPath([ "tree9", "dir002", "file002", ])]) self.assertEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath([ "tree9", "file001", ])]) self.assertEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", digestMap[self.buildPath([ "tree9", "file002", ])]) def testGenerateDigestMap_006(self): """ Test on an empty list, passing stripPrefix not None. """ backupList = BackupFileList() prefix = "whatever" digestMap = backupList.generateDigestMap(stripPrefix=prefix) self.assertEqual(0, len(digestMap)) def testGenerateDigestMap_007(self): """ Test on a non-empty list containing only valid entries, passing stripPrefix not None. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(15, count) self.assertEqual(15, len(backupList)) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList) 
      self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList)
      prefix = normalizeDir(self.buildPath(["tree9", ]))
      digestMap = backupList.generateDigestMap(stripPrefix=prefix)
      self.assertEqual(6, len(digestMap))
      self.assertEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", digestMap[buildPath([ "/", "dir001", "file001", ])])
      self.assertEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", digestMap[buildPath([ "/", "dir001", "file002", ])])
      self.assertEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", digestMap[buildPath([ "/", "dir002", "file001", ])])
      self.assertEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", digestMap[buildPath([ "/", "dir002", "file002", ])])
      self.assertEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "/", "file001", ])])
      self.assertEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", digestMap[buildPath([ "/", "file002", ])])

   def testGenerateDigestMap_008(self):
      """
      Test on a non-empty list containing only valid entries (some containing
      spaces), passing stripPrefix not None.
      """
      self.extractTar("tree11")
      path = self.buildPath(["tree11", ])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      self.assertEqual(13, count)
      self.assertEqual(13, len(backupList))
      self.assertTrue(self.buildPath([ "tree11", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree11", "link001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree11", "link002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree11", "link003", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree11", "link with spaces", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in backupList)
      prefix = normalizeDir(self.buildPath(["tree11", ]))
      digestMap = backupList.generateDigestMap(stripPrefix=prefix)
      self.assertEqual(7, len(digestMap))
      self.assertEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "/", "file001", ])])
      self.assertEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "/", "file with spaces", ])])
      self.assertEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "/", "dir002", "file001", ])])
      self.assertEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "/", "dir002", "file002", ])])
      self.assertEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "/", "dir002", "file003", ])])
      self.assertEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "/", "dir with spaces", "file001", ])])
      self.assertEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "/", "dir with spaces", "file with spaces", ])])

   def testGenerateDigestMap_009(self):
      """
      Test on a non-empty list containing a directory (which shouldn't be
      possible), passing stripPrefix not None.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      self.assertEqual(15, count)
      self.assertEqual(15, len(backupList))
      backupList.append(self.buildPath([ "tree9", "dir001", ])) # back-door around addDir()
      self.assertEqual(16, len(backupList))
      self.assertTrue(self.buildPath([ "tree9", "dir001" ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList)
      prefix = normalizeDir(self.buildPath(["tree9", ]))
      digestMap = backupList.generateDigestMap(stripPrefix=prefix)
      self.assertEqual(6, len(digestMap))
      self.assertEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", digestMap[buildPath([ "/", "dir001", "file001", ])])
      self.assertEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", digestMap[buildPath([ "/", "dir001", "file002", ])])
      self.assertEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", digestMap[buildPath([ "/", "dir002", "file001", ])])
      self.assertEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", digestMap[buildPath([ "/", "dir002", "file002", ])])
      self.assertEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "/", "file001", ])])
      self.assertEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", digestMap[buildPath([ "/", "file002", ])])

   def testGenerateDigestMap_010(self):
      """
      Test on a non-empty list containing a non-existent file, passing
      stripPrefix not None.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      self.assertEqual(15, count)
      self.assertEqual(15, len(backupList))
      backupList.append(self.buildPath([ "tree9", INVALID_FILE, ])) # file won't exist on disk
      self.assertEqual(16, len(backupList))
      self.assertTrue(self.buildPath([ "tree9", INVALID_FILE ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList)
      prefix = normalizeDir(self.buildPath(["tree9", ]))
      digestMap = backupList.generateDigestMap(stripPrefix=prefix)
      self.assertEqual(6, len(digestMap))
      self.assertEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", digestMap[buildPath([ "/", "dir001", "file001", ])])
      self.assertEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", digestMap[buildPath([ "/", "dir001", "file002", ])])
      self.assertEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", digestMap[buildPath([ "/", "dir002", "file001", ])])
      self.assertEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", digestMap[buildPath([ "/", "dir002", "file002", ])])
      self.assertEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "/", "file001", ])])
      self.assertEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", digestMap[buildPath([ "/", "file002", ])])

   ########################
   # Test generateFitted()
   ########################

   def testGenerateFitted_001(self):
      """
      Test on an empty list.
      """
      backupList = BackupFileList()
      fittedList = backupList.generateFitted(2000)
      self.assertEqual(0, len(fittedList))

   def testGenerateFitted_002(self):
      """
      Test on a non-empty list containing only valid entries, all of which fit.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      self.assertEqual(15, count)
      self.assertEqual(15, len(backupList))
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList)
      fittedList = backupList.generateFitted(2000)
      self.assertEqual(15, len(fittedList))
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir002",
"file002", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fittedList)

   def testGenerateFitted_003(self):
      """
      Test on a non-empty list containing only valid entries (some containing
      spaces), all of which fit.
      """
      self.extractTar("tree11")
      path = self.buildPath(["tree11", ])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      self.assertEqual(13, count)
      self.assertEqual(13, len(backupList))
      self.assertTrue(self.buildPath([ "tree11", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree11", "link001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree11", "link002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree11", "link003", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree11", "link with spaces", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in backupList)
      fittedList = backupList.generateFitted(2000)
      self.assertEqual(13, len(fittedList))
      self.assertTrue(self.buildPath([ "tree11", "file001", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree11", "link001", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree11", "link002", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree11", "link003", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree11", "link with spaces", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fittedList)

   def testGenerateFitted_004(self):
      """
      Test on a non-empty list containing only valid entries, some of which
      fit.

      We can get some strange behavior on Windows, which hits the "links not
      supported" case.  The file tree9/dir002/file002 is 74 bytes, and is
      supposed to be the only file included because links are not recognized.
      However, link004 points at file002, and apparently Windows (sometimes?)
      sees link004 as a real file with a size of 74 bytes.  Since only one of
      the two fits in the fitted list, we just check for one or the other.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      self.assertEqual(15, count)
      self.assertEqual(15, len(backupList))
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList)
      fittedList = backupList.generateFitted(80)
      self.assertEqual(10, len(fittedList))
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fittedList)

   def testGenerateFitted_005(self):
      """
      Test on a non-empty list containing only valid entries, none of which
      fit.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      self.assertEqual(15, count)
      self.assertEqual(15, len(backupList))
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList)
      fittedList = backupList.generateFitted(0)
      self.assertEqual(0, len(fittedList))
      fittedList = backupList.generateFitted(50)
      self.assertEqual(9, len(fittedList))
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fittedList)

   def testGenerateFitted_006(self):
      """
      Test on a non-empty list containing a directory (which shouldn't be
      possible).
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      self.assertEqual(15, count)
      self.assertEqual(15, len(backupList))
      backupList.append(self.buildPath([ "tree9", "dir001", ])) # back-door around addDir()
      self.assertEqual(16, len(backupList))
      self.assertTrue(self.buildPath([ "tree9", "dir001" ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList)
      fittedList = backupList.generateFitted(2000)
      self.assertEqual(15, len(fittedList))
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fittedList)

   def testGenerateFitted_007(self):
      """
      Test on a non-empty list containing a non-existent file.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      self.assertEqual(15, count)
      self.assertEqual(15, len(backupList))
      backupList.append(self.buildPath([ "tree9", INVALID_FILE, ])) # file won't exist on disk
      self.assertEqual(16, len(backupList))
      self.assertTrue(self.buildPath([ "tree9", INVALID_FILE ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList)
      fittedList = backupList.generateFitted(2000)
      self.assertEqual(15, len(fittedList))
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fittedList)
      self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fittedList)

   ######################
   # Test generateSpan()
   ######################

   def testGenerateSpan_001(self):
      """
      Test on an empty list.
      """
      backupList = BackupFileList()
      spanSet = backupList.generateSpan(2000)
      self.assertEqual(0, len(spanSet))

   def testGenerateSpan_002(self):
      """
      Test a set of files that all fit in one span item.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      self.assertEqual(15, count)
      self.assertEqual(15, len(backupList))
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList)
      spanSet = backupList.generateSpan(2000)
      self.assertEqual(1, len(spanSet))
      spanItem = spanSet[0]
      self.assertEqual(15, len(spanItem.fileList))
      self.assertEqual(1116, spanItem.size)
      self.assertEqual(2000, spanItem.capacity)
      self.assertEqual((1116.0/2000.0)*100.0, spanItem.utilization)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "file001", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "file002", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "link001", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "link002", ]) in spanItem.fileList)

   def testGenerateSpan_003(self):
      """
      Test a set of files that all fit in two span items.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      self.assertEqual(15, count)
      self.assertEqual(15, len(backupList))
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList)
      spanSet = backupList.generateSpan(760, "best_fit")
      self.assertEqual(2, len(spanSet))
      spanItem = spanSet[0]
      self.assertEqual(12, len(spanItem.fileList))
      self.assertEqual(753, spanItem.size)
      self.assertEqual(760, spanItem.capacity)
      self.assertEqual((753.0/760.0)*100.0, spanItem.utilization)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "file002", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "link001", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "link002", ]) in spanItem.fileList)
      spanItem = spanSet[1]
      self.assertEqual(3, len(spanItem.fileList))
      self.assertEqual(363, spanItem.size)
      self.assertEqual(760, spanItem.capacity)
      self.assertEqual((363.0/760.0)*100.0, spanItem.utilization)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "file001", ]) in spanItem.fileList)

   def testGenerateSpan_004(self):
      """
      Test a set of files that all fit in three span items.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      self.assertEqual(15, count)
      self.assertEqual(15, len(backupList))
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList)
      self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList)
      spanSet = backupList.generateSpan(515, "best_fit")
      self.assertEqual(3, len(spanSet))
      spanItem = spanSet[0]
      self.assertEqual(11, len(spanItem.fileList))
      self.assertEqual(511, spanItem.size)
      self.assertEqual(515, spanItem.capacity)
      self.assertEqual((511.0/515.0)*100.0, spanItem.utilization)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "link001", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "link002", ]) in spanItem.fileList)
      spanItem = spanSet[1]
      self.assertEqual(3, len(spanItem.fileList))
      self.assertEqual(471, spanItem.size)
      self.assertEqual(515, spanItem.capacity)
      self.assertEqual((471.0/515.0)*100.0, spanItem.utilization)
      self.assertTrue(self.buildPath([ "tree9", "file002", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "file001", ]) in spanItem.fileList)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in spanItem.fileList)
      spanItem = spanSet[2]
      self.assertEqual(1, len(spanItem.fileList))
      self.assertEqual(134, spanItem.size)
      self.assertEqual(515, spanItem.capacity)
      self.assertEqual((134.0/515.0)*100.0, spanItem.utilization)
      self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in spanItem.fileList)

   def testGenerateSpan_005(self):
      """
      Test a set of files where one of the files does not fit in the capacity.
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(15, count) self.assertEqual(15, len(backupList)) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList) self.assertRaises(ValueError, backupList.generateSpan, 250, "best_fit") ######################### # Test generateTarfile() ######################### def testGenerateTarfile_001(self): """ Test on an empty list. """ backupList = BackupFileList() tarPath = self.buildPath(["file.tar", ]) self.assertRaises(ValueError, backupList.generateTarfile, tarPath) self.assertTrue(not os.path.exists(tarPath)) def testGenerateTarfile_002(self): """ Test on a non-empty list containing a directory (which shouldn't be possible). 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(15, count) self.assertEqual(15, len(backupList)) backupList.append(self.buildPath([ "tree9", "dir001", ])) # back-door around addDir() self.assertEqual(16, len(backupList)) self.assertTrue(self.buildPath([ "tree9", "dir001" ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList) tarPath = self.buildPath(["file.tar", ]) backupList.generateTarfile(tarPath) self.assertTrue(tarfile.is_tarfile(tarPath)) with tarfile.open(tarPath) as tarFile: tarList = tarFile.getnames() self.assertEqual(16, len(tarList)) self.assertTrue(self.tarPath([ "tree9", "dir001/" ]) in tarList or self.tarPath([ "tree9", "dir001//" ]) in tarList # Grr... 
Python 2.5 behavior differs or self.tarPath([ "tree9", "dir001", ]) in tarList) # Grr... Python 2.6 behavior differs self.assertTrue(self.tarPath([ "tree9", "dir001", "file001", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir001", "link003", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir002", "link001", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir002", "link002", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "file001", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "file002", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "link001", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "link002", ]) in tarList) def testGenerateTarfile_003(self): """ Test on a non-empty list containing a non-existent file, ignore=False. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(15, count) self.assertEqual(15, len(backupList)) backupList.append(self.buildPath([ "tree9", INVALID_FILE, ])) # file won't exist on disk self.assertEqual(16, len(backupList)) self.assertTrue(self.buildPath([ "tree9", INVALID_FILE ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList) tarPath = self.buildPath(["file.tar", ]) self.assertRaises(tarfile.TarError, backupList.generateTarfile, tarPath, ignore=False) self.assertTrue(not os.path.exists(tarPath)) def testGenerateTarfile_004(self): """ Test on a non-empty list containing a non-existent file, ignore=True. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(15, count) self.assertEqual(15, len(backupList)) backupList.append(self.buildPath([ "tree9", INVALID_FILE, ])) # file won't exist on disk self.assertEqual(16, len(backupList)) self.assertTrue(self.buildPath([ "tree9", INVALID_FILE ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList) tarPath = self.buildPath(["file.tar", ]) backupList.generateTarfile(tarPath, ignore=True) self.assertTrue(tarfile.is_tarfile(tarPath)) with tarfile.open(tarPath) as tarFile: tarList = tarFile.getnames() self.assertEqual(15, len(tarList)) self.assertTrue(self.tarPath([ "tree9", "dir001", "file001", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList) self.assertTrue(self.tarPath([ 
"tree9", "dir001", "link001", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir001", "link003", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir002", "link001", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir002", "link002", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "file001", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "file002", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "link001", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "link002", ]) in tarList) def testGenerateTarfile_005(self): """ Test on a non-empty list containing only valid entries, with an invalid mode. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(15, count) self.assertEqual(15, len(backupList)) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList) tarPath = self.buildPath(["file.tar", ]) self.assertRaises(ValueError, backupList.generateTarfile, tarPath, mode="bogus") self.assertTrue(not os.path.exists(tarPath)) def testGenerateTarfile_006(self): """ Test on a non-empty list containing only valid entries, default mode. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(15, count) self.assertEqual(15, len(backupList)) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList) tarPath = self.buildPath(["file.tar", ]) backupList.generateTarfile(tarPath) self.assertTrue(tarfile.is_tarfile(tarPath)) with tarfile.open(tarPath) as tarFile: tarList = tarFile.getnames() self.assertEqual(15, len(tarList)) self.assertTrue(self.tarPath([ "tree9", "dir001", "file001", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir001", "link003", ]) in tarList) 
        self.assertTrue(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "dir002", "link001", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "dir002", "link002", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "file001", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "file002", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "link001", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "link002", ]) in tarList)

    def testGenerateTarfile_007(self):
        """
        Test on a non-empty list (some containing spaces), default mode.
        """
        self.extractTar("tree11")
        path = self.buildPath(["tree11", ])
        backupList = BackupFileList()
        count = backupList.addDirContents(path)
        self.assertEqual(13, count)
        self.assertEqual(13, len(backupList))
        self.assertTrue(self.buildPath([ "tree11", "file001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree11", "link001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree11", "link002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree11", "link003", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree11", "link with spaces", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in backupList)
        tarPath = self.buildPath(["file.tar", ])
        backupList.generateTarfile(tarPath)
        self.assertTrue(tarfile.is_tarfile(tarPath))
        with tarfile.open(tarPath) as tarFile:
            tarList = tarFile.getnames()
        self.assertEqual(13, len(tarList))
        self.assertTrue(self.tarPath([ "tree11", "file001", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree11", "file with spaces", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree11", "link001", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree11", "link002", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree11", "link003", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree11", "link with spaces", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree11", "dir002", "file001", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree11", "dir002", "file002", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree11", "dir002", "file003", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree11", "dir with spaces", "file001", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree11", "dir with spaces", "file with spaces", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree11", "dir with spaces", "link002", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree11", "dir with spaces", "link with spaces", ]) in tarList)

    def testGenerateTarfile_008(self):
        """
        Test on a non-empty list containing only valid entries, 'tar' mode.
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(15, count) self.assertEqual(15, len(backupList)) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList) tarPath = self.buildPath(["file.tar", ]) backupList.generateTarfile(tarPath) self.assertTrue(tarfile.is_tarfile(tarPath)) with tarfile.open(tarPath) as tarFile: tarList = tarFile.getnames() self.assertEqual(15, len(tarList)) self.assertTrue(self.tarPath([ "tree9", "dir001", "file001", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir001", "link003", ]) in tarList) 
        self.assertTrue(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "dir002", "link001", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "dir002", "link002", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "file001", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "file002", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "link001", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "link002", ]) in tarList)

    def testGenerateTarfile_009(self):
        """
        Test on a non-empty list containing only valid entries, 'targz' mode.
        """
        self.extractTar("tree9")
        path = self.buildPath(["tree9"])
        backupList = BackupFileList()
        count = backupList.addDirContents(path)
        self.assertEqual(15, count)
        self.assertEqual(15, len(backupList))
        self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList)
        tarPath = self.buildPath(["file.tar.gz", ])
        backupList.generateTarfile(tarPath, mode="targz")
        self.assertTrue(tarfile.is_tarfile(tarPath))
        with tarfile.open(tarPath) as tarFile:
            tarList = tarFile.getnames()
        self.assertEqual(15, len(tarList))
        self.assertTrue(self.tarPath([ "tree9", "dir001", "file001", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "dir001", "link003", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "dir002", "link001", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "dir002", "link002", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "file001", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "file002", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "link001", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "link002", ]) in tarList)

    def testGenerateTarfile_010(self):
        """
        Test on a non-empty list containing only valid entries, 'tarbz2' mode.
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(15, count) self.assertEqual(15, len(backupList)) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList) tarPath = self.buildPath(["file.tar.bz2", ]) backupList.generateTarfile(tarPath, mode="tarbz2") self.assertTrue(tarfile.is_tarfile(tarPath)) with tarfile.open(tarPath) as tarFile: tarList = tarFile.getnames() self.assertEqual(15, len(tarList)) self.assertTrue(self.tarPath([ "tree9", "dir001", "file001", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir001", "link003", ]) in tarList) 
        self.assertTrue(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "dir002", "link001", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "dir002", "link002", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "file001", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "file002", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "link001", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "link002", ]) in tarList)

    def testGenerateTarfile_011(self):
        """
        Test on a non-empty list containing only valid entries, 'tar' mode, long target name.
        """
        self.extractTar("tree9")
        path = self.buildPath(["tree9"])
        backupList = BackupFileList()
        count = backupList.addDirContents(path)
        self.assertEqual(15, count)
        self.assertEqual(15, len(backupList))
        self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList)
        tarPath = self.buildRandomPath(255, ".tar")
        backupList.generateTarfile(tarPath, mode="tar")
        self.assertTrue(tarfile.is_tarfile(tarPath))
        with tarfile.open(tarPath) as tarFile:
            tarList = tarFile.getnames()
        self.assertEqual(15, len(tarList))
        self.assertTrue(self.tarPath([ "tree9", "dir001", "file001", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "dir001", "link003", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "dir002", "link001", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "dir002", "link002", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "file001", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "file002", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "link001", ]) in tarList)
        self.assertTrue(self.tarPath([ "tree9", "link002", ]) in tarList)

    def testGenerateTarfile_012(self):
        """
        Test on a non-empty list containing only valid entries, 'targz' mode, long target name.
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(15, count) self.assertEqual(15, len(backupList)) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList) tarPath = self.buildRandomPath(255, ".tar") backupList.generateTarfile(tarPath, mode="targz") self.assertTrue(tarfile.is_tarfile(tarPath)) with tarfile.open(tarPath) as tarFile: tarList = tarFile.getnames() self.assertEqual(15, len(tarList)) self.assertTrue(self.tarPath([ "tree9", "dir001", "file001", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir001", "link003", ]) in tarList) 
self.assertTrue(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir002", "link001", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir002", "link002", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "file001", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "file002", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "link001", ]) in tarList) self.assertTrue(self.tarPath([ "tree9", "link002", ]) in tarList) def testGenerateTarfile_013(self): """ Test on a non-empty list containing only valid entries, 'tarbz2' mode, long target name. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(15, count) self.assertEqual(15, len(backupList)) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in 
        backupList)
        for entry in [ [ "tree9", "file002" ], [ "tree9", "link001" ], [ "tree9", "link002" ] ]:
            self.assertTrue(self.buildPath(entry) in backupList)
        tarPath = self.buildRandomPath(255, ".tar")
        backupList.generateTarfile(tarPath, mode="tarbz2")
        self.assertTrue(tarfile.is_tarfile(tarPath))
        with tarfile.open(tarPath) as tarFile:
            tarList = tarFile.getnames()
        self.assertEqual(15, len(tarList))
        for entry in [ [ "tree9", "dir001", "file001" ], [ "tree9", "dir001", "file002" ], [ "tree9", "dir001", "link001" ],
                       [ "tree9", "dir001", "link002" ], [ "tree9", "dir001", "link003" ], [ "tree9", "dir002", "file001" ],
                       [ "tree9", "dir002", "file002" ], [ "tree9", "dir002", "link001" ], [ "tree9", "dir002", "link002" ],
                       [ "tree9", "dir002", "link003" ], [ "tree9", "dir002", "link004" ], [ "tree9", "file001" ],
                       [ "tree9", "file002" ], [ "tree9", "link001" ], [ "tree9", "link002" ] ]:
            self.assertTrue(self.tarPath(entry) in tarList)

    def testGenerateTarfile_014(self):
        """
        Test behavior of the flat flag.
""" self.extractTar("tree11") backupList = BackupFileList() path = self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) backupList.addFile(path) path = self.buildPath([ "tree11", "dir with spaces", "file001", ]) backupList.addFile(path) path = self.buildPath([ "tree11", "dir002", "file002", ]) backupList.addFile(path) path = self.buildPath([ "tree11", "dir002", "file003", ]) backupList.addFile(path) self.assertEqual(4, len(backupList)) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in backupList) tarPath = self.buildPath(["file.tar", ]) backupList.generateTarfile(tarPath, flat=True) self.assertTrue(tarfile.is_tarfile(tarPath)) with tarfile.open(tarPath) as tarFile: tarList = tarFile.getnames() self.assertEqual(4, len(tarList)) self.assertTrue("file with spaces" in tarList) self.assertTrue("file001" in tarList) self.assertTrue("file002" in tarList) self.assertTrue("file003" in tarList) ######################### # Test removeUnchanged() ######################### def testRemoveUnchanged_001(self): """ Test on an empty list with an empty digest map. """ digestMap = {} backupList = BackupFileList() self.assertEqual(0, len(backupList)) count = backupList.removeUnchanged(digestMap) self.assertTrue(isinstance(backupList, BackupFileList)) # make sure we just replaced it self.assertEqual(0, count) self.assertEqual(0, len(backupList)) def testRemoveUnchanged_002(self): """ Test on an empty list with an non-empty digest map. 
""" digestMap = { self.buildPath([ "tree9", "dir001", "file001", ]):"4ff529531c7e897cd3df90ed76355de7e21e77ee", self.buildPath([ "tree9", "dir001", "file002", ]):"9d473094a22ecf2ae299c25932c941795d1d6cba", self.buildPath([ "tree9", "dir002", "file001", ]):"2f68cdda26b643ca0e53be6348ae1255b8786c4b", self.buildPath([ "tree9", "dir002", "file002", ]):"0cc03b3014d1ca7188264677cf01f015d72d26cb", self.buildPath([ "tree9", "file001", ]) :"3ef0b16a6237af9200b7a46c1987d6a555973847", self.buildPath([ "tree9", "file002", ]) :"fae89085ee97b57ccefa7e30346c573bb0a769db", } backupList = BackupFileList() self.assertEqual(0, len(backupList)) count = backupList.removeUnchanged(digestMap) self.assertTrue(isinstance(backupList, BackupFileList)) # make sure we just replaced it self.assertEqual(0, count) self.assertEqual(0, len(backupList)) def testRemoveUnchanged_003(self): """ Test on an non-empty list with an empty digest map. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) digestMap = { } backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(15, count) self.assertEqual(15, len(backupList)) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", 
"link004", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList) count = backupList.removeUnchanged(digestMap) self.assertTrue(isinstance(backupList, BackupFileList)) # make sure we just replaced it self.assertEqual(0, count) self.assertEqual(15, len(backupList)) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList) def testRemoveUnchanged_004(self): """ Test with a digest map containing only entries that are not in the list. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) digestMap = { self.buildPath([ "tree9", "dir003", "file001", ]):"4ff529531c7e897cd3df90ed76355de7e21e77ee", self.buildPath([ "tree9", "dir003", "file002", ]):"9d473094a22ecf2ae299c25932c941795d1d6cba", self.buildPath([ "tree9", "dir004", "file001", ]):"2f68cdda26b643ca0e53be6348ae1255b8786c4b", self.buildPath([ "tree9", "dir004", "file002", ]):"0cc03b3014d1ca7188264677cf01f015d72d26cb", self.buildPath([ "tree9", "file003", ]) :"3ef0b16a6237af9200b7a46c1987d6a555973847", self.buildPath([ "tree9", "file004", ]) :"fae89085ee97b57ccefa7e30346c573bb0a769db", } backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(15, count) self.assertEqual(15, len(backupList)) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList) count = backupList.removeUnchanged(digestMap) 
        self.assertTrue(isinstance(backupList, BackupFileList)) # make sure we just replaced it
        self.assertEqual(0, count)
        self.assertEqual(15, len(backupList))
        for entry in [ [ "tree9", "dir001", "file001" ], [ "tree9", "dir001", "file002" ], [ "tree9", "dir001", "link001" ],
                       [ "tree9", "dir001", "link002" ], [ "tree9", "dir001", "link003" ], [ "tree9", "dir002", "file001" ],
                       [ "tree9", "dir002", "file002" ], [ "tree9", "dir002", "link001" ], [ "tree9", "dir002", "link002" ],
                       [ "tree9", "dir002", "link003" ], [ "tree9", "dir002", "link004" ], [ "tree9", "file001" ],
                       [ "tree9", "file002" ], [ "tree9", "link001" ], [ "tree9", "link002" ] ]:
            self.assertTrue(self.buildPath(entry) in backupList)

    def testRemoveUnchanged_005(self):
        """
        Test with a digest map containing only entries that are in the list, with non-matching digests.
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) digestMap = { self.buildPath([ "tree9", "dir001", "file001", ]):"4ff529531c7e8AAAAAAAAAAAAAAAAAA7e21e77ee", self.buildPath([ "tree9", "dir001", "file002", ]):"9d473094a22ecAAAAAAAAAAAAAAAAAA95d1d6cba", self.buildPath([ "tree9", "dir002", "file001", ]):"2f68cdda26b64AAAAAAAAAAAAAAAAAA5b8786c4b", self.buildPath([ "tree9", "dir002", "file002", ]):"0cc03b3014d1cAAAAAAAAAAAAAAAAAA5d72d26cb", self.buildPath([ "tree9", "file001", ]) :"3ef0b16a6237aAAAAAAAAAAAAAAAAAA555973847", self.buildPath([ "tree9", "file002", ]) :"fae89085ee97bAAAAAAAAAAAAAAAAAAbb0a769db", } backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(15, count) self.assertEqual(15, len(backupList)) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList) count = backupList.removeUnchanged(digestMap) 
        self.assertTrue(isinstance(backupList, BackupFileList)) # make sure we just replaced it
        self.assertEqual(0, count)
        self.assertEqual(15, len(backupList))
        for entry in [ [ "tree9", "dir001", "file001" ], [ "tree9", "dir001", "file002" ], [ "tree9", "dir001", "link001" ],
                       [ "tree9", "dir001", "link002" ], [ "tree9", "dir001", "link003" ], [ "tree9", "dir002", "file001" ],
                       [ "tree9", "dir002", "file002" ], [ "tree9", "dir002", "link001" ], [ "tree9", "dir002", "link002" ],
                       [ "tree9", "dir002", "link003" ], [ "tree9", "dir002", "link004" ], [ "tree9", "file001" ],
                       [ "tree9", "file002" ], [ "tree9", "link001" ], [ "tree9", "link002" ] ]:
            self.assertTrue(self.buildPath(entry) in backupList)

    def testRemoveUnchanged_006(self):
        """
        Test with a digest map containing only entries that are in the list, with matching digests.
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) digestMap = { self.buildPath([ "tree9", "dir001", "file001", ]):"4ff529531c7e897cd3df90ed76355de7e21e77ee", self.buildPath([ "tree9", "dir001", "file002", ]):"9d473094a22ecf2ae299c25932c941795d1d6cba", self.buildPath([ "tree9", "dir002", "file001", ]):"2f68cdda26b643ca0e53be6348ae1255b8786c4b", self.buildPath([ "tree9", "dir002", "file002", ]):"0cc03b3014d1ca7188264677cf01f015d72d26cb", self.buildPath([ "tree9", "file001", ]) :"3ef0b16a6237af9200b7a46c1987d6a555973847", self.buildPath([ "tree9", "file002", ]) :"fae89085ee97b57ccefa7e30346c573bb0a769db", } backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(15, count) self.assertEqual(15, len(backupList)) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList) count = backupList.removeUnchanged(digestMap) 
        self.assertTrue(isinstance(backupList, BackupFileList)) # make sure we just replaced it
        self.assertEqual(6, count)
        self.assertEqual(9, len(backupList))
        for entry in [ [ "tree9", "dir001", "link001" ], [ "tree9", "dir001", "link002" ], [ "tree9", "dir001", "link003" ],
                       [ "tree9", "dir002", "link001" ], [ "tree9", "dir002", "link002" ], [ "tree9", "dir002", "link003" ],
                       [ "tree9", "dir002", "link004" ], [ "tree9", "link001" ], [ "tree9", "link002" ] ]:
            self.assertTrue(self.buildPath(entry) in backupList)

    def testRemoveUnchanged_007(self):
        """
        Test with a digest map containing both entries that are and are not in the list, with non-matching digests.
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) digestMap = { self.buildPath([ "tree9", "dir001", "file001", ]):"4ff529531cCCCCCCCCCCCCCCCCCCCCCCCCCe77ee", self.buildPath([ "tree9", "dir001", "file002", ]):"9d473094a2CCCCCCCCCCCCCCCCCCCCCCCCCd6cba", self.buildPath([ "tree9", "dir003", "file001", ]):"2f68cdda26CCCCCCCCCCCCCCCCCCCCCCCCC86c4b", self.buildPath([ "tree9", "dir003", "file002", ]):"0cc03b3014CCCCCCCCCCCCCCCCCCCCCCCCCd26cb", self.buildPath([ "tree9", "file001", ]) :"3ef0b16a62CCCCCCCCCCCCCCCCCCCCCCCCC73847", self.buildPath([ "tree9", "file003", ]) :"fae89085eeCCCCCCCCCCCCCCCCCCCCCCCCC769db", } backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(15, count) self.assertEqual(15, len(backupList)) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList) count = backupList.removeUnchanged(digestMap) 
        self.assertTrue(isinstance(backupList, BackupFileList)) # make sure we just replaced it
        self.assertEqual(0, count)
        self.assertEqual(15, len(backupList))
        for entry in [ [ "tree9", "dir001", "file001" ], [ "tree9", "dir001", "file002" ], [ "tree9", "dir001", "link001" ],
                       [ "tree9", "dir001", "link002" ], [ "tree9", "dir001", "link003" ], [ "tree9", "dir002", "file001" ],
                       [ "tree9", "dir002", "file002" ], [ "tree9", "dir002", "link001" ], [ "tree9", "dir002", "link002" ],
                       [ "tree9", "dir002", "link003" ], [ "tree9", "dir002", "link004" ], [ "tree9", "file001" ],
                       [ "tree9", "file002" ], [ "tree9", "link001" ], [ "tree9", "link002" ] ]:
            self.assertTrue(self.buildPath(entry) in backupList)

    def testRemoveUnchanged_008(self):
        """
        Test with a digest map containing both entries that are and are not in the list, with matching digests.
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) digestMap = { self.buildPath([ "tree9", "dir001", "file001", ]):"4ff529531c7e897cd3df90ed76355de7e21e77ee", self.buildPath([ "tree9", "dir001", "file002", ]):"9d473094a22ecf2ae299c25932c941795d1d6cba", self.buildPath([ "tree9", "dir003", "file001", ]):"2f68cdda26b643ca0e53be6348ae1255b8786c4b", self.buildPath([ "tree9", "dir003", "file002", ]):"0cc03b3014d1ca7188264677cf01f015d72d26cb", self.buildPath([ "tree9", "file001", ]) :"3ef0b16a6237af9200b7a46c1987d6a555973847", self.buildPath([ "tree9", "file003", ]) :"fae89085ee97b57ccefa7e30346c573bb0a769db", } backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(15, count) self.assertEqual(15, len(backupList)) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList) count = backupList.removeUnchanged(digestMap) 
        self.assertTrue(isinstance(backupList, BackupFileList)) # make sure we just replaced it
        self.assertEqual(3, count)
        self.assertEqual(12, len(backupList))
        for entry in [ [ "tree9", "dir001", "link001" ], [ "tree9", "dir001", "link002" ], [ "tree9", "dir001", "link003" ],
                       [ "tree9", "dir002", "file001" ], [ "tree9", "dir002", "file002" ], [ "tree9", "dir002", "link001" ],
                       [ "tree9", "dir002", "link002" ], [ "tree9", "dir002", "link003" ], [ "tree9", "dir002", "link004" ],
                       [ "tree9", "file002" ], [ "tree9", "link001" ], [ "tree9", "link002" ] ]:
            self.assertTrue(self.buildPath(entry) in backupList)

    def testRemoveUnchanged_009(self):
        """
        Test with a digest map containing both entries that are and are not in the list, with matching and non-matching digests.
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) digestMap = { self.buildPath([ "tree9", "dir001", "file001", ]):"4ff529531AAAAAAAAAAAAAAAAAAAAAAAe21e77ee", self.buildPath([ "tree9", "dir001", "file002", ]):"9d473094a22ecf2ae299c25932c941795d1d6cba", self.buildPath([ "tree9", "dir003", "file001", ]):"2f68cdda26b643ca0e53be6348ae1255b8786c4b", self.buildPath([ "tree9", "dir003", "file002", ]):"0cc03b3014d1ca7188264677cf01f015d72d26cb", self.buildPath([ "tree9", "file001", ]) :"3ef0b16a6237af9200b7a46c1987d6a555973847", self.buildPath([ "tree9", "file003", ]) :"fae89085ee97b57ccefa7e30346c573bb0a769db", } backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(15, count) self.assertEqual(15, len(backupList)) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList) count = backupList.removeUnchanged(digestMap) 
        self.assertTrue(isinstance(backupList, BackupFileList)) # make sure we just replaced it
        self.assertEqual(2, count)
        self.assertEqual(13, len(backupList))
        for entry in [ [ "tree9", "dir001", "file001" ], [ "tree9", "dir001", "link001" ], [ "tree9", "dir001", "link002" ],
                       [ "tree9", "dir001", "link003" ], [ "tree9", "dir002", "file001" ], [ "tree9", "dir002", "file002" ],
                       [ "tree9", "dir002", "link001" ], [ "tree9", "dir002", "link002" ], [ "tree9", "dir002", "link003" ],
                       [ "tree9", "dir002", "link004" ], [ "tree9", "file002" ], [ "tree9", "link001" ],
                       [ "tree9", "link002" ] ]:
            self.assertTrue(self.buildPath(entry) in backupList)

    def testRemoveUnchanged_010(self):
        """
        Test on an empty list with an empty digest map.
        """
        digestMap = {}
        backupList = BackupFileList()
        self.assertEqual(0, len(backupList))
        (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) # pylint: disable=W0633
        self.assertTrue(isinstance(backupList, BackupFileList)) # make sure we just replaced it
        self.assertEqual(0, count)
        self.assertEqual(0, len(backupList))
        self.assertEqual(0, len(newDigest))

    def testRemoveUnchanged_011(self):
        """
        Test on an empty list with a non-empty digest map.
""" digestMap = { self.buildPath([ "tree9", "dir001", "file001", ]):"4ff529531c7e897cd3df90ed76355de7e21e77ee", self.buildPath([ "tree9", "dir001", "file002", ]):"9d473094a22ecf2ae299c25932c941795d1d6cba", self.buildPath([ "tree9", "dir002", "file001", ]):"2f68cdda26b643ca0e53be6348ae1255b8786c4b", self.buildPath([ "tree9", "dir002", "file002", ]):"0cc03b3014d1ca7188264677cf01f015d72d26cb", self.buildPath([ "tree9", "file001", ]) :"3ef0b16a6237af9200b7a46c1987d6a555973847", self.buildPath([ "tree9", "file002", ]) :"fae89085ee97b57ccefa7e30346c573bb0a769db", } backupList = BackupFileList() self.assertEqual(0, len(backupList)) (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) # pylint: disable=W0633 self.assertTrue(isinstance(backupList, BackupFileList)) # make sure we just replaced it self.assertEqual(0, count) self.assertEqual(0, len(backupList)) self.assertEqual(0, len(newDigest)) def testRemoveUnchanged_012(self): """ Test on an non-empty list with an empty digest map. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) digestMap = { } backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(15, count) self.assertEqual(15, len(backupList)) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList) (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) # pylint: disable=W0633 self.assertTrue(isinstance(backupList, BackupFileList)) # make sure we just replaced it self.assertEqual(0, count) self.assertEqual(15, len(backupList)) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) 
        for entry in [ [ "tree9", "dir001", "link003" ], [ "tree9", "dir002", "file001" ], [ "tree9", "dir002", "file002" ],
                       [ "tree9", "dir002", "link001" ], [ "tree9", "dir002", "link002" ], [ "tree9", "dir002", "link003" ],
                       [ "tree9", "dir002", "link004" ], [ "tree9", "file001" ], [ "tree9", "file002" ],
                       [ "tree9", "link001" ], [ "tree9", "link002" ] ]:
            self.assertTrue(self.buildPath(entry) in backupList)
        self.assertEqual(6, len(newDigest))
        for entry, digest in [ ([ "tree9", "dir001", "file001" ], "4ff529531c7e897cd3df90ed76355de7e21e77ee"),
                               ([ "tree9", "dir001", "file002" ], "9d473094a22ecf2ae299c25932c941795d1d6cba"),
                               ([ "tree9", "dir002", "file001" ], "2f68cdda26b643ca0e53be6348ae1255b8786c4b"),
                               ([ "tree9", "dir002", "file002" ], "0cc03b3014d1ca7188264677cf01f015d72d26cb"),
                               ([ "tree9", "file001" ], "3ef0b16a6237af9200b7a46c1987d6a555973847"),
                               ([ "tree9", "file002" ], "fae89085ee97b57ccefa7e30346c573bb0a769db") ]:
            self.assertEqual(digest, newDigest[self.buildPath(entry)])

    def testRemoveUnchanged_013(self):
        """
        Test with a digest map containing only entries that are not in the list.
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) digestMap = { self.buildPath([ "tree9", "dir003", "file001", ]):"4ff529531c7e897cd3df90ed76355de7e21e77ee", self.buildPath([ "tree9", "dir003", "file002", ]):"9d473094a22ecf2ae299c25932c941795d1d6cba", self.buildPath([ "tree9", "dir004", "file001", ]):"2f68cdda26b643ca0e53be6348ae1255b8786c4b", self.buildPath([ "tree9", "dir004", "file002", ]):"0cc03b3014d1ca7188264677cf01f015d72d26cb", self.buildPath([ "tree9", "file003", ]) :"3ef0b16a6237af9200b7a46c1987d6a555973847", self.buildPath([ "tree9", "file004", ]) :"fae89085ee97b57ccefa7e30346c573bb0a769db", } backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(15, count) self.assertEqual(15, len(backupList)) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList) (count, newDigest) = backupList.removeUnchanged(digestMap, 
                                                       captureDigest=True)  # pylint: disable=W0633
        self.assertTrue(isinstance(backupList, BackupFileList))  # make sure we just replaced it
        self.assertEqual(0, count)
        self.assertEqual(15, len(backupList))
        self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList)
        self.assertEqual(6, len(newDigest))
        self.assertEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "file001", ])])
        self.assertEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "file002", ])])
        self.assertEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "file001", ])])
        self.assertEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "file002", ])])
        self.assertEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", newDigest[self.buildPath([ "tree9",
"file001", ])]) self.assertEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", newDigest[self.buildPath([ "tree9", "file002", ])]) def testRemoveUnchanged_014(self): """ Test with a digest map containing only entries that are in the list, with non-matching digests. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) digestMap = { self.buildPath([ "tree9", "dir001", "file001", ]):"4ff529531c7e8AAAAAAAAAAAAAAAAAA7e21e77ee", self.buildPath([ "tree9", "dir001", "file002", ]):"9d473094a22ecAAAAAAAAAAAAAAAAAA95d1d6cba", self.buildPath([ "tree9", "dir002", "file001", ]):"2f68cdda26b64AAAAAAAAAAAAAAAAAA5b8786c4b", self.buildPath([ "tree9", "dir002", "file002", ]):"0cc03b3014d1cAAAAAAAAAAAAAAAAAA5d72d26cb", self.buildPath([ "tree9", "file001", ]) :"3ef0b16a6237aAAAAAAAAAAAAAAAAAA555973847", self.buildPath([ "tree9", "file002", ]) :"fae89085ee97bAAAAAAAAAAAAAAAAAAbb0a769db", } backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(15, count) self.assertEqual(15, len(backupList)) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList) 
        self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList)
        (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True)  # pylint: disable=W0633
        self.assertTrue(isinstance(backupList, BackupFileList))  # make sure we just replaced it
        self.assertEqual(0, count)
        self.assertEqual(15, len(backupList))
        self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList)
        self.assertEqual(6, len(newDigest))
        self.assertEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "file001", ])])
        self.assertEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "file002", ])])
        self.assertEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b",
                         newDigest[self.buildPath([ "tree9", "dir002", "file001", ])])
        self.assertEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "file002", ])])
        self.assertEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", newDigest[self.buildPath([ "tree9", "file001", ])])
        self.assertEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", newDigest[self.buildPath([ "tree9", "file002", ])])

    def testRemoveUnchanged_015(self):
        """
        Test with a digest map containing only entries that are in the list, with matching digests.
        """
        self.extractTar("tree9")
        path = self.buildPath(["tree9"])
        digestMap = {
            self.buildPath([ "tree9", "dir001", "file001", ]): "4ff529531c7e897cd3df90ed76355de7e21e77ee",
            self.buildPath([ "tree9", "dir001", "file002", ]): "9d473094a22ecf2ae299c25932c941795d1d6cba",
            self.buildPath([ "tree9", "dir002", "file001", ]): "2f68cdda26b643ca0e53be6348ae1255b8786c4b",
            self.buildPath([ "tree9", "dir002", "file002", ]): "0cc03b3014d1ca7188264677cf01f015d72d26cb",
            self.buildPath([ "tree9", "file001", ]): "3ef0b16a6237af9200b7a46c1987d6a555973847",
            self.buildPath([ "tree9", "file002", ]): "fae89085ee97b57ccefa7e30346c573bb0a769db",
        }
        backupList = BackupFileList()
        count = backupList.addDirContents(path)
        self.assertEqual(15, count)
        self.assertEqual(15, len(backupList))
        self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9",
"dir002", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList) (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) # pylint: disable=W0633 self.assertTrue(isinstance(backupList, BackupFileList)) # make sure we just replaced it self.assertEqual(6, count) self.assertEqual(9, len(backupList)) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList) self.assertEqual(6, len(newDigest)) self.assertEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "file001", ])]) self.assertEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "file002", ])]) self.assertEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "file001", ])]) self.assertEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "file002", ])]) 
self.assertEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", newDigest[self.buildPath([ "tree9", "file001", ])]) self.assertEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", newDigest[self.buildPath([ "tree9", "file002", ])]) def testRemoveUnchanged_016(self): """ Test with a digest map containing both entries that are and are not in the list, with non-matching digests. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) digestMap = { self.buildPath([ "tree9", "dir001", "file001", ]):"4ff529531cCCCCCCCCCCCCCCCCCCCCCCCCCe77ee", self.buildPath([ "tree9", "dir001", "file002", ]):"9d473094a2CCCCCCCCCCCCCCCCCCCCCCCCCd6cba", self.buildPath([ "tree9", "dir003", "file001", ]):"2f68cdda26CCCCCCCCCCCCCCCCCCCCCCCCC86c4b", self.buildPath([ "tree9", "dir003", "file002", ]):"0cc03b3014CCCCCCCCCCCCCCCCCCCCCCCCCd26cb", self.buildPath([ "tree9", "file001", ]) :"3ef0b16a62CCCCCCCCCCCCCCCCCCCCCCCCC73847", self.buildPath([ "tree9", "file003", ]) :"fae89085eeCCCCCCCCCCCCCCCCCCCCCCCCC769db", } backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(15, count) self.assertEqual(15, len(backupList)) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", 
"link004", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList) (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) # pylint: disable=W0633 self.assertTrue(isinstance(backupList, BackupFileList)) # make sure we just replaced it self.assertEqual(0, count) self.assertEqual(15, len(backupList)) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList) self.assertEqual(6, len(newDigest)) self.assertEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "file001", ])]) self.assertEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", 
"dir001", "file002", ])]) self.assertEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "file001", ])]) self.assertEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "file002", ])]) self.assertEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", newDigest[self.buildPath([ "tree9", "file001", ])]) self.assertEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", newDigest[self.buildPath([ "tree9", "file002", ])]) def testRemoveUnchanged_017(self): """ Test with a digest map containing both entries that are and are not in the list, with matching digests. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) digestMap = { self.buildPath([ "tree9", "dir001", "file001", ]):"4ff529531c7e897cd3df90ed76355de7e21e77ee", self.buildPath([ "tree9", "dir001", "file002", ]):"9d473094a22ecf2ae299c25932c941795d1d6cba", self.buildPath([ "tree9", "dir003", "file001", ]):"2f68cdda26b643ca0e53be6348ae1255b8786c4b", self.buildPath([ "tree9", "dir003", "file002", ]):"0cc03b3014d1ca7188264677cf01f015d72d26cb", self.buildPath([ "tree9", "file001", ]) :"3ef0b16a6237af9200b7a46c1987d6a555973847", self.buildPath([ "tree9", "file003", ]) :"fae89085ee97b57ccefa7e30346c573bb0a769db", } backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(15, count) self.assertEqual(15, len(backupList)) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) 
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList)
        (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True)  # pylint: disable=W0633
        self.assertTrue(isinstance(backupList, BackupFileList))  # make sure we just replaced it
        self.assertEqual(3, count)
        self.assertEqual(12, len(backupList))
        self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList)
        self.assertEqual(6, len(newDigest))
        self.assertEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "file001", ])])
self.assertEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "file002", ])]) self.assertEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "file001", ])]) self.assertEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "file002", ])]) self.assertEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", newDigest[self.buildPath([ "tree9", "file001", ])]) self.assertEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", newDigest[self.buildPath([ "tree9", "file002", ])]) def testRemoveUnchanged_018(self): """ Test with a digest map containing both entries that are and are not in the list, with matching and non-matching digests. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) digestMap = { self.buildPath([ "tree9", "dir001", "file001", ]):"4ff529531AAAAAAAAAAAAAAAAAAAAAAAe21e77ee", self.buildPath([ "tree9", "dir001", "file002", ]):"9d473094a22ecf2ae299c25932c941795d1d6cba", self.buildPath([ "tree9", "dir003", "file001", ]):"2f68cdda26b643ca0e53be6348ae1255b8786c4b", self.buildPath([ "tree9", "dir003", "file002", ]):"0cc03b3014d1ca7188264677cf01f015d72d26cb", self.buildPath([ "tree9", "file001", ]) :"3ef0b16a6237af9200b7a46c1987d6a555973847", self.buildPath([ "tree9", "file003", ]) :"fae89085ee97b57ccefa7e30346c573bb0a769db", } backupList = BackupFileList() count = backupList.addDirContents(path) self.assertEqual(15, count) self.assertEqual(15, len(backupList)) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in 
                        backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "file001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList)
        (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True)  # pylint: disable=W0633
        self.assertTrue(isinstance(backupList, BackupFileList))  # make sure we just replaced it
        self.assertEqual(2, count)
        self.assertEqual(13, len(backupList))
        self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "file002", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "link001", ]) in backupList)
        self.assertTrue(self.buildPath([ "tree9", "link002", ]) in backupList)
        self.assertEqual(6,
                         len(newDigest))
        self.assertEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "file001", ])])
        self.assertEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "file002", ])])
        self.assertEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "file001", ])])
        self.assertEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "file002", ])])
        self.assertEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", newDigest[self.buildPath([ "tree9", "file001", ])])
        self.assertEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", newDigest[self.buildPath([ "tree9", "file002", ])])

    #########################
    # Test _generateDigest()
    #########################

    # pylint: disable=E1101
    def testGenerateDigest_001(self):
        """
        Test that _generateDigest gives back same result as the slower simplistic
        implementation for a set of files (just using all of the resource files).
        """
        for key in list(self.resources.keys()):
            path = self.resources[key]
            with open(path, mode="rb") as f:  # because generateDigest also uses "rb"
                digest1 = hashlib.sha1(f.read()).hexdigest()
            digest2 = BackupFileList._generateDigest(path)
            self.assertEqual(digest1, digest2,
                             "Digest for %s varies: [%s] vs [%s]."
                             % (path, digest1, digest2))


##########################
# TestPurgeItemList class
##########################

class TestPurgeItemList(unittest.TestCase):

    """Tests for the PurgeItemList class."""

    ################
    # Setup methods
    ################

    def setUp(self):
        try:
            self.tmpdir = tempfile.mkdtemp()
            self.resources = findResources(RESOURCES, DATA_DIRS)
        except Exception as e:
            self.fail(e)

    def tearDown(self):
        try:
            removedir(self.tmpdir)
        except:
            pass

    ##################
    # Utility methods
    ##################

    def extractTar(self, tarname):
        """Extracts a tarfile with a particular name."""
        extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname])

    def buildPath(self, components):
        """Builds a complete search path from a list of components."""
        components.insert(0, self.tmpdir)
        return buildPath(components)

    def pathPattern(self, path):
        """Returns properly-escaped regular expression pattern matching the indicated path."""
        return ".*%s.*" % path.replace("\\", "\\\\")

    ########################
    # Test addDirContents()
    ########################

    def testAddDirContents_001(self):
        """
        Attempt to add a directory that doesn't exist; no exclusions.
        """
        path = self.buildPath([INVALID_FILE])
        purgeList = PurgeItemList()
        self.assertRaises(ValueError, purgeList.addDirContents, path)

    def testAddDirContents_002(self):
        """
        Attempt to add a file; no exclusions.
        """
        self.extractTar("tree5")
        path = self.buildPath(["tree5", "file001"])
        purgeList = PurgeItemList()
        self.assertRaises(ValueError, purgeList.addDirContents, path)

    def testAddDirContents_003(self):
        """
        Attempt to add a soft link; no exclusions.
""" self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file purgeList = PurgeItemList() self.assertRaises(ValueError, purgeList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir purgeList = PurgeItemList() count = purgeList.addDir(path) self.assertEqual(1, count) self.assertEqual([path], purgeList) def testAddDirContents_004(self): """ Attempt to add an empty directory containing ignore file; no exclusions. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" count = purgeList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], purgeList) def testAddDirContents_005(self): """ Attempt to add an empty directory; no exclusions. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) purgeList = PurgeItemList() count = purgeList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], purgeList) def testAddDirContents_006(self): """ Attempt to add an non-empty directory containing ignore file; no exclusions. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" count = purgeList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], purgeList) def testAddDirContents_007(self): """ Attempt to add an non-empty directory; no exclusions. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) purgeList = PurgeItemList() count = purgeList.addDirContents(path) self.assertEqual(7, count) self.assertEqual(7, len(purgeList)) self.assertTrue(self.buildPath(["tree5", "dir001", "dir001", ]) in purgeList) self.assertTrue(self.buildPath(["tree5", "dir001", "dir002", ]) in purgeList) self.assertTrue(self.buildPath(["tree5", "dir001", "dir003", ]) in purgeList) self.assertTrue(self.buildPath(["tree5", "dir001", "dir004", ]) in purgeList) self.assertTrue(self.buildPath(["tree5", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath(["tree5", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath(["tree5", "dir001", "link001", ]) in purgeList) def testAddDirContents_008(self): """ Attempt to add a directory that doesn't exist; excludeFiles set. """ path = self.buildPath([INVALID_FILE]) purgeList = PurgeItemList() purgeList.excludeFiles = True self.assertRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_009(self): """ Attempt to add a file; excludeFiles set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) purgeList = PurgeItemList() purgeList.excludeFiles = True self.assertRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_010(self): """ Attempt to add a soft link; excludeFiles set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file purgeList = PurgeItemList() purgeList.excludeFiles = True self.assertRaises(ValueError, purgeList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir purgeList = PurgeItemList() purgeList.excludeFiles = True count = purgeList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], purgeList) def testAddDirContents_011(self): """ Attempt to add an empty directory containing ignore file; excludeFiles set. 
""" self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludeFiles = True count = purgeList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], purgeList) def testAddDirContents_012(self): """ Attempt to add an empty directory; excludeFiles set. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) purgeList = PurgeItemList() purgeList.excludeFiles = True count = purgeList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], purgeList) def testAddDirContents_013(self): """ Attempt to add an non-empty directory containing ignore file; excludeFiles set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludeFiles = True count = purgeList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], purgeList) def testAddDirContents_014(self): """ Attempt to add an non-empty directory; excludeFiles set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) purgeList = PurgeItemList() purgeList.excludeFiles = True count = purgeList.addDirContents(path) self.assertEqual(4, count) self.assertEqual(4, len(purgeList)) self.assertTrue(self.buildPath(["tree5", "dir001", "dir001", ]) in purgeList) self.assertTrue(self.buildPath(["tree5", "dir001", "dir002", ]) in purgeList) self.assertTrue(self.buildPath(["tree5", "dir001", "dir003", ]) in purgeList) self.assertTrue(self.buildPath(["tree5", "dir001", "dir004", ]) in purgeList) def testAddDirContents_015(self): """ Attempt to add a directory that doesn't exist; excludeDirs set. """ path = self.buildPath([INVALID_FILE]) purgeList = PurgeItemList() purgeList.excludeDirs = True self.assertRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_016(self): """ Attempt to add a file; excludeDirs set. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) purgeList = PurgeItemList() purgeList.excludeDirs = True self.assertRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_017(self): """ Attempt to add a soft link; excludeDirs set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file purgeList = PurgeItemList() purgeList.excludeDirs = True self.assertRaises(ValueError, purgeList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir purgeList = PurgeItemList() purgeList.excludeDirs = True count = purgeList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], purgeList) def testAddDirContents_018(self): """ Attempt to add an empty directory containing ignore file; excludeDirs set. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludeDirs = True count = purgeList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], purgeList) def testAddDirContents_019(self): """ Attempt to add an empty directory; excludeDirs set. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) purgeList = PurgeItemList() purgeList.excludeDirs = True count = purgeList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], purgeList) def testAddDirContents_020(self): """ Attempt to add an non-empty directory containing ignore file; excludeDirs set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludeDirs = True count = purgeList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], purgeList) def testAddDirContents_021(self): """ Attempt to add an non-empty directory; excludeDirs set. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) purgeList = PurgeItemList() purgeList.excludeDirs = True count = purgeList.addDirContents(path) self.assertEqual(3, count) self.assertEqual(3, len(purgeList)) self.assertTrue(self.buildPath(["tree5", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath(["tree5", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath(["tree5", "dir001", "link001", ]) in purgeList) def testAddDirContents_023(self): """ Attempt to add a directory that doesn't exist; excludeLinks set. """ path = self.buildPath([INVALID_FILE]) purgeList = PurgeItemList() purgeList.excludeLinks = True self.assertRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_024(self): """ Attempt to add a file; excludeLinks set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) purgeList = PurgeItemList() purgeList.excludeLinks = True self.assertRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_025(self): """ Attempt to add a soft link; excludeLinks set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file purgeList = PurgeItemList() purgeList.excludeLinks = True self.assertRaises(ValueError, purgeList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir purgeList = PurgeItemList() purgeList.excludeLinks = True count = purgeList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], purgeList) def testAddDirContents_026(self): """ Attempt to add an empty directory containing ignore file; excludeLinks set. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludeLinks = True count = purgeList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], purgeList) def testAddDirContents_027(self): """ Attempt to add an empty directory; excludeLinks set. 
""" self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) purgeList = PurgeItemList() purgeList.excludeLinks = True count = purgeList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], purgeList) def testAddDirContents_028(self): """ Attempt to add an non-empty directory containing ignore file; excludeLinks set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludeLinks = True count = purgeList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], purgeList) def testAddDirContents_029(self): """ Attempt to add an non-empty directory; excludeLinks set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) purgeList = PurgeItemList() purgeList.excludeLinks = True count = purgeList.addDirContents(path) self.assertEqual(6, count) self.assertEqual(6, len(purgeList)) self.assertTrue(self.buildPath(["tree5", "dir001", "dir001", ]) in purgeList) self.assertTrue(self.buildPath(["tree5", "dir001", "dir002", ]) in purgeList) self.assertTrue(self.buildPath(["tree5", "dir001", "dir003", ]) in purgeList) self.assertTrue(self.buildPath(["tree5", "dir001", "dir004", ]) in purgeList) self.assertTrue(self.buildPath(["tree5", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath(["tree5", "dir001", "file002", ]) in purgeList) def testAddDirContents_030(self): """ Attempt to add a directory that doesn't exist; with excludePaths including the path. """ path = self.buildPath([INVALID_FILE]) purgeList = PurgeItemList() purgeList.excludePaths = [ path ] self.assertRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_031(self): """ Attempt to add a file; with excludePaths including the path. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) purgeList = PurgeItemList() purgeList.excludePaths = [ path ] self.assertRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_032(self): """ Attempt to add a soft link; with excludePaths including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file purgeList = PurgeItemList() purgeList.excludePaths = [ path ] self.assertRaises(ValueError, purgeList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir purgeList = PurgeItemList() purgeList.excludePaths = [ path ] count = purgeList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], purgeList) def testAddDirContents_033(self): """ Attempt to add an empty directory containing ignore file; with excludePaths including the path. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludePaths = [ path ] count = purgeList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], purgeList) def testAddDirContents_034(self): """ Attempt to add an empty directory; with excludePaths including the path. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) purgeList = PurgeItemList() purgeList.excludePaths = [ path ] count = purgeList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], purgeList) def testAddDirContents_035(self): """ Attempt to add an non-empty directory containing ignore file; with excludePaths including the path. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludePaths = [ path ] count = purgeList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], purgeList) def testAddDirContents_036(self): """ Attempt to add an non-empty directory; with excludePaths including the main directory path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) purgeList = PurgeItemList() purgeList.excludePaths = [ path ] count = purgeList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], purgeList) def testAddDirContents_037(self): """ Attempt to add a directory that doesn't exist; with excludePaths not including the path. """ path = self.buildPath([INVALID_FILE]) purgeList = PurgeItemList() purgeList.excludePaths = [ NOMATCH_PATH ] self.assertRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_038(self): """ Attempt to add a file; with excludePaths not including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) purgeList = PurgeItemList() purgeList.excludePaths = [ NOMATCH_PATH ] self.assertRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_039(self): """ Attempt to add a soft link; with excludePaths not including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file purgeList = PurgeItemList() purgeList.excludePaths = [ NOMATCH_PATH ] self.assertRaises(ValueError, purgeList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir purgeList = PurgeItemList() purgeList.excludePaths = [ NOMATCH_PATH ] count = purgeList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], purgeList) def testAddDirContents_040(self): """ Attempt to add an empty directory containing ignore file; with excludePaths not including the path. 
""" self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludePaths = [ NOMATCH_PATH ] count = purgeList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], purgeList) def testAddDirContents_041(self): """ Attempt to add an empty directory; with excludePaths not including the path. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) purgeList = PurgeItemList() purgeList.excludePaths = [ NOMATCH_PATH ] count = purgeList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], purgeList) def testAddDirContents_042(self): """ Attempt to add an non-empty directory containing ignore file; with excludePaths not including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludePaths = [ NOMATCH_PATH ] count = purgeList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], purgeList) def testAddDirContents_043(self): """ Attempt to add an non-empty directory; with excludePaths not including the main directory path. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) purgeList = PurgeItemList() purgeList.excludePaths = [ NOMATCH_PATH ] count = purgeList.addDirContents(path) self.assertEqual(7, count) self.assertEqual(7, len(purgeList)) self.assertTrue(self.buildPath(["tree5", "dir001", "dir001", ]) in purgeList) self.assertTrue(self.buildPath(["tree5", "dir001", "dir002", ]) in purgeList) self.assertTrue(self.buildPath(["tree5", "dir001", "dir003", ]) in purgeList) self.assertTrue(self.buildPath(["tree5", "dir001", "dir004", ]) in purgeList) self.assertTrue(self.buildPath(["tree5", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath(["tree5", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath(["tree5", "dir001", "link001", ]) in purgeList) def testAddDirContents_044(self): """ Attempt to add a directory that doesn't exist; with excludePatterns matching the path. """ path = self.buildPath([INVALID_FILE]) purgeList = PurgeItemList() purgeList.excludePatterns = [ self.pathPattern(path) ] self.assertRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_045(self): """ Attempt to add a file; with excludePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) purgeList = PurgeItemList() purgeList.excludePatterns = [ self.pathPattern(path) ] self.assertRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_046(self): """ Attempt to add a soft link; with excludePatterns matching the path. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file purgeList = PurgeItemList() purgeList.excludePatterns = [ self.pathPattern(path) ] self.assertRaises(ValueError, purgeList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir purgeList = PurgeItemList() purgeList.excludePatterns = [ self.pathPattern(path) ] count = purgeList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], purgeList) def testAddDirContents_047(self): """ Attempt to add an empty directory containing ignore file; with excludePatterns matching the path. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludePatterns = [ self.pathPattern(path) ] count = purgeList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], purgeList) def testAddDirContents_048(self): """ Attempt to add an empty directory; with excludePatterns matching the path. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) purgeList = PurgeItemList() purgeList.excludePatterns = [ self.pathPattern(path) ] count = purgeList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], purgeList) def testAddDirContents_049(self): """ Attempt to add an non-empty directory containing ignore file; with excludePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludePatterns = [ self.pathPattern(path) ] count = purgeList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], purgeList) def testAddDirContents_050(self): """ Attempt to add an non-empty directory; with excludePatterns matching the main directory path. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) purgeList = PurgeItemList() purgeList.excludePatterns = [ self.pathPattern(path) ] count = purgeList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], purgeList) def testAddDirContents_051(self): """ Attempt to add a directory that doesn't exist; with excludePatterns not matching the path. """ path = self.buildPath([INVALID_FILE]) purgeList = PurgeItemList() purgeList.excludePatterns = [ NOMATCH_PATH ] self.assertRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_052(self): """ Attempt to add a file; with excludePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) purgeList = PurgeItemList() purgeList.excludePatterns = [ NOMATCH_PATH ] self.assertRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_053(self): """ Attempt to add a soft link; with excludePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file purgeList = PurgeItemList() purgeList.excludePatterns = [ NOMATCH_PATH ] self.assertRaises(ValueError, purgeList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir purgeList = PurgeItemList() purgeList.excludePatterns = [ NOMATCH_PATH ] count = purgeList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], purgeList) def testAddDirContents_054(self): """ Attempt to add an empty directory containing ignore file; with excludePatterns not matching the path. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludePatterns = [ NOMATCH_PATH ] count = purgeList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], purgeList) def testAddDirContents_055(self): """ Attempt to add an empty directory; with excludePatterns not matching the path. 
""" self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) purgeList = PurgeItemList() purgeList.excludePatterns = [ NOMATCH_PATH ] count = purgeList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], purgeList) def testAddDirContents_056(self): """ Attempt to add an non-empty directory containing ignore file; with excludePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludePatterns = [ NOMATCH_PATH ] count = purgeList.addDirContents(path) self.assertEqual(0, count) self.assertEqual([], purgeList) def testAddDirContents_057(self): """ Attempt to add an non-empty directory; with excludePatterns not matching the main directory path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) purgeList = PurgeItemList() purgeList.excludePatterns = [ NOMATCH_PATH ] count = purgeList.addDirContents(path) self.assertEqual(7, count) self.assertEqual(7, len(purgeList)) self.assertTrue(self.buildPath(["tree5", "dir001", "dir001", ]) in purgeList) self.assertTrue(self.buildPath(["tree5", "dir001", "dir002", ]) in purgeList) self.assertTrue(self.buildPath(["tree5", "dir001", "dir003", ]) in purgeList) self.assertTrue(self.buildPath(["tree5", "dir001", "dir004", ]) in purgeList) self.assertTrue(self.buildPath(["tree5", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath(["tree5", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath(["tree5", "dir001", "link001", ]) in purgeList) def testAddDirContents_058(self): """ Attempt to add a large tree with no exclusions. 
""" self.extractTar("tree6") path = self.buildPath(["tree6"]) purgeList = PurgeItemList() count = purgeList.addDirContents(path) self.assertEqual(135, count) self.assertEqual(135, len(purgeList)) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in purgeList) 
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "file001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "link001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in purgeList)
        self.assertTrue(self.buildPath([
"tree6", "dir002", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) 
                in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "file001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "link002", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir002", "link005", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree6",
"dir003", "dir002", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", ]) in purgeList) def testAddDirContents_059(self): """ Attempt to add a large tree, with excludeFiles set. 
""" self.extractTar("tree6") path = self.buildPath(["tree6"]) purgeList = PurgeItemList() purgeList.excludeFiles = True count = purgeList.addDirContents(path) self.assertEqual(41, count) self.assertEqual(41, len(purgeList)) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", 
"link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "link005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", ]) in purgeList) def testAddDirContents_060(self): """ Attempt to add a large tree, with excludeDirs set. 
""" self.extractTar("tree6") path = self.buildPath(["tree6"]) purgeList = PurgeItemList() purgeList.excludeDirs = True count = purgeList.addDirContents(path) self.assertEqual(94, count) self.assertEqual(94, len(purgeList)) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ 
"tree6", "dir002", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", 
"dir002", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", 
"dir003", "dir001", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link002", ]) in 
purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link001", ]) in purgeList) def testAddDirContents_061(self): """ Attempt to add a large tree, with excludeLinks set. """ self.extractTar("tree6") path = self.buildPath(["tree6"]) purgeList = PurgeItemList() purgeList.excludeLinks = True count = purgeList.addDirContents(path) self.assertEqual(95, count) self.assertEqual(95, len(purgeList)) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", 
"file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in 
purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", 
"file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", 
"dir003", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "file002", ]) in purgeList) def testAddDirContents_062(self): """ Attempt to add a large tree, with excludePaths set to exclude some entries. """ self.extractTar("tree6") path = self.buildPath(["tree6"]) purgeList = PurgeItemList() purgeList.excludePaths = [ self.buildPath([ "tree6", "dir001", "dir002", ]), self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]), self.buildPath([ "tree6", "dir003", "dir002", "file001", ]), self.buildPath([ "tree6", "dir003", "dir002", "file002", ]), ] count = purgeList.addDirContents(path) self.assertEqual(124, count) self.assertEqual(124, len(purgeList)) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in purgeList) 
self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ 
"tree6", "dir002", "dir001", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", ]) 
in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", 
"dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "link005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", 
"dir002", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "file002", ]) in purgeList) 
self.assertTrue(self.buildPath([ "tree6", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", ]) in purgeList) def testAddDirContents_063(self): """ Attempt to add a large tree, with excludePatterns set to exclude some entries. """ self.extractTar("tree6") path = self.buildPath(["tree6"]) purgeList = PurgeItemList() purgeList.excludePatterns = [ ".*file001.*", r".*tree6\/dir002\/dir001.*" ] count = purgeList.addDirContents(path) self.assertEqual(107, count) self.assertEqual(107, len(purgeList)) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", 
"dir001", "dir002", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) 
in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "link002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "link005", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "link001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "link004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "file002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "link001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "link002", ]) in purgeList)

   def testAddDirContents_064(self):
      """
      Attempt to add a large tree, with ignoreFile set to exclude some
      directories.
      """
      self.extractTar("tree6")
      path = self.buildPath(["tree6"])
      purgeList = PurgeItemList()
      purgeList.ignoreFile = "ignore"
      count = purgeList.addDirContents(path)
      self.assertEqual(78, count)
      self.assertEqual(78, len(purgeList))
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "file001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "link001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "file001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "link002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "link005", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "file001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "file002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "link001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "link002", ]) in purgeList)

   def testAddDirContents_065(self):
      """
      Attempt to add a link to a file.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9", "dir002", "link003", ])
      purgeList = PurgeItemList()
      self.assertRaises(ValueError, purgeList.addDirContents, path)

   def testAddDirContents_066(self):
      """
      Attempt to add a link to a directory (which should add its contents).
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9", "link002" ])
      purgeList = PurgeItemList()
      count = purgeList.addDirContents(path)
      self.assertEqual(8, count)
      self.assertEqual(8, len(purgeList))
      self.assertTrue(self.buildPath([ "tree9", "link002", "dir001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree9", "link002", "dir002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree9", "link002", "file001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree9", "link002", "file002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree9", "link002", "link001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree9", "link002", "link002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree9", "link002", "link003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree9", "link002", "link004", ]) in purgeList)

   def testAddDirContents_067(self):
      """
      Attempt to add an invalid link (i.e. a link that points to something
      that doesn't exist).
      """
      self.extractTar("tree10")
      path = self.buildPath(["tree10", "link001"])
      purgeList = PurgeItemList()
      self.assertRaises(ValueError, purgeList.addDirContents, path)

   def testAddDirContents_068(self):
      """
      Attempt to add a directory containing an invalid link (i.e. a link that
      points to something that doesn't exist).
      """
      self.extractTar("tree10")
      path = self.buildPath(["tree10"])
      purgeList = PurgeItemList()
      count = purgeList.addDirContents(path)
      self.assertEqual(2, count)
      self.assertEqual(2, len(purgeList))
      self.assertTrue(self.buildPath([ "tree10", "file001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree10", "dir002", ]) in purgeList)

   def testAddDirContents_069(self):
      """
      Attempt to add a directory containing items with spaces.
      """
      self.extractTar("tree11")
      path = self.buildPath(["tree11", ])
      purgeList = PurgeItemList()
      count = purgeList.addDirContents(path)
      self.assertEqual(15, count)
      self.assertEqual(15, len(purgeList))
      self.assertTrue(self.buildPath([ "tree11", "file001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree11", "link001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree11", "link002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree11", "link003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree11", "link with spaces", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in purgeList)

   def testAddDirContents_070(self):
      """
      Attempt to add a directory which has a name containing spaces.
      """
      self.extractTar("tree11")
      path = self.buildPath(["tree11", "dir with spaces", ])
      purgeList = PurgeItemList()
      count = purgeList.addDirContents(path)
      self.assertEqual(4, count)
      self.assertEqual(4, len(purgeList))
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in purgeList)

   def testAddDirContents_071(self):
      """
      Attempt to add a directory which has a UTF-8 filename in it.
      """
      self.extractTar("tree12")
      path = self.buildPath(["tree12", "unicode", ])
      purgeList = PurgeItemList()
      count = purgeList.addDirContents(path)
      self.assertEqual(5, count)
      self.assertEqual(5, len(purgeList))
      self.assertTrue(self.buildPath([ "tree12", "unicode", "README.strange-name", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree12", "unicode", "utflist.long.gz", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree12", "unicode", "utflist.cp437.gz", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree12", "unicode", "utflist.short.gz", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree12", "unicode", encodePath(b"\xe2\x99\xaa\xe2\x99\xac"), ]) in purgeList)

   def testAddDirContents_072(self):
      """
      Attempt to add a directory which has several UTF-8 filenames in it.

      This test data was taken from Rick Lowe's problems around the release of
      v1.10.  I don't run the test for Darwin (Mac OS X) because the tarball
      isn't valid there.

      All of the tests with unicode paths were incredibly painful to get
      working with Python 3, but these tests in particular were difficult,
      because character 0x82 is not a valid UTF-8 character.  The key was to
      get the filename into the same encoding used by methods like
      os.listdir(), which uses a "surrogateescape" fallback for encoding
      filenames.  Once I switched encodePath to do the same thing, this test
      started passing.  There's apparently no other way to represent filenames
      like this.
      """
      if not platformMacOsX():
         self.extractTar("tree13")
         path = self.buildPath(["tree13", ])
         purgeList = PurgeItemList()
         count = purgeList.addDirContents(path)
         self.assertEqual(10, count)
         self.assertEqual(10, len(purgeList))
         self.assertTrue(self.buildPath([ "tree13", encodePath(b"Les mouvements de r\x82forme.doc"), ]) in purgeList)
         self.assertTrue(self.buildPath([ "tree13", encodePath(b"l'\x82nonc\x82.sxw"), ]) in purgeList)
         self.assertTrue(self.buildPath([ "tree13", encodePath(b"l\x82onard - renvois et bibliographie.sxw"), ]) in purgeList)
         self.assertTrue(self.buildPath([ "tree13", encodePath(b"l\x82onard copie finale.sxw"), ]) in purgeList)
         self.assertTrue(self.buildPath([ "tree13", encodePath(b"l\x82onard de vinci - page titre.sxw"), ]) in purgeList)
         self.assertTrue(self.buildPath([ "tree13", encodePath(b"l\x82onard de vinci.sxw"), ]) in purgeList)
         self.assertTrue(self.buildPath([ "tree13", encodePath(b"Rammstein - B\x81ck Dich.mp3"), ]) in purgeList)
         self.assertTrue(self.buildPath([ "tree13", encodePath(b"megaherz - Glas Und Tr\x84nen.mp3"), ]) in purgeList)
         self.assertTrue(self.buildPath([ "tree13", encodePath(b"Megaherz - Mistst\x81ck.MP3"), ]) in purgeList)
         self.assertTrue(self.buildPath([ "tree13", encodePath(b"Rammstein - Mutter - B\x94se.mp3"), ]) in purgeList)

   def testAddDirContents_073(self):
      """
      Attempt to add a directory that doesn't exist; with
      excludeBasenamePatterns matching the path.
      """
      path = self.buildPath([INVALID_FILE])
      purgeList = PurgeItemList()
      purgeList.excludeBasenamePatterns = [ INVALID_FILE ]
      self.assertRaises(ValueError, purgeList.addDirContents, path)

   def testAddDirContents_074(self):
      """
      Attempt to add a file; with excludeBasenamePatterns matching the path.
      """
      self.extractTar("tree5")
      path = self.buildPath(["tree5", "file001"])
      purgeList = PurgeItemList()
      purgeList.excludeBasenamePatterns = [ "file001", ]
      self.assertRaises(ValueError, purgeList.addDirContents, path)

   def testAddDirContents_075(self):
      """
      Attempt to add a soft link; with excludeBasenamePatterns matching the
      path.
      """
      self.extractTar("tree5")
      path = self.buildPath(["tree5", "link001"])     # link to a file
      purgeList = PurgeItemList()
      purgeList.excludeBasenamePatterns = [ "link001", ]
      self.assertRaises(ValueError, purgeList.addDirContents, path)
      path = self.buildPath(["tree5", "dir002", "link001"])     # link to a dir
      purgeList = PurgeItemList()
      purgeList.excludeBasenamePatterns = [ "link001", ]
      count = purgeList.addDirContents(path)
      self.assertEqual(0, count)
      self.assertEqual([], purgeList)

   def testAddDirContents_076(self):
      """
      Attempt to add an empty directory containing ignore file; with
      excludeBasenamePatterns matching the path.
      """
      self.extractTar("tree7")
      path = self.buildPath(["tree7", "dir001"])
      purgeList = PurgeItemList()
      purgeList.ignoreFile = "ignore"
      purgeList.excludeBasenamePatterns = [ "dir001", ]
      count = purgeList.addDirContents(path)
      self.assertEqual(0, count)
      self.assertEqual([], purgeList)

   def testAddDirContents_077(self):
      """
      Attempt to add an empty directory; with excludeBasenamePatterns matching
      the path.
      """
      self.extractTar("tree8")
      path = self.buildPath(["tree8", "dir001"])
      purgeList = PurgeItemList()
      purgeList.excludeBasenamePatterns = [ "dir001", ]
      count = purgeList.addDirContents(path)
      self.assertEqual(0, count)
      self.assertEqual([], purgeList)

   def testAddDirContents_078(self):
      """
      Attempt to add a non-empty directory containing ignore file; with
      excludeBasenamePatterns matching the path.
      """
      self.extractTar("tree5")
      path = self.buildPath(["tree5", "dir008"])
      purgeList = PurgeItemList()
      purgeList.ignoreFile = "ignore"
      purgeList.excludeBasenamePatterns = [ "dir008", ]
      count = purgeList.addDirContents(path)
      self.assertEqual(0, count)
      self.assertEqual([], purgeList)

   def testAddDirContents_079(self):
      """
      Attempt to add a non-empty directory; with excludeBasenamePatterns
      matching the main directory path.
      """
      self.extractTar("tree5")
      path = self.buildPath(["tree5", "dir001"])
      purgeList = PurgeItemList()
      purgeList.excludeBasenamePatterns = [ "dir001", ]
      count = purgeList.addDirContents(path)
      self.assertEqual(0, count)
      self.assertEqual([], purgeList)

   def testAddDirContents_080(self):
      """
      Attempt to add a directory that doesn't exist; with
      excludeBasenamePatterns not matching the path.
      """
      path = self.buildPath([INVALID_FILE])
      purgeList = PurgeItemList()
      purgeList.excludeBasenamePatterns = [ NOMATCH_BASENAME ]
      self.assertRaises(ValueError, purgeList.addDirContents, path)

   def testAddDirContents_081(self):
      """
      Attempt to add a file; with excludeBasenamePatterns not matching the
      path.
      """
      self.extractTar("tree5")
      path = self.buildPath(["tree5", "file001"])
      purgeList = PurgeItemList()
      purgeList.excludeBasenamePatterns = [ NOMATCH_BASENAME ]
      self.assertRaises(ValueError, purgeList.addDirContents, path)

   def testAddDirContents_082(self):
      """
      Attempt to add a soft link; with excludeBasenamePatterns not matching
      the path.
      """
      self.extractTar("tree5")
      path = self.buildPath(["tree5", "link001"])     # link to a file
      purgeList = PurgeItemList()
      purgeList.excludeBasenamePatterns = [ NOMATCH_BASENAME ]
      self.assertRaises(ValueError, purgeList.addDirContents, path)
      path = self.buildPath(["tree5", "dir002", "link001"])     # link to a dir
      purgeList = PurgeItemList()
      purgeList.excludeBasenamePatterns = [ NOMATCH_BASENAME ]
      count = purgeList.addDirContents(path)
      self.assertEqual(0, count)
      self.assertEqual([], purgeList)

   def testAddDirContents_083(self):
      """
      Attempt to add an empty directory containing ignore file; with
      excludeBasenamePatterns not matching the path.
      """
      self.extractTar("tree7")
      path = self.buildPath(["tree7", "dir001"])
      purgeList = PurgeItemList()
      purgeList.ignoreFile = "ignore"
      purgeList.excludeBasenamePatterns = [ NOMATCH_BASENAME ]
      count = purgeList.addDirContents(path)
      self.assertEqual(0, count)
      self.assertEqual([], purgeList)

   def testAddDirContents_084(self):
      """
      Attempt to add an empty directory; with excludeBasenamePatterns not
      matching the path.
      """
      self.extractTar("tree8")
      path = self.buildPath(["tree8", "dir001"])
      purgeList = PurgeItemList()
      purgeList.excludeBasenamePatterns = [ NOMATCH_BASENAME ]
      count = purgeList.addDirContents(path)
      self.assertEqual(0, count)
      self.assertEqual([], purgeList)

   def testAddDirContents_085(self):
      """
      Attempt to add a non-empty directory containing ignore file; with
      excludeBasenamePatterns not matching the path.
      """
      self.extractTar("tree5")
      path = self.buildPath(["tree5", "dir008"])
      purgeList = PurgeItemList()
      purgeList.ignoreFile = "ignore"
      purgeList.excludeBasenamePatterns = [ NOMATCH_BASENAME ]
      count = purgeList.addDirContents(path)
      self.assertEqual(0, count)
      self.assertEqual([], purgeList)

   def testAddDirContents_086(self):
      """
      Attempt to add a non-empty directory; with excludeBasenamePatterns not
      matching the main directory path.
      """
      self.extractTar("tree5")
      path = self.buildPath(["tree5", "dir001"])
      purgeList = PurgeItemList()
      purgeList.excludeBasenamePatterns = [ NOMATCH_BASENAME ]
      count = purgeList.addDirContents(path)
      self.assertEqual(7, count)
      self.assertEqual(7, len(purgeList))
      self.assertTrue(self.buildPath(["tree5", "dir001", "dir001", ]) in purgeList)
      self.assertTrue(self.buildPath(["tree5", "dir001", "dir002", ]) in purgeList)
      self.assertTrue(self.buildPath(["tree5", "dir001", "dir003", ]) in purgeList)
      self.assertTrue(self.buildPath(["tree5", "dir001", "dir004", ]) in purgeList)
      self.assertTrue(self.buildPath(["tree5", "dir001", "file001", ]) in purgeList)
      self.assertTrue(self.buildPath(["tree5", "dir001", "file002", ]) in purgeList)
      self.assertTrue(self.buildPath(["tree5", "dir001", "link001", ]) in purgeList)

   def testAddDirContents_087(self):
      """
      Attempt to add a large tree, with excludeBasenamePatterns set to exclude
      some entries.
      """
      self.extractTar("tree6")
      path = self.buildPath(["tree6"])
      purgeList = PurgeItemList()
      purgeList.excludeBasenamePatterns = [ "file001", "dir001", ]
      count = purgeList.addDirContents(path)
      self.assertEqual(63, count)
      self.assertEqual(63, len(purgeList))
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "link002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "link005", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "link001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "link004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "file002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "link001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "link002", ]) in purgeList)

   def testAddDirContents_088(self):
      """
      Attempt to add a large tree, with excludeBasenamePatterns set to exclude
      some entries.
      """
      self.extractTar("tree6")
      path = self.buildPath(["tree6"])
      purgeList = PurgeItemList()
      purgeList.excludeBasenamePatterns = [ "file001", "dir001" ]
      count = purgeList.addDirContents(path)
      self.assertEqual(63, count)
      self.assertEqual(63, len(purgeList))
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "link002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir002", "link005", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "link001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "link004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "file002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "link001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "link002", ]) in purgeList)

   def testAddDirContents_089(self):
      """
      Attempt to add a large tree with no exclusions.
      """
      self.extractTar("tree6")
      path = self.buildPath(["tree6"])
      purgeList = PurgeItemList()
      count = purgeList.addDirContents(path)
      self.assertEqual(135, count)
      self.assertEqual(135, len(purgeList))
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in purgeList)
      self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in purgeList)
      self.assertTrue(self.buildPath([
"tree6", "dir001", "dir002", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", 
"file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList) 
self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList) 
self.assertTrue(self.buildPath([ "tree6", "dir002", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "link005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file003", 
]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "file001", ]) in purgeList) 
self.assertTrue(self.buildPath([ "tree6", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", ]) in purgeList) def testAddDirContents_090(self): """ Attempt to add a directory with linkDepth=1. """ self.extractTar("tree6") path = self.buildPath(["tree6"]) purgeList = PurgeItemList() count = purgeList.addDirContents(path, linkDepth=1) self.assertEqual(164, count) self.assertEqual(164, len(purgeList)) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "dir002", 
]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in purgeList) 
self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", 
"dir002", "dir002", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", 
"dir002", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir002", "link005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList) 
self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ 
"tree6", "link002", "dir001", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir001", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir001", "dir003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir001", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir001", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir001", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir001", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir001", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir001", "ignore", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir001", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir001", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir001", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir002", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir002", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir002", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir002", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir002", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir002", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir002", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", 
"link002", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link001", ]) in purgeList) def testAddDirContents_091(self): """ Attempt to add a directory with linkDepth=2. """ self.extractTar("tree6") path = self.buildPath(["tree6"]) purgeList = PurgeItemList() count = purgeList.addDirContents(path, linkDepth=2) self.assertEqual(240, count) self.assertEqual(240, len(purgeList)) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) 
in purgeList)
        # Remaining expected tree6 entries, expressed relative to the tree root
        for subpath in [
            "dir001/dir001", "dir001/dir001/link001", "dir001/dir001/link002", "dir001/dir001/link003",
            "dir001/dir002/dir001", "dir001/dir002/dir002", "dir001/dir002/file001", "dir001/dir002/file002",
            "dir001/dir002/file003", "dir001/dir002", "dir001/dir002/link001", "dir001/dir002/link002",
            "dir001/file001", "dir001/file002", "dir001/file003", "dir001/file004",
            "dir001", "dir001/link002", "dir001/link003",
            "dir001/link001/dir001", "dir001/link001/dir002", "dir001/link001/dir003", "dir001/link001/file001",
            "dir001/link001/file002", "dir001/link001/file003", "dir001/link001/file004", "dir001/link001/file005",
            "dir001/link001/file006", "dir001/link001/file007", "dir001/link001/ignore",
            "dir001/link001/link001", "dir001/link001/link002", "dir001/link001/link003",
            "dir002/dir001/dir001", "dir002/dir001/dir002", "dir002/dir001/dir003",
            "dir002/dir001/file001", "dir002/dir001/file002", "dir002/dir001/file003", "dir002/dir001/file004",
            "dir002/dir001/file005", "dir002/dir001/file006", "dir002/dir001/file007", "dir002/dir001/file008",
            "dir002/dir001/file009", "dir002/dir001",
            "dir002/dir001/link001", "dir002/dir001/link002", "dir002/dir001/link003", "dir002/dir001/link004",
            "dir002/dir001/link005",
            "dir002/dir002/dir001", "dir002/dir002/dir002", "dir002/dir002/dir003",
            "dir002/dir002/file001", "dir002/dir002/file002", "dir002/dir002/file003", "dir002/dir002/file004",
            "dir002/dir002/file005", "dir002/dir002/file006", "dir002/dir002/file007", "dir002/dir002/file008",
            "dir002/dir002",
            "dir002/dir002/link001", "dir002/dir002/link002", "dir002/dir002/link003", "dir002/dir002/link004",
            "dir002/dir002/link005",
            "dir002/dir003/dir001", "dir002/dir003/dir002",
            "dir002/dir003/file001", "dir002/dir003/file002", "dir002/dir003/file003", "dir002/dir003/file004",
            "dir002/dir003/file005", "dir002/dir003/file006", "dir002/dir003/file007", "dir002/dir003",
            "dir002/dir003/link001", "dir002/dir003/link002", "dir002/dir003/link003", "dir002/dir003/link004",
            "dir002/file001", "dir002/file002", "dir002/file003", "dir002",
            "dir002/link001", "dir002/link003", "dir002/link004",
            "dir002/link002/dir001", "dir002/link002/dir002", "dir002/link002/dir003",
            "dir002/link002/file001", "dir002/link002/file002", "dir002/link002/file003", "dir002/link002/file004",
            "dir002/link002/file005", "dir002/link002/file006", "dir002/link002/file007", "dir002/link002/file008",
            "dir002/link002/file009",
            "dir002/link002/link001", "dir002/link002/link002", "dir002/link002/link003", "dir002/link002/link004",
            "dir002/link002/link005",
            "dir002/link005/dir001", "dir002/link005/dir002",
            "dir002/link005/file001", "dir002/link005/file002", "dir002/link005/file003", "dir002/link005/file004",
            "dir002/link005/file005", "dir002/link005/file006", "dir002/link005/file007",
            "dir002/link005/link001", "dir002/link005/link002", "dir002/link005/link003", "dir002/link005/link004",
            "dir003/dir001/dir001", "dir003/dir001/dir002",
            "dir003/dir001/file001", "dir003/dir001/file002", "dir003/dir001/file003", "dir003/dir001/file004",
            "dir003/dir001/file005", "dir003/dir001/file006", "dir003/dir001/file007", "dir003/dir001/file008",
            "dir003/dir001/file009", "dir003/dir001",
            "dir003/dir001/link001", "dir003/dir001/link002",
            "dir003/dir002/dir001", "dir003/dir002/dir002",
        ]:
            self.assertTrue(self.buildPath(["tree6"] + subpath.split("/")) in purgeList)
self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList) 
self.assertTrue(self.buildPath([ "tree6", "dir003", "link001", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link001", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link001", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link001", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link001", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link001", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link001", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link001", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link001", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link001", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link001", "link004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link004", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link004", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link004", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link004", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link004", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link004", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link004", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link004", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link004", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link004", "file008", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link004", "file009", ]) in purgeList) 
self.assertTrue(self.buildPath([ "tree6", "dir003", "link004", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "dir003", "link004", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir001", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir001", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir001", "dir003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir001", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir001", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir001", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir001", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir001", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir001", "ignore", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir001", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir001", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir001", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir002", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir002", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir002", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir002", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir002", "file003", ]) in purgeList) 
self.assertTrue(self.buildPath([ "tree6", "link002", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir002", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "dir002", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "link001", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "link001", "dir002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "link001", "dir003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "link001", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "link001", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "link001", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "link001", "file004", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "link001", "file005", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "link001", "file006", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "link001", "file007", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "link001", "ignore", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "link001", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", "link001", "link002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link002", 
"link001", "link003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree6", "link001", ]) in purgeList) def testAddDirContents_092(self): """ Attempt to add a directory with linkDepth=0, dereference=False. """ self.extractTar("tree22") path = self.buildPath(["tree22", "dir003", ]) purgeList = PurgeItemList() count = purgeList.addDirContents(path, linkDepth=0, dereference=False) self.assertEqual(11, count) self.assertEqual(11, len(purgeList)) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link003", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link003", ]) in purgeList) def testAddDirContents_093(self): """ Attempt to add a directory with linkDepth=1, dereference=False. 
""" self.extractTar("tree22") path = self.buildPath(["tree22", "dir003", ]) purgeList = PurgeItemList() count = purgeList.addDirContents(path, linkDepth=1, dereference=False) self.assertEqual(15, count) self.assertEqual(15, len(purgeList)) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link003", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link003", "file001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link003", "file002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link003", "file003", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link003", "link001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link003", "link002", ]) in purgeList) def testAddDirContents_094(self): """ Attempt to add a directory with linkDepth=2, dereference=False. 
""" self.extractTar("tree22") path = self.buildPath(["tree22", "dir003", ]) purgeList = PurgeItemList() count = purgeList.addDirContents(path, linkDepth=2, dereference=False) self.assertEqual(19, count) self.assertEqual(19, len(purgeList)) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link003", "file001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link003", "file002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link003", "file003", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link003", "file001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link003", "file002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link003", "file003", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link003", "link001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link003", "link002", "file001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link003", "link002", "link001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link003", "link002", "link002", ]) in purgeList) def 
testAddDirContents_095(self): """ Attempt to add a directory with linkDepth=3, dereference=False. """ self.extractTar("tree22") path = self.buildPath(["tree22", "dir003", ]) purgeList = PurgeItemList() count = purgeList.addDirContents(path, linkDepth=3, dereference=False) self.assertEqual(19, count) self.assertEqual(19, len(purgeList)) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link003", "file001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link003", "file002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link003", "file003", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link003", "file001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link003", "file002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link003", "file003", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link003", "link001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link003", "link002", "file001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link003", "link002", "link001", "file001", ]) in purgeList) 
self.assertTrue(self.buildPath(["tree22", "dir003", "link003", "link002", "link002", ]) in purgeList) def testAddDirContents_096(self): """ Attempt to add a directory with linkDepth=0, dereference=True. """ self.extractTar("tree22") path = self.buildPath(["tree22", "dir003", ]) purgeList = PurgeItemList() count = purgeList.addDirContents(path, linkDepth=0, dereference=True) self.assertEqual(11, count) self.assertEqual(11, len(purgeList)) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link003", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link003", ]) in purgeList) def testAddDirContents_097(self): """ Attempt to add a directory with linkDepth=1, dereference=True. 
""" self.extractTar("tree22") path = self.buildPath(["tree22", "dir003", ]) purgeList = PurgeItemList() count = purgeList.addDirContents(path, linkDepth=1, dereference=True) self.assertEqual(19, count) self.assertEqual(19, len(purgeList)) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link003", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link003", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir001", "file003", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir005" ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir005", "file001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir005", "file002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir005", "file003", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir005", "link001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir005", "link002", ]) in purgeList) def testAddDirContents_098(self): """ Attempt to add a directory with linkDepth=2, dereference=True. 
""" self.extractTar("tree22") path = self.buildPath(["tree22", "dir003", ]) purgeList = PurgeItemList() count = purgeList.addDirContents(path, linkDepth=2, dereference=True) self.assertEqual(31, count) self.assertEqual(31, len(purgeList)) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link003", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link003", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir001", "file003", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir002", "file004", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir002", "file005", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir002", "file009", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir004", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir004", "file001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir004", "file002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir004", "file003", ]) in purgeList) 
self.assertTrue(self.buildPath(["tree22", "dir005", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir005", "file001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir005", "file002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir005", "file003", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir005", "link001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir005", "link002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir006", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir006", "file001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir006", "link001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir006", "link002", ]) in purgeList) def testAddDirContents_099(self): """ Attempt to add a directory with linkDepth=3, dereference=True. """ self.extractTar("tree22") path = self.buildPath(["tree22", "dir003", ]) purgeList = PurgeItemList() count = purgeList.addDirContents(path, linkDepth=3, dereference=True) self.assertEqual(34, count) self.assertEqual(34, len(purgeList)) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link003", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir003", "link002", ]) in purgeList) 
self.assertTrue(self.buildPath(["tree22", "dir003", "link003", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir001", "file001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir001", "file002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir001", "file003", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir002", "file004", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir002", "file005", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir002", "file009", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir004", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir004", "file001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir004", "file002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir004", "file003", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir005", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir005", "file001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir005", "file002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir005", "file003", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir005", "link001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir005", "link002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir006", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir006", "file001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir006", "link001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir006", "link002", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir007", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir007", "file001", ]) in purgeList) self.assertTrue(self.buildPath(["tree22", "dir008", "file001", ]) in purgeList) #################### # Test removeAged() #################### def testRemoveYoungFiles_001(self): """ Test on an empty 
list, daysOld < 0. """ daysOld = -1 purgeList = PurgeItemList() self.assertRaises(ValueError, purgeList.removeYoungFiles, daysOld) def testRemoveYoungFiles_002(self): """ Test on a non-empty list, daysOld < 0. """ daysOld = -1 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addDir(self.buildPath([ "tree1", ])) purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) self.assertRaises(ValueError, purgeList.removeYoungFiles, daysOld) def testRemoveYoungFiles_003(self): """ Test on an empty list, daysOld = 0 """ daysOld = 0 purgeList = PurgeItemList() count = purgeList.removeYoungFiles(daysOld) self.assertEqual(0, count) self.assertEqual([], purgeList) def testRemoveYoungFiles_004(self): """ Test on a non-empty list containing only directories, daysOld = 0. """ daysOld = 0 self.extractTar("tree2") purgeList = PurgeItemList() purgeList.addDir(self.buildPath([ "tree2", ])) purgeList.addDir(self.buildPath([ "tree2", "dir001", ])) purgeList.addDir(self.buildPath([ "tree2", "dir002", ])) count = purgeList.removeYoungFiles(daysOld) self.assertEqual(0, count) self.assertEqual(3, len(purgeList)) self.assertTrue(self.buildPath([ "tree2", ]) in purgeList) self.assertTrue(self.buildPath([ "tree2", "dir001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree2", "dir002", ]) in purgeList) def testRemoveYoungFiles_005(self): """ Test on a non-empty list containing only links, daysOld = 0. 
""" daysOld = 0 self.extractTar("tree9") purgeList = PurgeItemList() purgeList.addDir(self.buildPath([ "tree9", "link001", ])) purgeList.addDir(self.buildPath([ "tree9", "dir002", "link001", ])) purgeList.addFile(self.buildPath([ "tree9", "dir002", "link004", ])) count = purgeList.removeYoungFiles(daysOld) self.assertEqual(0, count) self.assertEqual(3, len(purgeList)) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in purgeList) def testRemoveYoungFiles_006(self): """ Test on a non-empty list containing only non-existent files, daysOld = 0. """ daysOld = 0 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.append(self.buildPath([ "tree1", "stuff001", ])) # append, since it doesn't exist on disk purgeList.append(self.buildPath([ "tree1", "stuff002", ])) # append, since it doesn't exist on disk purgeList.append(self.buildPath([ "tree1", "stuff003", ])) # append, since it doesn't exist on disk count = purgeList.removeYoungFiles(daysOld) self.assertEqual(0, count) self.assertEqual(3, len(purgeList)) self.assertTrue(self.buildPath([ "tree1", "stuff001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree1", "stuff002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree1", "stuff003", ]) in purgeList) def testRemoveYoungFiles_007(self): """ Test on a non-empty list containing existing files "touched" to current time, daysOld = 0. 
""" daysOld = 0 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ])) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ])) count = purgeList.removeYoungFiles(daysOld) self.assertEqual(0, count) self.assertTrue(self.buildPath([ "tree1", "file001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree1", "file002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree1", "file003", ]) in purgeList) self.assertTrue(self.buildPath([ "tree1", "file004", ]) in purgeList) def testRemoveYoungFiles_008(self): """ Test on a non-empty list containing existing files "touched" to being 1 hour old, daysOld = 0. 
        """
        daysOld = 0
        self.extractTar("tree1")
        purgeList = PurgeItemList()
        purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
        changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_1_HOUR)
        changeFileAge(self.buildPath([ "tree1", "file002", ]))
        changeFileAge(self.buildPath([ "tree1", "file003", ]))
        changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_1_HOUR)
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(0, count)
        self.assertTrue(self.buildPath([ "tree1", "file001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "file002", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "file003", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "file004", ]) in purgeList)

    def testRemoveYoungFiles_009(self):
        """
        Test on a non-empty list containing existing files "touched" to being 2 hours old, daysOld = 0.
        """
        daysOld = 0
        self.extractTar("tree1")
        purgeList = PurgeItemList()
        purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
        changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_2_HOURS)
        changeFileAge(self.buildPath([ "tree1", "file002", ]))
        changeFileAge(self.buildPath([ "tree1", "file003", ]))
        changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_2_HOURS)
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(0, count)
        self.assertTrue(self.buildPath([ "tree1", "file001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "file002", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "file003", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "file004", ]) in purgeList)

    def testRemoveYoungFiles_010(self):
        """
        Test on a non-empty list containing existing files "touched" to being 12 hours old, daysOld = 0.
        """
        daysOld = 0
        self.extractTar("tree1")
        purgeList = PurgeItemList()
        purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
        changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_12_HOURS)
        changeFileAge(self.buildPath([ "tree1", "file002", ]))
        changeFileAge(self.buildPath([ "tree1", "file003", ]))
        changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_12_HOURS)
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(0, count)
        self.assertTrue(self.buildPath([ "tree1", "file001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "file002", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "file003", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "file004", ]) in purgeList)

    def testRemoveYoungFiles_011(self):
        """
        Test on a non-empty list containing existing files "touched" to being 23 hours old, daysOld = 0.
        """
        daysOld = 0
        self.extractTar("tree1")
        purgeList = PurgeItemList()
        purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
        changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_23_HOURS)
        changeFileAge(self.buildPath([ "tree1", "file002", ]))
        changeFileAge(self.buildPath([ "tree1", "file003", ]))
        changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_23_HOURS)
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(0, count)
        self.assertTrue(self.buildPath([ "tree1", "file001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "file002", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "file003", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "file004", ]) in purgeList)

    def testRemoveYoungFiles_012(self):
        """
        Test on a non-empty list containing existing files "touched" to being 24 hours old, daysOld = 0.
        """
        daysOld = 0
        self.extractTar("tree1")
        purgeList = PurgeItemList()
        purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
        changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_24_HOURS)
        changeFileAge(self.buildPath([ "tree1", "file002", ]))
        changeFileAge(self.buildPath([ "tree1", "file003", ]))
        changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_24_HOURS)
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(0, count)
        self.assertTrue(self.buildPath([ "tree1", "file001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "file002", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "file003", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "file004", ]) in purgeList)

    def testRemoveYoungFiles_013(self):
        """
        Test on a non-empty list containing existing files "touched" to being 25 hours old, daysOld = 0.
        """
        daysOld = 0
        self.extractTar("tree1")
        purgeList = PurgeItemList()
        purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
        changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_25_HOURS)
        changeFileAge(self.buildPath([ "tree1", "file002", ]))
        changeFileAge(self.buildPath([ "tree1", "file003", ]))
        changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_25_HOURS)
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(0, count)
        self.assertTrue(self.buildPath([ "tree1", "file001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "file002", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "file003", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "file004", ]) in purgeList)

    def testRemoveYoungFiles_014(self):
        """
        Test on a non-empty list containing existing files "touched" to being 47 hours old, daysOld = 0.
        """
        daysOld = 0
        self.extractTar("tree1")
        purgeList = PurgeItemList()
        purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
        changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_47_HOURS)
        changeFileAge(self.buildPath([ "tree1", "file002", ]))
        changeFileAge(self.buildPath([ "tree1", "file003", ]))
        changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_47_HOURS)
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(0, count)
        self.assertTrue(self.buildPath([ "tree1", "file001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "file002", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "file003", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "file004", ]) in purgeList)

    def testRemoveYoungFiles_015(self):
        """
        Test on a non-empty list containing existing files "touched" to being 48 hours old, daysOld = 0.
        """
        daysOld = 0
        self.extractTar("tree1")
        purgeList = PurgeItemList()
        purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
        changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_48_HOURS)
        changeFileAge(self.buildPath([ "tree1", "file002", ]))
        changeFileAge(self.buildPath([ "tree1", "file003", ]))
        changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_48_HOURS)
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(0, count)
        self.assertTrue(self.buildPath([ "tree1", "file001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "file002", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "file003", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "file004", ]) in purgeList)

    def testRemoveYoungFiles_016(self):
        """
        Test on a non-empty list containing existing files "touched" to being 49 hours old, daysOld = 0.
        """
        daysOld = 0
        self.extractTar("tree1")
        purgeList = PurgeItemList()
        purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
        changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_49_HOURS)
        changeFileAge(self.buildPath([ "tree1", "file002", ]))
        changeFileAge(self.buildPath([ "tree1", "file003", ]))
        changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_49_HOURS)
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(0, count)
        self.assertTrue(self.buildPath([ "tree1", "file001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "file002", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "file003", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "file004", ]) in purgeList)

    def testRemoveYoungFiles_017(self):
        """
        Test on an empty list, daysOld = 1.
        """
        daysOld = 1
        purgeList = PurgeItemList()
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(0, count)
        self.assertEqual([], purgeList)

    def testRemoveYoungFiles_018(self):
        """
        Test on a non-empty list containing only directories, daysOld = 1.
        """
        daysOld = 1
        self.extractTar("tree2")
        purgeList = PurgeItemList()
        purgeList.addDir(self.buildPath([ "tree2", ]))
        purgeList.addDir(self.buildPath([ "tree2", "dir001", ]))
        purgeList.addDir(self.buildPath([ "tree2", "dir002", ]))
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(0, count)
        self.assertEqual(3, len(purgeList))
        self.assertTrue(self.buildPath([ "tree2", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree2", "dir001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree2", "dir002", ]) in purgeList)

    def testRemoveYoungFiles_019(self):
        """
        Test on a non-empty list containing only links, daysOld = 1.
        """
        daysOld = 1
        self.extractTar("tree9")
        purgeList = PurgeItemList()
        purgeList.addDir(self.buildPath([ "tree9", "link001", ]))
        purgeList.addDir(self.buildPath([ "tree9", "dir002", "link001", ]))
        purgeList.addFile(self.buildPath([ "tree9", "dir002", "link004", ]))
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(0, count)
        self.assertEqual(3, len(purgeList))
        self.assertTrue(self.buildPath([ "tree9", "link001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in purgeList)

    def testRemoveYoungFiles_020(self):
        """
        Test on a non-empty list containing only non-existent files, daysOld = 1.
        """
        daysOld = 1
        self.extractTar("tree1")
        purgeList = PurgeItemList()
        purgeList.append(self.buildPath([ "tree1", "stuff001", ]))  # append, since it doesn't exist on disk
        purgeList.append(self.buildPath([ "tree1", "stuff002", ]))  # append, since it doesn't exist on disk
        purgeList.append(self.buildPath([ "tree1", "stuff003", ]))  # append, since it doesn't exist on disk
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(0, count)
        self.assertEqual(3, len(purgeList))
        self.assertTrue(self.buildPath([ "tree1", "stuff001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "stuff002", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "stuff003", ]) in purgeList)

    def testRemoveYoungFiles_021(self):
        """
        Test on a non-empty list containing existing files "touched" to current time, daysOld = 1.
        """
        daysOld = 1
        self.extractTar("tree1")
        purgeList = PurgeItemList()
        purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
        changeFileAge(self.buildPath([ "tree1", "file001", ]))
        changeFileAge(self.buildPath([ "tree1", "file002", ]))
        changeFileAge(self.buildPath([ "tree1", "file003", ]))
        changeFileAge(self.buildPath([ "tree1", "file004", ]))
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(4, count)
        self.assertEqual([], purgeList)

    def testRemoveYoungFiles_022(self):
        """
        Test on a non-empty list containing existing files "touched" to being 1 hour old, daysOld = 1.
        """
        daysOld = 1
        self.extractTar("tree1")
        purgeList = PurgeItemList()
        purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
        changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_1_HOUR)
        changeFileAge(self.buildPath([ "tree1", "file002", ]))
        changeFileAge(self.buildPath([ "tree1", "file003", ]))
        changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_1_HOUR)
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(4, count)
        self.assertEqual([], purgeList)

    def testRemoveYoungFiles_023(self):
        """
        Test on a non-empty list containing existing files "touched" to being 2 hours old, daysOld = 1.
        """
        daysOld = 1
        self.extractTar("tree1")
        purgeList = PurgeItemList()
        purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
        changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_2_HOURS)
        changeFileAge(self.buildPath([ "tree1", "file002", ]))
        changeFileAge(self.buildPath([ "tree1", "file003", ]))
        changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_2_HOURS)
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(4, count)
        self.assertEqual([], purgeList)

    def testRemoveYoungFiles_024(self):
        """
        Test on a non-empty list containing existing files "touched" to being 12 hours old, daysOld = 1.
        """
        daysOld = 1
        self.extractTar("tree1")
        purgeList = PurgeItemList()
        purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
        changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_12_HOURS)
        changeFileAge(self.buildPath([ "tree1", "file002", ]))
        changeFileAge(self.buildPath([ "tree1", "file003", ]))
        changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_12_HOURS)
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(4, count)
        self.assertEqual([], purgeList)

    def testRemoveYoungFiles_025(self):
        """
        Test on a non-empty list containing existing files "touched" to being 23 hours old, daysOld = 1.
        """
        daysOld = 1
        self.extractTar("tree1")
        purgeList = PurgeItemList()
        purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
        changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_23_HOURS)
        changeFileAge(self.buildPath([ "tree1", "file002", ]))
        changeFileAge(self.buildPath([ "tree1", "file003", ]))
        changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_23_HOURS)
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(4, count)
        self.assertEqual([], purgeList)

    def testRemoveYoungFiles_026(self):
        """
        Test on a non-empty list containing existing files "touched" to being 24 hours old, daysOld = 1.
        """
        daysOld = 1
        self.extractTar("tree1")
        purgeList = PurgeItemList()
        purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
        changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_24_HOURS)
        changeFileAge(self.buildPath([ "tree1", "file002", ]))
        changeFileAge(self.buildPath([ "tree1", "file003", ]))
        changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_24_HOURS)
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(2, count)
        self.assertEqual(2, len(purgeList))
        self.assertTrue(self.buildPath([ "tree1", "file001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "file004", ]) in purgeList)

    def testRemoveYoungFiles_027(self):
        """
        Test on a non-empty list containing existing files "touched" to being 25 hours old, daysOld = 1.
        """
        daysOld = 1
        self.extractTar("tree1")
        purgeList = PurgeItemList()
        purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
        changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_25_HOURS)
        changeFileAge(self.buildPath([ "tree1", "file002", ]))
        changeFileAge(self.buildPath([ "tree1", "file003", ]))
        changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_25_HOURS)
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(2, count)
        self.assertTrue(self.buildPath([ "tree1", "file001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "file004", ]) in purgeList)

    def testRemoveYoungFiles_028(self):
        """
        Test on a non-empty list containing existing files "touched" to being 47 hours old, daysOld = 1.
        """
        daysOld = 1
        self.extractTar("tree1")
        purgeList = PurgeItemList()
        purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
        changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_47_HOURS)
        changeFileAge(self.buildPath([ "tree1", "file002", ]))
        changeFileAge(self.buildPath([ "tree1", "file003", ]))
        changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_47_HOURS)
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(2, count)
        self.assertTrue(self.buildPath([ "tree1", "file001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "file004", ]) in purgeList)

    def testRemoveYoungFiles_029(self):
        """
        Test on a non-empty list containing existing files "touched" to being 48 hours old, daysOld = 1.
        """
        daysOld = 1
        self.extractTar("tree1")
        purgeList = PurgeItemList()
        purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
        changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_48_HOURS)
        changeFileAge(self.buildPath([ "tree1", "file002", ]))
        changeFileAge(self.buildPath([ "tree1", "file003", ]))
        changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_48_HOURS)
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(2, count)
        self.assertTrue(self.buildPath([ "tree1", "file001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "file004", ]) in purgeList)

    def testRemoveYoungFiles_030(self):
        """
        Test on a non-empty list containing existing files "touched" to being 49 hours old, daysOld = 1.
        """
        daysOld = 1
        self.extractTar("tree1")
        purgeList = PurgeItemList()
        purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
        changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_49_HOURS)
        changeFileAge(self.buildPath([ "tree1", "file002", ]))
        changeFileAge(self.buildPath([ "tree1", "file003", ]))
        changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_49_HOURS)
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(2, count)
        self.assertTrue(self.buildPath([ "tree1", "file001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "file004", ]) in purgeList)

    def testRemoveYoungFiles_031(self):
        """
        Test on an empty list, daysOld = 2.
        """
        daysOld = 2
        purgeList = PurgeItemList()
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(0, count)
        self.assertEqual([], purgeList)

    def testRemoveYoungFiles_032(self):
        """
        Test on a non-empty list containing only directories, daysOld = 2.
        """
        daysOld = 2
        self.extractTar("tree2")
        purgeList = PurgeItemList()
        purgeList.addDir(self.buildPath([ "tree2", ]))
        purgeList.addDir(self.buildPath([ "tree2", "dir001", ]))
        purgeList.addDir(self.buildPath([ "tree2", "dir002", ]))
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(0, count)
        self.assertEqual(3, len(purgeList))
        self.assertTrue(self.buildPath([ "tree2", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree2", "dir001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree2", "dir002", ]) in purgeList)

    def testRemoveYoungFiles_033(self):
        """
        Test on a non-empty list containing only links, daysOld = 2.
        """
        daysOld = 2
        self.extractTar("tree9")
        purgeList = PurgeItemList()
        purgeList.addDir(self.buildPath([ "tree9", "link001", ]))
        purgeList.addDir(self.buildPath([ "tree9", "dir002", "link001", ]))
        purgeList.addFile(self.buildPath([ "tree9", "dir002", "link004", ]))
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(0, count)
        self.assertEqual(3, len(purgeList))
        self.assertTrue(self.buildPath([ "tree9", "link001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in purgeList)

    def testRemoveYoungFiles_034(self):
        """
        Test on a non-empty list containing only non-existent files, daysOld = 2.
        """
        daysOld = 2
        self.extractTar("tree1")
        purgeList = PurgeItemList()
        purgeList.append(self.buildPath([ "tree1", "stuff001", ]))  # append, since it doesn't exist on disk
        purgeList.append(self.buildPath([ "tree1", "stuff002", ]))  # append, since it doesn't exist on disk
        purgeList.append(self.buildPath([ "tree1", "stuff003", ]))  # append, since it doesn't exist on disk
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(0, count)
        self.assertEqual(3, len(purgeList))
        self.assertTrue(self.buildPath([ "tree1", "stuff001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "stuff002", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "stuff003", ]) in purgeList)

    def testRemoveYoungFiles_035(self):
        """
        Test on a non-empty list containing existing files "touched" to current time, daysOld = 2.
        """
        daysOld = 2
        self.extractTar("tree1")
        purgeList = PurgeItemList()
        purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
        changeFileAge(self.buildPath([ "tree1", "file001", ]))
        changeFileAge(self.buildPath([ "tree1", "file002", ]))
        changeFileAge(self.buildPath([ "tree1", "file003", ]))
        changeFileAge(self.buildPath([ "tree1", "file004", ]))
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(4, count)
        self.assertEqual([], purgeList)

    def testRemoveYoungFiles_036(self):
        """
        Test on a non-empty list containing existing files "touched" to being 1 hour old, daysOld = 2.
        """
        daysOld = 2
        self.extractTar("tree1")
        purgeList = PurgeItemList()
        purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
        changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_1_HOUR)
        changeFileAge(self.buildPath([ "tree1", "file002", ]))
        changeFileAge(self.buildPath([ "tree1", "file003", ]))
        changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_1_HOUR)
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(4, count)
        self.assertEqual([], purgeList)

    def testRemoveYoungFiles_037(self):
        """
        Test on a non-empty list containing existing files "touched" to being 2 hours old, daysOld = 2.
        """
        daysOld = 2
        self.extractTar("tree1")
        purgeList = PurgeItemList()
        purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
        changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_2_HOURS)
        changeFileAge(self.buildPath([ "tree1", "file002", ]))
        changeFileAge(self.buildPath([ "tree1", "file003", ]))
        changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_2_HOURS)
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(4, count)
        self.assertEqual([], purgeList)

    def testRemoveYoungFiles_038(self):
        """
        Test on a non-empty list containing existing files "touched" to being 12 hours old, daysOld = 2.
        """
        daysOld = 2
        self.extractTar("tree1")
        purgeList = PurgeItemList()
        purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
        changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_12_HOURS)
        changeFileAge(self.buildPath([ "tree1", "file002", ]))
        changeFileAge(self.buildPath([ "tree1", "file003", ]))
        changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_12_HOURS)
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(4, count)
        self.assertEqual([], purgeList)

    def testRemoveYoungFiles_039(self):
        """
        Test on a non-empty list containing existing files "touched" to being 23 hours old, daysOld = 2.
        """
        daysOld = 2
        self.extractTar("tree1")
        purgeList = PurgeItemList()
        purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
        changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_23_HOURS)
        changeFileAge(self.buildPath([ "tree1", "file002", ]))
        changeFileAge(self.buildPath([ "tree1", "file003", ]))
        changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_23_HOURS)
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(4, count)
        self.assertEqual([], purgeList)

    def testRemoveYoungFiles_040(self):
        """
        Test on a non-empty list containing existing files "touched" to being 24 hours old, daysOld = 2.
        """
        daysOld = 2
        self.extractTar("tree1")
        purgeList = PurgeItemList()
        purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
        changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_24_HOURS)
        changeFileAge(self.buildPath([ "tree1", "file002", ]))
        changeFileAge(self.buildPath([ "tree1", "file003", ]))
        changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_24_HOURS)
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(4, count)
        self.assertEqual([], purgeList)

    def testRemoveYoungFiles_041(self):
        """
        Test on a non-empty list containing existing files "touched" to being 25 hours old, daysOld = 2.
        """
        daysOld = 2
        self.extractTar("tree1")
        purgeList = PurgeItemList()
        purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
        changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_25_HOURS)
        changeFileAge(self.buildPath([ "tree1", "file002", ]))
        changeFileAge(self.buildPath([ "tree1", "file003", ]))
        changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_25_HOURS)
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(4, count)
        self.assertEqual([], purgeList)

    def testRemoveYoungFiles_042(self):
        """
        Test on a non-empty list containing existing files "touched" to being 47 hours old, daysOld = 2.
        """
        daysOld = 2
        self.extractTar("tree1")
        purgeList = PurgeItemList()
        purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
        changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_47_HOURS)
        changeFileAge(self.buildPath([ "tree1", "file002", ]))
        changeFileAge(self.buildPath([ "tree1", "file003", ]))
        changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_47_HOURS)
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(4, count)
        self.assertEqual([], purgeList)

    def testRemoveYoungFiles_043(self):
        """
        Test on a non-empty list containing existing files "touched" to being 48 hours old, daysOld = 2.
        """
        daysOld = 2
        self.extractTar("tree1")
        purgeList = PurgeItemList()
        purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
        changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_48_HOURS)
        changeFileAge(self.buildPath([ "tree1", "file002", ]))
        changeFileAge(self.buildPath([ "tree1", "file003", ]))
        changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_48_HOURS)
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(2, count)
        self.assertTrue(self.buildPath([ "tree1", "file001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "file004", ]) in purgeList)

    def testRemoveYoungFiles_044(self):
        """
        Test on a non-empty list containing existing files "touched" to being 49 hours old, daysOld = 2.
        """
        daysOld = 2
        self.extractTar("tree1")
        purgeList = PurgeItemList()
        purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
        purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
        changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_49_HOURS)
        changeFileAge(self.buildPath([ "tree1", "file002", ]))
        changeFileAge(self.buildPath([ "tree1", "file003", ]))
        changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_49_HOURS)
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(2, count)
        self.assertTrue(self.buildPath([ "tree1", "file001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree1", "file004", ]) in purgeList)

    def testRemoveYoungFiles_045(self):
        """
        Test on an empty list, daysOld = 3.
        """
        daysOld = 3
        purgeList = PurgeItemList()
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(0, count)
        self.assertEqual([], purgeList)

    def testRemoveYoungFiles_046(self):
        """
        Test on a non-empty list containing only directories, daysOld = 3.
        """
        daysOld = 3
        self.extractTar("tree2")
        purgeList = PurgeItemList()
        purgeList.addDir(self.buildPath([ "tree2", ]))
        purgeList.addDir(self.buildPath([ "tree2", "dir001", ]))
        purgeList.addDir(self.buildPath([ "tree2", "dir002", ]))
        count = purgeList.removeYoungFiles(daysOld)
        self.assertEqual(0, count)
        self.assertEqual(3, len(purgeList))
        self.assertTrue(self.buildPath([ "tree2", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree2", "dir001", ]) in purgeList)
        self.assertTrue(self.buildPath([ "tree2", "dir002", ]) in purgeList)

    def testRemoveYoungFiles_047(self):
        """
        Test on a non-empty list containing only links, daysOld = 3.
""" daysOld = 3 self.extractTar("tree9") purgeList = PurgeItemList() purgeList.addDir(self.buildPath([ "tree9", "link001", ])) purgeList.addDir(self.buildPath([ "tree9", "dir002", "link001", ])) purgeList.addFile(self.buildPath([ "tree9", "dir002", "link004", ])) count = purgeList.removeYoungFiles(daysOld) self.assertEqual(0, count) self.assertEqual(3, len(purgeList)) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in purgeList) def testRemoveYoungFiles_048(self): """ Test on a non-empty list containing only non-existent files, daysOld = 3. """ daysOld = 3 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.append(self.buildPath([ "tree1", "stuff001", ])) # append, since it doesn't exist on disk purgeList.append(self.buildPath([ "tree1", "stuff002", ])) # append, since it doesn't exist on disk purgeList.append(self.buildPath([ "tree1", "stuff003", ])) # append, since it doesn't exist on disk count = purgeList.removeYoungFiles(daysOld) self.assertEqual(0, count) self.assertEqual(3, len(purgeList)) self.assertTrue(self.buildPath([ "tree1", "stuff001", ]) in purgeList) self.assertTrue(self.buildPath([ "tree1", "stuff002", ]) in purgeList) self.assertTrue(self.buildPath([ "tree1", "stuff003", ]) in purgeList) def testRemoveYoungFiles_049(self): """ Test on a non-empty list containing existing files "touched" to current time, daysOld = 3. 
""" daysOld = 3 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ])) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ])) count = purgeList.removeYoungFiles(daysOld) self.assertEqual(4, count) self.assertEqual([], purgeList) def testRemoveYoungFiles_050(self): """ Test on a non-empty list containing existing files "touched" to being 1 hour old, daysOld = 3. """ daysOld = 3 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_1_HOUR) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_1_HOUR) count = purgeList.removeYoungFiles(daysOld) self.assertEqual(4, count) self.assertEqual([], purgeList) def testRemoveYoungFiles_051(self): """ Test on a non-empty list containing existing files "touched" to being 2 hours old, daysOld = 3. 
""" daysOld = 3 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_2_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_2_HOURS) count = purgeList.removeYoungFiles(daysOld) self.assertEqual(4, count) self.assertEqual([], purgeList) def testRemoveYoungFiles_052(self): """ Test on a non-empty list containing existing files "touched" to being 12 hours old, daysOld = 3. """ daysOld = 3 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_12_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_12_HOURS) count = purgeList.removeYoungFiles(daysOld) self.assertEqual(4, count) self.assertEqual([], purgeList) def testRemoveYoungFiles_053(self): """ Test on a non-empty list containing existing files "touched" to being 23 hours old, daysOld = 3. 
""" daysOld = 3 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_23_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_23_HOURS) count = purgeList.removeYoungFiles(daysOld) self.assertEqual(4, count) self.assertEqual([], purgeList) def testRemoveYoungFiles_054(self): """ Test on a non-empty list containing existing files "touched" to being 24 hours old, daysOld = 3. """ daysOld = 3 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_24_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_24_HOURS) count = purgeList.removeYoungFiles(daysOld) self.assertEqual(4, count) self.assertEqual([], purgeList) def testRemoveYoungFiles_055(self): """ Test on a non-empty list containing existing files "touched" to being 25 hours old, daysOld = 3. 
""" daysOld = 3 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_25_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_25_HOURS) count = purgeList.removeYoungFiles(daysOld) self.assertEqual(4, count) self.assertEqual([], purgeList) def testRemoveYoungFiles_056(self): """ Test on a non-empty list containing existing files "touched" to being 47 hours old, daysOld = 3. """ daysOld = 3 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_47_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_47_HOURS) count = purgeList.removeYoungFiles(daysOld) self.assertEqual(4, count) self.assertEqual([], purgeList) def testRemoveYoungFiles_057(self): """ Test on a non-empty list containing existing files "touched" to being 48 hours old, daysOld = 3. 
""" daysOld = 3 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_48_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_48_HOURS) count = purgeList.removeYoungFiles(daysOld) self.assertEqual(4, count) self.assertEqual([], purgeList) def testRemoveYoungFiles_058(self): """ Test on a non-empty list containing existing files "touched" to being 49 hours old, daysOld = 3. """ daysOld = 3 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_49_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_49_HOURS) count = purgeList.removeYoungFiles(daysOld) self.assertEqual(4, count) self.assertEqual([], purgeList) #################### # Test purgeItems() #################### def testPurgeItems_001(self): """ Test with an empty list. """ purgeList = PurgeItemList() (files, dirs) = purgeList.purgeItems() self.assertEqual(0, files) self.assertEqual(0, dirs) def testPurgeItems_002(self): """ Test with a list containing only non-empty directories. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(22, count) self.assertEqual(22, len(fsList)) self.assertTrue(self.buildPath([ "tree9", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fsList) purgeList = PurgeItemList() purgeList.addDir(self.buildPath([ "tree9", ])) purgeList.addDir(self.buildPath([ "tree9", "dir001", ])) purgeList.addDir(self.buildPath([ "tree9", "dir002", ])) (files, dirs) = 
purgeList.purgeItems() self.assertEqual(0, files) self.assertEqual(0, dirs) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(22, count) self.assertEqual(22, len(fsList)) self.assertTrue(self.buildPath([ "tree9", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fsList) def testPurgeItems_003(self): """ Test with a list containing only empty directories. 
""" self.extractTar("tree2") path = self.buildPath(["tree2"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(11, count) self.assertEqual(11, len(fsList)) self.assertTrue(self.buildPath([ "tree2", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir003", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir004", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir005", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir006", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir007", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir008", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir009", ]) in fsList) self.assertTrue(self.buildPath([ "tree2", "dir010", ]) in fsList) purgeList = PurgeItemList() purgeList.addDir(self.buildPath([ "tree2", "dir001", ])) purgeList.addDir(self.buildPath([ "tree2", "dir002", ])) purgeList.addDir(self.buildPath([ "tree2", "dir003", ])) purgeList.addDir(self.buildPath([ "tree2", "dir004", ])) purgeList.addDir(self.buildPath([ "tree2", "dir005", ])) purgeList.addDir(self.buildPath([ "tree2", "dir006", ])) purgeList.addDir(self.buildPath([ "tree2", "dir007", ])) purgeList.addDir(self.buildPath([ "tree2", "dir008", ])) purgeList.addDir(self.buildPath([ "tree2", "dir009", ])) purgeList.addDir(self.buildPath([ "tree2", "dir010", ])) (files, dirs) = purgeList.purgeItems() self.assertEqual(0, files) self.assertEqual(10, dirs) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(1, count) self.assertEqual(1, len(fsList)) self.assertTrue(self.buildPath([ "tree2", ]) in fsList) def testPurgeItems_004(self): """ Test with a list containing only files. 
""" self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(8, count) self.assertEqual(8, len(fsList)) self.assertTrue(self.buildPath([ "tree1", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file007", ]) in fsList) purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) purgeList.addFile(self.buildPath([ "tree1", "file005", ])) purgeList.addFile(self.buildPath([ "tree1", "file006", ])) purgeList.addFile(self.buildPath([ "tree1", "file007", ])) (files, dirs) = purgeList.purgeItems() self.assertEqual(7, files) self.assertEqual(0, dirs) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(1, count) self.assertEqual(1, len(fsList)) self.assertTrue(self.buildPath([ "tree1", ]) in fsList) def testPurgeItems_005(self): """ Test with a list containing a directory and some of the files in that directory. 
""" self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(8, count) self.assertEqual(8, len(fsList)) self.assertTrue(self.buildPath([ "tree1", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file007", ]) in fsList) purgeList = PurgeItemList() purgeList.addDir(self.buildPath([ "tree1", ])) purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) (files, dirs) = purgeList.purgeItems() self.assertEqual(4, files) self.assertEqual(0, dirs) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(4, count) self.assertEqual(4, len(fsList)) self.assertTrue(self.buildPath([ "tree1", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file007", ]) in fsList) def testPurgeItems_006(self): """ Test with a list containing a directory and all of the files in that directory. 
""" self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(8, count) self.assertEqual(8, len(fsList)) self.assertTrue(self.buildPath([ "tree1", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file007", ]) in fsList) purgeList = PurgeItemList() purgeList.addDir(self.buildPath([ "tree1", ])) purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) purgeList.addFile(self.buildPath([ "tree1", "file005", ])) purgeList.addFile(self.buildPath([ "tree1", "file006", ])) purgeList.addFile(self.buildPath([ "tree1", "file007", ])) (files, dirs) = purgeList.purgeItems() self.assertEqual(7, files) self.assertEqual(1, dirs) self.assertRaises(ValueError, fsList.addDirContents, path) self.assertTrue(not os.path.exists(path)) def testPurgeItems_007(self): """ Test with a list containing various kinds of entries, including links, files and directories. Make sure that removing a link doesn't remove the file the link points toward. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(22, count) self.assertEqual(22, len(fsList)) self.assertTrue(self.buildPath([ "tree9", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fsList) purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree9", "dir001", "link001", ])) purgeList.addDir(self.buildPath([ "tree9", "dir002", "dir001", ])) purgeList.addFile(self.buildPath([ 
"tree9", "file001", ])) (files, dirs) = purgeList.purgeItems() self.assertEqual(2, files) self.assertEqual(1, dirs) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(18, count) self.assertEqual(18, len(fsList)) self.assertTrue(self.buildPath([ "tree9", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.assertTrue(os.path.islink(self.buildPath([ "tree9", "dir002", "link001", ]))) # won't be included in list, though self.assertTrue(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree9", "link002", ]) in fsList) def testPurgeItems_008(self): """ Test with a list containing non-existent entries. 
""" self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(8, count) self.assertEqual(8, len(fsList)) self.assertTrue(self.buildPath([ "tree1", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file004", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file007", ]) in fsList) purgeList = PurgeItemList() purgeList.addDir(self.buildPath([ "tree1", ])) purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) purgeList.append(self.buildPath([ "tree1", INVALID_FILE, ])) # bypass validations (files, dirs) = purgeList.purgeItems() self.assertEqual(4, files) self.assertEqual(0, dirs) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(4, count) self.assertEqual(4, len(fsList)) self.assertTrue(self.buildPath([ "tree1", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file005", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file006", ]) in fsList) self.assertTrue(self.buildPath([ "tree1", "file007", ]) in fsList) def testPurgeItems_009(self): """ Test with a list containing entries containing spaces. 
""" self.extractTar("tree11") path = self.buildPath(["tree11"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(16, count) self.assertEqual(16, len(fsList)) self.assertTrue(self.buildPath([ "tree11", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree11", "file with spaces", ])) purgeList.addFile(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ])) (files, dirs) = purgeList.purgeItems() self.assertEqual(2, files) self.assertEqual(0, dirs) fsList = FilesystemList() count = fsList.addDirContents(path) self.assertEqual(12, count) self.assertEqual(12, len(fsList)) self.assertTrue(self.buildPath([ "tree11", "link with spaces", ]) not in fsList) # file it points to was removed self.assertTrue(self.buildPath([ "tree11", "dir with spaces", 
"link002", ]) not in fsList) # file it points to was removed self.assertTrue(self.buildPath([ "tree11", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "link003", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.assertTrue(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) ###################### # TestFunctions class ###################### class TestFunctions(unittest.TestCase): """Tests for the various public functions.""" ################ # Setup methods ################ def setUp(self): try: self.tmpdir = tempfile.mkdtemp() self.resources = findResources(RESOURCES, DATA_DIRS) except Exception as e: self.fail(e) def tearDown(self): try: removedir(self.tmpdir) except: pass ################## # Utility methods ################## def extractTar(self, tarname, within=None): """Extracts a tarfile with a particular name.""" if within is None: extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname]) else: path = os.path.join(self.tmpdir, within) os.mkdir(path) extractTar(path, self.resources['%s.tar.gz' % tarname]) def buildPath(self, components): """Builds a complete search path from a list of components.""" components.insert(0, self.tmpdir) return buildPath(components) ######################### # Test compareContents() ######################### def testCompareContents_001(self): """ Compare two 
empty directories. """ self.extractTar("tree2", within="path1") self.extractTar("tree2", within="path2") path1 = self.buildPath(["path1", "tree2", "dir001", ]) path2 = self.buildPath(["path2", "tree2", "dir002", ]) compareContents(path1, path2) compareContents(path1, path2, verbose=True) def testCompareContents_002(self): """ Compare one empty and one non-empty directory containing only directories. """ self.extractTar("tree2", within="path1") self.extractTar("tree2", within="path2") path1 = self.buildPath(["path1", "tree2", "dir001", ]) path2 = self.buildPath(["path2", "tree2", ]) compareContents(path1, path2) compareContents(path1, path2, verbose=True) def testCompareContents_003(self): """ Compare one empty and one non-empty directory containing only files. """ self.extractTar("tree2", within="path1") self.extractTar("tree1", within="path2") path1 = self.buildPath(["path1", "tree2", "dir001", ]) path2 = self.buildPath(["path2", "tree1", ]) self.assertRaises(ValueError, compareContents, path1, path2) self.assertRaises(ValueError, compareContents, path1, path2, verbose=True) def testCompareContents_004(self): """ Compare two directories containing only directories, same. """ self.extractTar("tree2", within="path1") self.extractTar("tree2", within="path2") path1 = self.buildPath(["path1", "tree2", ]) path2 = self.buildPath(["path2", "tree2", ]) compareContents(path1, path2) compareContents(path1, path2, verbose=True) def testCompareContents_005(self): """ Compare two directories containing only directories, different set. """ self.extractTar("tree2", within="path1") self.extractTar("tree3", within="path2") path1 = self.buildPath(["path1", "tree2", ]) path2 = self.buildPath(["path2", "tree3", ]) compareContents(path1, path2) # no error, since directories don't count compareContents(path1, path2, verbose=True) # no error, since directories don't count def testCompareContents_006(self): """ Compare two directories containing only files, same. 
""" self.extractTar("tree1", within="path1") self.extractTar("tree1", within="path2") path1 = self.buildPath(["path1", "tree1", ]) path2 = self.buildPath(["path2", "tree1", ]) compareContents(path1, path2) compareContents(path1, path2, verbose=True) def testCompareContents_007(self): """ Compare two directories containing only files, different contents. """ self.extractTar("tree1", within="path1") self.extractTar("tree1", within="path2") path1 = self.buildPath(["path1", "tree1", ]) path2 = self.buildPath(["path2", "tree1", ]) with open(self.buildPath(["path1", "tree1", "file004", ]), "a") as f: f.write("BOGUS") # change content self.assertRaises(ValueError, compareContents, path1, path2) self.assertRaises(ValueError, compareContents, path1, path2, verbose=True) def testCompareContents_008(self): """ Compare two directories containing only files, different set. """ self.extractTar("tree1", within="path1") self.extractTar("tree7", within="path2") path1 = self.buildPath(["path1", "tree1", ]) path2 = self.buildPath(["path2", "tree7", "dir001", ]) self.assertRaises(ValueError, compareContents, path1, path2) self.assertRaises(ValueError, compareContents, path1, path2, verbose=True) def testCompareContents_009(self): """ Compare two directories containing files and directories, same. """ self.extractTar("tree9", within="path1") self.extractTar("tree9", within="path2") path1 = self.buildPath(["path1", "tree9", ]) path2 = self.buildPath(["path2", "tree9", ]) compareContents(path1, path2) compareContents(path1, path2, verbose=True) def testCompareContents_010(self): """ Compare two directories containing files and directories, different contents. 
""" self.extractTar("tree9", within="path1") self.extractTar("tree9", within="path2") path1 = self.buildPath(["path1", "tree9", ]) path2 = self.buildPath(["path2", "tree9", ]) with open(self.buildPath(["path2", "tree9", "dir001", "file002", ]), "a") as f: f.write("whoops") # change content self.assertRaises(ValueError, compareContents, path1, path2) self.assertRaises(ValueError, compareContents, path1, path2, verbose=True) def testCompareContents_011(self): """ Compare two directories containing files and directories, different set. """ self.extractTar("tree9", within="path1") self.extractTar("tree6", within="path2") path1 = self.buildPath(["path1", "tree9", ]) path2 = self.buildPath(["path2", "tree6", ]) self.assertRaises(ValueError, compareContents, path1, path2) self.assertRaises(ValueError, compareContents, path1, path2, verbose=True) ####################################################################### # Suite definition ####################################################################### def suite(): """Returns a suite containing all the test cases in this module.""" tests = [ ] tests.append(unittest.makeSuite(TestFilesystemList, 'test')) tests.append(unittest.makeSuite(TestBackupFileList, 'test')) tests.append(unittest.makeSuite(TestPurgeItemList, 'test')) tests.append(unittest.makeSuite(TestFunctions, 'test')) return unittest.TestSuite(tests) CedarBackup3-3.1.6/testcase/knapsacktests.py0000664000175000017500000027272212560007330022545 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2005,2010,2015 Kenneth J. Pronovici. # All rights reserved. 
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Language : Python 3 (>= 3.4)
# Project  : Cedar Backup, release 3
# Purpose  : Tests knapsack functionality.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Unit tests for CedarBackup3/knapsack.py.

Code Coverage
=============

This module contains individual tests for each of the public functions
implemented in knapsack.py: C{firstFit()}, C{bestFit()}, C{worstFit()} and
C{alternateFit()}.

Note that the tests for each function are nearly identical, so there is a
lot of code duplication.  In production code, I would argue that this
implies some refactoring is needed.  Here, however, I prefer having lots of
individual test cases even if there is duplication, because I think this
makes it easier to judge the extent of a problem when one exists.

Naming Conventions
==================

I prefer to avoid large unit tests which validate more than one piece of
functionality, and I prefer to avoid using overly descriptive (read: long)
test names, as well.  Instead, I use lots of very small tests that each
validate one specific thing.  These small tests are then named with an
index number, yielding something like C{testAddDir_001} or
C{testValidate_010}.  Each method has a docstring describing what it's
supposed to accomplish.  I feel that this makes it easier to judge how
important a given failure is, and also makes it somewhat easier to
diagnose and fix individual problems.

Full vs. Reduced Tests
======================

All of the tests in this module are considered safe to be run in an
average build environment.  There is no need for a KNAPSACKTESTS_FULL
environment variable to provide a "reduced feature set" test suite, as
there is for some of the other test modules.

@author Kenneth J. Pronovici
"""


########################################################################
# Import modules and do runtime validations
########################################################################

# Import standard modules
import unittest

from CedarBackup3.knapsack import firstFit, bestFit, worstFit, alternateFit


#######################################################################
# Module-wide configuration and constants
#######################################################################

# These all have random letters for keys because the original data had a,b,c,d,
# etc. in ascending order, which actually masked a sorting bug in the implementation.

ITEMS_01 = { }
ITEMS_02 = { "z" : 0, "^" : 0, "3" : 0, "(" : 0, "[" : 0, "/" : 0, "a" : 0, "r" : 0, }
ITEMS_03 = { "k" : 0, "*" : 1, "u" : 10, "$" : 100, "h" : 1000, "?"
: 10000, "b" : 100000, "s" : 1000000, } ITEMS_04 = { "l" : 1000000, "G" : 100000, "h" : 10000, "#" : 1000, "a" : 100, "'" : 10, "c" : 1, "t" : 0, } ITEMS_05 = { "n" : 1, "N" : 1, "z" : 1, "@" : 1, "c" : 1, "h" : 1, "d" : 1, "u" : 1, } ITEMS_06 = { "o" : 10, "b" : 10, "G" : 10, "+" : 10, "B" : 10, "O" : 10, "e" : 10, "v" : 10, } ITEMS_07 = { "$" : 100, "K" : 100, "f" : 100, "=" : 100, "n" : 100, "I" : 100, "F" : 100, "w" : 100, } ITEMS_08 = { "y" : 1000, "C" : 1000, "s" : 1000, "f" : 1000, "a" : 1000, "U" : 1000, "g" : 1000, "x" : 1000, } ITEMS_09 = { "7" : 10000, "d" : 10000, "f" : 10000, "g" : 10000, "t" : 10000, "l" : 10000, "h" : 10000, "y" : 10000, } ITEMS_10 = { "5" : 100000, "#" : 100000, "l" : 100000, "t" : 100000, "6" : 100000, "T" : 100000, "i" : 100000, "z" : 100000, } ITEMS_11 = { "t" : 1, "d" : 1, "k" : 100000, "l" : 100000, "7" : 100000, "G" : 100000, "j" : 1, "1" : 1, } ITEMS_12 = { "a" : 10, "e" : 10, "M" : 100000, "u" : 100000, "y" : 100000, "f" : 100000, "k" : 10, "2" : 10, } ITEMS_13 = { "n" : 100, "p" : 100, "b" : 100000, "i" : 100000, "$" : 100000, "/" : 100000, "l" : 100, "3" : 100, } ITEMS_14 = { "b" : 1000, ":" : 1000, "e" : 100000, "O" : 100000, "o" : 100000, "#" : 100000, "m" : 1000, "4" : 1000, } ITEMS_15 = { "c" : 1, "j" : 1, "e" : 1, "H" : 100000, "n" : 100000, "h" : 1, "N" : 1, "5" : 1, } ITEMS_16 = { "a" : 10, "M" : 10, "%" : 10, "'" : 100000, "l" : 100000, "?" : 10, "o" : 10, "6" : 10, } ITEMS_17 = { "h" : 100, "z" : 100, "(" : 100, "?" 
: 100000, "k" : 100000, "|" : 100, "p" : 100, "7" : 100, } ITEMS_18 = { "[" : 1000, "l" : 1000, "*" : 1000, "/" : 100000, "z" : 100000, "|" : 1000, "q" : 1000, "h" : 1000, } # This is a more realistic example, taken from tree9.tar.gz ITEMS_19 = { 'dir001/file001': 243, 'dir001/file002': 268, 'dir002/file001': 134, 'dir002/file002': 74, 'file001' : 155, 'file002' : 242, 'link001' : 0, 'link002' : 0, } ####################################################################### # Utility functions ####################################################################### def buildItemDict(origDict): """ Creates an item dictionary suitable for passing to a knapsack function. The knapsack functions take a dictionary, keyed on item, of (item, size) tuples. This function converts a simple item/size dictionary to a knapsack dictionary. It exists for convenience. @param origDict: Dictionary to convert @type origDict: Simple dictionary mapping item to size, like C{ITEMS_02} @return: Dictionary suitable for passing to a knapsack function. """ itemDict = { } for key in list(origDict.keys()): itemDict[key] = (key, origDict[key]) return itemDict ####################################################################### # Test Case Classes ####################################################################### ##################### # TestKnapsack class ##################### class TestKnapsack(unittest.TestCase): """Tests for the various knapsack functions.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ################################ # Tests for firstFit() function ################################ def testFirstFit_001(self): """ Test firstFit() behavior for an empty items dictionary, zero capacity. """ items = buildItemDict(ITEMS_01) capacity = 0 result = firstFit(items, capacity) self.assertEqual(([], 0), result) def testFirstFit_002(self): """ Test firstFit() behavior for an empty items dictionary, non-zero capacity. 
""" items = buildItemDict(ITEMS_01) capacity = 10000 result = firstFit(items, capacity) self.assertEqual(([], 0), result) def testFirstFit_003(self): """ Test firstFit() behavior for an non-empty items dictionary, zero capacity. """ items = buildItemDict(ITEMS_03) capacity = 0 result = firstFit(items, capacity) self.assertEqual(([], 0), result) items = buildItemDict(ITEMS_04) capacity = 0 result = firstFit(items, capacity) self.assertEqual(([], 0), result) items = buildItemDict(ITEMS_13) capacity = 0 result = firstFit(items, capacity) self.assertEqual(([], 0), result) def testFirstFit_004(self): """ Test firstFit() behavior for non-empty items dictionary with zero-sized items, zero capacity. """ items = buildItemDict(ITEMS_03) capacity = 0 result = firstFit(items, capacity) self.assertEqual(([], 0), result) def testFirstFit_005(self): """ Test firstFit() behavior for items dictionary where only one item fits. """ items = buildItemDict(ITEMS_05) capacity = 1 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(1, len(result[0])) self.assertEqual(1, result[1]) items = buildItemDict(ITEMS_06) capacity = 10 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(1, len(result[0])) self.assertEqual(10, result[1]) items = buildItemDict(ITEMS_07) capacity = 100 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(1, len(result[0])) self.assertEqual(100, result[1]) items = buildItemDict(ITEMS_08) capacity = 1000 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(1, len(result[0])) self.assertEqual(1000, result[1]) items = buildItemDict(ITEMS_09) capacity = 10000 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) 
self.assertEqual(1, len(result[0])) self.assertEqual(10000, result[1]) items = buildItemDict(ITEMS_10) capacity = 100000 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(1, len(result[0])) self.assertEqual(100000, result[1]) def testFirstFit_006(self): """ Test firstFit() behavior for items dictionary where only 25% of items fit. """ items = buildItemDict(ITEMS_05) capacity = 2 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(2, len(result[0])) self.assertEqual(2, result[1]) items = buildItemDict(ITEMS_06) capacity = 25 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(2, len(result[0])) self.assertEqual(20, result[1]) items = buildItemDict(ITEMS_07) capacity = 250 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(2, len(result[0])) self.assertEqual(200, result[1]) items = buildItemDict(ITEMS_08) capacity = 2500 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(2, len(result[0])) self.assertEqual(2000, result[1]) items = buildItemDict(ITEMS_09) capacity = 25000 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(2, len(result[0])) self.assertEqual(20000, result[1]) items = buildItemDict(ITEMS_10) capacity = 250000 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(2, len(result[0])) self.assertEqual(200000, result[1]) items = buildItemDict(ITEMS_11) capacity = 2 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(2, len(result[0])) self.assertEqual(2, result[1]) 
items = buildItemDict(ITEMS_12) capacity = 25 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(2, len(result[0])) self.assertEqual(20, result[1]) items = buildItemDict(ITEMS_13) capacity = 250 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(2, len(result[0])) self.assertEqual(200, result[1]) items = buildItemDict(ITEMS_14) capacity = 2500 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(2, len(result[0])) self.assertEqual(2000, result[1]) def testFirstFit_007(self): """ Test firstFit() behavior for items dictionary where only 50% of items fit. """ items = buildItemDict(ITEMS_05) capacity = 4 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(4, len(result[0])) self.assertEqual(4, result[1]) items = buildItemDict(ITEMS_06) capacity = 45 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(4, len(result[0])) self.assertEqual(40, result[1]) items = buildItemDict(ITEMS_07) capacity = 450 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(4, len(result[0])) self.assertEqual(400, result[1]) items = buildItemDict(ITEMS_08) capacity = 4500 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(4, len(result[0])) self.assertEqual(4000, result[1]) items = buildItemDict(ITEMS_09) capacity = 45000 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(4, len(result[0])) self.assertEqual(40000, result[1]) items = buildItemDict(ITEMS_10) capacity = 450000 result = firstFit(items, 
capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(4, len(result[0])) self.assertEqual(400000, result[1]) items = buildItemDict(ITEMS_11) capacity = 4 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(4, len(result[0])) self.assertEqual(4, result[1]) items = buildItemDict(ITEMS_12) capacity = 45 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(4, len(result[0])) self.assertEqual(40, result[1]) items = buildItemDict(ITEMS_13) capacity = 450 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(4, len(result[0])) self.assertEqual(400, result[1]) items = buildItemDict(ITEMS_14) capacity = 4500 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(4, len(result[0])) self.assertEqual(4000, result[1]) def testFirstFit_008(self): """ Test firstFit() behavior for items dictionary where only 75% of items fit. 
""" items = buildItemDict(ITEMS_05) capacity = 6 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(6, len(result[0])) self.assertEqual(6, result[1]) items = buildItemDict(ITEMS_06) capacity = 65 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(6, len(result[0])) self.assertEqual(60, result[1]) items = buildItemDict(ITEMS_07) capacity = 650 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(6, len(result[0])) self.assertEqual(600, result[1]) items = buildItemDict(ITEMS_08) capacity = 6500 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(6, len(result[0])) self.assertEqual(6000, result[1]) items = buildItemDict(ITEMS_09) capacity = 65000 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(6, len(result[0])) self.assertEqual(60000, result[1]) items = buildItemDict(ITEMS_10) capacity = 650000 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(6, len(result[0])) self.assertEqual(600000, result[1]) items = buildItemDict(ITEMS_15) capacity = 7 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(6, len(result[0])) self.assertEqual(6, result[1]) items = buildItemDict(ITEMS_16) capacity = 65 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(6, len(result[0])) self.assertEqual(60, result[1]) items = buildItemDict(ITEMS_17) capacity = 650 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(6, 
len(result[0])) self.assertEqual(600, result[1]) items = buildItemDict(ITEMS_18) capacity = 6500 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(6, len(result[0])) self.assertEqual(6000, result[1]) def testFirstFit_009(self): """ Test firstFit() behavior for items dictionary where all items individually exceed the capacity. """ items = buildItemDict(ITEMS_06) capacity = 9 result = firstFit(items, capacity) self.assertEqual(([], 0), result) items = buildItemDict(ITEMS_07) capacity = 99 result = firstFit(items, capacity) self.assertEqual(([], 0), result) items = buildItemDict(ITEMS_08) capacity = 999 result = firstFit(items, capacity) self.assertEqual(([], 0), result) items = buildItemDict(ITEMS_09) capacity = 9999 result = firstFit(items, capacity) self.assertEqual(([], 0), result) items = buildItemDict(ITEMS_10) capacity = 99999 result = firstFit(items, capacity) self.assertEqual(([], 0), result) def testFirstFit_010(self): """ Test firstFit() behavior for items dictionary where first half of items individually exceed capacity and remainder fit. """ items = buildItemDict(ITEMS_04) capacity = 200 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(4, len(result[0])) self.assertEqual(111, result[1]) def testFirstFit_011(self): """ Test firstFit() behavior for items dictionary where middle half of items individually exceed capacity and remainder fit. 
""" items = buildItemDict(ITEMS_11) capacity = 5 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(4, len(result[0])) self.assertEqual(4, result[1]) items = buildItemDict(ITEMS_12) capacity = 50 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(4, len(result[0])) self.assertEqual(40, result[1]) items = buildItemDict(ITEMS_13) capacity = 500 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(4, len(result[0])) self.assertEqual(400, result[1]) items = buildItemDict(ITEMS_14) capacity = 5000 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(4, len(result[0])) self.assertEqual(4000, result[1]) def testFirstFit_012(self): """ Test firstFit() behavior for items dictionary where second half of items individually exceed capacity and remainder fit. """ items = buildItemDict(ITEMS_03) capacity = 200 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(4, len(result[0])) self.assertEqual(111, result[1]) def testFirstFit_013(self): """ Test firstFit() behavior for items dictionary where first half of items individually exceed capacity and only some of remainder fit. """ items = buildItemDict(ITEMS_04) capacity = 50 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertTrue(len(result[0]) < 4, "%s < 4" % len(result[0])) def testFirstFit_014(self): """ Test firstFit() behavior for items dictionary where middle half of items individually exceed capacity and only some of remainder fit. 
""" items = buildItemDict(ITEMS_11) capacity = 3 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertTrue(len(result[0]) < 4, "%s < 4" % len(result[0])) items = buildItemDict(ITEMS_12) capacity = 35 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertTrue(len(result[0]) < 4, "%s < 4" % len(result[0])) items = buildItemDict(ITEMS_13) capacity = 350 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertTrue(len(result[0]) < 4, "%s < 4" % len(result[0])) items = buildItemDict(ITEMS_14) capacity = 3500 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertTrue(len(result[0]) < 4, "%s < 4" % len(result[0])) def testFirstFit_015(self): """ Test firstFit() behavior for items dictionary where second half of items individually exceed capacity and only some of remainder fit. """ items = buildItemDict(ITEMS_03) capacity = 50 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertTrue(len(result[0]) < 4, "%s < 4" % len(result[0])) def testFirstFit_016(self): """ Test firstFit() behavior for items dictionary where all items fit. 
""" items = buildItemDict(ITEMS_02) capacity = 1000000 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(8, len(result[0])) self.assertEqual(0, result[1]) items = buildItemDict(ITEMS_03) capacity = 2000000 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(8, len(result[0])) self.assertEqual(1111111, result[1]) items = buildItemDict(ITEMS_04) capacity = 2000000 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(8, len(result[0])) self.assertEqual(1111111, result[1]) items = buildItemDict(ITEMS_05) capacity = 1000000 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(8, len(result[0])) self.assertEqual(8, result[1]) items = buildItemDict(ITEMS_06) capacity = 1000000 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(8, len(result[0])) self.assertEqual(80, result[1]) items = buildItemDict(ITEMS_07) capacity = 1000000 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(8, len(result[0])) self.assertEqual(800, result[1]) items = buildItemDict(ITEMS_08) capacity = 1000000 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(8, len(result[0])) self.assertEqual(8000, result[1]) items = buildItemDict(ITEMS_09) capacity = 1000000 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(8, len(result[0])) self.assertEqual(80000, result[1]) items = buildItemDict(ITEMS_10) capacity = 1000000 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], 
capacity)) self.assertEqual(8, len(result[0])) self.assertEqual(800000, result[1]) items = buildItemDict(ITEMS_11) capacity = 1000000 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(8, len(result[0])) self.assertEqual(400004, result[1]) items = buildItemDict(ITEMS_12) capacity = 1000000 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(8, len(result[0])) self.assertEqual(400040, result[1]) items = buildItemDict(ITEMS_13) capacity = 1000000 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(8, len(result[0])) self.assertEqual(400400, result[1]) items = buildItemDict(ITEMS_14) capacity = 1000000 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(8, len(result[0])) self.assertEqual(404000, result[1]) items = buildItemDict(ITEMS_15) capacity = 1000000 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(8, len(result[0])) self.assertEqual(200006, result[1]) items = buildItemDict(ITEMS_16) capacity = 1000000 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(8, len(result[0])) self.assertEqual(200060, result[1]) items = buildItemDict(ITEMS_17) capacity = 1000000 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(8, len(result[0])) self.assertEqual(200600, result[1]) items = buildItemDict(ITEMS_18) capacity = 1000000 result = firstFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(8, len(result[0])) self.assertEqual(206000, result[1]) def testFirstFit_017(self): """ Test firstFit() 
      behavior for a more realistic set of items
      """
      items = buildItemDict(ITEMS_19)
      capacity = 760
      result = firstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      # Unfortunately, can't test any more than this, since dict keys come out in random order


   ###############################
   # Tests for bestFit() function
   ###############################

   def testBestFit_001(self):
      """
      Test bestFit() behavior for an empty items dictionary, zero capacity.
      """
      items = buildItemDict(ITEMS_01)
      capacity = 0
      result = bestFit(items, capacity)
      self.assertEqual(([], 0), result)

   def testBestFit_002(self):
      """
      Test bestFit() behavior for an empty items dictionary, non-zero capacity.
      """
      items = buildItemDict(ITEMS_01)
      capacity = 10000
      result = bestFit(items, capacity)
      self.assertEqual(([], 0), result)

   def testBestFit_003(self):
      """
      Test bestFit() behavior for a non-empty items dictionary, zero capacity.
      """
      items = buildItemDict(ITEMS_03)
      capacity = 0
      result = bestFit(items, capacity)
      self.assertEqual(([], 0), result)
      items = buildItemDict(ITEMS_04)
      capacity = 0
      result = bestFit(items, capacity)
      self.assertEqual(([], 0), result)
      items = buildItemDict(ITEMS_13)
      capacity = 0
      result = bestFit(items, capacity)
      self.assertEqual(([], 0), result)

   def testBestFit_004(self):
      """
      Test bestFit() behavior for non-empty items dictionary with zero-sized items, zero capacity.
      """
      items = buildItemDict(ITEMS_03)
      capacity = 0
      result = bestFit(items, capacity)
      self.assertEqual(([], 0), result)

   def testBestFit_005(self):
      """
      Test bestFit() behavior for items dictionary where only one item fits.
      """
      items = buildItemDict(ITEMS_05)
      capacity = 1
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(1, len(result[0]))
      self.assertEqual(1, result[1])
      items = buildItemDict(ITEMS_06)
      capacity = 10
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(1, len(result[0]))
      self.assertEqual(10, result[1])
      items = buildItemDict(ITEMS_07)
      capacity = 100
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(1, len(result[0]))
      self.assertEqual(100, result[1])
      items = buildItemDict(ITEMS_08)
      capacity = 1000
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(1, len(result[0]))
      self.assertEqual(1000, result[1])
      items = buildItemDict(ITEMS_09)
      capacity = 10000
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(1, len(result[0]))
      self.assertEqual(10000, result[1])
      items = buildItemDict(ITEMS_10)
      capacity = 100000
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(1, len(result[0]))
      self.assertEqual(100000, result[1])

   def testBestFit_006(self):
      """
      Test bestFit() behavior for items dictionary where only 25% of items fit.
      """
      items = buildItemDict(ITEMS_05)
      capacity = 2
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(2, len(result[0]))
      self.assertEqual(2, result[1])
      items = buildItemDict(ITEMS_06)
      capacity = 25
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(2, len(result[0]))
      self.assertEqual(20, result[1])
      items = buildItemDict(ITEMS_07)
      capacity = 250
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(2, len(result[0]))
      self.assertEqual(200, result[1])
      items = buildItemDict(ITEMS_08)
      capacity = 2500
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(2, len(result[0]))
      self.assertEqual(2000, result[1])
      items = buildItemDict(ITEMS_09)
      capacity = 25000
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(2, len(result[0]))
      self.assertEqual(20000, result[1])
      items = buildItemDict(ITEMS_10)
      capacity = 250000
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(2, len(result[0]))
      self.assertEqual(200000, result[1])
      items = buildItemDict(ITEMS_11)
      capacity = 2
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(2, len(result[0]))
      self.assertEqual(2, result[1])
      items = buildItemDict(ITEMS_12)
      capacity = 25
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(2, len(result[0]))
      self.assertEqual(20, result[1])
      items = buildItemDict(ITEMS_13)
      capacity = 250
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(2, len(result[0]))
      self.assertEqual(200, result[1])
      items = buildItemDict(ITEMS_14)
      capacity = 2500
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(2, len(result[0]))
      self.assertEqual(2000, result[1])

   def testBestFit_007(self):
      """
      Test bestFit() behavior for items dictionary where only 50% of items fit.
      """
      items = buildItemDict(ITEMS_05)
      capacity = 4
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(4, len(result[0]))
      self.assertEqual(4, result[1])
      items = buildItemDict(ITEMS_06)
      capacity = 45
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(4, len(result[0]))
      self.assertEqual(40, result[1])
      items = buildItemDict(ITEMS_07)
      capacity = 450
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(4, len(result[0]))
      self.assertEqual(400, result[1])
      items = buildItemDict(ITEMS_08)
      capacity = 4500
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(4, len(result[0]))
      self.assertEqual(4000, result[1])
      items = buildItemDict(ITEMS_09)
      capacity = 45000
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(4, len(result[0]))
      self.assertEqual(40000, result[1])
      items = buildItemDict(ITEMS_10)
      capacity = 450000
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(4, len(result[0]))
      self.assertEqual(400000, result[1])
      items = buildItemDict(ITEMS_11)
      capacity = 4
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(4, len(result[0]))
      self.assertEqual(4, result[1])
      items = buildItemDict(ITEMS_12)
      capacity = 45
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(4, len(result[0]))
      self.assertEqual(40, result[1])
      items = buildItemDict(ITEMS_13)
      capacity = 450
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(4, len(result[0]))
      self.assertEqual(400, result[1])
      items = buildItemDict(ITEMS_14)
      capacity = 4500
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(4, len(result[0]))
      self.assertEqual(4000, result[1])

   def testBestFit_008(self):
      """
      Test bestFit() behavior for items dictionary where only 75% of items fit.
      """
      items = buildItemDict(ITEMS_05)
      capacity = 6
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(6, len(result[0]))
      self.assertEqual(6, result[1])
      items = buildItemDict(ITEMS_06)
      capacity = 65
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(6, len(result[0]))
      self.assertEqual(60, result[1])
      items = buildItemDict(ITEMS_07)
      capacity = 650
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(6, len(result[0]))
      self.assertEqual(600, result[1])
      items = buildItemDict(ITEMS_08)
      capacity = 6500
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(6, len(result[0]))
      self.assertEqual(6000, result[1])
      items = buildItemDict(ITEMS_09)
      capacity = 65000
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(6, len(result[0]))
      self.assertEqual(60000, result[1])
      items = buildItemDict(ITEMS_10)
      capacity = 650000
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(6, len(result[0]))
      self.assertEqual(600000, result[1])
      items = buildItemDict(ITEMS_15)
      capacity = 7
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(6, len(result[0]))
      self.assertEqual(6, result[1])
      items = buildItemDict(ITEMS_16)
      capacity = 65
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(6, len(result[0]))
      self.assertEqual(60, result[1])
      items = buildItemDict(ITEMS_17)
      capacity = 650
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(6, len(result[0]))
      self.assertEqual(600, result[1])
      items = buildItemDict(ITEMS_18)
      capacity = 6500
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(6, len(result[0]))
      self.assertEqual(6000, result[1])

   def testBestFit_009(self):
      """
      Test bestFit() behavior for items dictionary where all items individually exceed the capacity.
      """
      items = buildItemDict(ITEMS_06)
      capacity = 9
      result = bestFit(items, capacity)
      self.assertEqual(([], 0), result)
      items = buildItemDict(ITEMS_07)
      capacity = 99
      result = bestFit(items, capacity)
      self.assertEqual(([], 0), result)
      items = buildItemDict(ITEMS_08)
      capacity = 999
      result = bestFit(items, capacity)
      self.assertEqual(([], 0), result)
      items = buildItemDict(ITEMS_09)
      capacity = 9999
      result = bestFit(items, capacity)
      self.assertEqual(([], 0), result)
      items = buildItemDict(ITEMS_10)
      capacity = 99999
      result = bestFit(items, capacity)
      self.assertEqual(([], 0), result)

   def testBestFit_010(self):
      """
      Test bestFit() behavior for items dictionary where first half of items individually exceed capacity and remainder fit.
      """
      items = buildItemDict(ITEMS_04)
      capacity = 200
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(4, len(result[0]))
      self.assertEqual(111, result[1])

   def testBestFit_011(self):
      """
      Test bestFit() behavior for items dictionary where middle half of items individually exceed capacity and remainder fit.
      """
      items = buildItemDict(ITEMS_11)
      capacity = 5
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(4, len(result[0]))
      self.assertEqual(4, result[1])
      items = buildItemDict(ITEMS_12)
      capacity = 50
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(4, len(result[0]))
      self.assertEqual(40, result[1])
      items = buildItemDict(ITEMS_13)
      capacity = 500
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(4, len(result[0]))
      self.assertEqual(400, result[1])
      items = buildItemDict(ITEMS_14)
      capacity = 5000
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(4, len(result[0]))
      self.assertEqual(4000, result[1])

   def testBestFit_012(self):
      """
      Test bestFit() behavior for items dictionary where second half of items individually exceed capacity and remainder fit.
      """
      items = buildItemDict(ITEMS_03)
      capacity = 200
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(4, len(result[0]))
      self.assertEqual(111, result[1])

   def testBestFit_013(self):
      """
      Test bestFit() behavior for items dictionary where first half of items individually exceed capacity and only some of remainder fit.
      """
      items = buildItemDict(ITEMS_04)
      capacity = 50
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertTrue(len(result[0]) < 4, "%s < 4" % len(result[0]))

   def testBestFit_014(self):
      """
      Test bestFit() behavior for items dictionary where middle half of items individually exceed capacity and only some of remainder fit.
      """
      items = buildItemDict(ITEMS_11)
      capacity = 3
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertTrue(len(result[0]) < 4, "%s < 4" % len(result[0]))
      items = buildItemDict(ITEMS_12)
      capacity = 35
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertTrue(len(result[0]) < 4, "%s < 4" % len(result[0]))
      items = buildItemDict(ITEMS_13)
      capacity = 350
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertTrue(len(result[0]) < 4, "%s < 4" % len(result[0]))
      items = buildItemDict(ITEMS_14)
      capacity = 3500
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertTrue(len(result[0]) < 4, "%s < 4" % len(result[0]))

   def testBestFit_015(self):
      """
      Test bestFit() behavior for items dictionary where second half of items individually exceed capacity and only some of remainder fit.
      """
      items = buildItemDict(ITEMS_03)
      capacity = 50
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertTrue(len(result[0]) < 4, "%s < 4" % len(result[0]))

   def testBestFit_016(self):
      """
      Test bestFit() behavior for items dictionary where all items fit.
      """
      items = buildItemDict(ITEMS_02)
      capacity = 1000000
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(8, len(result[0]))
      self.assertEqual(0, result[1])
      items = buildItemDict(ITEMS_03)
      capacity = 2000000
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(8, len(result[0]))
      self.assertEqual(1111111, result[1])
      items = buildItemDict(ITEMS_04)
      capacity = 2000000
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(8, len(result[0]))
      self.assertEqual(1111111, result[1])
      items = buildItemDict(ITEMS_05)
      capacity = 1000000
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(8, len(result[0]))
      self.assertEqual(8, result[1])
      items = buildItemDict(ITEMS_06)
      capacity = 1000000
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(8, len(result[0]))
      self.assertEqual(80, result[1])
      items = buildItemDict(ITEMS_07)
      capacity = 1000000
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(8, len(result[0]))
      self.assertEqual(800, result[1])
      items = buildItemDict(ITEMS_08)
      capacity = 1000000
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(8, len(result[0]))
      self.assertEqual(8000, result[1])
      items = buildItemDict(ITEMS_09)
      capacity = 1000000
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(8, len(result[0]))
      self.assertEqual(80000, result[1])
      items = buildItemDict(ITEMS_10)
      capacity = 1000000
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(8, len(result[0]))
      self.assertEqual(800000, result[1])
      items = buildItemDict(ITEMS_11)
      capacity = 1000000
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(8, len(result[0]))
      self.assertEqual(400004, result[1])
      items = buildItemDict(ITEMS_12)
      capacity = 1000000
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(8, len(result[0]))
      self.assertEqual(400040, result[1])
      items = buildItemDict(ITEMS_13)
      capacity = 1000000
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(8, len(result[0]))
      self.assertEqual(400400, result[1])
      items = buildItemDict(ITEMS_14)
      capacity = 1000000
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(8, len(result[0]))
      self.assertEqual(404000, result[1])
      items = buildItemDict(ITEMS_15)
      capacity = 1000000
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(8, len(result[0]))
      self.assertEqual(200006, result[1])
      items = buildItemDict(ITEMS_16)
      capacity = 1000000
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(8, len(result[0]))
      self.assertEqual(200060, result[1])
      items = buildItemDict(ITEMS_17)
      capacity = 1000000
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(8, len(result[0]))
      self.assertEqual(200600, result[1])
      items = buildItemDict(ITEMS_18)
      capacity = 1000000
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(8, len(result[0]))
      self.assertEqual(206000, result[1])

   def testBestFit_017(self):
      """
      Test bestFit() behavior for a
      more realistic set of items
      """
      items = buildItemDict(ITEMS_19)
      capacity = 760
      result = bestFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(5, len(result[0]))
      self.assertEqual(753, result[1])
      self.assertTrue('dir001/file001' in result[0])
      self.assertTrue('dir001/file002' in result[0])
      self.assertTrue('file002' in result[0])
      self.assertTrue('link001' in result[0])
      self.assertTrue('link002' in result[0])


   ################################
   # Tests for worstFit() function
   ################################

   def testWorstFit_001(self):
      """
      Test worstFit() behavior for an empty items dictionary, zero capacity.
      """
      items = buildItemDict(ITEMS_01)
      capacity = 0
      result = worstFit(items, capacity)
      self.assertEqual(([], 0), result)

   def testWorstFit_002(self):
      """
      Test worstFit() behavior for an empty items dictionary, non-zero capacity.
      """
      items = buildItemDict(ITEMS_01)
      capacity = 10000
      result = worstFit(items, capacity)
      self.assertEqual(([], 0), result)

   def testWorstFit_003(self):
      """
      Test worstFit() behavior for a non-empty items dictionary, zero capacity.
      """
      items = buildItemDict(ITEMS_03)
      capacity = 0
      result = worstFit(items, capacity)
      self.assertEqual(([], 0), result)
      items = buildItemDict(ITEMS_04)
      capacity = 0
      result = worstFit(items, capacity)
      self.assertEqual(([], 0), result)
      items = buildItemDict(ITEMS_13)
      capacity = 0
      result = worstFit(items, capacity)
      self.assertEqual(([], 0), result)

   def testWorstFit_004(self):
      """
      Test worstFit() behavior for non-empty items dictionary with zero-sized items, zero capacity.
      """
      items = buildItemDict(ITEMS_03)
      capacity = 0
      result = worstFit(items, capacity)
      self.assertEqual(([], 0), result)

   def testWorstFit_005(self):
      """
      Test worstFit() behavior for items dictionary where only one item fits.
      """
      items = buildItemDict(ITEMS_05)
      capacity = 1
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(1, len(result[0]))
      self.assertEqual(1, result[1])
      items = buildItemDict(ITEMS_06)
      capacity = 10
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(1, len(result[0]))
      self.assertEqual(10, result[1])
      items = buildItemDict(ITEMS_07)
      capacity = 100
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(1, len(result[0]))
      self.assertEqual(100, result[1])
      items = buildItemDict(ITEMS_08)
      capacity = 1000
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(1, len(result[0]))
      self.assertEqual(1000, result[1])
      items = buildItemDict(ITEMS_09)
      capacity = 10000
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(1, len(result[0]))
      self.assertEqual(10000, result[1])
      items = buildItemDict(ITEMS_10)
      capacity = 100000
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(1, len(result[0]))
      self.assertEqual(100000, result[1])

   def testWorstFit_006(self):
      """
      Test worstFit() behavior for items dictionary where only 25% of items fit.
      """
      items = buildItemDict(ITEMS_05)
      capacity = 2
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(2, len(result[0]))
      self.assertEqual(2, result[1])
      items = buildItemDict(ITEMS_06)
      capacity = 25
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(2, len(result[0]))
      self.assertEqual(20, result[1])
      items = buildItemDict(ITEMS_07)
      capacity = 250
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(2, len(result[0]))
      self.assertEqual(200, result[1])
      items = buildItemDict(ITEMS_08)
      capacity = 2500
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(2, len(result[0]))
      self.assertEqual(2000, result[1])
      items = buildItemDict(ITEMS_09)
      capacity = 25000
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(2, len(result[0]))
      self.assertEqual(20000, result[1])
      items = buildItemDict(ITEMS_10)
      capacity = 250000
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(2, len(result[0]))
      self.assertEqual(200000, result[1])
      items = buildItemDict(ITEMS_11)
      capacity = 2
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(2, len(result[0]))
      self.assertEqual(2, result[1])
      items = buildItemDict(ITEMS_12)
      capacity = 25
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(2, len(result[0]))
      self.assertEqual(20, result[1])
      items = buildItemDict(ITEMS_13)
      capacity = 250
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(2, len(result[0]))
      self.assertEqual(200, result[1])
      items = buildItemDict(ITEMS_14)
      capacity = 2500
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(2, len(result[0]))
      self.assertEqual(2000, result[1])

   def testWorstFit_007(self):
      """
      Test worstFit() behavior for items dictionary where only 50% of items fit.
      """
      items = buildItemDict(ITEMS_05)
      capacity = 4
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(4, len(result[0]))
      self.assertEqual(4, result[1])
      items = buildItemDict(ITEMS_06)
      capacity = 45
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(4, len(result[0]))
      self.assertEqual(40, result[1])
      items = buildItemDict(ITEMS_07)
      capacity = 450
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(4, len(result[0]))
      self.assertEqual(400, result[1])
      items = buildItemDict(ITEMS_08)
      capacity = 4500
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(4, len(result[0]))
      self.assertEqual(4000, result[1])
      items = buildItemDict(ITEMS_09)
      capacity = 45000
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(4, len(result[0]))
      self.assertEqual(40000, result[1])
      items = buildItemDict(ITEMS_10)
      capacity = 450000
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(4, len(result[0]))
      self.assertEqual(400000, result[1])
      items = buildItemDict(ITEMS_11)
      capacity = 4
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(4, len(result[0]))
      self.assertEqual(4, result[1])
      items = buildItemDict(ITEMS_12)
      capacity = 45
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(4, len(result[0]))
      self.assertEqual(40, result[1])
      items = buildItemDict(ITEMS_13)
      capacity = 450
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(4, len(result[0]))
      self.assertEqual(400, result[1])
      items = buildItemDict(ITEMS_14)
      capacity = 4500
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(4, len(result[0]))
      self.assertEqual(4000, result[1])

   def testWorstFit_008(self):
      """
      Test worstFit() behavior for items dictionary where only 75% of items fit.
      """
      items = buildItemDict(ITEMS_05)
      capacity = 6
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(6, len(result[0]))
      self.assertEqual(6, result[1])
      items = buildItemDict(ITEMS_06)
      capacity = 65
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(6, len(result[0]))
      self.assertEqual(60, result[1])
      items = buildItemDict(ITEMS_07)
      capacity = 650
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(6, len(result[0]))
      self.assertEqual(600, result[1])
      items = buildItemDict(ITEMS_08)
      capacity = 6500
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(6, len(result[0]))
      self.assertEqual(6000, result[1])
      items = buildItemDict(ITEMS_09)
      capacity = 65000
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(6, len(result[0]))
      self.assertEqual(60000, result[1])
      items = buildItemDict(ITEMS_10)
      capacity = 650000
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(6, len(result[0]))
      self.assertEqual(600000, result[1])
      items = buildItemDict(ITEMS_15)
      capacity = 7
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(6, len(result[0]))
      self.assertEqual(6, result[1])
      items = buildItemDict(ITEMS_16)
      capacity = 65
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(6, len(result[0]))
      self.assertEqual(60, result[1])
      items = buildItemDict(ITEMS_17)
      capacity = 650
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(6, len(result[0]))
      self.assertEqual(600, result[1])
      items = buildItemDict(ITEMS_18)
      capacity = 6500
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(6, len(result[0]))
      self.assertEqual(6000, result[1])

   def testWorstFit_009(self):
      """
      Test worstFit() behavior for items dictionary where all items individually exceed the capacity.
      """
      items = buildItemDict(ITEMS_06)
      capacity = 9
      result = worstFit(items, capacity)
      self.assertEqual(([], 0), result)
      items = buildItemDict(ITEMS_07)
      capacity = 99
      result = worstFit(items, capacity)
      self.assertEqual(([], 0), result)
      items = buildItemDict(ITEMS_08)
      capacity = 999
      result = worstFit(items, capacity)
      self.assertEqual(([], 0), result)
      items = buildItemDict(ITEMS_09)
      capacity = 9999
      result = worstFit(items, capacity)
      self.assertEqual(([], 0), result)
      items = buildItemDict(ITEMS_10)
      capacity = 99999
      result = worstFit(items, capacity)
      self.assertEqual(([], 0), result)

   def testWorstFit_010(self):
      """
      Test worstFit() behavior for items dictionary where first half of items individually exceed capacity and remainder fit.
      """
      items = buildItemDict(ITEMS_04)
      capacity = 200
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(4, len(result[0]))
      self.assertEqual(111, result[1])

   def testWorstFit_011(self):
      """
      Test worstFit() behavior for items dictionary where middle half of items individually exceed capacity and remainder fit.
      """
      items = buildItemDict(ITEMS_11)
      capacity = 5
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(4, len(result[0]))
      self.assertEqual(4, result[1])
      items = buildItemDict(ITEMS_12)
      capacity = 50
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(4, len(result[0]))
      self.assertEqual(40, result[1])
      items = buildItemDict(ITEMS_13)
      capacity = 500
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(4, len(result[0]))
      self.assertEqual(400, result[1])
      items = buildItemDict(ITEMS_14)
      capacity = 5000
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(4, len(result[0]))
      self.assertEqual(4000, result[1])

   def testWorstFit_012(self):
      """
      Test worstFit() behavior for items dictionary where second half of items individually exceed capacity and remainder fit.
      """
      items = buildItemDict(ITEMS_03)
      capacity = 200
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(4, len(result[0]))
      self.assertEqual(111, result[1])

   def testWorstFit_013(self):
      """
      Test worstFit() behavior for items dictionary where first half of items individually exceed capacity and only some of remainder fit.
      """
      items = buildItemDict(ITEMS_04)
      capacity = 50
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertTrue(len(result[0]) < 4, "%s < 4" % len(result[0]))

   def testWorstFit_014(self):
      """
      Test worstFit() behavior for items dictionary where middle half of items individually exceed capacity and only some of remainder fit.
      """
      items = buildItemDict(ITEMS_11)
      capacity = 3
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertTrue(len(result[0]) < 4, "%s < 4" % len(result[0]))
      items = buildItemDict(ITEMS_12)
      capacity = 35
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertTrue(len(result[0]) < 4, "%s < 4" % len(result[0]))
      items = buildItemDict(ITEMS_13)
      capacity = 350
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertTrue(len(result[0]) < 4, "%s < 4" % len(result[0]))
      items = buildItemDict(ITEMS_14)
      capacity = 3500
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertTrue(len(result[0]) < 4, "%s < 4" % len(result[0]))

   def testWorstFit_015(self):
      """
      Test worstFit() behavior for items dictionary where second half of items individually exceed capacity and only some of remainder fit.
      """
      items = buildItemDict(ITEMS_03)
      capacity = 50
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertTrue(len(result[0]) < 4, "%s < 4" % len(result[0]))

   def testWorstFit_016(self):
      """
      Test worstFit() behavior for items dictionary where all items fit.
      """
      items = buildItemDict(ITEMS_02)
      capacity = 1000000
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(8, len(result[0]))
      self.assertEqual(0, result[1])
      items = buildItemDict(ITEMS_03)
      capacity = 2000000
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(8, len(result[0]))
      self.assertEqual(1111111, result[1])
      items = buildItemDict(ITEMS_04)
      capacity = 2000000
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(8, len(result[0]))
      self.assertEqual(1111111, result[1])
      items = buildItemDict(ITEMS_05)
      capacity = 1000000
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(8, len(result[0]))
      self.assertEqual(8, result[1])
      items = buildItemDict(ITEMS_06)
      capacity = 1000000
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(8, len(result[0]))
      self.assertEqual(80, result[1])
      items = buildItemDict(ITEMS_07)
      capacity = 1000000
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(8, len(result[0]))
      self.assertEqual(800, result[1])
      items = buildItemDict(ITEMS_08)
      capacity = 1000000
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(8, len(result[0]))
      self.assertEqual(8000, result[1])
      items = buildItemDict(ITEMS_09)
      capacity = 1000000
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(8, len(result[0]))
      self.assertEqual(80000, result[1])
      items = buildItemDict(ITEMS_10)
      capacity = 1000000
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(8, len(result[0]))
      self.assertEqual(800000, result[1])
      items = buildItemDict(ITEMS_11)
      capacity = 1000000
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(8, len(result[0]))
      self.assertEqual(400004, result[1])
      items = buildItemDict(ITEMS_12)
      capacity = 1000000
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(8, len(result[0]))
      self.assertEqual(400040, result[1])
      items = buildItemDict(ITEMS_13)
      capacity = 1000000
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(8, len(result[0]))
      self.assertEqual(400400, result[1])
      items = buildItemDict(ITEMS_14)
      capacity = 1000000
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(8, len(result[0]))
      self.assertEqual(404000, result[1])
      items = buildItemDict(ITEMS_15)
      capacity = 1000000
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(8, len(result[0]))
      self.assertEqual(200006, result[1])
      items = buildItemDict(ITEMS_16)
      capacity = 1000000
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(8, len(result[0]))
      self.assertEqual(200060, result[1])
      items = buildItemDict(ITEMS_17)
      capacity = 1000000
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(8, len(result[0]))
      self.assertEqual(200600, result[1])
      items = buildItemDict(ITEMS_18)
      capacity = 1000000
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(8, len(result[0]))
      self.assertEqual(206000, result[1])

   def testWorstFit_017(self):
      """
      Test worstFit() behavior for a more realistic set of items
      """
      items = buildItemDict(ITEMS_19)
      capacity = 760
      result = worstFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(6, len(result[0]))
      self.assertEqual(605, result[1])
      self.assertTrue('dir002/file001' in result[0])
      self.assertTrue('dir002/file002' in result[0])
      self.assertTrue('file001' in result[0])
      self.assertTrue('file002' in result[0])
      self.assertTrue('link001' in result[0])
      self.assertTrue('link002' in result[0])


   ####################################
   # Tests for alternateFit() function
   ####################################

   def testAlternateFit_001(self):
      """
      Test alternateFit() behavior for an empty items dictionary, zero capacity.
      """
      items = buildItemDict(ITEMS_01)
      capacity = 0
      result = alternateFit(items, capacity)
      self.assertEqual(([], 0), result)

   def testAlternateFit_002(self):
      """
      Test alternateFit() behavior for an empty items dictionary, non-zero capacity.
      """
      items = buildItemDict(ITEMS_01)
      capacity = 10000
      result = alternateFit(items, capacity)
      self.assertEqual(([], 0), result)

   def testAlternateFit_003(self):
      """
      Test alternateFit() behavior for a non-empty items dictionary, zero capacity.
      """
      items = buildItemDict(ITEMS_03)
      capacity = 0
      result = alternateFit(items, capacity)
      self.assertEqual(([], 0), result)
      items = buildItemDict(ITEMS_04)
      capacity = 0
      result = alternateFit(items, capacity)
      self.assertEqual(([], 0), result)
      items = buildItemDict(ITEMS_13)
      capacity = 0
      result = alternateFit(items, capacity)
      self.assertEqual(([], 0), result)

   def testAlternateFit_004(self):
      """
      Test alternateFit() behavior for non-empty items dictionary with zero-sized items, zero capacity.
      """
      items = buildItemDict(ITEMS_03)
      capacity = 0
      result = alternateFit(items, capacity)
      self.assertEqual(([], 0), result)

   def testAlternateFit_005(self):
      """
      Test alternateFit() behavior for items dictionary where only one item fits.
""" items = buildItemDict(ITEMS_05) capacity = 1 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(1, len(result[0])) self.assertEqual(1, result[1]) items = buildItemDict(ITEMS_06) capacity = 10 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(1, len(result[0])) self.assertEqual(10, result[1]) items = buildItemDict(ITEMS_07) capacity = 100 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(1, len(result[0])) self.assertEqual(100, result[1]) items = buildItemDict(ITEMS_08) capacity = 1000 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(1, len(result[0])) self.assertEqual(1000, result[1]) items = buildItemDict(ITEMS_09) capacity = 10000 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(1, len(result[0])) self.assertEqual(10000, result[1]) items = buildItemDict(ITEMS_10) capacity = 100000 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(1, len(result[0])) self.assertEqual(100000, result[1]) def testAlternateFit_006(self): """ Test alternateFit() behavior for items dictionary where only 25% of items fit. 
""" items = buildItemDict(ITEMS_05) capacity = 2 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(2, len(result[0])) self.assertEqual(2, result[1]) items = buildItemDict(ITEMS_06) capacity = 25 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(2, len(result[0])) self.assertEqual(20, result[1]) items = buildItemDict(ITEMS_07) capacity = 250 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(2, len(result[0])) self.assertEqual(200, result[1]) items = buildItemDict(ITEMS_08) capacity = 2500 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(2, len(result[0])) self.assertEqual(2000, result[1]) items = buildItemDict(ITEMS_09) capacity = 25000 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(2, len(result[0])) self.assertEqual(20000, result[1]) items = buildItemDict(ITEMS_10) capacity = 250000 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(2, len(result[0])) self.assertEqual(200000, result[1]) items = buildItemDict(ITEMS_11) capacity = 2 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(2, len(result[0])) self.assertEqual(2, result[1]) items = buildItemDict(ITEMS_12) capacity = 25 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(2, len(result[0])) self.assertEqual(20, result[1]) items = buildItemDict(ITEMS_13) capacity = 250 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], 
capacity)) self.assertEqual(2, len(result[0])) self.assertEqual(200, result[1]) items = buildItemDict(ITEMS_14) capacity = 2500 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(2, len(result[0])) self.assertEqual(2000, result[1]) def testAlternateFit_007(self): """ Test alternateFit() behavior for items dictionary where only 50% of items fit. """ items = buildItemDict(ITEMS_05) capacity = 4 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(4, len(result[0])) self.assertEqual(4, result[1]) items = buildItemDict(ITEMS_06) capacity = 45 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(4, len(result[0])) self.assertEqual(40, result[1]) items = buildItemDict(ITEMS_07) capacity = 450 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(4, len(result[0])) self.assertEqual(400, result[1]) items = buildItemDict(ITEMS_08) capacity = 4500 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(4, len(result[0])) self.assertEqual(4000, result[1]) items = buildItemDict(ITEMS_09) capacity = 45000 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(4, len(result[0])) self.assertEqual(40000, result[1]) items = buildItemDict(ITEMS_10) capacity = 450000 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(4, len(result[0])) self.assertEqual(400000, result[1]) items = buildItemDict(ITEMS_11) capacity = 4 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(4, 
len(result[0])) self.assertEqual(4, result[1]) items = buildItemDict(ITEMS_12) capacity = 45 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(4, len(result[0])) self.assertEqual(40, result[1]) items = buildItemDict(ITEMS_13) capacity = 450 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(4, len(result[0])) self.assertEqual(400, result[1]) items = buildItemDict(ITEMS_14) capacity = 4500 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(4, len(result[0])) self.assertEqual(4000, result[1]) def testAlternateFit_008(self): """ Test alternateFit() behavior for items dictionary where only 75% of items fit. """ items = buildItemDict(ITEMS_05) capacity = 6 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(6, len(result[0])) self.assertEqual(6, result[1]) items = buildItemDict(ITEMS_06) capacity = 65 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(6, len(result[0])) self.assertEqual(60, result[1]) items = buildItemDict(ITEMS_07) capacity = 650 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(6, len(result[0])) self.assertEqual(600, result[1]) items = buildItemDict(ITEMS_08) capacity = 6500 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(6, len(result[0])) self.assertEqual(6000, result[1]) items = buildItemDict(ITEMS_09) capacity = 65000 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(6, len(result[0])) self.assertEqual(60000, 
result[1]) items = buildItemDict(ITEMS_10) capacity = 650000 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(6, len(result[0])) self.assertEqual(600000, result[1]) items = buildItemDict(ITEMS_15) capacity = 7 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(6, len(result[0])) self.assertEqual(6, result[1]) items = buildItemDict(ITEMS_16) capacity = 65 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(6, len(result[0])) self.assertEqual(60, result[1]) items = buildItemDict(ITEMS_17) capacity = 650 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(6, len(result[0])) self.assertEqual(600, result[1]) items = buildItemDict(ITEMS_18) capacity = 6500 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(6, len(result[0])) self.assertEqual(6000, result[1]) def testAlternateFit_009(self): """ Test alternateFit() behavior for items dictionary where all items individually exceed the capacity. 
""" items = buildItemDict(ITEMS_06) capacity = 9 result = alternateFit(items, capacity) self.assertEqual(([], 0), result) items = buildItemDict(ITEMS_07) capacity = 99 result = alternateFit(items, capacity) self.assertEqual(([], 0), result) items = buildItemDict(ITEMS_08) capacity = 999 result = alternateFit(items, capacity) self.assertEqual(([], 0), result) items = buildItemDict(ITEMS_09) capacity = 9999 result = alternateFit(items, capacity) self.assertEqual(([], 0), result) items = buildItemDict(ITEMS_10) capacity = 99999 result = alternateFit(items, capacity) self.assertEqual(([], 0), result) def testAlternateFit_010(self): """ Test alternateFit() behavior for items dictionary where first half of items individually exceed capacity and remainder fit. """ items = buildItemDict(ITEMS_04) capacity = 200 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(4, len(result[0])) self.assertEqual(111, result[1]) def testAlternateFit_011(self): """ Test alternateFit() behavior for items dictionary where middle half of items individually exceed capacity and remainder fit. 
""" items = buildItemDict(ITEMS_11) capacity = 5 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(4, len(result[0])) self.assertEqual(4, result[1]) items = buildItemDict(ITEMS_12) capacity = 50 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(4, len(result[0])) self.assertEqual(40, result[1]) items = buildItemDict(ITEMS_13) capacity = 500 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(4, len(result[0])) self.assertEqual(400, result[1]) items = buildItemDict(ITEMS_14) capacity = 5000 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(4, len(result[0])) self.assertEqual(4000, result[1]) def testAlternateFit_012(self): """ Test alternateFit() behavior for items dictionary where second half of items individually exceed capacity and remainder fit. """ items = buildItemDict(ITEMS_03) capacity = 200 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(4, len(result[0])) self.assertEqual(111, result[1]) def testAlternateFit_013(self): """ Test alternateFit() behavior for items dictionary where first half of items individually exceed capacity and only some of remainder fit. """ items = buildItemDict(ITEMS_04) capacity = 50 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertTrue(len(result[0]) < 4, "%s < 4" % len(result[0])) def testAlternateFit_014(self): """ Test alternateFit() behavior for items dictionary where middle half of items individually exceed capacity and only some of remainder fit. 
""" items = buildItemDict(ITEMS_11) capacity = 3 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertTrue(len(result[0]) < 4, "%s < 4" % len(result[0])) items = buildItemDict(ITEMS_12) capacity = 35 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertTrue(len(result[0]) < 4, "%s < 4" % len(result[0])) items = buildItemDict(ITEMS_13) capacity = 350 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertTrue(len(result[0]) < 4, "%s < 4" % len(result[0])) items = buildItemDict(ITEMS_14) capacity = 3500 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertTrue(len(result[0]) < 4, "%s < 4" % len(result[0])) def testAlternateFit_015(self): """ Test alternateFit() behavior for items dictionary where second half of items individually exceed capacity and only some of remainder fit. """ items = buildItemDict(ITEMS_03) capacity = 50 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertTrue(len(result[0]) < 4, "%s < 4" % len(result[0])) def testAlternateFit_016(self): """ Test alternateFit() behavior for items dictionary where all items fit. 
""" items = buildItemDict(ITEMS_02) capacity = 1000000 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(8, len(result[0])) self.assertEqual(0, result[1]) items = buildItemDict(ITEMS_03) capacity = 2000000 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(8, len(result[0])) self.assertEqual(1111111, result[1]) items = buildItemDict(ITEMS_04) capacity = 2000000 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(8, len(result[0])) self.assertEqual(1111111, result[1]) items = buildItemDict(ITEMS_05) capacity = 1000000 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(8, len(result[0])) self.assertEqual(8, result[1]) items = buildItemDict(ITEMS_06) capacity = 1000000 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(8, len(result[0])) self.assertEqual(80, result[1]) items = buildItemDict(ITEMS_07) capacity = 1000000 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(8, len(result[0])) self.assertEqual(800, result[1]) items = buildItemDict(ITEMS_08) capacity = 1000000 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(8, len(result[0])) self.assertEqual(8000, result[1]) items = buildItemDict(ITEMS_09) capacity = 1000000 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(8, len(result[0])) self.assertEqual(80000, result[1]) items = buildItemDict(ITEMS_10) capacity = 1000000 result = alternateFit(items, capacity) self.assertTrue(result[1] 
<= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(8, len(result[0])) self.assertEqual(800000, result[1]) items = buildItemDict(ITEMS_11) capacity = 1000000 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(8, len(result[0])) self.assertEqual(400004, result[1]) items = buildItemDict(ITEMS_12) capacity = 1000000 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(8, len(result[0])) self.assertEqual(400040, result[1]) items = buildItemDict(ITEMS_13) capacity = 1000000 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(8, len(result[0])) self.assertEqual(400400, result[1]) items = buildItemDict(ITEMS_14) capacity = 1000000 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(8, len(result[0])) self.assertEqual(404000, result[1]) items = buildItemDict(ITEMS_15) capacity = 1000000 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(8, len(result[0])) self.assertEqual(200006, result[1]) items = buildItemDict(ITEMS_16) capacity = 1000000 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(8, len(result[0])) self.assertEqual(200060, result[1]) items = buildItemDict(ITEMS_17) capacity = 1000000 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(8, len(result[0])) self.assertEqual(200600, result[1]) items = buildItemDict(ITEMS_18) capacity = 1000000 result = alternateFit(items, capacity) self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.assertEqual(8, len(result[0])) 
      self.assertEqual(206000, result[1])

   def testAlternateFit_017(self):
      """
      Test alternateFit() behavior for a more realistic set of items.
      """
      items = buildItemDict(ITEMS_19)
      capacity = 760
      result = alternateFit(items, capacity)
      self.assertTrue(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.assertEqual(6, len(result[0]))
      self.assertEqual(719, result[1])
      self.assertTrue('link001' in result[0])
      self.assertTrue('dir001/file002' in result[0])
      self.assertTrue('link002' in result[0])
      self.assertTrue('dir001/file001' in result[0])
      self.assertTrue('dir002/file002' in result[0])
      self.assertTrue('dir002/file001' in result[0])


#######################################################################
# Suite definition
#######################################################################

def suite():
   """Returns a suite containing all the test cases in this module."""
   tests = [ ]
   tests.append(unittest.makeSuite(TestKnapsack, 'test'))
   return unittest.TestSuite(tests)

CedarBackup3-3.1.6/testcase/encrypttests.py

# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Copyright (c) 2007,2010,2015 Kenneth J. Pronovici.
# All rights reserved.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
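Every knapsack test above asserts the same contract: the fit function takes a dictionary of items and a capacity, and returns a tuple `(chosen, used)`, where `chosen` lists the selected item keys and `used` never exceeds the capacity. As an illustrative sketch only (this is not Cedar Backup's implementation, the function name is invented, and for simplicity items map names directly to integer sizes rather than to the objects built by `buildItemDict`), a greedy smallest-first fit with that shape could look like:

```python
def smallestFirstFit(items, capacity):
   """Greedy knapsack sketch: items maps name -> size; returns (names, used) with used <= capacity."""
   chosen = []
   used = 0
   # Consider the smallest items first, so as many items as possible are packed.
   for name, size in sorted(items.items(), key=lambda pair: pair[1]):
      if used + size <= capacity:
         chosen.append(name)
         used += size
   return (chosen, used)
```

With this shape, the invariant `result[1] <= capacity` checked throughout the tests above holds by construction, and the empty-dictionary and zero-capacity cases both degenerate to `([], 0)`.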
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Language : Python 3 (>= 3.4)
# Project  : Cedar Backup, release 3
# Purpose  : Tests encrypt extension functionality.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Unit tests for CedarBackup3/extend/encrypt.py.

Code Coverage
=============

   This module contains individual tests for the public classes implemented in
   extend/encrypt.py.  There are also tests for some of the private functions.

Naming Conventions
==================

   I prefer to avoid large unit tests which validate more than one piece of
   functionality, and I prefer to avoid using overly descriptive (read: long)
   test names, as well.  Instead, I use lots of very small tests that each
   validate one specific thing.  These small tests are then named with an index
   number, yielding something like C{testAddDir_001} or C{testValidate_010}.
   Each method has a docstring describing what it's supposed to accomplish.  I
   feel that this makes it easier to judge how important a given failure is,
   and also makes it somewhat easier to diagnose and fix individual problems.

Testing XML Extraction
======================

   It's difficult to validate that generated XML is exactly "right",
   especially when dealing with pretty-printed XML.  We can't just provide a
   constant string and say "the result must match this".  Instead, what we do
   is extract a node, build some XML from it, and then feed that XML back into
   another object's constructor.  If that parse process succeeds and the old
   object is equal to the new object, we assume that the extract was
   successful.

   It would arguably be better if we could do a completely independent check -
   but implementing that check would be equivalent to re-implementing all of
   the existing functionality that we're validating here!  After all, the most
   important thing is that data can move seamlessly from object to XML
   document and back to object.

Full vs. Reduced Tests
======================

   Some Cedar Backup regression tests require a specialized environment in
   order to run successfully.  This environment won't necessarily be available
   on every build system out there (for instance, on a Debian autobuilder).
   Because of this, the default behavior is to run a "reduced feature set"
   test suite that has no surprising system, kernel or network requirements.
   If you want to run all of the tests, set ENCRYPTTESTS_FULL to "Y" in the
   environment.

   In this module, the primary dependency is that for some tests, GPG must
   have access to the public key EFD75934.  There is also an assumption that
   GPG does I{not} have access to a public key for anyone named "Bogus J.
   User" (so we can test failure scenarios).

@author Kenneth J. Pronovici
"""


########################################################################
# Import modules and do runtime validations
########################################################################

# System modules
import unittest
import os
import tempfile

# Cedar Backup modules
from CedarBackup3.filesystem import FilesystemList
from CedarBackup3.testutil import findResources, buildPath, removedir, extractTar, failUnlessAssignRaises
from CedarBackup3.xmlutil import createOutputDom, serializeDom
from CedarBackup3.extend.encrypt import LocalConfig, EncryptConfig
from CedarBackup3.extend.encrypt import _encryptFileWithGpg, _encryptFile, _encryptDailyDir


#######################################################################
# Module-wide configuration and constants
#######################################################################

DATA_DIRS = [ "./data", "./testcase/data", ]
RESOURCES = [ "encrypt.conf.1", "encrypt.conf.2", "tree1.tar.gz", "tree2.tar.gz",
              "tree8.tar.gz", "tree15.tar.gz", "tree16.tar.gz", "tree17.tar.gz",
              "tree18.tar.gz", "tree19.tar.gz", "tree20.tar.gz", ]

VALID_GPG_RECIPIENT = "EFD75934"
INVALID_GPG_RECIPIENT = "Bogus J. User"

INVALID_PATH = "bogus"  # This path name should never exist


#######################################################################
# Utility functions
#######################################################################

def runAllTests():
   """Returns true/false depending on whether the full test suite should be run."""
   if "ENCRYPTTESTS_FULL" in os.environ:
      return os.environ["ENCRYPTTESTS_FULL"] == "Y"
   else:
      return False


#######################################################################
# Test Case Classes
#######################################################################

##########################
# TestEncryptConfig class
##########################

class TestEncryptConfig(unittest.TestCase):

   """Tests for the EncryptConfig class."""

   ##################
   # Utility methods
   ##################

   def failUnlessAssignRaises(self, exception, obj, prop, value):
      """Equivalent of L{failUnlessRaises}, but used for property assignments instead."""
      failUnlessAssignRaises(self, exception, obj, prop, value)

   ############################
   # Test __repr__ and __str__
   ############################

   def testStringFuncs_001(self):
      """
      Just make sure that the string functions don't have errors (i.e. bad variable names).
      """
      obj = EncryptConfig()
      obj.__repr__()
      obj.__str__()

   ##################################
   # Test constructor and attributes
   ##################################

   def testConstructor_001(self):
      """
      Test constructor with no values filled in.
      """
      encrypt = EncryptConfig()
      self.assertEqual(None, encrypt.encryptMode)
      self.assertEqual(None, encrypt.encryptTarget)

   def testConstructor_002(self):
      """
      Test constructor with all values filled in, with valid values.
      """
      encrypt = EncryptConfig("gpg", "Backup User")
      self.assertEqual("gpg", encrypt.encryptMode)
      self.assertEqual("Backup User", encrypt.encryptTarget)

   def testConstructor_003(self):
      """
      Test assignment of encryptMode attribute, None value.
      """
      encrypt = EncryptConfig(encryptMode="gpg")
      self.assertEqual("gpg", encrypt.encryptMode)
      encrypt.encryptMode = None
      self.assertEqual(None, encrypt.encryptMode)

   def testConstructor_004(self):
      """
      Test assignment of encryptMode attribute, valid value.
      """
      encrypt = EncryptConfig()
      self.assertEqual(None, encrypt.encryptMode)
      encrypt.encryptMode = "gpg"
      self.assertEqual("gpg", encrypt.encryptMode)

   def testConstructor_005(self):
      """
      Test assignment of encryptMode attribute, invalid value (empty).
      """
      encrypt = EncryptConfig()
      self.assertEqual(None, encrypt.encryptMode)
      self.failUnlessAssignRaises(ValueError, encrypt, "encryptMode", "")
      self.assertEqual(None, encrypt.encryptMode)

   def testConstructor_006(self):
      """
      Test assignment of encryptTarget attribute, None value.
      """
      encrypt = EncryptConfig(encryptTarget="Backup User")
      self.assertEqual("Backup User", encrypt.encryptTarget)
      encrypt.encryptTarget = None
      self.assertEqual(None, encrypt.encryptTarget)

   def testConstructor_007(self):
      """
      Test assignment of encryptTarget attribute, valid value.
      """
      encrypt = EncryptConfig()
      self.assertEqual(None, encrypt.encryptTarget)
      encrypt.encryptTarget = "Backup User"
      self.assertEqual("Backup User", encrypt.encryptTarget)

   def testConstructor_008(self):
      """
      Test assignment of encryptTarget attribute, invalid value (empty).
      """
      encrypt = EncryptConfig()
      self.assertEqual(None, encrypt.encryptTarget)
      self.failUnlessAssignRaises(ValueError, encrypt, "encryptTarget", "")
      self.assertEqual(None, encrypt.encryptTarget)

   ############################
   # Test comparison operators
   ############################

   def testComparison_001(self):
      """
      Test comparison of two identical objects, all attributes None.
      """
      encrypt1 = EncryptConfig()
      encrypt2 = EncryptConfig()
      self.assertEqual(encrypt1, encrypt2)
      self.assertTrue(encrypt1 == encrypt2)
      self.assertTrue(not encrypt1 < encrypt2)
      self.assertTrue(encrypt1 <= encrypt2)
      self.assertTrue(not encrypt1 > encrypt2)
      self.assertTrue(encrypt1 >= encrypt2)
      self.assertTrue(not encrypt1 != encrypt2)

   def testComparison_002(self):
      """
      Test comparison of two identical objects, all attributes non-None.
      """
      encrypt1 = EncryptConfig("gpg", "Backup User")
      encrypt2 = EncryptConfig("gpg", "Backup User")
      self.assertEqual(encrypt1, encrypt2)
      self.assertTrue(encrypt1 == encrypt2)
      self.assertTrue(not encrypt1 < encrypt2)
      self.assertTrue(encrypt1 <= encrypt2)
      self.assertTrue(not encrypt1 > encrypt2)
      self.assertTrue(encrypt1 >= encrypt2)
      self.assertTrue(not encrypt1 != encrypt2)

   def testComparison_003(self):
      """
      Test comparison of two differing objects, encryptMode differs (one None).
      """
      encrypt1 = EncryptConfig()
      encrypt2 = EncryptConfig(encryptMode="gpg")
      self.assertNotEqual(encrypt1, encrypt2)
      self.assertTrue(not encrypt1 == encrypt2)
      self.assertTrue(encrypt1 < encrypt2)
      self.assertTrue(encrypt1 <= encrypt2)
      self.assertTrue(not encrypt1 > encrypt2)
      self.assertTrue(not encrypt1 >= encrypt2)
      self.assertTrue(encrypt1 != encrypt2)

   # Note: no test to check when encrypt mode differs, since only one value is allowed

   def testComparison_004(self):
      """
      Test comparison of two differing objects, encryptTarget differs (one None).
      """
      encrypt1 = EncryptConfig()
      encrypt2 = EncryptConfig(encryptTarget="Backup User")
      self.assertNotEqual(encrypt1, encrypt2)
      self.assertTrue(not encrypt1 == encrypt2)
      self.assertTrue(encrypt1 < encrypt2)
      self.assertTrue(encrypt1 <= encrypt2)
      self.assertTrue(not encrypt1 > encrypt2)
      self.assertTrue(not encrypt1 >= encrypt2)
      self.assertTrue(encrypt1 != encrypt2)

   def testComparison_005(self):
      """
      Test comparison of two differing objects, encryptTarget differs.
      """
      encrypt1 = EncryptConfig("gpg", "Another User")
      encrypt2 = EncryptConfig("gpg", "Backup User")
      self.assertNotEqual(encrypt1, encrypt2)
      self.assertTrue(not encrypt1 == encrypt2)
      self.assertTrue(encrypt1 < encrypt2)
      self.assertTrue(encrypt1 <= encrypt2)
      self.assertTrue(not encrypt1 > encrypt2)
      self.assertTrue(not encrypt1 >= encrypt2)
      self.assertTrue(encrypt1 != encrypt2)


########################
# TestLocalConfig class
########################

class TestLocalConfig(unittest.TestCase):

   """Tests for the LocalConfig class."""

   ################
   # Setup methods
   ################

   def setUp(self):
      try:
         self.resources = findResources(RESOURCES, DATA_DIRS)
      except Exception as e:
         self.fail(e)

   def tearDown(self):
      pass

   ##################
   # Utility methods
   ##################

   def failUnlessAssignRaises(self, exception, obj, prop, value):
      """Equivalent of L{failUnlessRaises}, but used for property assignments instead."""
      failUnlessAssignRaises(self, exception, obj, prop, value)

   def validateAddConfig(self, origConfig):
      """
      Validates that document dumped from C{LocalConfig.addConfig} results in
      identical object.

      We dump a document containing just the encrypt configuration, and then
      make sure that if we push that document back into the C{LocalConfig}
      object, that the resulting object matches the original.

      The C{self.failUnlessEqual} method is used for the validation, so if the
      method call returns normally, everything is OK.

      @param origConfig: Original configuration.
      """
      (xmlDom, parentNode) = createOutputDom()
      origConfig.addConfig(xmlDom, parentNode)
      xmlData = serializeDom(xmlDom)
      newConfig = LocalConfig(xmlData=xmlData, validate=False)
      self.assertEqual(origConfig, newConfig)

   ############################
   # Test __repr__ and __str__
   ############################

   def testStringFuncs_001(self):
      """
      Just make sure that the string functions don't have errors (i.e. bad variable names).
      """
      obj = LocalConfig()
      obj.__repr__()
      obj.__str__()

   #####################################################
   # Test basic constructor and attribute functionality
   #####################################################

   def testConstructor_001(self):
      """
      Test empty constructor, validate=False.
      """
      config = LocalConfig(validate=False)
      self.assertEqual(None, config.encrypt)

   def testConstructor_002(self):
      """
      Test empty constructor, validate=True.
      """
      config = LocalConfig(validate=True)
      self.assertEqual(None, config.encrypt)

   def testConstructor_003(self):
      """
      Test with empty config document as both data and file, validate=False.
      """
      path = self.resources["encrypt.conf.1"]
      with open(path) as f:
         contents = f.read()
      self.assertRaises(ValueError, LocalConfig, xmlData=contents, xmlPath=path, validate=False)

   def testConstructor_004(self):
      """
      Test assignment of encrypt attribute, None value.
      """
      config = LocalConfig()
      config.encrypt = None
      self.assertEqual(None, config.encrypt)

   def testConstructor_005(self):
      """
      Test assignment of encrypt attribute, valid value.
      """
      config = LocalConfig()
      config.encrypt = EncryptConfig()
      self.assertEqual(EncryptConfig(), config.encrypt)

   def testConstructor_006(self):
      """
      Test assignment of encrypt attribute, invalid value (not EncryptConfig).
      """
      config = LocalConfig()
      self.failUnlessAssignRaises(ValueError, config, "encrypt", "STRING!")

   ############################
   # Test comparison operators
   ############################

   def testComparison_001(self):
      """
      Test comparison of two identical objects, all attributes None.
      """
      config1 = LocalConfig()
      config2 = LocalConfig()
      self.assertEqual(config1, config2)
      self.assertTrue(config1 == config2)
      self.assertTrue(not config1 < config2)
      self.assertTrue(config1 <= config2)
      self.assertTrue(not config1 > config2)
      self.assertTrue(config1 >= config2)
      self.assertTrue(not config1 != config2)

   def testComparison_002(self):
      """
      Test comparison of two identical objects, all attributes non-None.
""" config1 = LocalConfig() config1.encrypt = EncryptConfig() config2 = LocalConfig() config2.encrypt = EncryptConfig() self.assertEqual(config1, config2) self.assertTrue(config1 == config2) self.assertTrue(not config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(config1 >= config2) self.assertTrue(not config1 != config2) def testComparison_003(self): """ Test comparison of two differing objects, encrypt differs (one None). """ config1 = LocalConfig() config2 = LocalConfig() config2.encrypt = EncryptConfig() self.assertNotEqual(config1, config2) self.assertTrue(not config1 == config2) self.assertTrue(config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(not config1 >= config2) self.assertTrue(config1 != config2) def testComparison_004(self): """ Test comparison of two differing objects, encrypt differs. """ config1 = LocalConfig() config1.encrypt = EncryptConfig(encryptTarget="Another User") config2 = LocalConfig() config2.encrypt = EncryptConfig(encryptTarget="Backup User") self.assertNotEqual(config1, config2) self.assertTrue(not config1 == config2) self.assertTrue(config1 < config2) self.assertTrue(config1 <= config2) self.assertTrue(not config1 > config2) self.assertTrue(not config1 >= config2) self.assertTrue(config1 != config2) ###################### # Test validate logic ###################### def testValidate_001(self): """ Test validate on a None encrypt section. """ config = LocalConfig() config.encrypt = None self.assertRaises(ValueError, config.validate) def testValidate_002(self): """ Test validate on an empty encrypt section. """ config = LocalConfig() config.encrypt = EncryptConfig() self.assertRaises(ValueError, config.validate) def testValidate_003(self): """ Test validate on a non-empty encrypt section with no values filled in. 
""" config = LocalConfig() config.encrypt = EncryptConfig(None, None) self.assertRaises(ValueError, config.validate) def testValidate_004(self): """ Test validate on a non-empty encrypt section with only one value filled in. """ config = LocalConfig() config.encrypt = EncryptConfig("gpg", None) self.assertRaises(ValueError, config.validate) config.encrypt = EncryptConfig(None, "Backup User") self.assertRaises(ValueError, config.validate) def testValidate_005(self): """ Test validate on a non-empty encrypt section with valid values filled in. """ config = LocalConfig() config.encrypt = EncryptConfig("gpg", "Backup User") config.validate() ############################ # Test parsing of documents ############################ def testParse_001(self): """ Parse empty config document. """ path = self.resources["encrypt.conf.1"] with open(path) as f: contents = f.read() self.assertRaises(ValueError, LocalConfig, xmlPath=path, validate=True) self.assertRaises(ValueError, LocalConfig, xmlData=contents, validate=True) config = LocalConfig(xmlPath=path, validate=False) self.assertEqual(None, config.encrypt) config = LocalConfig(xmlData=contents, validate=False) self.assertEqual(None, config.encrypt) def testParse_002(self): """ Parse config document with filled-in values. """ path = self.resources["encrypt.conf.2"] with open(path) as f: contents = f.read() config = LocalConfig(xmlPath=path, validate=False) self.assertNotEqual(None, config.encrypt) self.assertEqual("gpg", config.encrypt.encryptMode) self.assertEqual("Backup User", config.encrypt.encryptTarget) config = LocalConfig(xmlData=contents, validate=False) self.assertNotEqual(None, config.encrypt) self.assertEqual("gpg", config.encrypt.encryptMode) self.assertEqual("Backup User", config.encrypt.encryptTarget) ################### # Test addConfig() ################### def testAddConfig_001(self): """ Test with empty config document. 
""" encrypt = EncryptConfig() config = LocalConfig() config.encrypt = encrypt self.validateAddConfig(config) def testAddConfig_002(self): """ Test with values set. """ encrypt = EncryptConfig(encryptMode="gpg", encryptTarget="Backup User") config = LocalConfig() config.encrypt = encrypt self.validateAddConfig(config) ###################### # TestFunctions class ###################### class TestFunctions(unittest.TestCase): """Tests for the functions in encrypt.py.""" ################ # Setup methods ################ def setUp(self): try: self.tmpdir = tempfile.mkdtemp() self.resources = findResources(RESOURCES, DATA_DIRS) except Exception as e: self.fail(e) def tearDown(self): try: removedir(self.tmpdir) except: pass ################## # Utility methods ################## def extractTar(self, tarname): """Extracts a tarfile with a particular name.""" extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname]) def buildPath(self, components): """Builds a complete search path from a list of components.""" components.insert(0, self.tmpdir) return buildPath(components) ############################# # Test _encryptFileWithGpg() ############################# def testEncryptFileWithGpg_001(self): """ Test for a non-existent file in a non-existent directory. """ sourceFile = self.buildPath([INVALID_PATH, INVALID_PATH]) self.assertRaises(IOError, _encryptFileWithGpg, sourceFile, INVALID_GPG_RECIPIENT) def testEncryptFileWithGpg_002(self): """ Test for a non-existent file in an existing directory. """ self.extractTar("tree8") sourceFile = self.buildPath(["tree8", "dir001", INVALID_PATH, ]) self.assertRaises(IOError, _encryptFileWithGpg, sourceFile, INVALID_GPG_RECIPIENT) def testEncryptFileWithGpg_003(self): """ Test for an unknown recipient. 
""" self.extractTar("tree1") sourceFile = self.buildPath(["tree1", "file001" ]) expectedFile = self.buildPath(["tree1", "file001.gpg" ]) self.assertRaises(IOError, _encryptFileWithGpg, sourceFile, INVALID_GPG_RECIPIENT) self.assertFalse(os.path.exists(expectedFile)) self.assertTrue(os.path.exists(sourceFile)) def testEncryptFileWithGpg_004(self): """ Test for a valid recipient. """ self.extractTar("tree1") sourceFile = self.buildPath(["tree1", "file001" ]) expectedFile = self.buildPath(["tree1", "file001.gpg" ]) actualFile = _encryptFileWithGpg(sourceFile, VALID_GPG_RECIPIENT) self.assertEqual(actualFile, expectedFile) self.assertTrue(os.path.exists(sourceFile)) self.assertTrue(os.path.exists(actualFile)) ###################### # Test _encryptFile() ###################### def testEncryptFile_001(self): """ Test for a mode other than "gpg". """ self.extractTar("tree1") sourceFile = self.buildPath(["tree1", "file001" ]) expectedFile = self.buildPath(["tree1", "file001.gpg" ]) self.assertRaises(ValueError, _encryptFile, sourceFile, "pgp", INVALID_GPG_RECIPIENT, None, None, removeSource=False) self.assertTrue(os.path.exists(sourceFile)) self.assertFalse(os.path.exists(expectedFile)) def testEncryptFile_002(self): """ Test for a source path that does not exist. """ self.extractTar("tree1") sourceFile = self.buildPath(["tree1", INVALID_PATH ]) expectedFile = self.buildPath(["tree1", "%s.gpg" % INVALID_PATH ]) self.assertRaises(ValueError, _encryptFile, sourceFile, "gpg", INVALID_GPG_RECIPIENT, None, None, removeSource=False) self.assertFalse(os.path.exists(sourceFile)) self.assertFalse(os.path.exists(expectedFile)) def testEncryptFile_003(self): """ Test "gpg" mode with a valid source path and invalid recipient, removeSource=False. 
""" self.extractTar("tree1") sourceFile = self.buildPath(["tree1", "file001" ]) expectedFile = self.buildPath(["tree1", "file001.gpg" ]) self.assertRaises(IOError, _encryptFile, sourceFile, "gpg", INVALID_GPG_RECIPIENT, None, None, removeSource=False) self.assertTrue(os.path.exists(sourceFile)) self.assertFalse(os.path.exists(expectedFile)) def testEncryptFile_004(self): """ Test "gpg" mode with a valid source path and invalid recipient, removeSource=True. """ self.extractTar("tree1") sourceFile = self.buildPath(["tree1", "file001" ]) expectedFile = self.buildPath(["tree1", "file001.gpg" ]) self.assertRaises(IOError, _encryptFile, sourceFile, "gpg", INVALID_GPG_RECIPIENT, None, None, removeSource=True) self.assertTrue(os.path.exists(sourceFile)) self.assertFalse(os.path.exists(expectedFile)) def testEncryptFile_005(self): """ Test "gpg" mode with a valid source path and recipient, removeSource=False. """ self.extractTar("tree1") sourceFile = self.buildPath(["tree1", "file001" ]) expectedFile = self.buildPath(["tree1", "file001.gpg" ]) actualFile = _encryptFile(sourceFile, "gpg", VALID_GPG_RECIPIENT, None, None, removeSource=False) self.assertEqual(actualFile, expectedFile) self.assertTrue(os.path.exists(sourceFile)) self.assertTrue(os.path.exists(actualFile)) def testEncryptFile_006(self): """ Test "gpg" mode with a valid source path and recipient, removeSource=True. """ self.extractTar("tree1") sourceFile = self.buildPath(["tree1", "file001" ]) expectedFile = self.buildPath(["tree1", "file001.gpg" ]) actualFile = _encryptFile(sourceFile, "gpg", VALID_GPG_RECIPIENT, None, None, removeSource=True) self.assertEqual(actualFile, expectedFile) self.assertFalse(os.path.exists(sourceFile)) self.assertTrue(os.path.exists(actualFile)) ########################## # Test _encryptDailyDir() ########################## def testEncryptDailyDir_001(self): """ Test with a nonexistent daily staging directory. 
""" self.extractTar("tree1") dailyDir = self.buildPath(["tree1", "dir001" ]) self.assertRaises(ValueError, _encryptDailyDir, dailyDir, "gpg", VALID_GPG_RECIPIENT, None, None) def testEncryptDailyDir_002(self): """ Test with a valid staging directory containing only links. """ self.extractTar("tree15") dailyDir = self.buildPath(["tree15", "dir001" ]) fsList = FilesystemList() fsList.addDirContents(dailyDir) self.assertEqual(3, len(fsList)) self.assertTrue(self.buildPath(["tree15", "dir001", ]) in fsList) self.assertTrue(self.buildPath(["tree15", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath(["tree15", "dir001", "link002", ]) in fsList) _encryptDailyDir(dailyDir, "gpg", VALID_GPG_RECIPIENT, None, None) fsList = FilesystemList() fsList.addDirContents(dailyDir) self.assertEqual(3, len(fsList)) self.assertTrue(self.buildPath(["tree15", "dir001", ]) in fsList) self.assertTrue(self.buildPath(["tree15", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath(["tree15", "dir001", "link002", ]) in fsList) def testEncryptDailyDir_003(self): """ Test with a valid staging directory containing only directories. 
""" self.extractTar("tree2") dailyDir = self.buildPath(["tree2"]) fsList = FilesystemList() fsList.addDirContents(dailyDir) self.assertEqual(11, len(fsList)) self.assertTrue(self.buildPath(["tree2", ]) in fsList) self.assertTrue(self.buildPath(["tree2", "dir001", ]) in fsList) self.assertTrue(self.buildPath(["tree2", "dir002", ]) in fsList) self.assertTrue(self.buildPath(["tree2", "dir003", ]) in fsList) self.assertTrue(self.buildPath(["tree2", "dir004", ]) in fsList) self.assertTrue(self.buildPath(["tree2", "dir005", ]) in fsList) self.assertTrue(self.buildPath(["tree2", "dir006", ]) in fsList) self.assertTrue(self.buildPath(["tree2", "dir007", ]) in fsList) self.assertTrue(self.buildPath(["tree2", "dir008", ]) in fsList) self.assertTrue(self.buildPath(["tree2", "dir009", ]) in fsList) self.assertTrue(self.buildPath(["tree2", "dir010", ]) in fsList) _encryptDailyDir(dailyDir, "gpg", VALID_GPG_RECIPIENT, None, None) fsList = FilesystemList() fsList.addDirContents(dailyDir) self.assertEqual(11, len(fsList)) self.assertTrue(self.buildPath(["tree2", ]) in fsList) self.assertTrue(self.buildPath(["tree2", "dir001", ]) in fsList) self.assertTrue(self.buildPath(["tree2", "dir002", ]) in fsList) self.assertTrue(self.buildPath(["tree2", "dir003", ]) in fsList) self.assertTrue(self.buildPath(["tree2", "dir004", ]) in fsList) self.assertTrue(self.buildPath(["tree2", "dir005", ]) in fsList) self.assertTrue(self.buildPath(["tree2", "dir006", ]) in fsList) self.assertTrue(self.buildPath(["tree2", "dir007", ]) in fsList) self.assertTrue(self.buildPath(["tree2", "dir008", ]) in fsList) self.assertTrue(self.buildPath(["tree2", "dir009", ]) in fsList) self.assertTrue(self.buildPath(["tree2", "dir010", ]) in fsList) def testEncryptDailyDir_004(self): """ Test with a valid staging directory containing only files. 
""" self.extractTar("tree1") dailyDir = self.buildPath(["tree1"]) fsList = FilesystemList() fsList.addDirContents(dailyDir) self.assertEqual(8, len(fsList)) self.assertTrue(self.buildPath(["tree1" ]) in fsList) self.assertTrue(self.buildPath(["tree1", "file001", ]) in fsList) self.assertTrue(self.buildPath(["tree1", "file002", ]) in fsList) self.assertTrue(self.buildPath(["tree1", "file003", ]) in fsList) self.assertTrue(self.buildPath(["tree1", "file004", ]) in fsList) self.assertTrue(self.buildPath(["tree1", "file005", ]) in fsList) self.assertTrue(self.buildPath(["tree1", "file006", ]) in fsList) self.assertTrue(self.buildPath(["tree1", "file007", ]) in fsList) _encryptDailyDir(dailyDir, "gpg", VALID_GPG_RECIPIENT, None, None) fsList = FilesystemList() fsList.addDirContents(dailyDir) self.assertEqual(8, len(fsList)) self.assertTrue(self.buildPath(["tree1" ]) in fsList) self.assertTrue(self.buildPath(["tree1", "file001.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree1", "file002.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree1", "file003.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree1", "file004.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree1", "file005.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree1", "file006.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree1", "file007.gpg", ]) in fsList) def testEncryptDailyDir_005(self): """ Test with a valid staging directory containing files, directories and links, including various files that match the general Cedar Backup indicator file pattern ("cback."). 
""" self.extractTar("tree16") dailyDir = self.buildPath(["tree16"]) fsList = FilesystemList() fsList.addDirContents(dailyDir) self.assertEqual(122, len(fsList)) self.assertTrue(self.buildPath(["tree16", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir001", "file005", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir001", "file006", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir001", "file007", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir001", "file008", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir003", "file001", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir003", "file003", ]) in fsList) 
self.assertTrue(self.buildPath(["tree16", "dir001", "dir003", "link001", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir003", "link002", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir002", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir002", "dir001", "cback.encrypt", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir002", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir002", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir002", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir002", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir002", "dir002", "cback.encrypt", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir002", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir002", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir002", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir002", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir002", "dir002", "file005", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir002", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir002", "dir002", "link002", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir002", "dir002", "link003", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir002", "dir002", "link004", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir002", "dir002", "link005", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir001", 
"file001", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir001", "file005", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir001", "file006", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir001", "file007", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir001", "file008", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir001", "cback.", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir002", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir002", "cback.encrypt", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir003", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir003", "file001", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir003", "file003", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir003", "link001", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir003", "link002", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir003", 
"cback.encrypt", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir003", "cback.store", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir001", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir001", "file001", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir001", "file002", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir001", "file003", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir001", "file004", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir001", "file005", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir001", "file006", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir001", "file007", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir001", "file008", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir001", "link001", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir001", "cback.", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir001", "cback.collect", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir002", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir002", "file001", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir002", "file002", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir002", "file003", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir002", "file004", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir002", "link001", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir002", "cback.encrypt", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir003", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir003", "file001", ]) in fsList) 
self.assertTrue(self.buildPath(["tree16", "dir004", "dir003", "file002", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir003", "file003", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir003", "link001", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir003", "link002", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir003", "cback.encrypt", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "cback.encrypt", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir004", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir004", "file001", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir004", "file002", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir004", "file003", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir004", "file004", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir004", "file005", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir004", "file006", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir004", "file007", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir004", "file008", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir004", "link001", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir004", "cback.store", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir005", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir005", "file001", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir005", "file002", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir005", "file003", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir005", "file004", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir005", "file005", ]) in fsList) 
self.assertTrue(self.buildPath(["tree16", "dir004", "dir005", "file006", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir005", "file007", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir005", "file008", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir005", "link001", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "cback.collect", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "cback.stage", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "cback.store", ]) in fsList) _encryptDailyDir(dailyDir, "gpg", VALID_GPG_RECIPIENT, None, None) fsList = FilesystemList() fsList.addDirContents(dailyDir) # since all links are to files, and the files all changed names, the links are invalid and disappear self.assertEqual(102, len(fsList)) self.assertTrue(self.buildPath(["tree16", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir001", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir001", "file001.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir001", "file002.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir001", "file003.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir001", "file004.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir001", "file005.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir001", "file006.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir001", "file007.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir001", "file008.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir002", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir002", "file001.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir002", "file002.gpg", ]) in 
fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir002", "file003.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir002", "file004.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir003", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir003", "file001.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir003", "file002.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir001", "dir003", "file003.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir002", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir002", "dir001", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir002", "dir001", "cback.encrypt", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir002", "dir001", "file001.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir002", "dir001", "file002.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir002", "dir001", "file003.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir002", "dir002", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir002", "dir002", "cback.encrypt", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir002", "dir002", "file001.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir002", "dir002", "file002.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir002", "dir002", "file003.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir002", "dir002", "file004.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir002", "dir002", "file005.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir001", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir001", "file001.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir001", "file002.gpg", ]) in fsList) 
self.assertTrue(self.buildPath(["tree16", "dir003", "dir001", "file003.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir001", "file004.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir001", "file005.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir001", "file006.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir001", "file007.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir001", "file008.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir001", "cback.", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir002", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir002", "file001.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir002", "file002.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir002", "file003.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir002", "file004.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir002", "cback.encrypt", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir003", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir003", "file001.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir003", "file002.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir003", "file003.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir003", "cback.encrypt", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir003", "dir003", "cback.store", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir001", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir001", "file001.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir001", 
"file002.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir001", "file003.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir001", "file004.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir001", "file005.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir001", "file006.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir001", "file007.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir001", "file008.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir001", "cback.", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir001", "cback.collect", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir002", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir002", "file001.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir002", "file002.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir002", "file003.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir002", "file004.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir002", "cback.encrypt", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir003", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir003", "file001.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir003", "file002.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir003", "file003.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir003", "cback.encrypt", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "cback.encrypt", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir004", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir004", "file001.gpg", ]) in fsList) 
self.assertTrue(self.buildPath(["tree16", "dir004", "dir004", "file002.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir004", "file003.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir004", "file004.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir004", "file005.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir004", "file006.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir004", "file007.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir004", "file008.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir004", "cback.store", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir005", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir005", "file001.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir005", "file002.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir005", "file003.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir005", "file004.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir005", "file005.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir005", "file006.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir005", "file007.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "dir004", "dir005", "file008.gpg", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "cback.collect", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "cback.stage", ]) in fsList) self.assertTrue(self.buildPath(["tree16", "cback.store", ]) in fsList) ####################################################################### # Suite definition ####################################################################### def suite(): """Returns a suite containing all the test cases in this module.""" if 
runAllTests(): tests = [ ] tests.append(unittest.makeSuite(TestEncryptConfig, 'test')) tests.append(unittest.makeSuite(TestLocalConfig, 'test')) tests.append(unittest.makeSuite(TestFunctions, 'test')) return unittest.TestSuite(tests) else: tests = [ ] tests.append(unittest.makeSuite(TestEncryptConfig, 'test')) tests.append(unittest.makeSuite(TestLocalConfig, 'test')) return unittest.TestSuite(tests) CedarBackup3-3.1.6/doc/0002775000175000017500000000000012657665551016263 5ustar pronovicpronovic00000000000000CedarBackup3-3.1.6/doc/cback3-span.10000664000175000017500000001351412555750775020434 0ustar pronovicpronovic00000000000000.\" vim: set ft=nroff .\" .\" # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # .\" # .\" # C E D A R .\" # S O L U T I O N S "Software done right." .\" # S O F T W A R E .\" # .\" # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # .\" # .\" # Author : Kenneth J. Pronovici .\" # Language : nroff .\" # Project : Cedar Backup, release 3 .\" # Purpose : Manpage for cback3-span script .\" # .\" # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # .\" .TH cback3\-span "1" "July 2015" "Cedar Backup 3" "Kenneth J. Pronovici" .SH NAME cback3\-span \- Span staged data among multiple discs .SH SYNOPSIS .B cback3\-span [\fIswitches\fR] .SH DESCRIPTION .PP This is the Cedar Backup 3 span tool. It is intended for use by people who back up more data than can fit on a single disc. It allows a user to split (span) staged data between more than one disc. It can't be a Cedar Backup extension in the usual sense because it requires user input when switching media. .PP Generally, one can run the cback3\-span command with no arguments. This will start it using the default configuration file, the default log file, etc. You only need to use the switches if you need to change the default behavior. 
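The disc-spanning idea described above can be illustrated with a small sketch. This is NOT cback3-span's actual algorithm (the real tool works from store configuration and interactive input when switching media); it is a hypothetical first-fit illustration of splitting staged files across fixed-capacity discs:

```python
# Illustrative sketch only -- NOT the algorithm cback3-span actually uses.
# Greedy first-fit: place each staged file (largest first) on the first
# disc with enough remaining capacity, starting a new disc when none fits.
def span_files(files, disc_capacity):
    """files: list of (name, size) tuples; returns a list of discs,
    where each disc is a list of file names."""
    discs = []  # each entry: [remaining_capacity, [file names]]
    for name, size in sorted(files, key=lambda f: f[1], reverse=True):
        for disc in discs:
            if disc[0] >= size:
                disc[0] -= size
                disc[1].append(name)
                break
        else:
            discs.append([disc_capacity - size, [name]])
    return [names for _, names in discs]
```

For example, staged files of 400, 300, and 200 units against 700-unit discs yield two discs: the first holding the 400- and 300-unit files, the second the remainder.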
.PP This command takes most of its configuration from the Cedar Backup configuration file, specifically the store section. Then, more information is gathered from the user interactively while the command is running. .SH MIGRATING FROM VERSION 2 TO VERSION 3 .PP The main difference between Cedar Backup version 2 and Cedar Backup version 3 is the targeted Python interpreter. For most users, migration should be straightforward. See the discussion found at cback3(1) or reference the Cedar Backup user guide. .SH SWITCHES .TP \fB\-h\fR, \fB\-\-help\fR Display usage/help listing. .TP \fB\-V\fR, \fB\-\-version\fR Display version information. .TP \fB\-b\fR, \fB\-\-verbose\fR Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen. .TP \fB\-c\fR, \fB\-\-config\fR Specify the path to an alternate configuration file. The default configuration file is \fI/etc/cback3.conf\fR. .TP \fB\-l\fR, \fB\-\-logfile\fR Specify the path to an alternate logfile. The default logfile is \fI/var/log/cback3.log\fR. .TP \fB\-o\fR, \fB\-\-owner\fR Specify the ownership of the logfile, in the form user:group. The default ownership is \fIroot:adm\fR, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback3 script is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values. .TP \fB\-m\fR, \fB\-\-mode\fR Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is \fI640\fR (\-rw\-r\-\-\-\-\-). This value will only be used when creating a new logfile. If the logfile already exists when the cback3 script is executed, it will retain its existing ownership and mode. .TP \fB\-O\fR, \fB\-\-output\fR Record some sub-command output to the logfile.
When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference. .TP \fB\-d\fR, \fB\-\-debug\fR Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the \-\-output option, as well. .TP \fB\-s\fR, \fB\-\-stack\fR Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report. .TP \fB\-D\fR, \fB\-\-diagnostics\fR Display runtime diagnostic information and then exit. This diagnostic information is often useful when filing a bug report. .SH RETURN VALUES .PP This command returns 0 (zero) upon normal completion, and six other error codes related to particular errors. .TP \fB1\fR The Python interpreter version is < 3.4. .TP \fB2\fR Error processing command\-line arguments. .TP \fB3\fR Error configuring logging. .TP \fB4\fR Error parsing indicated configuration file. .TP \fB5\fR Backup was interrupted with a CTRL\-C or similar. .TP \fB6\fR Other error during processing. .SH NOTES .PP Cedar Backup itself is designed to run as root, since otherwise it's difficult to back up system directories or write the CD or DVD device. However, cback3\-span can be run safely as any user that has read access to the Cedar Backup staging directories and write access to the CD or DVD device. .SH SEE ALSO cback3(1) .SH FILES .TP \fI/etc/cback3.conf\fR - Default configuration file .TP \fI/var/log/cback3.log\fR - Default log file .SH URLS .TP The project homepage is: \fIhttps://bitbucket.org/cedarsolutions/cedar\-backup3\fR .SH BUGS .PP If you find a bug, please report it.
.PP If possible, give me the output from \-\-diagnostics, all of the error messages that the script printed into its log, and also any stack\-traces (exceptions) that Python printed. It would be even better if you could tell me how to reproduce the problem, for instance by sending me your configuration file. .PP Report bugs to or by using the BitBucket issue tracker. .SH AUTHOR Written and maintained by Kenneth J. Pronovici with contributions from others. .SH COPYRIGHT Copyright (c) 2004\-2011,2013\-2015 Kenneth J. Pronovici. .PP This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. CedarBackup3-3.1.6/doc/cback3.10000664000175000017500000002735012555751024017464 0ustar pronovicpronovic00000000000000.\" vim: set ft=nroff .\" .\" # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # .\" # .\" # C E D A R .\" # S O L U T I O N S "Software done right." .\" # S O F T W A R E .\" # .\" # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # .\" # .\" # Author : Kenneth J. Pronovici .\" # Language : nroff .\" # Project : Cedar Backup, release 3 .\" # Purpose : Manpage for cback3 script .\" # .\" # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # .\" .TH cback3 "1" "July 2015" "Cedar Backup 3" "Kenneth J. Pronovici" .SH NAME cback3 \- Local and remote backups to CD or DVD media or Amazon S3 storage .SH SYNOPSIS .B cback3 [\fIswitches\fR] action(s) .SH DESCRIPTION .PP The cback3 script provides the command\-line interface for Cedar Backup 3. Cedar Backup 3 is a software package designed to manage system backups for a pool of local and remote machines. It understands how to back up filesystem data as well as MySQL and PostgreSQL databases and Subversion repositories. It can also be easily extended to support other kinds of data sources. 
.PP Cedar Backup 3 is focused around weekly backups to a single CD or DVD disc, with the expectation that the disc will be changed or overwritten at the beginning of each week. If your hardware is new enough, Cedar Backup can write multisession discs, allowing you to add incremental data to a disc on a daily basis. .PP Alternately, Cedar Backup 3 can write your backups to the Amazon S3 cloud rather than relying on physical media. .SH BACKUP CONCEPTS .PP There are two kinds of machines in a Cedar Backup pool. One machine (the \fImaster\fR) has a CD or DVD writer on it and is where the backup is written to disc. The others (\fIclients\fR) collect data to be written to disc by the master. Collectively, the master and client machines in a pool are all referred to as \fIpeer\fR machines. There are four actions that take place as part of the backup process: \fIcollect\fR, \fIstage\fR, \fIstore\fR and \fIpurge\fR. Both the master and the clients execute the collect and purge actions, but only the master executes the stage and store actions. The configuration file \fI/etc/cback3.conf\fR controls the actions taken during the collect, stage, store and purge actions. .PP Cedar Backup also supports the concept of \fImanaged clients\fR. Managed clients have their entire backup process managed by the master via a remote shell. The same actions are run as part of the backup process, but the master controls when the actions are executed on the clients rather than the clients controlling it for themselves. This facility is intended for use in environments where a scheduler like cron is not available. .SH MIGRATING FROM VERSION 2 TO VERSION 3 .PP The main difference between Cedar Backup version 2 and Cedar Backup version 3 is the targeted Python interpreter. Cedar Backup version 2 was designed for Python 2, while version 3 is a conversion of the original code to Python 3. Other than that, both versions are functionally equivalent. 
The configuration format is unchanged, and you can mix\-and\-match masters and clients of different versions in the same backup pool. Both versions will be fully supported until around the time of the Python 2 end\-of\-life in 2020, but you should plan to migrate sooner than that if possible. .PP A major design goal for version 3 was to facilitate easy migration testing for users, by making it possible to install version 3 on the same server where version 2 was already in use. A side effect of this design choice is that all of the executables, configuration files, and logs changed names in version 3. Where version 2 used \fIcback\fR, version 3 uses \fIcback3\fR: \fIcback3.conf\fR instead of \fIcback.conf\fR, \fIcback3.log\fR instead of \fIcback.log\fR, etc. .PP So, while migrating from version 2 to version 3 is relatively straightforward, you will have to make some changes manually. You will need to create a new configuration file (or soft link to the old one), modify your cron jobs to use the new executable name, etc. You can migrate one server at a time in your pool with no ill effects, or even incrementally migrate a single server by using version 2 and version 3 on different days of the week or for different parts of the backup. .SH SWITCHES .TP \fB\-h\fR, \fB\-\-help\fR Display usage/help listing. .TP \fB\-V\fR, \fB\-\-version\fR Display version information. .TP \fB\-b\fR, \fB\-\-verbose\fR Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen. .TP \fB\-q\fR, \fB\-\-quiet\fR Run quietly (display no output to the screen). .TP \fB\-c\fR, \fB\-\-config\fR Specify the path to an alternate configuration file. The default configuration file is \fI/etc/cback3.conf\fR. .TP \fB\-f\fR, \fB\-\-full\fR Perform a full backup, regardless of configuration.
For the collect action, this means that any existing information related to incremental backups will be ignored and rewritten; for the store action, this means that a new disc will be started. .TP \fB\-M\fR, \fB\-\-managed\fR Include managed clients when executing actions. If the action being executed is listed as a managed action for a managed client, execute the action on that client after executing the action locally. .TP \fB\-N\fR, \fB\-\-managed-only\fR Include only managed clients when executing actions. If the action being executed is listed as a managed action for a managed client, execute the action on that client, but do not execute the action locally. .TP \fB\-l\fR, \fB\-\-logfile\fR Specify the path to an alternate logfile. The default logfile is \fI/var/log/cback3.log\fR. .TP \fB\-o\fR, \fB\-\-owner\fR Specify the ownership of the logfile, in the form user:group. The default ownership is \fIroot:adm\fR, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback3 script is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values. .TP \fB\-m\fR, \fB\-\-mode\fR Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is \fI640\fR (\-rw\-r\-\-\-\-\-). This value will only be used when creating a new logfile. If the logfile already exists when the cback3 script is executed, it will retain its existing ownership and mode. .TP \fB\-O\fR, \fB\-\-output\fR Record some sub-command output to the logfile. When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference. .TP \fB\-d\fR, \fB\-\-debug\fR Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem.
This option implies the \-\-output option, as well. .TP \fB\-s\fR, \fB\-\-stack\fR Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report. .TP \fB\-D\fR, \fB\-\-diagnostics\fR Display runtime diagnostic information and then exit. This diagnostic information is often useful when filing a bug report. .SH ACTIONS .TP \fBall\fR Take all normal actions (collect, stage, store, purge), in that order. .TP \fBcollect\fR Take the collect action, creating tarfiles for each directory specified in the collect section of the configuration file. .TP \fBstage\fR Take the stage action, copying tarfiles from each peer in the backup pool to the daily staging directory, based on the stage section of the configuration file. .TP \fBstore\fR Take the store action, writing the daily staging directory to disc based on the store section of the configuration file. .TP \fBpurge\fR Take the purge action, removing old and outdated files as specified in the purge section of the configuration file. .TP \fBrebuild\fR The rebuild action attempts to rebuild "this week's" disc from any remaining unpurged staging directories. Typically, it is used to make a copy of a backup, replace lost or damaged media, or to switch to new media mid-week for some other reason. .TP \fBvalidate\fR Ensure that configuration is valid, but take no other action. Validation checks that the configuration file can be found and can be parsed, and also checks for typical configuration problems, such as directories that are not writable or problems with the target SCSI device. .SH RETURN VALUES .PP Cedar Backup returns 0 (zero) upon normal completion, and six other error codes related to particular errors. .TP \fB1\fR The Python interpreter version is < 3.4.
.TP \fB2\fR Error processing command\-line arguments. .TP \fB3\fR Error configuring logging. .TP \fB4\fR Error parsing indicated configuration file. .TP \fB5\fR Backup was interrupted with a CTRL\-C or similar. .TP \fB6\fR Error executing specified backup actions. .SH NOTES .PP The script is designed to run as root, since otherwise it's difficult to back up system directories or write the CD or DVD device. However, pains are taken to switch to a backup user (specified in configuration) when appropriate. .PP To use the script, you must specify at least one action to take. More than one of the "collect", "stage", "store" or "purge" actions may be specified, in any arbitrary order. The "all", "rebuild" or "validate" actions may not be combined with other actions. If more than one action is specified, then actions will be taken in a sensible order (generally collect, followed by stage, followed by store, followed by purge). .PP If you have configured any Cedar Backup extensions, then the actions associated with those extensions may also be specified on the command line. If you specify any other actions along with an extended action, the actions will be executed in a sensible order per configuration. However, the "all" action never executes extended actions. .PP Note that there is no facility for restoring backups. It is assumed that the user can deal with copying tarfiles off disc and using them to restore missing files as needed. The user manual provides detailed instructions in Appendix C. .PP Finally, you should be aware that backups to CD or DVD can probably be read by any user that has permissions to mount the CD or DVD drive. If you intend to leave the backup disc in the drive at all times, you may want to consider this when setting up device permissions on your machine. You might also want to investigate the encrypt extension.
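The return values documented above lend themselves to a small lookup table in a cron wrapper. The mapping below is taken from this manpage; the helper function itself is hypothetical, not part of Cedar Backup:

```python
# Exit codes as documented in the cback3 manpage; describe_exit() is a
# hypothetical helper for wrapper scripts, not part of Cedar Backup itself.
CBACK3_EXIT_CODES = {
    0: "normal completion",
    1: "Python interpreter version is < 3.4",
    2: "error processing command-line arguments",
    3: "error configuring logging",
    4: "error parsing indicated configuration file",
    5: "backup interrupted with a CTRL-C or similar",
    6: "error executing specified backup actions",
}

def describe_exit(code):
    """Return a human-readable description for a cback3 exit code."""
    return CBACK3_EXIT_CODES.get(code, "unknown error (code %s)" % code)
```

A wrapper might call this with the subprocess return code to produce a clearer log message than the bare number.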
.SH FILES .TP \fI/etc/cback3.conf\fR - Default configuration file .TP \fI/var/log/cback3.log\fR - Default log file .SH URLS .TP The project homepage is: \fIhttps://bitbucket.org/cedarsolutions/cedar\-backup3\fR .SH BUGS .PP There probably are bugs in this code. However, it is in active use for my own backups, and I fix problems as I notice them. If you find a bug, please report it. .PP If possible, give me the output from \-\-diagnostics, all of the error messages that the script printed into its log, and also any stack\-traces (exceptions) that Python printed. It would be even better if you could tell me how to reproduce the problem, for instance by sending me your configuration file. .PP Report bugs to or by using the BitBucket issue tracker. .SH AUTHOR Written and maintained by Kenneth J. Pronovici with contributions from others. .SH COPYRIGHT Copyright (c) 2004\-2011,2013\-2015 Kenneth J. Pronovici. .PP This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. CedarBackup3-3.1.6/doc/docbook.txt0000664000175000017500000000363712555004756020442 0ustar pronovicpronovic00000000000000The Cedar Backup Software Manual, found in manual/src, is written in DocBook Lite. All of the docbook functionality used to build the actual documentation that I distribute is based around a Debian system (or a system with equivalent functionality) as the development system. I built the entire docbook infrastructure based on the Subversion book: http://svnbook.red-bean.com http://svn.collab.net/repos/svn/branches/1.0.x/doc/book/ Some other links that might be useful to you: http://www.sagehill.net/docbookxsl/index.html http://tldp.org/HOWTO/DocBook-Demystification-HOWTO/index.html http://www.vim.org/scripts/script.php?script_id=301 This is the official Docbook XSL documentation. 
http://wiki.docbook.org/topic/ http://wiki.docbook.org/topic/DocBookDocumentation http://wiki.docbook.org/topic/DocBookXslStylesheetDocs http://docbook.sourceforge.net/release/xsl/current/doc/fo/ This official Docbook documentation is where you want to look for stylesheet options, etc. For instance, these are the docs I used when I wanted to figure out how to put items on new pages in PDF output. The following items need to be installed to build the user manual: apt-get install docbook-xsl apt-get install xsltproc apt-get install sp # for nsgmls You also need a working XML catalog on your system, because the various DTDs and stylesheets depend on that. There's no point in hardcoding paths and keeping local copies of things if the catalog can do that for you. However, if you don't have a catalog, you can probably force things to work. See notes at the top of the various files in util/docbook. The util/validate script is a thin wrapper around the nsgmls validating parser. I took the syntax directly from the Subversion book documentation. http://svn.collab.net/repos/svn/branches/1.0.x/doc/book/README You should run 'make validate' against the manual before checking it in. CedarBackup3-3.1.6/doc/cback3-amazons3-sync.10000664000175000017500000001404112555750765022173 0ustar pronovicpronovic00000000000000.\" vim: set ft=nroff .\" .\" # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # .\" # .\" # C E D A R .\" # S O L U T I O N S "Software done right." .\" # S O F T W A R E .\" # .\" # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # .\" # .\" # Author : Kenneth J. Pronovici .\" # Language : nroff .\" # Project : Cedar Backup, release 3 .\" # Purpose : Manpage for cback3-amazons3-sync script .\" # .\" # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # .\" .TH cback3\-amazons3-sync "1" "July 2015" "Cedar Backup 3" "Kenneth J.
Pronovici" .SH NAME cback3\-amazons3-sync \- Synchronize a local directory with an Amazon S3 bucket .SH SYNOPSIS .B cback3\-amazons3\-sync [\fIswitches\fR] sourceDir s3BucketUrl .SH DESCRIPTION .PP This is the Cedar Backup 3 Amazon S3 sync tool. It synchronizes a local directory to an Amazon S3 cloud storage bucket. After the sync is complete, a validation step is taken. An error is reported if the contents of the bucket do not match the source directory, or if the indicated size for any file differs. .PP Generally, one can run the cback3\-amazons3\-sync command with no special switches. This will start it using the default Cedar Backup log file, etc. You only need to use the switches if you need to change the default behavior. .SH MIGRATING FROM VERSION 2 TO VERSION 3 .PP The main difference between Cedar Backup version 2 and Cedar Backup version 3 is the targeted Python interpreter. For most users, migration should be straightforward. See the discussion found at cback3(1) or reference the Cedar Backup user guide. .SH ARGUMENTS .TP \fBsourceDir\fR The source directory on a local disk. .TP \fBs3BucketUrl\fR The URL specifying the location of the Amazon S3 cloud storage bucket to synchronize with, like \fIs3://example.com\-backup/subdir\fR. .SH SWITCHES .TP \fB\-h\fR, \fB\-\-help\fR Display usage/help listing. .TP \fB\-V\fR, \fB\-\-version\fR Display version information. .TP \fB\-b\fR, \fB\-\-verbose\fR Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen. .TP \fB\-l\fR, \fB\-\-logfile\fR Specify the path to an alternate logfile. The default logfile is \fI/var/log/cback3.log\fR. .TP \fB\-o\fR, \fB\-\-owner\fR Specify the ownership of the logfile, in the form user:group. The default ownership is \fIroot:adm\fR, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile.
If the logfile already exists when the cback3 script is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values. .TP \fB\-m\fR, \fB\-\-mode\fR Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is \fI640\fR (\-rw\-r\-\-\-\-\-). This value will only be used when creating a new logfile. If the logfile already exists when the cback3 script is executed, it will retain its existing ownership and mode. .TP \fB\-O\fR, \fB\-\-output\fR Record some sub-command output to the logfile. When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference. .TP \fB\-d\fR, \fB\-\-debug\fR Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the \-\-output option, as well. .TP \fB\-s\fR, \fB\-\-stack\fR Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report. .TP \fB\-D\fR, \fB\-\-diagnostics\fR Display runtime diagnostic information and then exit. This diagnostic information is often useful when filing a bug report. .SH RETURN VALUES .PP This command returns 0 (zero) upon normal completion, and several other error codes related to particular errors. .TP \fB1\fR The Python interpreter version is < 3.4. .TP \fB2\fR Error processing command\-line arguments. .TP \fB3\fR Error configuring logging. .TP \fB5\fR Backup was interrupted with a CTRL\-C or similar. .TP \fB6\fR Other error during processing. .SH NOTES .PP This tool is a wrapper over the Amazon AWS CLI interface found in the aws(1) command.
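Since the tool simply shells out to aws(1), its two underlying invocations can be sketched as command lists suitable for a subprocess call. The option names below are standard aws(1) options, but the exact arguments the real tool passes are an assumption, as are the placeholder directory and bucket values:

```python
# Sketch of the two AWS CLI commands wrapped by cback3-amazons3-sync.
# The precise arguments used by the real tool are assumed, not documented here.
def build_aws_commands(source_dir, bucket, prefix):
    """Build the sync and verification command lists for subprocess use."""
    sync_cmd = ["aws", "s3", "sync", source_dir, "s3://%s/%s" % (bucket, prefix)]
    verify_cmd = ["aws", "s3api", "list-objects",
                  "--bucket", bucket, "--prefix", prefix]
    return sync_cmd, verify_cmd
```

Building the commands as argument lists (rather than shell strings) avoids quoting problems when paths contain spaces.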
Specifically, cback3\-amazons3\-sync invokes "aws s3 sync" followed by "aws s3api list\-objects". .PP Cedar Backup itself is designed to run as root. However, cback3\-amazons3\-sync can be run safely as any user that is configured to use the Amazon AWS CLI interface. The aws(1) command will be executed by the same user which is executing cback3\-amazons3\-sync. .PP You must configure the AWS CLI interface to have a valid connection to Amazon S3 infrastructure before using cback3\-amazons3\-sync. For more information about how to accomplish this, see the Cedar Backup user guide. .SH SEE ALSO cback3(1) .SH FILES .TP \fI/var/log/cback3.log\fR - Default log file .SH URLS .TP The project homepage is: \fIhttps://bitbucket.org/cedarsolutions/cedar\-backup3\fR .SH BUGS .PP If you find a bug, please report it. .PP If possible, give me the output from \-\-diagnostics, all of the error messages that the script printed into its log, and also any stack\-traces (exceptions) that Python printed. It would be even better if you could tell me how to reproduce the problem, for instance by sending me your configuration file. .PP Report bugs to or by using the BitBucket issue tracker. .SH AUTHOR Written and maintained by Kenneth J. Pronovici with contributions from others. .SH COPYRIGHT Copyright (c) 2004\-2011,2013\-2015 Kenneth J. Pronovici. .PP This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. CedarBackup3-3.1.6/doc/interface/0002775000175000017500000000000012657665551020223 5ustar pronovicpronovic00000000000000CedarBackup3-3.1.6/doc/interface/toc-CedarBackup3.extend.split-module.html0000664000175000017500000000411412657665544030026 0ustar pronovicpronovic00000000000000 split

    Module split


    Classes

    LocalConfig
    SplitConfig

    Functions

    executeAction

    Variables

    SPLIT_COMMAND
    SPLIT_INDICATOR
    __package__
    logger

    [hide private] CedarBackup3-3.1.6/doc/interface/CedarBackup3.extend.mbox-pysrc.html0000664000175000017500000223023612657665545026741 0ustar pronovicpronovic00000000000000 CedarBackup3.extend.mbox
    Package CedarBackup3 :: Package extend :: Module mbox

    Source Code for Module CedarBackup3.extend.mbox

       1  # -*- coding: iso-8859-1 -*- 
       2  # vim: set ft=python ts=3 sw=3 expandtab: 
       3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
       4  # 
       5  #              C E D A R 
       6  #          S O L U T I O N S       "Software done right." 
       7  #           S O F T W A R E 
       8  # 
       9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      10  # 
      11  # Copyright (c) 2006-2007,2010,2015 Kenneth J. Pronovici. 
      12  # All rights reserved. 
      13  # 
      14  # This program is free software; you can redistribute it and/or 
      15  # modify it under the terms of the GNU General Public License, 
      16  # Version 2, as published by the Free Software Foundation. 
      17  # 
      18  # This program is distributed in the hope that it will be useful, 
      19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
      20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
      21  # 
      22  # Copies of the GNU General Public License are available from 
      23  # the Free Software Foundation website, http://www.gnu.org/. 
      24  # 
      25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      26  # 
      27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
      28  # Language : Python 3 (>= 3.4) 
      29  # Project  : Official Cedar Backup Extensions 
      30  # Purpose  : Provides an extension to back up mbox email files. 
      31  # 
      32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      33   
      34  ######################################################################## 
      35  # Module documentation 
      36  ######################################################################## 
      37   
      38  """ 
      39  Provides an extension to back up mbox email files. 
      40   
      41  Backing up email 
      42  ================ 
      43   
       44     Email folders (often stored as mbox flatfiles) are not well-suited to being backed
      45     up with an incremental backup like the one offered by Cedar Backup.  This is 
      46     because mbox files often change on a daily basis, forcing the incremental 
      47     backup process to back them up every day in order to avoid losing data.  This 
      48     can result in quite a bit of wasted space when backing up large folders.  (Note 
      49     that the alternative maildir format does not share this problem, since it 
      50     typically uses one file per message.) 
      51   
      52     One solution to this problem is to design a smarter incremental backup process, 
      53     which backs up baseline content on the first day of the week, and then backs up 
      54     only new messages added to that folder on every other day of the week.  This way, 
      55     the backup for any single day is only as large as the messages placed into the 
      56     folder on that day.  The backup isn't as "perfect" as the incremental backup 
      57     process, because it doesn't preserve information about messages deleted from 
      58     the backed-up folder.  However, it should be much more space-efficient, and 
      59     in a recovery situation, it seems better to restore too much data rather 
      60     than too little. 
      61   
      62  What is this extension? 
      63  ======================= 
      64   
      65     This is a Cedar Backup extension used to back up mbox email files via the Cedar 
      66     Backup command line.  Individual mbox files or directories containing mbox 
      67     files can be backed up using the same collect modes allowed for filesystems in 
      68     the standard Cedar Backup collect action: weekly, daily, incremental.  It 
      69     implements the "smart" incremental backup process discussed above, using 
      70     functionality provided by the C{grepmail} utility. 
      71   
      72     This extension requires a new configuration section <mbox> and is intended to 
      73     be run either immediately before or immediately after the standard collect 
      74     action.  Aside from its own configuration, it requires the options and collect 
      75     configuration sections in the standard Cedar Backup configuration file. 
      76   
      77     The mbox action is conceptually similar to the standard collect action, 
      78     except that mbox directories are not collected recursively.  This implies 
      79     some configuration changes (i.e. there's no need for global exclusions or an 
      80     ignore file).  If you back up a directory, all of the mbox files in that 
      81     directory are backed up into a single tar file using the indicated 
      82     compression method. 
      83   
      84  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
      85  """ 
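The week-based scheme described above (a baseline on the first day of the week, then only new messages on other days) can be sketched in isolation. This is a hypothetical helper, not the extension's actual code, and the grepmail date-filter arguments are illustrative assumptions:

```python
from datetime import date

# Hypothetical sketch of the "smart" incremental scheme described above.
# On the first day of the backup week we capture the whole mbox file; on
# other days we ask grepmail for messages newer than the previous run.
# The grepmail arguments here are assumptions, not the real invocation.

def buildBackupCommand(mboxPath, today, lastRevision, startOfWeek=0):
    """Return the command list used to capture today's mbox content."""
    if today.weekday() == startOfWeek or lastRevision is None:
        return ["cat", mboxPath]  # weekly baseline: the whole file
    since = lastRevision.strftime("%Y-%m-%d")
    return ["grepmail", "-a", "-d", "since %s" % since, mboxPath]
```

The real extension persists the date of the previous run on disk (the REVISION_PATH_EXTENSION constant above suggests a per-file revision marker) so that each incremental day picks up where the last one left off.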
      86   
      87  ######################################################################## 
      88  # Imported modules 
      89  ######################################################################## 
      90   
      91  # System modules 
      92  import os 
      93  import logging 
      94  import datetime 
      95  import pickle 
      96  import tempfile 
      97  from bz2 import BZ2File 
      98  from gzip import GzipFile 
      99  from functools import total_ordering 
     100   
     101  # Cedar Backup modules 
     102  from CedarBackup3.filesystem import FilesystemList, BackupFileList 
     103  from CedarBackup3.xmlutil import createInputDom, addContainerNode, addStringNode 
     104  from CedarBackup3.xmlutil import isElement, readChildren, readFirstChild, readString, readStringList 
     105  from CedarBackup3.config import VALID_COLLECT_MODES, VALID_COMPRESS_MODES 
     106  from CedarBackup3.util import isStartOfWeek, buildNormalizedPath 
     107  from CedarBackup3.util import resolveCommand, executeCommand 
     108  from CedarBackup3.util import ObjectTypeList, UnorderedList, RegexList, encodePath, changeOwnership 
     109   
     110   
     111  ######################################################################## 
     112  # Module-wide constants and variables 
     113  ######################################################################## 
     114   
     115  logger = logging.getLogger("CedarBackup3.log.extend.mbox") 
     116   
     117  GREPMAIL_COMMAND = [ "grepmail", ] 
     118  REVISION_PATH_EXTENSION = "mboxlast" 
    
    119 120 121 ######################################################################## 122 # MboxFile class definition 123 ######################################################################## 124 125 @total_ordering 126 -class MboxFile(object):
     127 128 """ 129 Class representing mbox file configuration. 130 131 The following restrictions exist on data in this class: 132 133 - The absolute path must be absolute. 134 - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. 135 - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. 136 137 @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, 138 absolutePath, collectMode, compressMode 139 """ 140
    141 - def __init__(self, absolutePath=None, collectMode=None, compressMode=None):
     142 """ 143 Constructor for the C{MboxFile} class. 144 145 You should never directly instantiate this class. 146 147 @param absolutePath: Absolute path to an mbox file on disk. 148 @param collectMode: Overridden collect mode for this mbox file. 149 @param compressMode: Overridden compression mode for this mbox file. 150 """ 151 self._absolutePath = None 152 self._collectMode = None 153 self._compressMode = None 154 self.absolutePath = absolutePath 155 self.collectMode = collectMode 156 self.compressMode = compressMode
    157
    158 - def __repr__(self):
    159 """ 160 Official string representation for class instance. 161 """ 162 return "MboxFile(%s, %s, %s)" % (self.absolutePath, self.collectMode, self.compressMode)
    163
    164 - def __str__(self):
    165 """ 166 Informal string representation for class instance. 167 """ 168 return self.__repr__()
    169
    170 - def __eq__(self, other):
     171 """Equals operator, implemented in terms of original Python 2 compare operator.""" 172 return self.__cmp__(other) == 0
    173
    174 - def __lt__(self, other):
     175 """Less-than operator, implemented in terms of original Python 2 compare operator.""" 176 return self.__cmp__(other) < 0
    177
    178 - def __gt__(self, other):
     179 """Greater-than operator, implemented in terms of original Python 2 compare operator.""" 180 return self.__cmp__(other) > 0
    181
    182 - def __cmp__(self, other):
    183 """ 184 Original Python 2 comparison operator. 185 @param other: Other object to compare to. 186 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 187 """ 188 if other is None: 189 return 1 190 if self.absolutePath != other.absolutePath: 191 if str(self.absolutePath or "") < str(other.absolutePath or ""): 192 return -1 193 else: 194 return 1 195 if self.collectMode != other.collectMode: 196 if str(self.collectMode or "") < str(other.collectMode or ""): 197 return -1 198 else: 199 return 1 200 if self.compressMode != other.compressMode: 201 if str(self.compressMode or "") < str(other.compressMode or ""): 202 return -1 203 else: 204 return 1 205 return 0
    206
    207 - def _setAbsolutePath(self, value):
    208 """ 209 Property target used to set the absolute path. 210 The value must be an absolute path if it is not C{None}. 211 It does not have to exist on disk at the time of assignment. 212 @raise ValueError: If the value is not an absolute path. 213 @raise ValueError: If the value cannot be encoded properly. 214 """ 215 if value is not None: 216 if not os.path.isabs(value): 217 raise ValueError("Absolute path must be, er, an absolute path.") 218 self._absolutePath = encodePath(value)
    219
    220 - def _getAbsolutePath(self):
    221 """ 222 Property target used to get the absolute path. 223 """ 224 return self._absolutePath
    225
    226 - def _setCollectMode(self, value):
    227 """ 228 Property target used to set the collect mode. 229 If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}. 230 @raise ValueError: If the value is not valid. 231 """ 232 if value is not None: 233 if value not in VALID_COLLECT_MODES: 234 raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES) 235 self._collectMode = value
    236
    237 - def _getCollectMode(self):
    238 """ 239 Property target used to get the collect mode. 240 """ 241 return self._collectMode
    242
    243 - def _setCompressMode(self, value):
    244 """ 245 Property target used to set the compress mode. 246 If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. 247 @raise ValueError: If the value is not valid. 248 """ 249 if value is not None: 250 if value not in VALID_COMPRESS_MODES: 251 raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) 252 self._compressMode = value
    253
    254 - def _getCompressMode(self):
    255 """ 256 Property target used to get the compress mode. 257 """ 258 return self._compressMode
    259 260 absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, doc="Absolute path to the mbox file.") 261 collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this mbox file.") 262 compressMode = property(_getCompressMode, _setCompressMode, None, doc="Overridden compress mode for this mbox file.")
    263
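The __eq__/__lt__/__gt__ methods above all delegate to a Python-2-style __cmp__, with @total_ordering supplying the remaining rich comparisons. The same pattern, reduced to a minimal hypothetical class:

```python
from functools import total_ordering

# Minimal stand-in demonstrating the comparison bridge used by MboxFile,
# MboxDir, MboxConfig, and LocalConfig above: one __cmp__ defines the
# field-by-field ordering, and the rich comparisons are derived from it.

@total_ordering
class Pair(object):
    def __init__(self, first=None, second=None):
        self.first = first
        self.second = second

    def __cmp__(self, other):
        """Return -1/0/1, comparing None-safe string forms field by field."""
        if other is None:
            return 1
        for mine, theirs in ((self.first, other.first), (self.second, other.second)):
            if mine != theirs:
                return -1 if str(mine or "") < str(theirs or "") else 1
        return 0

    def __eq__(self, other):
        return self.__cmp__(other) == 0

    def __lt__(self, other):
        return self.__cmp__(other) < 0
```

Given __eq__ and __lt__, @total_ordering fills in __le__, __ge__, and __gt__ automatically; the classes above also spell out __gt__ explicitly, which is harmless.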
    264 265 ######################################################################## 266 # MboxDir class definition 267 ######################################################################## 268 269 @total_ordering 270 -class MboxDir(object):
     271 272 """ 273 Class representing mbox directory configuration. 274 275 The following restrictions exist on data in this class: 276 277 - The absolute path must be absolute. 278 - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. 279 - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. 280 281 Unlike collect directory configuration, this is the only place exclusions 282 are allowed (no global exclusions at the <mbox> configuration level). Also, 283 we only allow relative exclusions and there is no configured ignore file. 284 This is because mbox directory backups are not recursive. 285 286 @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, 287 absolutePath, collectMode, compressMode, relativeExcludePaths, 288 excludePatterns 289 """ 290
    291 - def __init__(self, absolutePath=None, collectMode=None, compressMode=None, 292 relativeExcludePaths=None, excludePatterns=None):
     293 """ 294 Constructor for the C{MboxDir} class. 295 296 You should never directly instantiate this class. 297 298 @param absolutePath: Absolute path to an mbox directory on disk. 299 @param collectMode: Overridden collect mode for this directory. 300 @param compressMode: Overridden compression mode for this directory. 301 @param relativeExcludePaths: List of relative paths to exclude. 302 @param excludePatterns: List of regular expression patterns to exclude. 303 """ 304 self._absolutePath = None 305 self._collectMode = None 306 self._compressMode = None 307 self._relativeExcludePaths = None 308 self._excludePatterns = None 309 self.absolutePath = absolutePath 310 self.collectMode = collectMode 311 self.compressMode = compressMode 312 self.relativeExcludePaths = relativeExcludePaths 313 self.excludePatterns = excludePatterns
    314
    315 - def __repr__(self):
    316 """ 317 Official string representation for class instance. 318 """ 319 return "MboxDir(%s, %s, %s, %s, %s)" % (self.absolutePath, self.collectMode, self.compressMode, 320 self.relativeExcludePaths, self.excludePatterns)
    321
    322 - def __str__(self):
    323 """ 324 Informal string representation for class instance. 325 """ 326 return self.__repr__()
    327
    328 - def __eq__(self, other):
     329 """Equals operator, implemented in terms of original Python 2 compare operator.""" 330 return self.__cmp__(other) == 0
    331
    332 - def __lt__(self, other):
     333 """Less-than operator, implemented in terms of original Python 2 compare operator.""" 334 return self.__cmp__(other) < 0
    335
    336 - def __gt__(self, other):
     337 """Greater-than operator, implemented in terms of original Python 2 compare operator.""" 338 return self.__cmp__(other) > 0
    339
    340 - def __cmp__(self, other):
    341 """ 342 Original Python 2 comparison operator. 343 @param other: Other object to compare to. 344 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 345 """ 346 if other is None: 347 return 1 348 if self.absolutePath != other.absolutePath: 349 if str(self.absolutePath or "") < str(other.absolutePath or ""): 350 return -1 351 else: 352 return 1 353 if self.collectMode != other.collectMode: 354 if str(self.collectMode or "") < str(other.collectMode or ""): 355 return -1 356 else: 357 return 1 358 if self.compressMode != other.compressMode: 359 if str(self.compressMode or "") < str(other.compressMode or ""): 360 return -1 361 else: 362 return 1 363 if self.relativeExcludePaths != other.relativeExcludePaths: 364 if self.relativeExcludePaths < other.relativeExcludePaths: 365 return -1 366 else: 367 return 1 368 if self.excludePatterns != other.excludePatterns: 369 if self.excludePatterns < other.excludePatterns: 370 return -1 371 else: 372 return 1 373 return 0
    374
    375 - def _setAbsolutePath(self, value):
    376 """ 377 Property target used to set the absolute path. 378 The value must be an absolute path if it is not C{None}. 379 It does not have to exist on disk at the time of assignment. 380 @raise ValueError: If the value is not an absolute path. 381 @raise ValueError: If the value cannot be encoded properly. 382 """ 383 if value is not None: 384 if not os.path.isabs(value): 385 raise ValueError("Absolute path must be, er, an absolute path.") 386 self._absolutePath = encodePath(value)
    387
    388 - def _getAbsolutePath(self):
    389 """ 390 Property target used to get the absolute path. 391 """ 392 return self._absolutePath
    393
    394 - def _setCollectMode(self, value):
    395 """ 396 Property target used to set the collect mode. 397 If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}. 398 @raise ValueError: If the value is not valid. 399 """ 400 if value is not None: 401 if value not in VALID_COLLECT_MODES: 402 raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES) 403 self._collectMode = value
    404
    405 - def _getCollectMode(self):
    406 """ 407 Property target used to get the collect mode. 408 """ 409 return self._collectMode
    410
    411 - def _setCompressMode(self, value):
    412 """ 413 Property target used to set the compress mode. 414 If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. 415 @raise ValueError: If the value is not valid. 416 """ 417 if value is not None: 418 if value not in VALID_COMPRESS_MODES: 419 raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) 420 self._compressMode = value
    421
    422 - def _getCompressMode(self):
    423 """ 424 Property target used to get the compress mode. 425 """ 426 return self._compressMode
    427
    428 - def _setRelativeExcludePaths(self, value):
    429 """ 430 Property target used to set the relative exclude paths list. 431 Elements do not have to exist on disk at the time of assignment. 432 """ 433 if value is None: 434 self._relativeExcludePaths = None 435 else: 436 try: 437 saved = self._relativeExcludePaths 438 self._relativeExcludePaths = UnorderedList() 439 self._relativeExcludePaths.extend(value) 440 except Exception as e: 441 self._relativeExcludePaths = saved 442 raise e
    443
    444 - def _getRelativeExcludePaths(self):
    445 """ 446 Property target used to get the relative exclude paths list. 447 """ 448 return self._relativeExcludePaths
    449
    450 - def _setExcludePatterns(self, value):
    451 """ 452 Property target used to set the exclude patterns list. 453 """ 454 if value is None: 455 self._excludePatterns = None 456 else: 457 try: 458 saved = self._excludePatterns 459 self._excludePatterns = RegexList() 460 self._excludePatterns.extend(value) 461 except Exception as e: 462 self._excludePatterns = saved 463 raise e
    464
    465 - def _getExcludePatterns(self):
    466 """ 467 Property target used to get the exclude patterns list. 468 """ 469 return self._excludePatterns
    470 471 absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, doc="Absolute path to the mbox directory.") 472 collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this mbox directory.") 473 compressMode = property(_getCompressMode, _setCompressMode, None, doc="Overridden compress mode for this mbox directory.") 474 relativeExcludePaths = property(_getRelativeExcludePaths, _setRelativeExcludePaths, None, "List of relative paths to exclude.") 475 excludePatterns = property(_getExcludePatterns, _setExcludePatterns, None, "List of regular expression patterns to exclude.")
    476
    477 478 ######################################################################## 479 # MboxConfig class definition 480 ######################################################################## 481 482 @total_ordering 483 -class MboxConfig(object):
    484 485 """ 486 Class representing mbox configuration. 487 488 Mbox configuration is used for backing up mbox email files. 489 490 The following restrictions exist on data in this class: 491 492 - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. 493 - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. 494 - The C{mboxFiles} list must be a list of C{MboxFile} objects 495 - The C{mboxDirs} list must be a list of C{MboxDir} objects 496 497 For the C{mboxFiles} and C{mboxDirs} lists, validation is accomplished 498 through the L{util.ObjectTypeList} list implementation that overrides common 499 list methods and transparently ensures that each element is of the proper 500 type. 501 502 Unlike collect configuration, no global exclusions are allowed on this 503 level. We only allow relative exclusions at the mbox directory level. 504 Also, there is no configured ignore file. This is because mbox directory 505 backups are not recursive. 506 507 @note: Lists within this class are "unordered" for equality comparisons. 508 509 @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, 510 collectMode, compressMode, mboxFiles, mboxDirs 511 """ 512
    513 - def __init__(self, collectMode=None, compressMode=None, mboxFiles=None, mboxDirs=None):
    514 """ 515 Constructor for the C{MboxConfig} class. 516 517 @param collectMode: Default collect mode. 518 @param compressMode: Default compress mode. 519 @param mboxFiles: List of mbox files to back up 520 @param mboxDirs: List of mbox directories to back up 521 522 @raise ValueError: If one of the values is invalid. 523 """ 524 self._collectMode = None 525 self._compressMode = None 526 self._mboxFiles = None 527 self._mboxDirs = None 528 self.collectMode = collectMode 529 self.compressMode = compressMode 530 self.mboxFiles = mboxFiles 531 self.mboxDirs = mboxDirs
    532
    533 - def __repr__(self):
    534 """ 535 Official string representation for class instance. 536 """ 537 return "MboxConfig(%s, %s, %s, %s)" % (self.collectMode, self.compressMode, self.mboxFiles, self.mboxDirs)
    538
    539 - def __str__(self):
    540 """ 541 Informal string representation for class instance. 542 """ 543 return self.__repr__()
    544
    545 - def __eq__(self, other):
     546 """Equals operator, implemented in terms of original Python 2 compare operator.""" 547 return self.__cmp__(other) == 0
    548
    549 - def __lt__(self, other):
     550 """Less-than operator, implemented in terms of original Python 2 compare operator.""" 551 return self.__cmp__(other) < 0
    552
    553 - def __gt__(self, other):
     554 """Greater-than operator, implemented in terms of original Python 2 compare operator.""" 555 return self.__cmp__(other) > 0
    556
    557 - def __cmp__(self, other):
    558 """ 559 Original Python 2 comparison operator. 560 Lists within this class are "unordered" for equality comparisons. 561 @param other: Other object to compare to. 562 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 563 """ 564 if other is None: 565 return 1 566 if self.collectMode != other.collectMode: 567 if str(self.collectMode or "") < str(other.collectMode or ""): 568 return -1 569 else: 570 return 1 571 if self.compressMode != other.compressMode: 572 if str(self.compressMode or "") < str(other.compressMode or ""): 573 return -1 574 else: 575 return 1 576 if self.mboxFiles != other.mboxFiles: 577 if self.mboxFiles < other.mboxFiles: 578 return -1 579 else: 580 return 1 581 if self.mboxDirs != other.mboxDirs: 582 if self.mboxDirs < other.mboxDirs: 583 return -1 584 else: 585 return 1 586 return 0
    587
    588 - def _setCollectMode(self, value):
    589 """ 590 Property target used to set the collect mode. 591 If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}. 592 @raise ValueError: If the value is not valid. 593 """ 594 if value is not None: 595 if value not in VALID_COLLECT_MODES: 596 raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES) 597 self._collectMode = value
    598
    599 - def _getCollectMode(self):
    600 """ 601 Property target used to get the collect mode. 602 """ 603 return self._collectMode
    604
    605 - def _setCompressMode(self, value):
    606 """ 607 Property target used to set the compress mode. 608 If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. 609 @raise ValueError: If the value is not valid. 610 """ 611 if value is not None: 612 if value not in VALID_COMPRESS_MODES: 613 raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) 614 self._compressMode = value
    615
    616 - def _getCompressMode(self):
    617 """ 618 Property target used to get the compress mode. 619 """ 620 return self._compressMode
    621
    622 - def _setMboxFiles(self, value):
    623 """ 624 Property target used to set the mboxFiles list. 625 Either the value must be C{None} or each element must be an C{MboxFile}. 626 @raise ValueError: If the value is not an C{MboxFile} 627 """ 628 if value is None: 629 self._mboxFiles = None 630 else: 631 try: 632 saved = self._mboxFiles 633 self._mboxFiles = ObjectTypeList(MboxFile, "MboxFile") 634 self._mboxFiles.extend(value) 635 except Exception as e: 636 self._mboxFiles = saved 637 raise e
    638
    639 - def _getMboxFiles(self):
    640 """ 641 Property target used to get the mboxFiles list. 642 """ 643 return self._mboxFiles
    644
    645 - def _setMboxDirs(self, value):
    646 """ 647 Property target used to set the mboxDirs list. 648 Either the value must be C{None} or each element must be an C{MboxDir}. 649 @raise ValueError: If the value is not an C{MboxDir} 650 """ 651 if value is None: 652 self._mboxDirs = None 653 else: 654 try: 655 saved = self._mboxDirs 656 self._mboxDirs = ObjectTypeList(MboxDir, "MboxDir") 657 self._mboxDirs.extend(value) 658 except Exception as e: 659 self._mboxDirs = saved 660 raise e
    661
    662 - def _getMboxDirs(self):
    663 """ 664 Property target used to get the mboxDirs list. 665 """ 666 return self._mboxDirs
    667 668 collectMode = property(_getCollectMode, _setCollectMode, None, doc="Default collect mode.") 669 compressMode = property(_getCompressMode, _setCompressMode, None, doc="Default compress mode.") 670 mboxFiles = property(_getMboxFiles, _setMboxFiles, None, doc="List of mbox files to back up.") 671 mboxDirs = property(_getMboxDirs, _setMboxDirs, None, doc="List of mbox directories to back up.")
    672
    673 674 ######################################################################## 675 # LocalConfig class definition 676 ######################################################################## 677 678 @total_ordering 679 -class LocalConfig(object):
    680 681 """ 682 Class representing this extension's configuration document. 683 684 This is not a general-purpose configuration object like the main Cedar 685 Backup configuration object. Instead, it just knows how to parse and emit 686 Mbox-specific configuration values. Third parties who need to read and 687 write configuration related to this extension should access it through the 688 constructor, C{validate} and C{addConfig} methods. 689 690 @note: Lists within this class are "unordered" for equality comparisons. 691 692 @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, mbox, 693 validate, addConfig 694 """ 695
    696 - def __init__(self, xmlData=None, xmlPath=None, validate=True):
    697 """ 698 Initializes a configuration object. 699 700 If you initialize the object without passing either C{xmlData} or 701 C{xmlPath} then configuration will be empty and will be invalid until it 702 is filled in properly. 703 704 No reference to the original XML data or original path is saved off by 705 this class. Once the data has been parsed (successfully or not) this 706 original information is discarded. 707 708 Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} 709 method will be called (with its default arguments) against configuration 710 after successfully parsing any passed-in XML. Keep in mind that even if 711 C{validate} is C{False}, it might not be possible to parse the passed-in 712 XML document if lower-level validations fail. 713 714 @note: It is strongly suggested that the C{validate} option always be set 715 to C{True} (the default) unless there is a specific need to read in 716 invalid configuration from disk. 717 718 @param xmlData: XML data representing configuration. 719 @type xmlData: String data. 720 721 @param xmlPath: Path to an XML file on disk. 722 @type xmlPath: Absolute path to a file on disk. 723 724 @param validate: Validate the document after parsing it. 725 @type validate: Boolean true/false. 726 727 @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. 728 @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. 729 @raise ValueError: If the parsed configuration document is not valid. 730 """ 731 self._mbox = None 732 self.mbox = None 733 if xmlData is not None and xmlPath is not None: 734 raise ValueError("Use either xmlData or xmlPath, but not both.") 735 if xmlData is not None: 736 self._parseXmlData(xmlData) 737 if validate: 738 self.validate() 739 elif xmlPath is not None: 740 with open(xmlPath) as f: 741 xmlData = f.read() 742 self._parseXmlData(xmlData) 743 if validate: 744 self.validate()
    745
    746 - def __repr__(self):
    747 """ 748 Official string representation for class instance. 749 """ 750 return "LocalConfig(%s)" % (self.mbox)
    751
    752 - def __str__(self):
    753 """ 754 Informal string representation for class instance. 755 """ 756 return self.__repr__()
    757
    758 - def __eq__(self, other):
     759 """Equals operator, implemented in terms of original Python 2 compare operator.""" 760 return self.__cmp__(other) == 0
    761
    762 - def __lt__(self, other):
     763 """Less-than operator, implemented in terms of original Python 2 compare operator.""" 764 return self.__cmp__(other) < 0
    765
    766 - def __gt__(self, other):
     767 """Greater-than operator, implemented in terms of original Python 2 compare operator.""" 768 return self.__cmp__(other) > 0
    769
    770 - def __cmp__(self, other):
    771 """ 772 Original Python 2 comparison operator. 773 Lists within this class are "unordered" for equality comparisons. 774 @param other: Other object to compare to. 775 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 776 """ 777 if other is None: 778 return 1 779 if self.mbox != other.mbox: 780 if self.mbox < other.mbox: 781 return -1 782 else: 783 return 1 784 return 0
    785
    786 - def _setMbox(self, value):
    787 """ 788 Property target used to set the mbox configuration value. 789 If not C{None}, the value must be a C{MboxConfig} object. 790 @raise ValueError: If the value is not a C{MboxConfig} 791 """ 792 if value is None: 793 self._mbox = None 794 else: 795 if not isinstance(value, MboxConfig): 796 raise ValueError("Value must be a C{MboxConfig} object.") 797 self._mbox = value
    798
    799 - def _getMbox(self):
    800 """ 801 Property target used to get the mbox configuration value. 802 """ 803 return self._mbox
    804 805 mbox = property(_getMbox, _setMbox, None, "Mbox configuration in terms of a C{MboxConfig} object.") 806
    807 - def validate(self):
     808 """ 809 Validates configuration represented by the object. 810 811 Mbox configuration must be filled in. Within that, the collect mode and 812 compress mode are both optional, but at least one mbox file or 813 directory must be configured. 814 815 Each configured file or directory must contain an absolute path, and then 816 must be either able to take collect mode and compress mode configuration 817 from the parent C{MboxConfig} object, or must set each value on its own. 818 819 @raise ValueError: If one of the validations fails. 820 """ 821 if self.mbox is None: 822 raise ValueError("Mbox section is required.") 823 if (self.mbox.mboxFiles is None or len(self.mbox.mboxFiles) < 1) and \ 824 (self.mbox.mboxDirs is None or len(self.mbox.mboxDirs) < 1): 825 raise ValueError("At least one mbox file or directory must be configured.") 826 if self.mbox.mboxFiles is not None: 827 for mboxFile in self.mbox.mboxFiles: 828 if mboxFile.absolutePath is None: 829 raise ValueError("Each mbox file must set an absolute path.") 830 if self.mbox.collectMode is None and mboxFile.collectMode is None: 831 raise ValueError("Collect mode must either be set in parent mbox section or individual mbox file.") 832 if self.mbox.compressMode is None and mboxFile.compressMode is None: 833 raise ValueError("Compress mode must either be set in parent mbox section or individual mbox file.") 834 if self.mbox.mboxDirs is not None: 835 for mboxDir in self.mbox.mboxDirs: 836 if mboxDir.absolutePath is None: 837 raise ValueError("Each mbox directory must set an absolute path.") 838 if self.mbox.collectMode is None and mboxDir.collectMode is None: 839 raise ValueError("Collect mode must either be set in parent mbox section or individual mbox directory.") 840 if self.mbox.compressMode is None and mboxDir.compressMode is None: 841 raise ValueError("Compress mode must either be set in parent mbox section or individual mbox directory.")
    842
def addConfig(self, xmlDom, parentNode):
    """
    Adds an <mbox> configuration section as the next child of a parent.

    Third parties should use this function to write configuration related to
    this extension.

    We add the following fields to the document::

       collectMode    //cb_config/mbox/collect_mode
       compressMode   //cb_config/mbox/compress_mode

    We also add groups of the following items, one list element per item::

       mboxFiles      //cb_config/mbox/file
       mboxDirs       //cb_config/mbox/dir

    The mbox files and mbox directories are added by L{_addMboxFile} and
    L{_addMboxDir}.

    @param xmlDom: DOM tree as from C{impl.createDocument()}.
    @param parentNode: Parent that the section should be appended to.
    """
    if self.mbox is not None:
        sectionNode = addContainerNode(xmlDom, parentNode, "mbox")
        addStringNode(xmlDom, sectionNode, "collect_mode", self.mbox.collectMode)
        addStringNode(xmlDom, sectionNode, "compress_mode", self.mbox.compressMode)
        if self.mbox.mboxFiles is not None:
            for mboxFile in self.mbox.mboxFiles:
                LocalConfig._addMboxFile(xmlDom, sectionNode, mboxFile)
        if self.mbox.mboxDirs is not None:
            for mboxDir in self.mbox.mboxDirs:
                LocalConfig._addMboxDir(xmlDom, sectionNode, mboxDir)
def _parseXmlData(self, xmlData):
    """
    Internal method to parse an XML string into the object.

    This method parses the XML document into a DOM tree (C{xmlDom}) and then
    calls a static method to parse the mbox configuration section.

    @param xmlData: XML data to be parsed
    @type xmlData: String data

    @raise ValueError: If the XML cannot be successfully parsed.
    """
    (xmlDom, parentNode) = createInputDom(xmlData)
    self._mbox = LocalConfig._parseMbox(parentNode)

@staticmethod
def _parseMbox(parent):
    """
    Parses an mbox configuration section.

    We read the following individual fields::

       collectMode    //cb_config/mbox/collect_mode
       compressMode   //cb_config/mbox/compress_mode

    We also read groups of the following items, one list element per item::

       mboxFiles      //cb_config/mbox/file
       mboxDirs       //cb_config/mbox/dir

    The mbox files are parsed by L{_parseMboxFiles} and the mbox
    directories are parsed by L{_parseMboxDirs}.

    @param parent: Parent node to search beneath.

    @return: C{MboxConfig} object or C{None} if the section does not exist.
    @raise ValueError: If some filled-in value is invalid.
    """
    mbox = None
    section = readFirstChild(parent, "mbox")
    if section is not None:
        mbox = MboxConfig()
        mbox.collectMode = readString(section, "collect_mode")
        mbox.compressMode = readString(section, "compress_mode")
        mbox.mboxFiles = LocalConfig._parseMboxFiles(section)
        mbox.mboxDirs = LocalConfig._parseMboxDirs(section)
    return mbox
@staticmethod
def _parseMboxFiles(parent):
    """
    Reads a list of C{MboxFile} objects from immediately beneath the parent.

    We read the following individual fields::

       absolutePath   abs_path
       collectMode    collect_mode
       compressMode   compress_mode

    @param parent: Parent node to search beneath.

    @return: List of C{MboxFile} objects or C{None} if none are found.
    @raise ValueError: If some filled-in value is invalid.
    """
    lst = []
    for entry in readChildren(parent, "file"):
        if isElement(entry):
            mboxFile = MboxFile()
            mboxFile.absolutePath = readString(entry, "abs_path")
            mboxFile.collectMode = readString(entry, "collect_mode")
            mboxFile.compressMode = readString(entry, "compress_mode")
            lst.append(mboxFile)
    if lst == []:
        lst = None
    return lst
@staticmethod
def _parseMboxDirs(parent):
    """
    Reads a list of C{MboxDir} objects from immediately beneath the parent.

    We read the following individual fields::

       absolutePath   abs_path
       collectMode    collect_mode
       compressMode   compress_mode

    We also read groups of the following items, one list element per item::

       relativeExcludePaths   exclude/rel_path
       excludePatterns        exclude/pattern

    The exclusions are parsed by L{_parseExclusions}.

    @param parent: Parent node to search beneath.

    @return: List of C{MboxDir} objects or C{None} if none are found.
    @raise ValueError: If some filled-in value is invalid.
    """
    lst = []
    for entry in readChildren(parent, "dir"):
        if isElement(entry):
            mboxDir = MboxDir()
            mboxDir.absolutePath = readString(entry, "abs_path")
            mboxDir.collectMode = readString(entry, "collect_mode")
            mboxDir.compressMode = readString(entry, "compress_mode")
            (mboxDir.relativeExcludePaths, mboxDir.excludePatterns) = LocalConfig._parseExclusions(entry)
            lst.append(mboxDir)
    if lst == []:
        lst = None
    return lst
@staticmethod
def _parseExclusions(parentNode):
    """
    Reads exclusions data from immediately beneath the parent.

    We read groups of the following items, one list element per item::

       relative    exclude/rel_path
       patterns    exclude/pattern

    If there are none of some pattern (i.e. no relative path items) then
    C{None} will be returned for that item in the tuple.

    @param parentNode: Parent node to search beneath.

    @return: Tuple of (relative, patterns) exclusions.
    """
    section = readFirstChild(parentNode, "exclude")
    if section is None:
        return (None, None)
    else:
        relative = readStringList(section, "rel_path")
        patterns = readStringList(section, "pattern")
        return (relative, patterns)
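The shape of the <exclude> section this method expects can be sketched with the standard library alone.  The helper names used above (readFirstChild, readStringList) come from CedarBackup3's own xmlutil module, so this standalone sketch substitutes xml.dom.minidom:

```python
import xml.dom.minidom

# Standalone sketch (assumption: CedarBackup3's xmlutil helpers are not
# available) of how the <exclude> section inside a <dir> element is laid
# out and read back.
xml_data = (
    "<dir>"
    "<exclude>"
    "<rel_path>spam</rel_path>"
    "<pattern>.*\\.lock</pattern>"
    "</exclude>"
    "</dir>"
)
dom = xml.dom.minidom.parseString(xml_data)
exclude = dom.getElementsByTagName("exclude")[0]
relative = [n.firstChild.data for n in exclude.getElementsByTagName("rel_path")]
patterns = [n.firstChild.data for n in exclude.getElementsByTagName("pattern")]
```

This yields the same (relative, patterns) tuple that _parseExclusions builds, modulo the None-for-empty convention.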
@staticmethod
def _addMboxFile(xmlDom, parentNode, mboxFile):
    """
    Adds an mbox file container as the next child of a parent.

    We add the following fields to the document::

       absolutePath   file/abs_path
       collectMode    file/collect_mode
       compressMode   file/compress_mode

    The <file> node itself is created as the next child of the parent node.
    This method only adds one mbox file node.  The parent must loop for each
    mbox file in the C{MboxConfig} object.

    If C{mboxFile} is C{None}, this method call will be a no-op.

    @param xmlDom: DOM tree as from C{impl.createDocument()}.
    @param parentNode: Parent that the section should be appended to.
    @param mboxFile: MboxFile to be added to the document.
    """
    if mboxFile is not None:
        sectionNode = addContainerNode(xmlDom, parentNode, "file")
        addStringNode(xmlDom, sectionNode, "abs_path", mboxFile.absolutePath)
        addStringNode(xmlDom, sectionNode, "collect_mode", mboxFile.collectMode)
        addStringNode(xmlDom, sectionNode, "compress_mode", mboxFile.compressMode)
@staticmethod
def _addMboxDir(xmlDom, parentNode, mboxDir):
    """
    Adds an mbox directory container as the next child of a parent.

    We add the following fields to the document::

       absolutePath   dir/abs_path
       collectMode    dir/collect_mode
       compressMode   dir/compress_mode

    We also add groups of the following items, one list element per item::

       relativeExcludePaths   dir/exclude/rel_path
       excludePatterns        dir/exclude/pattern

    The <dir> node itself is created as the next child of the parent node.
    This method only adds one mbox directory node.  The parent must loop for
    each mbox directory in the C{MboxConfig} object.

    If C{mboxDir} is C{None}, this method call will be a no-op.

    @param xmlDom: DOM tree as from C{impl.createDocument()}.
    @param parentNode: Parent that the section should be appended to.
    @param mboxDir: MboxDir to be added to the document.
    """
    if mboxDir is not None:
        sectionNode = addContainerNode(xmlDom, parentNode, "dir")
        addStringNode(xmlDom, sectionNode, "abs_path", mboxDir.absolutePath)
        addStringNode(xmlDom, sectionNode, "collect_mode", mboxDir.collectMode)
        addStringNode(xmlDom, sectionNode, "compress_mode", mboxDir.compressMode)
        if ((mboxDir.relativeExcludePaths is not None and mboxDir.relativeExcludePaths != []) or
            (mboxDir.excludePatterns is not None and mboxDir.excludePatterns != [])):
            excludeNode = addContainerNode(xmlDom, sectionNode, "exclude")
            if mboxDir.relativeExcludePaths is not None:
                for relativePath in mboxDir.relativeExcludePaths:
                    addStringNode(xmlDom, excludeNode, "rel_path", relativePath)
            if mboxDir.excludePatterns is not None:
                for pattern in mboxDir.excludePatterns:
                    addStringNode(xmlDom, excludeNode, "pattern", pattern)
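For reference, the addContainerNode()/addStringNode() helpers used above (from CedarBackup3's xmlutil module) boil down to ordinary DOM operations.  A minimal standalone equivalent, built only on xml.dom.minidom:

```python
import xml.dom.minidom

impl = xml.dom.minidom.getDOMImplementation()
doc = impl.createDocument(None, "cb_config", None)

# Rough equivalent of addContainerNode(xmlDom, parentNode, "mbox"):
# create an empty container element and append it to the parent.
mbox = doc.createElement("mbox")
doc.documentElement.appendChild(mbox)

# Rough equivalent of addStringNode(xmlDom, sectionNode, "collect_mode", "incr"):
# create an element holding a single text node.
node = doc.createElement("collect_mode")
node.appendChild(doc.createTextNode("incr"))
mbox.appendChild(node)
```

This is a sketch of the underlying mechanism, not the real helpers, which also handle None values and formatting.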
########################################################################
# Public functions
########################################################################

###########################
# executeAction() function
###########################

def executeAction(configPath, options, config):
    """
    Executes the mbox backup action.

    @param configPath: Path to configuration file on disk.
    @type configPath: String representing a path on disk.

    @param options: Program command-line options.
    @type options: Options object.

    @param config: Program configuration.
    @type config: Config object.

    @raise ValueError: Under many generic error conditions
    @raise IOError: If a backup could not be written for some reason.
    """
    logger.debug("Executing mbox extended action.")
    newRevision = datetime.datetime.today()  # mark here so all actions are after this date/time
    if config.options is None or config.collect is None:
        raise ValueError("Cedar Backup configuration is not properly filled in.")
    local = LocalConfig(xmlPath=configPath)
    todayIsStart = isStartOfWeek(config.options.startingDay)
    fullBackup = options.full or todayIsStart
    logger.debug("Full backup flag is [%s]", fullBackup)
    if local.mbox.mboxFiles is not None:
        for mboxFile in local.mbox.mboxFiles:
            logger.debug("Working with mbox file [%s]", mboxFile.absolutePath)
            collectMode = _getCollectMode(local, mboxFile)
            compressMode = _getCompressMode(local, mboxFile)
            lastRevision = _loadLastRevision(config, mboxFile, fullBackup, collectMode)
            if fullBackup or (collectMode in ['daily', 'incr']) or (collectMode == 'weekly' and todayIsStart):
                logger.debug("Mbox file meets criteria to be backed up today.")
                _backupMboxFile(config, mboxFile.absolutePath, fullBackup,
                                collectMode, compressMode, lastRevision, newRevision)
            else:
                logger.debug("Mbox file will not be backed up, per collect mode.")
            if collectMode == 'incr':
                _writeNewRevision(config, mboxFile, newRevision)
    if local.mbox.mboxDirs is not None:
        for mboxDir in local.mbox.mboxDirs:
            logger.debug("Working with mbox directory [%s]", mboxDir.absolutePath)
            collectMode = _getCollectMode(local, mboxDir)
            compressMode = _getCompressMode(local, mboxDir)
            lastRevision = _loadLastRevision(config, mboxDir, fullBackup, collectMode)
            (excludePaths, excludePatterns) = _getExclusions(mboxDir)
            if fullBackup or (collectMode in ['daily', 'incr']) or (collectMode == 'weekly' and todayIsStart):
                logger.debug("Mbox directory meets criteria to be backed up today.")
                _backupMboxDir(config, mboxDir.absolutePath,
                               fullBackup, collectMode, compressMode,
                               lastRevision, newRevision,
                               excludePaths, excludePatterns)
            else:
                logger.debug("Mbox directory will not be backed up, per collect mode.")
            if collectMode == 'incr':
                _writeNewRevision(config, mboxDir, newRevision)
    logger.info("Executed the mbox extended action successfully.")
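The per-item decision in executeAction() reduces to a small predicate.  A standalone sketch (the helper name is hypothetical, not part of the module):

```python
def should_back_up(full_backup, collect_mode, today_is_start):
    # Mirror of the condition applied to each mbox file and directory above:
    # always back up on a full backup, always for 'daily' and 'incr' modes,
    # and for 'weekly' mode only on the configured starting day.
    return (full_backup
            or collect_mode in ("daily", "incr")
            or (collect_mode == "weekly" and today_is_start))
```

Note that because 'incr' items are backed up every day, the trailing `if collectMode == 'incr'` branch writes a fresh revision marker on every run.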
def _getCollectMode(local, item):
    """
    Gets the collect mode that should be used for an mbox file or directory.
    Use file- or directory-specific value if possible, otherwise take from mbox section.
    @param local: LocalConfig object.
    @param item: Mbox file or directory
    @return: Collect mode to use.
    """
    if item.collectMode is None:
        collectMode = local.mbox.collectMode
    else:
        collectMode = item.collectMode
    logger.debug("Collect mode is [%s]", collectMode)
    return collectMode
def _getCompressMode(local, item):
    """
    Gets the compress mode that should be used for an mbox file or directory.
    Use file- or directory-specific value if possible, otherwise take from mbox section.
    @param local: LocalConfig object.
    @param item: Mbox file or directory
    @return: Compress mode to use.
    """
    if item.compressMode is None:
        compressMode = local.mbox.compressMode
    else:
        compressMode = item.compressMode
    logger.debug("Compress mode is [%s]", compressMode)
    return compressMode
def _getRevisionPath(config, item):
    """
    Gets the path to the revision file associated with an mbox file or directory.
    @param config: Cedar Backup configuration.
    @param item: Mbox file or directory
    @return: Absolute path to the revision file associated with the item.
    """
    normalized = buildNormalizedPath(item.absolutePath)
    filename = "%s.%s" % (normalized, REVISION_PATH_EXTENSION)
    revisionPath = os.path.join(config.options.workingDir, filename)
    logger.debug("Revision file path is [%s]", revisionPath)
    return revisionPath
def _loadLastRevision(config, item, fullBackup, collectMode):
    """
    Loads the last revision date for this item from disk and returns it.

    If this is a full backup, or if the revision file cannot be loaded for some
    reason, then C{None} is returned.  This indicates that there is no previous
    revision, so the entire mail file or directory should be backed up.

    @note: We write the actual revision object to disk via pickle, so we don't
    deal with the datetime precision or format at all.  Whatever's in the object
    is what we write.

    @param config: Cedar Backup configuration.
    @param item: Mbox file or directory
    @param fullBackup: Indicates whether this is a full backup
    @param collectMode: Indicates the collect mode for this item

    @return: Revision date as a datetime.datetime object or C{None}.
    """
    revisionPath = _getRevisionPath(config, item)
    if fullBackup:
        revisionDate = None
        logger.debug("Revision file ignored because this is a full backup.")
    elif collectMode in ['weekly', 'daily']:
        revisionDate = None
        logger.debug("No revision file based on collect mode [%s].", collectMode)
    else:
        logger.debug("Revision file will be used for non-full incremental backup.")
        if not os.path.isfile(revisionPath):
            revisionDate = None
            logger.debug("Revision file [%s] does not exist on disk.", revisionPath)
        else:
            try:
                with open(revisionPath, "rb") as f:
                    revisionDate = pickle.load(f, fix_imports=True)  # be compatible with Python 2
                logger.debug("Loaded revision file [%s] from disk: [%s]", revisionPath, revisionDate)
            except Exception as e:
                revisionDate = None
                logger.error("Failed loading revision file [%s] from disk: %s", revisionPath, e)
    return revisionDate
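The revision file is nothing more than a pickled datetime; protocol 0 with fix_imports=True keeps it readable by Python 2, matching the calls above.  A self-contained round trip (the path is illustrative):

```python
import datetime
import os
import pickle
import tempfile

revision = datetime.datetime(2016, 2, 13, 4, 0, 0)
revisionPath = os.path.join(tempfile.mkdtemp(), "mbox.revision")  # stand-in for the working-dir path

# Write exactly as _writeNewRevision does: protocol 0, Python 2 compatible.
with open(revisionPath, "wb") as f:
    pickle.dump(revision, f, 0, fix_imports=True)

# Read exactly as _loadLastRevision does.
with open(revisionPath, "rb") as f:
    loaded = pickle.load(f, fix_imports=True)
```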
def _writeNewRevision(config, item, newRevision):
    """
    Writes new revision information to disk.

    If we can't write the revision file successfully for any reason, we'll log
    the condition but won't throw an exception.

    @note: We write the actual revision object to disk via pickle, so we don't
    deal with the datetime precision or format at all.  Whatever's in the object
    is what we write.

    @param config: Cedar Backup configuration.
    @param item: Mbox file or directory
    @param newRevision: Revision date as a datetime.datetime object.
    """
    revisionPath = _getRevisionPath(config, item)
    try:
        with open(revisionPath, "wb") as f:
            pickle.dump(newRevision, f, 0, fix_imports=True)  # be compatible with Python 2
        changeOwnership(revisionPath, config.options.backupUser, config.options.backupGroup)
        logger.debug("Wrote new revision file [%s] to disk: [%s]", revisionPath, newRevision)
    except Exception as e:
        logger.error("Failed to write revision file [%s] to disk: %s", revisionPath, e)
def _getExclusions(mboxDir):
    """
    Gets exclusions (files and patterns) associated with an mbox directory.

    The returned files value is a list of absolute paths to be excluded from the
    backup for a given directory.  It is derived from the mbox directory's
    relative exclude paths.

    The returned patterns value is a list of patterns to be excluded from the
    backup for a given directory.  It is derived from the mbox directory's list
    of patterns.

    @param mboxDir: Mbox directory object.

    @return: Tuple (files, patterns) indicating what to exclude.
    """
    paths = []
    if mboxDir.relativeExcludePaths is not None:
        for relativePath in mboxDir.relativeExcludePaths:
            paths.append(os.path.join(mboxDir.absolutePath, relativePath))
    patterns = []
    if mboxDir.excludePatterns is not None:
        patterns.extend(mboxDir.excludePatterns)
    logger.debug("Exclude paths: %s", paths)
    logger.debug("Exclude patterns: %s", patterns)
    return (paths, patterns)
def _getBackupPath(config, mboxPath, compressMode, newRevision, targetDir=None):
    """
    Gets the backup file path (including correct extension) associated with an mbox path.

    We assume that if the target directory is passed in, that we're backing up a
    directory.  Under these circumstances, we'll just use the basename of the
    individual path as the output file.

    @note: The backup path only contains the current date in YYYYMMDD format,
    but that's OK because the index information (stored elsewhere) is the actual
    date object.

    @param config: Cedar Backup configuration.
    @param mboxPath: Path to the indicated mbox file or directory
    @param compressMode: Compress mode to use for this mbox path
    @param newRevision: Revision this backup path represents
    @param targetDir: Target directory in which the path should exist

    @return: Absolute path to the backup file associated with the mbox path.
    """
    if targetDir is None:
        normalizedPath = buildNormalizedPath(mboxPath)
        revisionDate = newRevision.strftime("%Y%m%d")
        filename = "mbox-%s-%s" % (revisionDate, normalizedPath)
    else:
        filename = os.path.basename(mboxPath)
    if compressMode == 'gzip':
        filename = "%s.gz" % filename
    elif compressMode == 'bzip2':
        filename = "%s.bz2" % filename
    if targetDir is None:
        backupPath = os.path.join(config.collect.targetDir, filename)
    else:
        backupPath = os.path.join(targetDir, filename)
    logger.debug("Backup file path is [%s]", backupPath)
    return backupPath
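The naming scheme ('mbox-YYYYMMDD-<normalized path>' plus a compression suffix) can be reproduced without the project's buildNormalizedPath() helper.  This sketch takes an already-normalized name, so the helper name and inputs are illustrative:

```python
import datetime

def backup_filename(normalized_name, compress_mode, revision):
    # Sketch of _getBackupPath's naming for the non-directory case:
    # date stamp first, then the normalized mbox name, then the suffix.
    filename = "mbox-%s-%s" % (revision.strftime("%Y%m%d"), normalized_name)
    if compress_mode == "gzip":
        filename += ".gz"
    elif compress_mode == "bzip2":
        filename += ".bz2"
    return filename
```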
def _getTarfilePath(config, mboxPath, compressMode, newRevision):
    """
    Gets the tarfile backup file path (including correct extension) associated
    with an mbox path.

    Along with the path, the tar archive mode is returned in a form that can
    be used with L{BackupFileList.generateTarfile}.

    @note: The tarfile path only contains the current date in YYYYMMDD format,
    but that's OK because the index information (stored elsewhere) is the actual
    date object.

    @param config: Cedar Backup configuration.
    @param mboxPath: Path to the indicated mbox file or directory
    @param compressMode: Compress mode to use for this mbox path
    @param newRevision: Revision this backup path represents

    @return: Tuple of (absolute path to tarfile, tar archive mode)
    """
    normalizedPath = buildNormalizedPath(mboxPath)
    revisionDate = newRevision.strftime("%Y%m%d")
    filename = "mbox-%s-%s.tar" % (revisionDate, normalizedPath)
    if compressMode == 'gzip':
        filename = "%s.gz" % filename
        archiveMode = "targz"
    elif compressMode == 'bzip2':
        filename = "%s.bz2" % filename
        archiveMode = "tarbz2"
    else:
        archiveMode = "tar"
    tarfilePath = os.path.join(config.collect.targetDir, filename)
    logger.debug("Tarfile path is [%s]", tarfilePath)
    return (tarfilePath, archiveMode)
def _getOutputFile(backupPath, compressMode):
    """
    Opens the output file used for saving backup information.

    If the compress mode is "gzip", we'll open a C{GzipFile}, and if the
    compress mode is "bzip2", we'll open a C{BZ2File}.  Otherwise, we'll just
    return an object from the normal C{open()} method.

    @param backupPath: Path to file to open.
    @param compressMode: Compress mode of file ("none", "gzip", "bzip2").

    @return: Output file object, opened in binary mode for use with executeCommand()
    """
    if compressMode == "gzip":
        return GzipFile(backupPath, "wb")
    elif compressMode == "bzip2":
        return BZ2File(backupPath, "wb")
    else:
        return open(backupPath, "wb")
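Because GzipFile, BZ2File, and plain binary files share the same write interface, callers can treat the result uniformly.  A standalone sketch of the same dispatch, using only the standard library (the path and payload are illustrative):

```python
import bz2
import gzip
import os
import tempfile

def open_output(path, compress_mode):
    # Same dispatch as _getOutputFile() above: pick the stream type by
    # compress mode, always opening in binary write mode.
    if compress_mode == "gzip":
        return gzip.GzipFile(path, "wb")
    elif compress_mode == "bzip2":
        return bz2.BZ2File(path, "wb")
    else:
        return open(path, "wb")

path = os.path.join(tempfile.mkdtemp(), "mbox.gz")
with open_output(path, "gzip") as outputFile:
    outputFile.write(b"From sender@example.com Sat Feb 13 04:00:00 2016\n")

with gzip.open(path, "rb") as f:
    data = f.read()
```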
def _backupMboxFile(config, absolutePath,
                    fullBackup, collectMode, compressMode,
                    lastRevision, newRevision, targetDir=None):
    """
    Backs up an individual mbox file.

    @param config: Cedar Backup configuration.
    @param absolutePath: Path to mbox file to back up.
    @param fullBackup: Indicates whether this should be a full backup.
    @param collectMode: Indicates the collect mode for this item
    @param compressMode: Compress mode of file ("none", "gzip", "bzip2")
    @param lastRevision: Date of last backup as datetime.datetime
    @param newRevision: Date of new (current) backup as datetime.datetime
    @param targetDir: Target directory to write the backed-up file into

    @raise ValueError: If some value is missing or invalid.
    @raise IOError: If there is a problem backing up the mbox file.
    """
    if fullBackup or collectMode != "incr" or lastRevision is None:
        args = ["-a", "-u", absolutePath]  # remove duplicates but fetch entire mailbox
    else:
        revisionDate = lastRevision.strftime("%Y-%m-%dT%H:%M:%S")  # ISO-8601 format; grepmail calls Date::Parse::str2time()
        args = ["-a", "-u", "-d", "since %s" % revisionDate, absolutePath]
    command = resolveCommand(GREPMAIL_COMMAND)
    backupPath = _getBackupPath(config, absolutePath, compressMode, newRevision, targetDir=targetDir)
    with _getOutputFile(backupPath, compressMode) as outputFile:
        result = executeCommand(command, args, returnOutput=False, ignoreStderr=True, doNotLog=True, outputFile=outputFile)[0]
        if result != 0:
            raise IOError("Error [%d] executing grepmail on [%s]." % (result, absolutePath))
    logger.debug("Completed backing up mailbox [%s].", absolutePath)
    return backupPath
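In the incremental case, grepmail is handed a 'since' clause built from the last revision.  This is what the argument list looks like (the mailbox path is illustrative):

```python
import datetime

lastRevision = datetime.datetime(2016, 2, 1, 3, 30, 0)
# ISO-8601 timestamp, which grepmail parses via Date::Parse::str2time().
revisionDate = lastRevision.strftime("%Y-%m-%dT%H:%M:%S")
# -a: match all messages; -u: remove duplicates; -d: date restriction.
args = ["-a", "-u", "-d", "since %s" % revisionDate, "/var/mail/user"]
```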
def _backupMboxDir(config, absolutePath,
                   fullBackup, collectMode, compressMode,
                   lastRevision, newRevision,
                   excludePaths, excludePatterns):
    """
    Backs up a directory containing mbox files.

    @param config: Cedar Backup configuration.
    @param absolutePath: Path to mbox directory to back up.
    @param fullBackup: Indicates whether this should be a full backup.
    @param collectMode: Indicates the collect mode for this item
    @param compressMode: Compress mode of file ("none", "gzip", "bzip2")
    @param lastRevision: Date of last backup as datetime.datetime
    @param newRevision: Date of new (current) backup as datetime.datetime
    @param excludePaths: List of absolute paths to exclude.
    @param excludePatterns: List of patterns to exclude.

    @raise ValueError: If some value is missing or invalid.
    @raise IOError: If there is a problem backing up the mbox directory.
    """
    tmpdir = None
    tarList = []  # initialized up front so cleanup in 'finally' is safe even if mkdtemp() fails
    try:
        tmpdir = tempfile.mkdtemp(dir=config.options.workingDir)
        mboxList = FilesystemList()
        mboxList.excludeDirs = True
        mboxList.excludePaths = excludePaths
        mboxList.excludePatterns = excludePatterns
        mboxList.addDirContents(absolutePath, recursive=False)
        tarList = BackupFileList()
        for item in mboxList:
            backupPath = _backupMboxFile(config, item, fullBackup,
                                         collectMode, "none",  # no need to compress inside compressed tar
                                         lastRevision, newRevision,
                                         targetDir=tmpdir)
            tarList.addFile(backupPath)
        (tarfilePath, archiveMode) = _getTarfilePath(config, absolutePath, compressMode, newRevision)
        tarList.generateTarfile(tarfilePath, archiveMode, ignore=True, flat=True)
        changeOwnership(tarfilePath, config.options.backupUser, config.options.backupGroup)
        logger.debug("Completed backing up directory [%s].", absolutePath)
    finally:
        for cleanitem in tarList:
            if os.path.exists(cleanitem):
                try:
                    os.remove(cleanitem)
                except: pass
        if tmpdir is not None:
            try:
                os.rmdir(tmpdir)
            except: pass
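The staging pattern used here (mkdtemp, stage per-file backups, tar them up, then always clean up in a finally block) can be reduced to a standard-library skeleton; the file name and contents below are illustrative:

```python
import os
import tempfile

tmpdir = tempfile.mkdtemp()  # stand-in for the working-dir staging directory
staged = []
try:
    # Stand-in for staging one per-file backup into the temporary directory.
    path = os.path.join(tmpdir, "inbox")
    with open(path, "wb") as f:
        f.write(b"From sender@example.com ...\n")
    staged.append(path)
    # ... a real implementation would generate the tarfile from 'staged' here ...
finally:
    # Remove staged files first, then the (now empty) directory.
    for item in staged:
        if os.path.exists(item):
            os.remove(item)
    os.rmdir(tmpdir)
```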

CedarBackup3-3.1.6/doc/interface/CedarBackup3.config.Config-class.html

CedarBackup3.config.Config

    Class Config


    object --+
             |
            Config
    

    Class representing a Cedar Backup XML configuration document.

    The Config class is a Python object representation of a Cedar Backup XML configuration file. It is intended to be the only Python-language interface to Cedar Backup configuration on disk for both Cedar Backup itself and for external applications.

The object representation is two-way: XML data can be used to create a Config object, and then changes to the object can be propagated back to disk. A Config object can even be used to create a configuration file from scratch programmatically.

    This class and the classes it is composed from often use Python's property construct to validate input and limit access to values. Some validations can only be done once a document is considered "complete" (see module notes for more details).

    Assignments to the various instance variables must match the expected type, i.e. reference must be a ReferenceConfig. The internal check uses the built-in isinstance function, so it should be OK to use subclasses if you want to.

    If an instance variable is not set, its value will be None. When an object is initialized without using an XML document, all of the values will be None. Even when an object is initialized using XML, some of the values might be None because not every section is required.


    Note: Lists within this class are "unordered" for equality comparisons.
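The property-with-type-check pattern described above can be sketched in isolation.  ReferenceConfig here is a bare stand-in, not the real class, and this Config is a minimal illustration rather than the actual implementation:

```python
class ReferenceConfig:
    """Bare stand-in for the real reference-section configuration class."""

class Config:
    """Minimal sketch of the isinstance-checking property pattern."""
    def __init__(self):
        self._reference = None
    def _getReference(self):
        return self._reference
    def _setReference(self, value):
        # Assignments must match the expected type; subclasses pass the check too.
        if value is not None and not isinstance(value, ReferenceConfig):
            raise ValueError("Reference must be a ReferenceConfig object.")
        self._reference = value
    reference = property(_getReference, _setReference, None, "Reference configuration.")
```

Because the check uses isinstance, a subclass of ReferenceConfig is also accepted, matching the behavior described above.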

Instance Methods

__init__(self, xmlData=None, xmlPath=None, validate=True)
    Initializes a configuration object.

__repr__(self)
    Official string representation for class instance.

__str__(self)
    Informal string representation for class instance.

__cmp__(self, other)
    Original Python 2 comparison operator.

__eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.

__lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.

__gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.

extractXml(self, xmlPath=None, validate=True)
    Extracts configuration into an XML document.

validate(self, requireOneAction=True, requireReference=False, requireExtensions=False, requireOptions=True, requireCollect=False, requireStage=False, requireStore=False, requirePurge=False, requirePeers=False)
    Validates configuration represented by the object.

_getReference(self)
    Property target used to get the reference configuration value.

_setReference(self, value)
    Property target used to set the reference configuration value.

_getExtensions(self)
    Property target used to get the extensions configuration value.

_setExtensions(self, value)
    Property target used to set the extensions configuration value.

_getOptions(self)
    Property target used to get the options configuration value.

_setOptions(self, value)
    Property target used to set the options configuration value.

_getPeers(self)
    Property target used to get the peers configuration value.

_setPeers(self, value)
    Property target used to set the peers configuration value.

_getCollect(self)
    Property target used to get the collect configuration value.

_setCollect(self, value)
    Property target used to set the collect configuration value.

_getStage(self)
    Property target used to get the stage configuration value.

_setStage(self, value)
    Property target used to set the stage configuration value.

_getStore(self)
    Property target used to get the store configuration value.

_setStore(self, value)
    Property target used to set the store configuration value.

_getPurge(self)
    Property target used to get the purge configuration value.

_setPurge(self, value)
    Property target used to set the purge configuration value.

_parseXmlData(self, xmlData)
    Internal method to parse an XML string into the object.

_extractXml(self)
    Internal method to extract configuration into an XML string.

_validateContents(self)
    Validates configuration contents per rules discussed in module documentation.

_validateReference(self)
    Validates reference configuration.

_validateExtensions(self)
    Validates extensions configuration.

_validateOptions(self)
    Validates options configuration.

_validatePeers(self)
    Validates peers configuration per rules in _validatePeerList.

_validateCollect(self)
    Validates collect configuration.

_validateStage(self)
    Validates stage configuration.

_validateStore(self)
    Validates store configuration.

_validatePurge(self)
    Validates purge configuration.

_validatePeerList(self, localPeers, remotePeers)
    Validates the set of local and remote peers.

__ge__(x, y)
    x>=y

__le__(x, y)
    x<=y

Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Static Methods

_parseReference(parentNode)
    Parses a reference configuration section.

_parseExtensions(parentNode)
    Parses an extensions configuration section.

_parseOptions(parentNode)
    Parses an options configuration section.

_parsePeers(parentNode)
    Parses a peers configuration section.

_parseCollect(parentNode)
    Parses a collect configuration section.

_parseStage(parentNode)
    Parses a stage configuration section.

_parseStore(parentNode)
    Parses a store configuration section.

_parsePurge(parentNode)
    Parses a purge configuration section.

_parseExtendedActions(parentNode)
    Reads extended actions data from immediately beneath the parent.

_parseExclusions(parentNode)
    Reads exclusions data from immediately beneath the parent.

_parseOverrides(parentNode)
    Reads a list of CommandOverride objects from immediately beneath the parent.

_parseHooks(parentNode)
    Reads a list of ActionHook objects from immediately beneath the parent.

_parseCollectFiles(parentNode)
    Reads a list of CollectFile objects from immediately beneath the parent.

_parseCollectDirs(parentNode)
    Reads a list of CollectDir objects from immediately beneath the parent.

_parsePurgeDirs(parentNode)
    Reads a list of PurgeDir objects from immediately beneath the parent.

_parsePeerList(parentNode)
    Reads remote and local peer data from immediately beneath the parent.

_parseDependencies(parentNode)
    Reads extended action dependency information from a parent node.

_parseBlankBehavior(parentNode)
    Reads a single BlankBehavior object from immediately beneath the parent.

_addReference(xmlDom, parentNode, referenceConfig)
    Adds a <reference> configuration section as the next child of a parent.

_addExtensions(xmlDom, parentNode, extensionsConfig)
    Adds an <extensions> configuration section as the next child of a parent.

_addOptions(xmlDom, parentNode, optionsConfig)
    Adds an <options> configuration section as the next child of a parent.
    source code
     
    _addPeers(xmlDom, parentNode, peersConfig)
    Adds a <peers> configuration section as the next child of a parent.
    source code
     
    _addCollect(xmlDom, parentNode, collectConfig)
    Adds a <collect> configuration section as the next child of a parent.
    source code
     
    _addStage(xmlDom, parentNode, stageConfig)
    Adds a <stage> configuration section as the next child of a parent.
    source code
     
    _addStore(xmlDom, parentNode, storeConfig)
    Adds a <store> configuration section as the next child of a parent.
    source code
     
    _addPurge(xmlDom, parentNode, purgeConfig)
    Adds a <purge> configuration section as the next child of a parent.
    source code
     
    _addExtendedAction(xmlDom, parentNode, action)
    Adds an extended action container as the next child of a parent.
    source code
     
    _addOverride(xmlDom, parentNode, override)
    Adds a command override container as the next child of a parent.
    source code
     
    _addHook(xmlDom, parentNode, hook)
    Adds an action hook container as the next child of a parent.
    source code
     
    _addCollectFile(xmlDom, parentNode, collectFile)
    Adds a collect file container as the next child of a parent.
    source code
     
    _addCollectDir(xmlDom, parentNode, collectDir)
    Adds a collect directory container as the next child of a parent.
    source code
     
    _addLocalPeer(xmlDom, parentNode, localPeer)
    Adds a local peer container as the next child of a parent.
    source code
     
    _addRemotePeer(xmlDom, parentNode, remotePeer)
    Adds a remote peer container as the next child of a parent.
    source code
     
    _addPurgeDir(xmlDom, parentNode, purgeDir)
    Adds a purge directory container as the next child of a parent.
    source code
     
    _addDependencies(xmlDom, parentNode, dependencies)
    Adds extended action dependencies to a parent node.
    source code
     
    _buildCommaSeparatedString(valueList)
    Creates a comma-separated string from a list of values.
    source code
     
    _addBlankBehavior(xmlDom, parentNode, blankBehavior)
    Adds a blanking behavior container as the next child of a parent.
    source code
    Properties
      reference
    Reference configuration in terms of a ReferenceConfig object.
      extensions
    Extensions configuration in terms of an ExtensionsConfig object.
      options
    Options configuration in terms of an OptionsConfig object.
      collect
    Collect configuration in terms of a CollectConfig object.
      stage
    Stage configuration in terms of a StageConfig object.
      store
    Store configuration in terms of a StoreConfig object.
      purge
    Purge configuration in terms of a PurgeConfig object.
      peers
    Peers configuration in terms of a PeersConfig object.

    Inherited from object: __class__

    Method Details

    __init__(self, xmlData=None, xmlPath=None, validate=True)
    (Constructor)

    source code 

    Initializes a configuration object.

    If you initialize the object without passing either xmlData or xmlPath, then configuration will be empty and will be invalid until it is filled in properly.

    No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded.

    Unless the validate argument is False, the Config.validate method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if validate is False, it might not be possible to parse the passed-in XML document if lower-level validations fail.

    Parameters:
    • xmlData (String data.) - XML data representing configuration.
    • xmlPath (Absolute path to a file on disk.) - Path to an XML file on disk.
    • validate (Boolean true/false.) - Validate the document after parsing it.
    Raises:
    • ValueError - If both xmlData and xmlPath are passed-in.
    • ValueError - If the XML data in xmlData or xmlPath cannot be parsed.
    • ValueError - If the parsed configuration document is not valid.
    Overrides: object.__init__

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to read in invalid configuration from disk.
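    The constructor contract described above (mutually exclusive xmlData/xmlPath, optional validation) can be sketched as follows. This is a hypothetical minimal class that illustrates only the documented behavior, not the real Config implementation:

    ```python
    class ConfigSketch:
        """Hypothetical sketch of the documented constructor contract."""

        def __init__(self, xmlData=None, xmlPath=None, validate=True):
            # The real class raises ValueError if both sources are passed in.
            if xmlData is not None and xmlPath is not None:
                raise ValueError("Use either xmlData or xmlPath, but not both.")
            if xmlPath is not None:
                with open(xmlPath, "r") as f:
                    xmlData = f.read()
            # The real class parses the data and then discards the original
            # XML string and path; no reference to them is kept.
            self._parsed = xmlData
            if validate and xmlData is not None:
                pass  # the real class calls self.validate() here
    ```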

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    extractXml(self, xmlPath=None, validate=True)

    source code 

    Extracts configuration into an XML document.

    If xmlPath is not provided, then the XML document will be returned as a string. If xmlPath is provided, then the XML document will be written to the file and None will be returned.

    Unless the validate parameter is False, the Config.validate method will be called (with its default arguments) against the configuration before extracting the XML. If configuration is not valid, then an XML document will not be extracted.

    Parameters:
    • xmlPath (Absolute path to a file.) - Path to an XML file to create on disk.
    • validate (Boolean true/false.) - Validate the document before extracting it.
    Returns:
    XML string data or None as described above.
    Raises:
    • ValueError - If configuration within the object is not valid.
    • IOError - If there is an error writing to the file.
    • OSError - If there is an error writing to the file.

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to write an invalid configuration file to disk.
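    The return-value contract described above (XML string when no path is given, None after writing the file) can be sketched like this; extract_xml_sketch is a hypothetical stand-in, not the real method:

    ```python
    import tempfile  # used by callers for scratch paths; not needed by the helper itself

    def extract_xml_sketch(xml_string, xmlPath=None):
        """Return xml_string if xmlPath is None; otherwise write the file and return None."""
        if xmlPath is None:
            return xml_string
        with open(xmlPath, "w") as f:  # IOError/OSError propagate on write failure
            f.write(xml_string)
        return None
    ```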

    validate(self, requireOneAction=True, requireReference=False, requireExtensions=False, requireOptions=True, requireCollect=False, requireStage=False, requireStore=False, requirePurge=False, requirePeers=False)

    source code 

    Validates configuration represented by the object.

    This method encapsulates all of the validations that should apply to a fully "complete" document but are not already taken care of by earlier validations. It also provides some extra convenience functionality which might be useful to some people. The process of validation is laid out in the Validation section in the class notes (above).

    Parameters:
    • requireOneAction - Require at least one of the collect, stage, store or purge sections.
    • requireReference - Require the reference section.
    • requireExtensions - Require the extensions section.
    • requireOptions - Require the options section.
    • requirePeers - Require the peers section.
    • requireCollect - Require the collect section.
    • requireStage - Require the stage section.
    • requireStore - Require the store section.
    • requirePurge - Require the purge section.
    Raises:
    • ValueError - If one of the validations fails.
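    The requireOneAction rule can be illustrated with a small sketch; check_one_action is a hypothetical helper, and the real method performs many more checks:

    ```python
    def check_one_action(collect, stage, store, purge, requireOneAction=True):
        """Raise ValueError unless at least one action section is present."""
        if requireOneAction and not any([collect, stage, store, purge]):
            raise ValueError("At least one of collect, stage, store or purge is required.")
    ```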

    _setReference(self, value)

    source code 

    Property target used to set the reference configuration value. If not None, the value must be a ReferenceConfig object.

    Raises:
    • ValueError - If the value is not a ReferenceConfig

    _setExtensions(self, value)

    source code 

    Property target used to set the extensions configuration value. If not None, the value must be an ExtensionsConfig object.

    Raises:
    • ValueError - If the value is not an ExtensionsConfig

    _setOptions(self, value)

    source code 

    Property target used to set the options configuration value. If not None, the value must be an OptionsConfig object.

    Raises:
    • ValueError - If the value is not an OptionsConfig

    _setPeers(self, value)

    source code 

    Property target used to set the peers configuration value. If not None, the value must be a PeersConfig object.

    Raises:
    • ValueError - If the value is not a PeersConfig

    _setCollect(self, value)

    source code 

    Property target used to set the collect configuration value. If not None, the value must be a CollectConfig object.

    Raises:
    • ValueError - If the value is not a CollectConfig

    _setStage(self, value)

    source code 

    Property target used to set the stage configuration value. If not None, the value must be a StageConfig object.

    Raises:
    • ValueError - If the value is not a StageConfig

    _setStore(self, value)

    source code 

    Property target used to set the store configuration value. If not None, the value must be a StoreConfig object.

    Raises:
    • ValueError - If the value is not a StoreConfig

    _setPurge(self, value)

    source code 

    Property target used to set the purge configuration value. If not None, the value must be a PurgeConfig object.

    Raises:
    • ValueError - If the value is not a PurgeConfig

    _parseXmlData(self, xmlData)

    source code 

    Internal method to parse an XML string into the object.

    This method parses the XML document into a DOM tree (xmlDom) and then calls individual static methods to parse each of the individual configuration sections.

    Most of the validation we do here has to do with whether the document can be parsed and whether any values which exist are valid. We don't do much validation as to whether required elements actually exist unless we have to in order to make sense of the document; instead, that's the job of the validate method.

    Parameters:
    • xmlData (String data) - XML data to be parsed
    Raises:
    • ValueError - If the XML cannot be successfully parsed.

    _parseReference(parentNode)
    Static Method

    source code 

    Parses a reference configuration section.

    We read the following fields:

      author         //cb_config/reference/author
      revision       //cb_config/reference/revision
      description    //cb_config/reference/description
      generator      //cb_config/reference/generator
    
    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    ReferenceConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.
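    The field lookups above can be sketched with the stdlib DOM. parse_reference_sketch is a hypothetical illustration of the documented behavior (including returning None when the section is absent), not the real parser:

    ```python
    from xml.dom.minidom import parseString

    def parse_reference_sketch(xml_string):
        """Read the four reference fields from beneath <cb_config>/<reference>."""
        dom = parseString(xml_string)
        sections = dom.getElementsByTagName("reference")
        if not sections:
            return None  # section does not exist
        result = {}
        for field in ("author", "revision", "description", "generator"):
            nodes = sections[0].getElementsByTagName(field)
            # Missing or empty fields come back as None
            result[field] = nodes[0].firstChild.data if nodes and nodes[0].firstChild else None
        return result
    ```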

    _parseExtensions(parentNode)
    Static Method

    source code 

    Parses an extensions configuration section.

    We read the following fields:

      orderMode            //cb_config/extensions/order_mode
    

    We also read groups of the following items, one list element per item:

      name                 //cb_config/extensions/action/name
      module               //cb_config/extensions/action/module
      function             //cb_config/extensions/action/function
      index                //cb_config/extensions/action/index
      dependencies         //cb_config/extensions/action/depends
    

    The extended actions are parsed by _parseExtendedActions.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    ExtensionsConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseOptions(parentNode)
    Static Method

    source code 

    Parses an options configuration section.

    We read the following fields:

      startingDay    //cb_config/options/starting_day
      workingDir     //cb_config/options/working_dir
      backupUser     //cb_config/options/backup_user
      backupGroup    //cb_config/options/backup_group
      rcpCommand     //cb_config/options/rcp_command
      rshCommand     //cb_config/options/rsh_command
      cbackCommand   //cb_config/options/cback_command
      managedActions //cb_config/options/managed_actions
    

    The list of managed actions is a comma-separated list of action names.

    We also read groups of the following items, one list element per item:

      overrides      //cb_config/options/override
      hooks          //cb_config/options/hook
    

    The overrides are parsed by _parseOverrides and the hooks are parsed by _parseHooks.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    OptionsConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parsePeers(parentNode)
    Static Method

    source code 

    Parses a peers configuration section.

    We read groups of the following items, one list element per item:

      localPeers     //cb_config/peers/peer
      remotePeers    //cb_config/peers/peer
    

    The individual peer entries are parsed by _parsePeerList.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    PeersConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseCollect(parentNode)
    Static Method

    source code 

    Parses a collect configuration section.

    We read the following individual fields:

      targetDir            //cb_config/collect/collect_dir
      collectMode          //cb_config/collect/collect_mode
      archiveMode          //cb_config/collect/archive_mode
      ignoreFile           //cb_config/collect/ignore_file
    

    We also read groups of the following items, one list element per item:

      absoluteExcludePaths //cb_config/collect/exclude/abs_path
      excludePatterns      //cb_config/collect/exclude/pattern
      collectFiles         //cb_config/collect/file
      collectDirs          //cb_config/collect/dir
    

    The exclusions are parsed by _parseExclusions, the collect files are parsed by _parseCollectFiles, and the directories are parsed by _parseCollectDirs.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    CollectConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseStage(parentNode)
    Static Method

    source code 

    Parses a stage configuration section.

    We read the following individual fields:

      targetDir      //cb_config/stage/staging_dir
    

    We also read groups of the following items, one list element per item:

      localPeers     //cb_config/stage/peer
      remotePeers    //cb_config/stage/peer
    

    The individual peer entries are parsed by _parsePeerList.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    StageConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseStore(parentNode)
    Static Method

    source code 

    Parses a store configuration section.

    We read the following fields:

      sourceDir         //cb_config/store/source_dir
      mediaType         //cb_config/store/media_type
      deviceType        //cb_config/store/device_type
      devicePath        //cb_config/store/target_device
      deviceScsiId      //cb_config/store/target_scsi_id
      driveSpeed        //cb_config/store/drive_speed
      checkData         //cb_config/store/check_data
      checkMedia        //cb_config/store/check_media
      warnMidnite       //cb_config/store/warn_midnite
      noEject           //cb_config/store/no_eject
    

    Blanking behavior configuration is parsed by the _parseBlankBehavior method.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    StoreConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parsePurge(parentNode)
    Static Method

    source code 

    Parses a purge configuration section.

    We read groups of the following items, one list element per item:

      purgeDirs     //cb_config/purge/dir
    

    The individual directory entries are parsed by _parsePurgeDirs.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    PurgeConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseExtendedActions(parentNode)
    Static Method

    source code 

    Reads extended actions data from immediately beneath the parent.

    We read the following individual fields from each extended action:

      name           name
      module         module
      function       function
      index          index
      dependencies   depends
    

    Dependency information is parsed by the _parseDependencies method.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    List of extended actions.
    Raises:
    • ValueError - If the data at the location can't be read

    _parseExclusions(parentNode)
    Static Method

    source code 

    Reads exclusions data from immediately beneath the parent.

    We read groups of the following items, one list element per item:

      absolute    exclude/abs_path
      relative    exclude/rel_path
      patterns    exclude/pattern
    

    If a given exclusion group has no entries (e.g. no relative path items), then None will be returned for that item in the tuple.

    This method can be used to parse exclusions on both the collect configuration level and on the collect directory level within collect configuration.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    Tuple of (absolute, relative, patterns) exclusions.
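    The tuple semantics (None for any exclusion group with no entries) can be sketched like this; a hypothetical helper, not the real parser:

    ```python
    from xml.dom.minidom import parseString

    def parse_exclusions_sketch(xml_string):
        """Return a (absolute, relative, patterns) tuple, with None for empty groups."""
        dom = parseString(xml_string)
        def group(tag):
            values = [n.firstChild.data for n in dom.getElementsByTagName(tag) if n.firstChild]
            return values or None  # None, not [], when the group is empty
        return (group("abs_path"), group("rel_path"), group("pattern"))
    ```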

    _parseOverrides(parentNode)
    Static Method

    source code 

    Reads a list of CommandOverride objects from immediately beneath the parent.

    We read the following individual fields:

      command                 command
      absolutePath            abs_path
    
    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    List of CommandOverride objects or None if none are found.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseHooks(parentNode)
    Static Method

    source code 

    Reads a list of ActionHook objects from immediately beneath the parent.

    We read the following individual fields:

      action                  action
      command                 command
    
    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    List of ActionHook objects or None if none are found.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseCollectFiles(parentNode)
    Static Method

    source code 

    Reads a list of CollectFile objects from immediately beneath the parent.

    We read the following individual fields:

      absolutePath            abs_path
      collectMode             mode or collect_mode
      archiveMode             archive_mode
    

    The collect mode is a special case. A plain mode tag is accepted, but collect_mode is preferred for consistency with the rest of the config file and to avoid confusion with the archive mode. If both are provided, only mode will be used.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    List of CollectFile objects or None if none are found.
    Raises:
    • ValueError - If some filled-in value is invalid.
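    The precedence rule for the collect mode can be stated as a one-liner; resolve_collect_mode is a hypothetical helper name, not part of the real API:

    ```python
    def resolve_collect_mode(mode, collect_mode):
        """Per the documented precedence, a bare mode tag wins over collect_mode."""
        return mode if mode is not None else collect_mode
    ```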

    _parseCollectDirs(parentNode)
    Static Method

    source code 

    Reads a list of CollectDir objects from immediately beneath the parent.

    We read the following individual fields:

      absolutePath            abs_path
      collectMode             mode or collect_mode
      archiveMode             archive_mode
      ignoreFile              ignore_file
      linkDepth               link_depth
      dereference             dereference
      recursionLevel          recursion_level
    

    The collect mode is a special case. A plain mode tag is accepted for backwards compatibility, but collect_mode is preferred for consistency with the rest of the config file and to avoid confusion with the archive mode. If both are provided, only mode will be used.

    We also read groups of the following items, one list element per item:

      absoluteExcludePaths    exclude/abs_path
      relativeExcludePaths    exclude/rel_path
      excludePatterns         exclude/pattern
    

    The exclusions are parsed by _parseExclusions.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    List of CollectDir objects or None if none are found.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parsePurgeDirs(parentNode)
    Static Method

    source code 

    Reads a list of PurgeDir objects from immediately beneath the parent.

    We read the following individual fields:

      absolutePath            <baseExpr>/abs_path
      retainDays              <baseExpr>/retain_days
    
    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    List of PurgeDir objects or None if none are found.
    Raises:
    • ValueError - If the data at the location can't be read

    _parsePeerList(parentNode)
    Static Method

    source code 

    Reads remote and local peer data from immediately beneath the parent.

    We read the following individual fields for both remote and local peers:

      name        name
      collectDir  collect_dir
    

    We also read the following individual fields for remote peers only:

      remoteUser     backup_user
      rcpCommand     rcp_command
      rshCommand     rsh_command
      cbackCommand   cback_command
      managed        managed
      managedActions managed_actions
    

    Additionally, the value in the type field is used to determine whether an entry is a remote peer. If the type is "remote", it's a remote peer, and if the type is "local", it's a local peer.

    If there are no peers of a given type (i.e. no local peers), then None will be returned for that item in the tuple.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    Tuple of (local, remote) peer lists.
    Raises:
    • ValueError - If the data at the location can't be read

    _parseDependencies(parentNode)
    Static Method

    source code 

    Reads extended action dependency information from a parent node.

    We read the following individual fields:

      runBefore   depends/run_before
      runAfter    depends/run_after
    

    Each of these fields is a comma-separated list of action names.

    The result is placed into an ActionDependencies object.

    If the dependencies parent node does not exist, None will be returned. Otherwise, an ActionDependencies object will always be created, even if it does not contain any actual dependencies in it.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    ActionDependencies object or None.
    Raises:
    • ValueError - If the data at the location can't be read
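    The comma-separated action-name lists can be split as sketched below (hypothetical helper, not the real parsing code):

    ```python
    def parse_action_list(text):
        """Split a comma-separated list of action names, returning None when empty."""
        if text is None:
            return None
        names = [name.strip() for name in text.split(",") if name.strip()]
        return names or None
    ```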

    _parseBlankBehavior(parentNode)
    Static Method

    source code 

    Reads a single BlankBehavior object from immediately beneath the parent.

    We read the following individual fields:

      blankMode     blank_behavior/mode
      blankFactor   blank_behavior/factor
    
    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    BlankBehavior object, or None if the section is not found.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _extractXml(self)

    source code 

    Internal method to extract configuration into an XML string.

    This method assumes that the internal validate method has been called prior to extracting the XML, if the caller cares. No validation will be done internally.

    As a general rule, fields that are set to None will be extracted into the document as empty tags. The same goes for container tags that are filled based on lists - if the list is empty or None, the container tag will be empty.
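    The None-becomes-empty-tag rule can be sketched with the stdlib DOM (hypothetical helper, not the real extraction code):

    ```python
    from xml.dom.minidom import getDOMImplementation

    def add_field_sketch(xmlDom, parentNode, tag, value):
        """Append a child tag, emitting an empty element when value is None."""
        node = xmlDom.createElement(tag)
        if value is not None:
            node.appendChild(xmlDom.createTextNode(str(value)))
        parentNode.appendChild(node)
        return node
    ```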

    _addReference(xmlDom, parentNode, referenceConfig)
    Static Method

    source code 

    Adds a <reference> configuration section as the next child of a parent.

    We add the following fields to the document:

      author         //cb_config/reference/author
      revision       //cb_config/reference/revision
      description    //cb_config/reference/description
      generator      //cb_config/reference/generator
    

    If referenceConfig is None, then no container will be added.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • referenceConfig - Reference configuration section to be added to the document.

    _addExtensions(xmlDom, parentNode, extensionsConfig)
    Static Method

    source code 

    Adds an <extensions> configuration section as the next child of a parent.

    We add the following fields to the document:

      order_mode     //cb_config/extensions/order_mode
    

    We also add groups of the following items, one list element per item:

      actions        //cb_config/extensions/action
    

    The extended action entries are added by _addExtendedAction.

    If extensionsConfig is None, then no container will be added.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • extensionsConfig - Extensions configuration section to be added to the document.

    _addOptions(xmlDom, parentNode, optionsConfig)
    Static Method

    source code 

    Adds an <options> configuration section as the next child of a parent.

    We add the following fields to the document:

      startingDay    //cb_config/options/starting_day
      workingDir     //cb_config/options/working_dir
      backupUser     //cb_config/options/backup_user
      backupGroup    //cb_config/options/backup_group
      rcpCommand     //cb_config/options/rcp_command
      rshCommand     //cb_config/options/rsh_command
      cbackCommand   //cb_config/options/cback_command
      managedActions //cb_config/options/managed_actions
    

    We also add groups of the following items, one list element per item:

      overrides      //cb_config/options/override
      hooks          //cb_config/options/pre_action_hook
      hooks          //cb_config/options/post_action_hook
    

    The individual override items are added by _addOverride. The individual hook items are added by _addHook.

    If optionsConfig is None, then no container will be added.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • optionsConfig - Options configuration section to be added to the document.

    _addPeers(xmlDom, parentNode, peersConfig)
    Static Method

    source code 

    Adds a <peers> configuration section as the next child of a parent.

    We add groups of the following items, one list element per item:

      localPeers     //cb_config/peers/peer
      remotePeers    //cb_config/peers/peer
    

    The individual local and remote peer entries are added by _addLocalPeer and _addRemotePeer, respectively.

    If peersConfig is None, then no container will be added.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • peersConfig - Peers configuration section to be added to the document.

    _addCollect(xmlDom, parentNode, collectConfig)
    Static Method

    source code 

    Adds a <collect> configuration section as the next child of a parent.

    We add the following fields to the document:

      targetDir            //cb_config/collect/collect_dir
      collectMode          //cb_config/collect/collect_mode
      archiveMode          //cb_config/collect/archive_mode
      ignoreFile           //cb_config/collect/ignore_file
    

    We also add groups of the following items, one list element per item:

      absoluteExcludePaths //cb_config/collect/exclude/abs_path
      excludePatterns      //cb_config/collect/exclude/pattern
      collectFiles         //cb_config/collect/file
      collectDirs          //cb_config/collect/dir
    

    The individual collect files are added by _addCollectFile and individual collect directories are added by _addCollectDir.

    If collectConfig is None, then no container will be added.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • collectConfig - Collect configuration section to be added to the document.

    _addStage(xmlDom, parentNode, stageConfig)
    Static Method

    source code 

    Adds a <stage> configuration section as the next child of a parent.

    We add the following fields to the document:

      targetDir      //cb_config/stage/staging_dir
    

    We also add groups of the following items, one list element per item:

      localPeers     //cb_config/stage/peer
      remotePeers    //cb_config/stage/peer
    

    The individual local and remote peer entries are added by _addLocalPeer and _addRemotePeer, respectively.

    If stageConfig is None, then no container will be added.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • stageConfig - Stage configuration section to be added to the document.

    _addStore(xmlDom, parentNode, storeConfig)
    Static Method

    source code 

    Adds a <store> configuration section as the next child of a parent.

    We add the following fields to the document:

      sourceDir         //cb_config/store/source_dir
      mediaType         //cb_config/store/media_type
      deviceType        //cb_config/store/device_type
      devicePath        //cb_config/store/target_device
      deviceScsiId      //cb_config/store/target_scsi_id
      driveSpeed        //cb_config/store/drive_speed
      checkData         //cb_config/store/check_data
      checkMedia        //cb_config/store/check_media
      warnMidnite       //cb_config/store/warn_midnite
      noEject           //cb_config/store/no_eject
      refreshMediaDelay //cb_config/store/refresh_media_delay
      ejectDelay        //cb_config/store/eject_delay
    

    Blanking behavior configuration is added by the _addBlankBehavior method.

    If storeConfig is None, then no container will be added.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • storeConfig - Store configuration section to be added to the document.

    _addPurge(xmlDom, parentNode, purgeConfig)
    Static Method

    source code 

    Adds a <purge> configuration section as the next child of a parent.

    We add the following fields to the document:

      purgeDirs     //cb_config/purge/dir
    

    The individual directory entries are added by _addPurgeDir.

    If purgeConfig is None, then no container will be added.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • purgeConfig - Purge configuration section to be added to the document.

    _addExtendedAction(xmlDom, parentNode, action)
    Static Method

    source code 

    Adds an extended action container as the next child of a parent.

    We add the following fields to the document:

      name           action/name
      module         action/module
      function       action/function
      index          action/index
      dependencies   action/depends
    

    Dependencies are added by the _addDependencies method.

    The <action> node itself is created as the next child of the parent node. This method only adds one action node. The parent must loop for each action in the ExtensionsConfig object.

    If action is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • action - Extended action to be added to the document.

    _addOverride(xmlDom, parentNode, override)
    Static Method

    source code 

    Adds a command override container as the next child of a parent.

    We add the following fields to the document:

      command                 override/command
      absolutePath            override/abs_path
    

    The <override> node itself is created as the next child of the parent node. This method only adds one override node. The parent must loop for each override in the OptionsConfig object.

    If override is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • override - Command override to be added to the document.

    _addHook(xmlDom, parentNode, hook)
    Static Method

    source code 

    Adds an action hook container as the next child of a parent.

    The behavior varies depending on the value of the before and after flags on the hook. If the before flag is set, it's a pre-action hook, and we'll add the following fields:

      action                  pre_action_hook/action
      command                 pre_action_hook/command
    

    If the after flag is set, it's a post-action hook, and we'll add the following fields:

      action                  post_action_hook/action
      command                 post_action_hook/command
    

    The <pre_action_hook> or <post_action_hook> node itself is created as the next child of the parent node. This method only adds one hook node. The parent must loop for each hook in the OptionsConfig object.

    If hook is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • hook - Command hook to be added to the document.
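    A minimal sketch of the branch described above, with a plain dict standing in for the real hook object: the element name is chosen from the hook's before/after flags, and the action and command fields are emitted the same way in either case.

    ```python
    # Hypothetical helper; field names are illustrative, not Cedar Backup's own
    def hook_element_name(hook):
        if hook.get("before"):
            return "pre_action_hook"
        if hook.get("after"):
            return "post_action_hook"
        raise ValueError("hook must be marked as either before or after")
    ```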

    _addCollectFile(xmlDom, parentNode, collectFile)
    Static Method

    source code 

    Adds a collect file container as the next child of a parent.

    We add the following fields to the document:

      absolutePath            file/abs_path
      collectMode             file/collect_mode
      archiveMode             file/archive_mode
    

    Note that for consistency with collect directory handling we'll only emit the preferred collect_mode tag.

    The <file> node itself is created as the next child of the parent node. This method only adds one collect file node. The parent must loop for each collect file in the CollectConfig object.

    If collectFile is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • collectFile - Collect file to be added to the document.

    _addCollectDir(xmlDom, parentNode, collectDir)
    Static Method

    source code 

    Adds a collect directory container as the next child of a parent.

    We add the following fields to the document:

      absolutePath            dir/abs_path
      collectMode             dir/collect_mode
      archiveMode             dir/archive_mode
      ignoreFile              dir/ignore_file
      linkDepth               dir/link_depth
      dereference             dir/dereference
      recursionLevel          dir/recursion_level
    

    Note that an original XML document might have listed the collect mode using the mode tag, since we accept both collect_mode and mode. However, here we'll only emit the preferred collect_mode tag.

    We also add groups of the following items, one list element per item:

      absoluteExcludePaths    dir/exclude/abs_path
      relativeExcludePaths    dir/exclude/rel_path
      excludePatterns         dir/exclude/pattern
    

    The <dir> node itself is created as the next child of the parent node. This method only adds one collect directory node. The parent must loop for each collect directory in the CollectConfig object.

    If collectDir is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • collectDir - Collect directory to be added to the document.

    _addLocalPeer(xmlDom, parentNode, localPeer)
    Static Method

    source code 

    Adds a local peer container as the next child of a parent.

    We add the following fields to the document:

      name                peer/name
      collectDir          peer/collect_dir
      ignoreFailureMode   peer/ignore_failures
    

    Additionally, peer/type is filled in with "local", since this is a local peer.

    The <peer> node itself is created as the next child of the parent node. This method only adds one peer node. The parent must loop for each peer in the StageConfig object.

    If localPeer is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • localPeer - Local peer to be added to the document.

    _addRemotePeer(xmlDom, parentNode, remotePeer)
    Static Method

    source code 

    Adds a remote peer container as the next child of a parent.

    We add the following fields to the document:

      name                peer/name
      collectDir          peer/collect_dir
      remoteUser          peer/backup_user
      rcpCommand          peer/rcp_command
      rshCommand          peer/rsh_command
      cbackCommand        peer/cback_command
      ignoreFailureMode   peer/ignore_failures
      managed             peer/managed
      managedActions      peer/managed_actions
    

    Additionally, peer/type is filled in with "remote", since this is a remote peer.

    The <peer> node itself is created as the next child of the parent node. This method only adds one peer node. The parent must loop for each peer in the StageConfig object.

    If remotePeer is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • remotePeer - Remote peer to be added to the document.

    _addPurgeDir(xmlDom, parentNode, purgeDir)
    Static Method

    source code 

    Adds a purge directory container as the next child of a parent.

    We add the following fields to the document:

      absolutePath            dir/abs_path
      retainDays              dir/retain_days
    

    The <dir> node itself is created as the next child of the parent node. This method only adds one purge directory node. The parent must loop for each purge directory in the PurgeConfig object.

    If purgeDir is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • purgeDir - Purge directory to be added to the document.

    _addDependencies(xmlDom, parentNode, dependencies)
    Static Method

    source code 

    Adds extended action dependencies to the parent node.

    We add the following fields to the document:

      runBefore      depends/run_before
      runAfter       depends/run_after
    

    If dependencies is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • dependencies - ActionDependencies object to be added to the document.

    _buildCommaSeparatedString(valueList)
    Static Method

    source code 

    Creates a comma-separated string from a list of values.

    As a special case, if valueList is None, then None will be returned.

    Parameters:
    • valueList - List of values to be placed into a string
    Returns:
    Values from valueList as a comma-separated string.
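    The documented behavior can be captured in a few lines. This is a hedged sketch consistent with the description above, not the actual implementation; whether a space follows each comma is an implementation detail the documentation does not specify.

    ```python
    def build_comma_separated_string(value_list):
        # Special case from the documentation: None passes through unchanged
        if value_list is None:
            return None
        return ",".join(str(value) for value in value_list)
    ```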

    _addBlankBehavior(xmlDom, parentNode, blankBehavior)
    Static Method

    source code 

    Adds a blanking behavior container as the next child of a parent.

    We add the following fields to the document:

      blankMode    blank_behavior/mode
      blankFactor  blank_behavior/factor
    

    The <blank_behavior> node itself is created as the next child of the parent node.

    If blankBehavior is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • blankBehavior - Blanking behavior to be added to the document.

    _validateContents(self)

    source code 

    Validates configuration contents per rules discussed in module documentation.

    This is the second pass at validation. It ensures that any filled-in section contains valid data. Any section that is not set to None is validated per the rules for that section, laid out in the module documentation (above).

    Raises:
    • ValueError - If configuration is invalid.

    _validateReference(self)

    source code 

    Validates reference configuration. There are currently no reference-related validations.

    Raises:
    • ValueError - If reference configuration is invalid.

    _validateExtensions(self)

    source code 

    Validates extensions configuration.

    The list of actions may be either None or an empty list [] if desired. Each extended action must include a name, a module, and a function.

    Then, if the order mode is None or "index", an index is required; and if the order mode is "dependency", dependency information is required.

    Raises:
    • ValueError - If extensions configuration is invalid.
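    The order-mode rules above can be sketched as follows, with a plain dict standing in for the real ExtendedAction object (field names are illustrative):

    ```python
    def validate_extended_action(action, order_mode):
        # Every extended action needs a name, a module, and a function
        for field in ("name", "module", "function"):
            if not action.get(field):
                raise ValueError("extended action requires a %s" % field)
        # "index" ordering (the default) requires an index on each action
        if order_mode in (None, "index") and action.get("index") is None:
            raise ValueError("order mode requires an index")
        # "dependency" ordering requires dependency information instead
        if order_mode == "dependency" and action.get("dependencies") is None:
            raise ValueError("dependency mode requires dependency information")
    ```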

    _validateOptions(self)

    source code 

    Validates options configuration.

    All fields must be filled in except the rsh command. The rcp and rsh commands are used as default values for all remote peers. Remote peers can also rely on the backup user as the default remote user name if they choose.

    Raises:
    • ValueError - If options configuration is invalid.

    _validatePeers(self)

    source code 

    Validates peers configuration per rules in _validatePeerList.

    Raises:
    • ValueError - If peers configuration is invalid.

    _validateCollect(self)

    source code 

    Validates collect configuration.

    The target directory must be filled in. The collect mode, archive mode, ignore file, and recursion level are all optional. The list of absolute paths to exclude and patterns to exclude may be either None or an empty list [] if desired.

    Each collect directory entry must contain an absolute path to collect, and then must either be able to take collect mode, archive mode and ignore file configuration from the parent CollectConfig object, or must set each value on its own. The list of absolute paths to exclude, relative paths to exclude and patterns to exclude may be either None or an empty list [] if desired. Any list of absolute paths to exclude or patterns to exclude will be combined with the same list in the CollectConfig object to make the complete list for a given directory.

    Raises:
    • ValueError - If collect configuration is invalid.

    _validateStage(self)

    source code 

    Validates stage configuration.

    The target directory must be filled in, and the peers are also validated.

    Peers are only required in this section if the peers configuration section is not filled in. However, if any peers are filled in here, they override the peers configuration and must meet the validation criteria in _validatePeerList.

    Raises:
    • ValueError - If stage configuration is invalid.

    _validateStore(self)

    source code 

    Validates store configuration.

    The device type, drive speed, and blanking behavior are optional. All other values are required. Missing booleans will be set to defaults.

    If blanking behavior is provided, then both a blanking mode and a blanking factor are required.

    The image writer functionality in the writer module is supposed to be able to handle a drive speed of None.

    Any caller which needs a "real" (non-None) value for the device type can use DEFAULT_DEVICE_TYPE, which is guaranteed to be sensible.

    This is also where we make sure that the media type -- which is already a valid type -- matches up properly with the device type.

    Raises:
    • ValueError - If store configuration is invalid.

    _validatePurge(self)

    source code 

    Validates purge configuration.

    The list of purge directories may be either None or an empty list [] if desired. All purge directories must contain a path and a retain days value.

    Raises:
    • ValueError - If purge configuration is invalid.

    _validatePeerList(self, localPeers, remotePeers)

    source code 

    Validates the set of local and remote peers.

    Local peers must be completely filled in, including both name and collect directory. Remote peers must also fill in the name and collect directory, but can leave the remote user and rcp command unset. In this case, the remote user is assumed to match the backup user from the options section and rcp command is taken directly from the options section.

    Parameters:
    • localPeers - List of local peers
    • remotePeers - List of remote peers
    Raises:
    • ValueError - If peer configuration is invalid.
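    The rules above reduce to a short check. This is a hedged sketch with dicts standing in for the real LocalPeer/RemotePeer objects; remote peers may leave remoteUser and rcpCommand unset because the options section supplies defaults for them.

    ```python
    def validate_peer_list(local_peers, remote_peers):
        # Local peers must be completely filled in
        for peer in local_peers or []:
            if not peer.get("name") or not peer.get("collectDir"):
                raise ValueError("local peer requires name and collect directory")
        # Remote peers need name and collect directory; user/rcp may be None
        for peer in remote_peers or []:
            if not peer.get("name") or not peer.get("collectDir"):
                raise ValueError("remote peer requires name and collect directory")
    ```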

    Property Details

    reference

    Reference configuration in terms of a ReferenceConfig object.

    Get Method:
    _getReference(self) - Property target used to get the reference configuration value.
    Set Method:
    _setReference(self, value) - Property target used to set the reference configuration value.

    extensions

    Extensions configuration in terms of an ExtensionsConfig object.

    Get Method:
    _getExtensions(self) - Property target used to get the extensions configuration value.
    Set Method:
    _setExtensions(self, value) - Property target used to set the extensions configuration value.

    options

    Options configuration in terms of an OptionsConfig object.

    Get Method:
    _getOptions(self) - Property target used to get the options configuration value.
    Set Method:
    _setOptions(self, value) - Property target used to set the options configuration value.

    collect

    Collect configuration in terms of a CollectConfig object.

    Get Method:
    _getCollect(self) - Property target used to get the collect configuration value.
    Set Method:
    _setCollect(self, value) - Property target used to set the collect configuration value.

    stage

    Stage configuration in terms of a StageConfig object.

    Get Method:
    _getStage(self) - Property target used to get the stage configuration value.
    Set Method:
    _setStage(self, value) - Property target used to set the stage configuration value.

    store

    Store configuration in terms of a StoreConfig object.

    Get Method:
    _getStore(self) - Property target used to get the store configuration value.
    Set Method:
    _setStore(self, value) - Property target used to set the store configuration value.

    purge

    Purge configuration in terms of a PurgeConfig object.

    Get Method:
    _getPurge(self) - Property target used to get the purge configuration value.
    Set Method:
    _setPurge(self, value) - Property target used to set the purge configuration value.

    peers

    Peers configuration in terms of a PeersConfig object.

    Get Method:
    _getPeers(self) - Property target used to get the peers configuration value.
    Set Method:
    _setPeers(self, value) - Property target used to set the peers configuration value.

    CedarBackup3.customize
    Package CedarBackup3 :: Module customize

    Module customize

    source code

    Implements customized behavior.

    Some behaviors need to vary when packaged for certain platforms. For instance, while Cedar Backup generally uses cdrecord and mkisofs, Debian ships compatible utilities called wodim and genisoimage. I want there to be one single place where Cedar Backup is patched for Debian, rather than having to maintain a variety of patches in different places.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Functions
     
    customizeOverrides(config, platform='standard')
    Modify command overrides based on the configured platform.
    source code
    Variables
      logger = logging.getLogger("CedarBackup3.log.customize")
      PLATFORM = 'standard'
      DEBIAN_CDRECORD = '/usr/bin/wodim'
      DEBIAN_MKISOFS = '/usr/bin/genisoimage'
      __package__ = 'CedarBackup3'
    Function Details

    customizeOverrides(config, platform='standard')

    source code 

    Modify command overrides based on the configured platform.

    On some platforms, we want to add command overrides to configuration. Each override will only be added if the configuration does not already contain an override with the same name. That way, the user still has a way to choose their own version of the command if they want.

    Parameters:
    • config - Configuration to modify
    • platform - Platform that is in use
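    The "only add when missing" rule can be sketched as below, using plain dicts in place of Cedar Backup's CommandOverride objects. The Debian paths are taken from the module variables listed above; everything else is illustrative.

    ```python
    DEBIAN_OVERRIDES = {"cdrecord": "/usr/bin/wodim",
                        "mkisofs": "/usr/bin/genisoimage"}

    def customize_overrides(overrides, platform="standard"):
        if platform == "debian":
            existing = {o["command"] for o in overrides}
            for command, path in DEBIAN_OVERRIDES.items():
                # Respect the user's own override if one is already configured
                if command not in existing:
                    overrides.append({"command": command, "absolutePath": path})
        return overrides
    ```

    A user-supplied override for cdrecord survives unchanged; only the missing mkisofs override is filled in on Debian, and nothing changes on the standard platform.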

    CedarBackup3.extend.mysql
    Package CedarBackup3 :: Package extend :: Module mysql

    Module mysql

    source code

    Provides an extension to back up MySQL databases.

    This is a Cedar Backup extension used to back up MySQL databases via the Cedar Backup command line. It requires a new configuration section <mysql> and is intended to be run either immediately before or immediately after the standard collect action. Aside from its own configuration, it requires the options and collect configuration sections in the standard Cedar Backup configuration file.

    The backup is done via the mysqldump command included with the MySQL product. Output can be compressed using gzip or bzip2. Administrators can configure the extension either to back up all databases or to back up only specific databases. Note that this code always produces a full backup. There is currently no facility for making incremental backups. If/when someone has a need for this and can describe how to do it, I'll update this extension or provide another.

    The extension assumes that all configured databases can be backed up by a single user. Often, the "root" database user will be used. An alternative is to create a separate MySQL "backup" user and grant that user rights to read (but not write) various databases as needed. This second option is probably the best choice.

    The extension accepts a username and password in configuration. However, you probably do not want to provide those values in Cedar Backup configuration. This is because Cedar Backup will provide these values to mysqldump via the command-line --user and --password switches, which will be visible to other users in the process listing.

    Instead, you should configure the username and password in one of MySQL's configuration files. Typically, that would be done by putting a stanza like this in /root/.my.cnf:

      [mysqldump]
      user     = root
      password = <secret>
    

    Regardless of whether you are using ~/.my.cnf or /etc/cback3.conf to store database login and password information, you should be careful about who is allowed to view that information. Typically, this means locking down permissions so that only the file owner can read the file contents (i.e. use mode 0600).
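    The permission check described above can be automated. This small sketch is not part of Cedar Backup itself; it simply reports whether a credentials file is readable only by its owner.

    ```python
    import os
    import stat

    def credentials_file_is_private(path):
        # True when no group or other permission bits are set (e.g. mode 0600)
        mode = stat.S_IMODE(os.stat(path).st_mode)
        return (mode & 0o077) == 0
    ```

    A caller could tighten the file first with os.chmod(path, 0o600) before relying on it.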


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Classes
      MysqlConfig
    Class representing MySQL configuration.
      LocalConfig
    Class representing this extension's configuration document.
    Functions
     
    executeAction(configPath, options, config)
    Executes the MySQL backup action.
    source code
     
    _backupDatabase(targetDir, compressMode, user, password, backupUser, backupGroup, database=None)
    Backs up an individual MySQL database, or all databases.
    source code
     
    _getOutputFile(targetDir, database, compressMode)
    Opens the output file used for saving the MySQL dump.
    source code
     
    backupDatabase(user, password, backupFile, database=None)
    Backs up an individual MySQL database, or all databases.
    source code
    Variables
      logger = logging.getLogger("CedarBackup3.log.extend.mysql")
      MYSQLDUMP_COMMAND = ['mysqldump']
      __package__ = 'CedarBackup3.extend'
    Function Details

    executeAction(configPath, options, config)

    source code 

    Executes the MySQL backup action.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If a backup could not be written for some reason.

    _backupDatabase(targetDir, compressMode, user, password, backupUser, backupGroup, database=None)

    source code 

    Backs up an individual MySQL database, or all databases.

    This internal method wraps the public method and adds some functionality, like figuring out a filename, etc.

    Parameters:
    • targetDir - Directory into which backups should be written.
    • compressMode - Compress mode to be used for backed-up files.
    • user - User to use for connecting to the database (if any).
    • password - Password associated with user (if any).
    • backupUser - User to own resulting file.
    • backupGroup - Group to own resulting file.
    • database - Name of database, or None for all databases.
    Returns:
    Name of the generated backup file.
    Raises:
    • ValueError - If some value is missing or invalid.
    • IOError - If there is a problem executing the MySQL dump.

    _getOutputFile(targetDir, database, compressMode)

    source code 

    Opens the output file used for saving the MySQL dump.

    The filename is either "mysqldump.txt" or "mysqldump-<database>.txt". The ".bz2" extension is added when the compress mode calls for bzip2 compression.

    Parameters:
    • targetDir - Target directory to write file in.
    • database - Name of the database (if any)
    • compressMode - Compress mode to be used for backed-up files.
    Returns:
    Tuple of (Output file object, filename), file opened in binary mode for use with executeCommand()
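    The naming rule reduces to a few lines. This is a hedged sketch of the documented behavior only; the real function also opens the file in binary mode for use with executeCommand(), which is omitted here, and only the bzip2 case is described as adding an extension.

    ```python
    import os.path

    def dump_filename(target_dir, database, compress_mode):
        # "mysqldump.txt" for all databases, "mysqldump-<database>.txt" otherwise
        name = "mysqldump.txt" if database is None else "mysqldump-%s.txt" % database
        if compress_mode == "bzip2":
            name += ".bz2"
        return os.path.join(target_dir, name)
    ```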

    backupDatabase(user, password, backupFile, database=None)

    source code 

    Backs up an individual MySQL database, or all databases.

    This function backs up either a named local MySQL database or all local MySQL databases, using the passed-in user and password (if provided) for connectivity. This function call always results in a full backup. There is no facility for incremental backups.

    The backup data will be written into the passed-in backup file. Normally, this would be an object as returned from open(), but it is possible to use something like a GzipFile to write compressed output. The caller is responsible for closing the passed-in backup file.

    Often, the "root" database user will be used when backing up all databases. An alternative is to create a separate MySQL "backup" user and grant that user rights to read (but not write) all of the databases that will be backed up.

    This function accepts a username and password. However, you probably do not want to pass those values in. This is because they will be provided to mysqldump via the command-line --user and --password switches, which will be visible to other users in the process listing.

    Instead, you should configure the username and password in one of MySQL's configuration files. Typically, this would be done by putting a stanza like this in /root/.my.cnf, to provide mysqldump with the root database username and its password:

      [mysqldump]
      user     = root
      password = <secret>
    

    If you are executing this function as some system user other than root, then the .my.cnf file would be placed in the home directory of that user. In either case, make sure to set restrictive permissions (typically, mode 0600) on .my.cnf to make sure that other users cannot read the file.

    Parameters:
    • user (String representing MySQL username, or None) - User to use for connecting to the database (if any)
    • password (String representing MySQL password, or None) - Password associated with user (if any)
    • backupFile (Python file object, as from open().) - File to use for writing the backup.
    • database (String representing database name, or None for all databases.) - Name of the database to be backed up.
    Raises:
    • ValueError - If some value is missing or invalid.
    • IOError - If there is a problem executing the MySQL dump.
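    Conceptually, the function streams mysqldump output into the caller-supplied file object. The sketch below uses subprocess for illustration only; Cedar Backup actually runs commands through its own executeCommand() wrapper. Note how --user and --password end up on the command line, which is exactly why ~/.my.cnf is the preferred place for credentials.

    ```python
    import subprocess

    def mysqldump_args(database=None, user=None, password=None):
        args = ["mysqldump"]
        if user is not None:
            args.append("--user=%s" % user)       # visible in the process listing!
        if password is not None:
            args.append("--password=%s" % password)
        if database is None:
            args.append("--all-databases")
        else:
            args.extend(["--databases", database])
        return args

    def backup_database(backup_file, database=None, user=None, password=None):
        # Stream the dump into the caller-supplied file object; the caller
        # remains responsible for closing it, matching the documented contract
        subprocess.run(mysqldump_args(database, user, password),
                       stdout=backup_file, check=True)
    ```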

    CedarBackup3.extend
    Package CedarBackup3 :: Package extend

    Package extend

    source code

    Official Cedar Backup Extensions

    This package provides official Cedar Backup extensions. These are Cedar Backup actions that are not part of the "standard" set of Cedar Backup actions, but are officially supported along with Cedar Backup.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Submodules

    Variables
      __package__ = None
    CedarBackup3.writers.util.IsoImage
    Package CedarBackup3 :: Package writers :: Module util :: Class IsoImage

    Class IsoImage

    source code

    object --+
             |
            IsoImage
    

    Represents an ISO filesystem image.

    Summary

    This object represents an ISO 9660 filesystem image. It is implemented in terms of the mkisofs program, which has been ported to many operating systems and platforms. A "sensible subset" of the mkisofs functionality is made available through the public interface, allowing callers to set a variety of basic options such as publisher id, application id, etc. as well as specify exactly which files and directories they want included in their image.

    By default, the image is created using the Rock Ridge protocol (using the -r option to mkisofs) because Rock Ridge discs are generally more useful on UN*X filesystems than standard ISO 9660 images. However, callers can fall back to the default mkisofs functionality by setting the useRockRidge instance variable to False. Note, however, that this option is not well-tested.

    Where Files and Directories are Placed in the Image

    Although this class is implemented in terms of the mkisofs program, its standard "image contents" semantics are slightly different than the original mkisofs semantics. The difference is that files and directories are added to the image with some additional information about their source directory kept intact.

    As an example, suppose you add the file /etc/profile to your image and you do not configure a graft point. The file /profile will be created in the image. The behavior for directories is similar. For instance, suppose that you add /etc/X11 to the image and do not configure a graft point. In this case, the directory /X11 will be created in the image, even if the original /etc/X11 directory is empty. This behavior differs from the standard mkisofs behavior!

    If a graft point is configured, it will be used to modify the point at which a file or directory is added into an image. Using the examples from above, let's assume you set a graft point of base when adding /etc/profile and /etc/X11 to your image. In this case, the file /base/profile and the directory /base/X11 would be added to the image.

    I feel that this behavior is more consistent than the original mkisofs behavior. However, to be fair, it is not quite as flexible, and some users might not like it. For this reason, the contentsOnly parameter to the addEntry method can be used to revert to the original behavior if desired.

    Instance Methods
     
    __init__(self, device=None, boundaries=None, graftPoint=None)
    Initializes an empty ISO image object.
    source code
     
    addEntry(self, path, graftPoint=None, override=False, contentsOnly=False)
    Adds an individual file or directory into the ISO image.
    source code
     
    getEstimatedSize(self)
    Returns the estimated size (in bytes) of the ISO image.
    source code
     
    _getEstimatedSize(self, entries)
    Returns the estimated size (in bytes) for the passed-in entries dictionary.
    source code
     
    writeImage(self, imagePath)
    Writes this image to disk using the image path.
    source code
     
    _buildGeneralArgs(self)
    Builds a list of general arguments to be passed to a mkisofs command.
    source code
     
    _buildSizeArgs(self, entries)
    Builds a list of arguments to be passed to a mkisofs command.
    source code
     
    _buildWriteArgs(self, entries, imagePath)
    Builds a list of arguments to be passed to a mkisofs command.
    source code
     
    _setDevice(self, value)
    Property target used to set the device value.
    source code
     
    _getDevice(self)
    Property target used to get the device value.
    source code
     
    _setBoundaries(self, value)
    Property target used to set the boundaries tuple.
    source code
     
    _getBoundaries(self)
    Property target used to get the boundaries value.
    source code
     
    _setGraftPoint(self, value)
    Property target used to set the graft point.
    source code
     
    _getGraftPoint(self)
    Property target used to get the graft point.
    source code
     
    _setUseRockRidge(self, value)
    Property target used to set the use RockRidge flag.
    source code
     
    _getUseRockRidge(self)
    Property target used to get the use RockRidge flag.
    source code
     
    _setApplicationId(self, value)
    Property target used to set the application id.
    source code
     
    _getApplicationId(self)
    Property target used to get the application id.
    source code
     
    _setBiblioFile(self, value)
    Property target used to set the biblio file.
    source code
     
    _getBiblioFile(self)
    Property target used to get the biblio file.
    source code
     
    _setPublisherId(self, value)
    Property target used to set the publisher id.
    source code
     
    _getPublisherId(self)
    Property target used to get the publisher id.
    source code
     
    _setPreparerId(self, value)
    Property target used to set the preparer id.
    source code
     
    _getPreparerId(self)
    Property target used to get the preparer id.
    source code
     
    _setVolumeId(self, value)
    Property target used to set the volume id.
    source code
     
    _getVolumeId(self)
    Property target used to get the volume id.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Static Methods [hide private]
     
    _buildDirEntries(entries)
    Uses an entries dictionary to build a list of directory locations for use by mkisofs.
    source code
    Properties [hide private]
      device
    Device that image will be written to (device path or SCSI id).
      boundaries
    Session boundaries as required by mkisofs.
      graftPoint
    Default image-wide graft point (see addEntry for details).
      useRockRidge
    Indicates whether to use RockRidge (default is True).
      applicationId
    Optionally specifies the ISO header application id value.
      biblioFile
    Optionally specifies the ISO bibliographic file name.
      publisherId
    Optionally specifies the ISO header publisher id value.
      preparerId
    Optionally specifies the ISO header preparer id value.
      volumeId
    Optionally specifies the ISO header volume id value.

    Inherited from object: __class__

    Method Details [hide private]

    __init__(self, device=None, boundaries=None, graftPoint=None)
    (Constructor)

    source code 

    Initializes an empty ISO image object.

    Only the most commonly-used configuration items can be set using this constructor. If you have a need to change the others, do so immediately after creating your object.

    The device and boundaries values are both required in order to write multisession discs. If either is missing or None, a multisession disc will not be written. The boundaries tuple is in terms of ISO sectors, as built by an image writer class and returned in a writer.MediaCapacity object.

    Parameters:
    • device (Either a filesystem path or a SCSI address) - Name of the device that the image will be written to
    • boundaries (Tuple (last_sess_start,next_sess_start) as returned from cdrecord -msinfo, or None) - Session boundaries as required by mkisofs
    • graftPoint (String representing a graft point path (see addEntry).) - Default graft point for this image.
    Overrides: object.__init__

    addEntry(self, path, graftPoint=None, override=False, contentsOnly=False)

    source code 

    Adds an individual file or directory into the ISO image.

    The path must exist and must be a file or a directory. By default, the entry will be placed into the image at the root directory, but this behavior can be overridden using the graftPoint parameter or instance variable.

    You can use the contentsOnly behavior to revert to the "original" mkisofs behavior for adding directories, which is to add only the items within the directory, and not the directory itself.

    Parameters:
    • path (String representing a path on disk) - File or directory to be added to the image
    • graftPoint (String representing a graft point path, as described above) - Graft point to be used when adding this entry
    • override (Boolean true/false) - Override an existing entry with the same path.
    • contentsOnly (Boolean true/false) - Add directory contents only (standard mkisofs behavior).
    Raises:
    • ValueError - If path is not a file or directory, or does not exist.
    • ValueError - If the path has already been added, and override is not set.
    • ValueError - If a path cannot be encoded properly.
    Notes:
    • Things get odd if you try to add a directory to an image that will be written to a multisession disc, and the same directory already exists in an earlier session on that disc. Not all of the data gets written. You really wouldn't want to do this anyway, I guess.
    • An exception will be thrown if the path has already been added to the image, unless the override parameter is set to True.
    • The method graftPoint parameter overrides the object-wide instance variable. If neither the method parameter nor the object-wide value is set, the path will be written at the image root. The graft point behavior is determined by the value which is in effect at the time this method is called, so you must set the object-wide value before calling this method for the first time, or your image may not be consistent.
    • You cannot use the local graftPoint parameter to "turn off" an object-wide instance variable by setting it to None. Python's default argument functionality buys us a lot, but it can't make this method psychic. :)
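    The precedence rules in the notes above amount to the following (a hypothetical helper, not part of the class):

    ```python
    def effective_graft_point(object_wide, method_param=None):
        """Sketch of the documented precedence: the method parameter wins
        when supplied; otherwise the object-wide value applies.  Because
        None is the default, passing None cannot "turn off" an
        object-wide value."""
        return method_param if method_param is not None else object_wide
    ```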

    getEstimatedSize(self)

    source code 

    Returns the estimated size (in bytes) of the ISO image.

    This is implemented via the -print-size option to mkisofs, so it might take a bit of time to execute. However, the result is as accurate as we can get, since it takes into account all of the ISO overhead, the true cost of directories in the structure, etc.

    Returns:
    Estimated size of the image, in bytes.
    Raises:
    • IOError - If there is a problem calling mkisofs.
    • ValueError - If there are no filesystem entries in the image
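    Since -print-size reports a sector count, converting that count to bytes is a single multiplication. A minimal sketch, assuming the conventional 2048-byte ISO-9660 sector size:

    ```python
    ISO_SECTOR_SIZE = 2048  # conventional ISO-9660 sector size, in bytes

    def sectors_to_bytes(sectors):
        """mkisofs -print-size reports a sector count; the estimated
        image size in bytes is that count times the sector size."""
        return sectors * ISO_SECTOR_SIZE
    ```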

    _getEstimatedSize(self, entries)

    source code 

    Returns the estimated size (in bytes) for the passed-in entries dictionary.

    Returns:
    Estimated size of the image, in bytes.
    Raises:
    • IOError - If there is a problem calling mkisofs.

    writeImage(self, imagePath)

    source code 

    Writes this image to disk using the image path.

    Parameters:
    • imagePath (String representing a path on disk) - Path to write image out as
    Raises:
    • IOError - If there is an error writing the image to disk.
    • ValueError - If there are no filesystem entries in the image
    • ValueError - If a path cannot be encoded properly.

    _buildDirEntries(entries)
    Static Method

    source code 

    Uses an entries dictionary to build a list of directory locations for use by mkisofs.

    We build a list of entries that can be passed to mkisofs. Each entry is either raw (if no graft point was configured) or in graft-point form as described above (if a graft point was configured). The dictionary keys are the path names, and the values are the graft points, if any.

    Parameters:
    • entries - Dictionary of image entries (i.e. self.entries)
    Returns:
    List of directory locations for use by mkisofs
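    A minimal sketch of that mapping follows. This is not the real _buildDirEntries code, and the `graft/=path` rendering is an assumption based on the mkisofs -graft-points syntax:

    ```python
    def build_dir_entries(entries):
        """Sketch: dictionary keys are path names, values are graft
        points (or None).  Entries with a graft point are rendered in
        graft-point form; others are passed through raw."""
        result = []
        for path, graft in entries.items():
            if graft is None:
                result.append(path)  # raw entry, no graft point configured
            else:
                result.append("%s/=%s" % (graft.strip("/"), path))
        return result
    ```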

    _buildGeneralArgs(self)

    source code 

    Builds a list of general arguments to be passed to a mkisofs command.

    The various instance variables (applicationId, etc.) are filled into the list of arguments if they are set. By default, we will build a RockRidge disc. If you decide to change this, think hard about whether you know what you're doing. This option is not well-tested.

    Returns:
    List suitable for passing to util.executeCommand as args.

    _buildSizeArgs(self, entries)

    source code 

    Builds a list of arguments to be passed to a mkisofs command.

    The various instance variables (applicationId, etc.) are filled into the list of arguments if they are set. The command will be built to just return size output (a simple count of sectors via the -print-size option), rather than an image file on disk.

    By default, we will build a RockRidge disc. If you decide to change this, think hard about whether you know what you're doing. This option is not well-tested.

    Parameters:
    • entries - Dictionary of image entries (i.e. self.entries)
    Returns:
    List suitable for passing to util.executeCommand as args.

    _buildWriteArgs(self, entries, imagePath)

    source code 

    Builds a list of arguments to be passed to a mkisofs command.

    The various instance variables (applicationId, etc.) are filled into the list of arguments if they are set. The command will be built to write an image to disk.

    By default, we will build a RockRidge disc. If you decide to change this, think hard about whether you know what you're doing. This option is not well-tested.

    Parameters:
    • entries - Dictionary of image entries (i.e. self.entries)
    • imagePath (String representing a path on disk) - Path to write image out as
    Returns:
    List suitable for passing to util.executeCommand as args.

    _setDevice(self, value)

    source code 

    Property target used to set the device value. If not None, the value can be either an absolute path or a SCSI id.

    Raises:
    • ValueError - If the value is not valid

    _setBoundaries(self, value)

    source code 

    Property target used to set the boundaries tuple. If not None, the value must be a tuple of two integers.

    Raises:
    • ValueError - If the tuple values are not integers.
    • IndexError - If the tuple does not contain enough elements.
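    The documented checks could look like this (a sketch, not the real property target):

    ```python
    def normalize_boundaries(value):
        """Sketch of the documented validation: None is allowed, or a
        tuple of two integers (last_sess_start, next_sess_start)."""
        if value is None:
            return None
        last_sess_start, next_sess_start = value[0], value[1]  # IndexError if too short
        if not isinstance(last_sess_start, int) or not isinstance(next_sess_start, int):
            raise ValueError("Boundaries must be a tuple of two integers")
        return (last_sess_start, next_sess_start)
    ```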

    _setGraftPoint(self, value)

    source code 

    Property target used to set the graft point. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setUseRockRidge(self, value)

    source code 

    Property target used to set the use RockRidge flag. No validations, but we normalize the value to True or False.

    _setApplicationId(self, value)

    source code 

    Property target used to set the application id. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setBiblioFile(self, value)

    source code 

    Property target used to set the biblio file. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setPublisherId(self, value)

    source code 

    Property target used to set the publisher id. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setPreparerId(self, value)

    source code 

    Property target used to set the preparer id. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setVolumeId(self, value)

    source code 

    Property target used to set the volume id. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    Property Details [hide private]

    device

    Device that image will be written to (device path or SCSI id).

    Get Method:
    _getDevice(self) - Property target used to get the device value.
    Set Method:
    _setDevice(self, value) - Property target used to set the device value.

    boundaries

    Session boundaries as required by mkisofs.

    Get Method:
    _getBoundaries(self) - Property target used to get the boundaries value.
    Set Method:
    _setBoundaries(self, value) - Property target used to set the boundaries tuple.

    graftPoint

    Default image-wide graft point (see addEntry for details).

    Get Method:
    _getGraftPoint(self) - Property target used to get the graft point.
    Set Method:
    _setGraftPoint(self, value) - Property target used to set the graft point.

    useRockRidge

    Indicates whether to use RockRidge (default is True).

    Get Method:
    _getUseRockRidge(self) - Property target used to get the use RockRidge flag.
    Set Method:
    _setUseRockRidge(self, value) - Property target used to set the use RockRidge flag.

    applicationId

    Optionally specifies the ISO header application id value.

    Get Method:
    _getApplicationId(self) - Property target used to get the application id.
    Set Method:
    _setApplicationId(self, value) - Property target used to set the application id.

    biblioFile

    Optionally specifies the ISO bibliographic file name.

    Get Method:
    _getBiblioFile(self) - Property target used to get the biblio file.
    Set Method:
    _setBiblioFile(self, value) - Property target used to set the biblio file.

    publisherId

    Optionally specifies the ISO header publisher id value.

    Get Method:
    _getPublisherId(self) - Property target used to get the publisher id.
    Set Method:
    _setPublisherId(self, value) - Property target used to set the publisher id.

    preparerId

    Optionally specifies the ISO header preparer id value.

    Get Method:
    _getPreparerId(self) - Property target used to get the preparer id.
    Set Method:
    _setPreparerId(self, value) - Property target used to set the preparer id.

    volumeId

    Optionally specifies the ISO header volume id value.

    Get Method:
    _getVolumeId(self) - Property target used to get the volume id.
    Set Method:
    _setVolumeId(self, value) - Property target used to set the volume id.

    CedarBackup3.extend.mysql.LocalConfig

    Class LocalConfig

    source code

    object --+
             |
            LocalConfig
    

    Class representing this extension's configuration document.

    This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit MySQL-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, validate and addConfig methods.


    Note: Lists within this class are "unordered" for equality comparisons.

    Instance Methods [hide private]
     
    __init__(self, xmlData=None, xmlPath=None, validate=True)
    Initializes a configuration object.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    __eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    validate(self)
    Validates configuration represented by the object.
    source code
     
    addConfig(self, xmlDom, parentNode)
    Adds a <mysql> configuration section as the next child of a parent.
    source code
     
    _setMysql(self, value)
    Property target used to set the mysql configuration value.
    source code
     
    _getMysql(self)
    Property target used to get the mysql configuration value.
    source code
     
    _parseXmlData(self, xmlData)
    Internal method to parse an XML string into the object.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Static Methods [hide private]
     
    _parseMysql(parentNode)
    Parses a mysql configuration section.
    source code
    Properties [hide private]
      mysql
    Mysql configuration in terms of a MysqlConfig object.

    Inherited from object: __class__

    Method Details [hide private]

    __init__(self, xmlData=None, xmlPath=None, validate=True)
    (Constructor)

    source code 

    Initializes a configuration object.

    If you initialize the object without passing either xmlData or xmlPath then configuration will be empty and will be invalid until it is filled in properly.

    No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded.

    Unless the validate argument is False, the LocalConfig.validate method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if validate is False, it might not be possible to parse the passed-in XML document if lower-level validations fail.

    Parameters:
    • xmlData (String data.) - XML data representing configuration.
    • xmlPath (Absolute path to a file on disk.) - Path to an XML file on disk.
    • validate (Boolean true/false.) - Validate the document after parsing it.
    Raises:
    • ValueError - If both xmlData and xmlPath are passed-in.
    • ValueError - If the XML data in xmlData or xmlPath cannot be parsed.
    • ValueError - If the parsed configuration document is not valid.
    Overrides: object.__init__

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to read in invalid configuration from disk.
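    The documented argument constraint (xmlData and xmlPath are mutually exclusive, and passing neither yields an empty, invalid configuration) could be checked like this hypothetical sketch:

    ```python
    def check_config_args(xmlData=None, xmlPath=None):
        """Sketch of the documented constructor constraint, not the
        real LocalConfig code: xmlData and xmlPath are mutually
        exclusive; passing neither is allowed."""
        if xmlData is not None and xmlPath is not None:
            raise ValueError("Use either xmlData or xmlPath, not both")
    ```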

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    validate(self)

    source code 

    Validates configuration represented by the object.

    The compress mode must be filled in. Then, if the 'all' flag is set, no databases are allowed, and if the 'all' flag is not set, at least one database is required.

    Raises:
    • ValueError - If one of the validations fails.

    addConfig(self, xmlDom, parentNode)

    source code 

    Adds a <mysql> configuration section as the next child of a parent.

    Third parties should use this function to write configuration related to this extension.

    We add the following fields to the document:

      user           //cb_config/mysql/user
      password       //cb_config/mysql/password
      compressMode   //cb_config/mysql/compress_mode
      all            //cb_config/mysql/all
    

    We also add groups of the following items, one list element per item:

      database       //cb_config/mysql/database
    
    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent that the section should be appended to.
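    The field layout above can be sketched with the standard library's DOM API. This is an illustration only, not the real addConfig; the "Y"/"N" boolean encoding is an assumption.

    ```python
    from xml.dom.minidom import getDOMImplementation

    def add_mysql_section(doc, parent, user, password, compress_mode, all_flag):
        """Sketch of the documented layout: adds a <mysql> section with
        //cb_config/mysql/{user,password,compress_mode,all} children."""
        section = doc.createElement("mysql")
        parent.appendChild(section)
        for tag, text in [("user", user), ("password", password),
                          ("compress_mode", compress_mode),
                          ("all", "Y" if all_flag else "N")]:
            node = doc.createElement(tag)
            node.appendChild(doc.createTextNode(text))
            section.appendChild(node)
        return section

    impl = getDOMImplementation()
    doc = impl.createDocument(None, "cb_config", None)
    add_mysql_section(doc, doc.documentElement, "backup", "secret", "gzip", False)
    ```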

    _setMysql(self, value)

    source code 

    Property target used to set the mysql configuration value. If not None, the value must be a MysqlConfig object.

    Raises:
    • ValueError - If the value is not a MysqlConfig

    _parseXmlData(self, xmlData)

    source code 

    Internal method to parse an XML string into the object.

    This method parses the XML document into a DOM tree (xmlDom) and then calls a static method to parse the mysql configuration section.

    Parameters:
    • xmlData (String data) - XML data to be parsed
    Raises:
    • ValueError - If the XML cannot be successfully parsed.

    _parseMysql(parentNode)
    Static Method

    source code 

    Parses a mysql configuration section.

    We read the following fields:

      user           //cb_config/mysql/user
      password       //cb_config/mysql/password
      compressMode   //cb_config/mysql/compress_mode
      all            //cb_config/mysql/all
    

    We also read groups of the following item, one list element per item:

      databases      //cb_config/mysql/database
    
    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    MysqlConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    Property Details [hide private]

    mysql

    Mysql configuration in terms of a MysqlConfig object.

    Get Method:
    _getMysql(self) - Property target used to get the mysql configuration value.
    Set Method:
    _setMysql(self, value) - Property target used to set the mysql configuration value.

    CedarBackup3.config.StoreConfig

    Class StoreConfig

    source code

    object --+
             |
            StoreConfig
    

    Class representing a Cedar Backup store configuration.

    The following restrictions exist on data in this class:

    • The source directory must be an absolute path.
    • The media type must be one of the values in VALID_MEDIA_TYPES.
    • The device type must be one of the values in VALID_DEVICE_TYPES.
    • The device path must be an absolute path.
    • The SCSI id, if provided, must be in the form specified by validateScsiId.
    • The drive speed must be an integer >= 1
    • The blanking behavior must be a BlankBehavior object
    • The refresh media delay must be an integer >= 0
    • The eject delay must be an integer >= 0

    Note that although the blanking factor must be a positive floating point number, it is stored as a string. This is done so that we can losslessly go back and forth between XML and object representations of configuration.
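    The numeric restrictions listed above could be validated like this (a sketch with a hypothetical name, not the real property targets):

    ```python
    def validate_numeric_fields(driveSpeed=None, refreshMediaDelay=None, ejectDelay=None):
        """Sketch of the documented restrictions: drive speed is an
        integer >= 1, and both delays are integers >= 0 (None means
        the value is unset and is accepted)."""
        if driveSpeed is not None and (not isinstance(driveSpeed, int) or driveSpeed < 1):
            raise ValueError("Drive speed must be an integer >= 1")
        for name, value in (("refresh media delay", refreshMediaDelay),
                            ("eject delay", ejectDelay)):
            if value is not None and (not isinstance(value, int) or value < 0):
                raise ValueError("%s must be an integer >= 0" % name)
    ```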

    Instance Methods [hide private]
     
    __init__(self, sourceDir=None, mediaType=None, deviceType=None, devicePath=None, deviceScsiId=None, driveSpeed=None, checkData=False, warnMidnite=False, noEject=False, checkMedia=False, blankBehavior=None, refreshMediaDelay=None, ejectDelay=None)
    Constructor for the StoreConfig class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    __eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    _setSourceDir(self, value)
    Property target used to set the source directory.
    source code
     
    _getSourceDir(self)
    Property target used to get the source directory.
    source code
     
    _setMediaType(self, value)
    Property target used to set the media type.
    source code
     
    _getMediaType(self)
    Property target used to get the media type.
    source code
     
    _setDeviceType(self, value)
    Property target used to set the device type.
    source code
     
    _getDeviceType(self)
    Property target used to get the device type.
    source code
     
    _setDevicePath(self, value)
    Property target used to set the device path.
    source code
     
    _getDevicePath(self)
    Property target used to get the device path.
    source code
     
    _setDeviceScsiId(self, value)
    Property target used to set the SCSI id. The SCSI id must be valid per validateScsiId.
    source code
     
    _getDeviceScsiId(self)
    Property target used to get the SCSI id.
    source code
     
    _setDriveSpeed(self, value)
    Property target used to set the drive speed.
    source code
     
    _getDriveSpeed(self)
    Property target used to get the drive speed.
    source code
     
    _setCheckData(self, value)
    Property target used to set the check data flag.
    source code
     
    _getCheckData(self)
    Property target used to get the check data flag.
    source code
     
    _setCheckMedia(self, value)
    Property target used to set the check media flag.
    source code
     
    _getCheckMedia(self)
    Property target used to get the check media flag.
    source code
     
    _setWarnMidnite(self, value)
    Property target used to set the midnite warning flag.
    source code
     
    _getWarnMidnite(self)
    Property target used to get the midnite warning flag.
    source code
     
    _setNoEject(self, value)
    Property target used to set the no-eject flag.
    source code
     
    _getNoEject(self)
    Property target used to get the no-eject flag.
    source code
     
    _setBlankBehavior(self, value)
    Property target used to set blanking behavior configuration.
    source code
     
    _getBlankBehavior(self)
    Property target used to get the blanking behavior configuration.
    source code
     
    _setRefreshMediaDelay(self, value)
    Property target used to set the refreshMediaDelay.
    source code
     
    _getRefreshMediaDelay(self)
    Property target used to get the refreshMediaDelay.
    source code
     
    _setEjectDelay(self, value)
    Property target used to set the ejectDelay.
    source code
     
    _getEjectDelay(self)
    Property target used to get the ejectDelay.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties [hide private]
      sourceDir
    Directory whose contents should be written to media.
      mediaType
    Type of the media (see notes above).
      deviceType
    Type of the device (optional, see notes above).
      devicePath
    Filesystem device name for writer device.
      deviceScsiId
    SCSI id for writer device (optional, see notes above).
      driveSpeed
    Speed of the drive.
      checkData
    Whether resulting image should be validated.
      checkMedia
    Whether media should be checked before being written to.
      warnMidnite
    Whether to generate warnings for crossing midnite.
      noEject
    Indicates that the writer device should not be ejected.
      blankBehavior
    Controls optimized blanking behavior.
      refreshMediaDelay
    Delay, in seconds, to add after refreshing media.
      ejectDelay
    Delay, in seconds, to add after ejecting media before closing the tray.

    Inherited from object: __class__

    Method Details [hide private]

    __init__(self, sourceDir=None, mediaType=None, deviceType=None, devicePath=None, deviceScsiId=None, driveSpeed=None, checkData=False, warnMidnite=False, noEject=False, checkMedia=False, blankBehavior=None, refreshMediaDelay=None, ejectDelay=None)
    (Constructor)

    source code 

    Constructor for the StoreConfig class.

    Parameters:
    • sourceDir - Directory whose contents should be written to media.
    • mediaType - Type of the media (see notes above).
    • deviceType - Type of the device (optional, see notes above).
    • devicePath - Filesystem device name for writer device, i.e. /dev/cdrw.
    • deviceScsiId - SCSI id for writer device, i.e. [<method>:]scsibus,target,lun.
    • driveSpeed - Speed of the drive, i.e. 2 for 2x drive, etc.
    • checkData - Whether resulting image should be validated.
    • checkMedia - Whether media should be checked before being written to.
    • warnMidnite - Whether to generate warnings for crossing midnite.
    • noEject - Indicates that the writer device should not be ejected.
    • blankBehavior - Controls optimized blanking behavior.
    • refreshMediaDelay - Delay, in seconds, to add after refreshing media.
    • ejectDelay - Delay, in seconds, to add after ejecting media before closing the tray.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setSourceDir(self, value)

    source code 

    Property target used to set the source directory. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setMediaType(self, value)

    source code 

    Property target used to set the media type. The value must be one of VALID_MEDIA_TYPES.

    Raises:
    • ValueError - If the value is not valid.

    _setDeviceType(self, value)

    source code 

    Property target used to set the device type. The value must be one of VALID_DEVICE_TYPES.

    Raises:
    • ValueError - If the value is not valid.

    _setDevicePath(self, value)

    source code 

    Property target used to set the device path. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setDeviceScsiId(self, value)

    source code 

    Property target used to set the SCSI id. The SCSI id must be valid per validateScsiId.

    Raises:
    • ValueError - If the value is not valid.

    _setDriveSpeed(self, value)

    source code 

    Property target used to set the drive speed. The drive speed must be valid per validateDriveSpeed.

    Raises:
    • ValueError - If the value is not valid.

    _setCheckData(self, value)

    source code 

    Property target used to set the check data flag. No validations, but we normalize the value to True or False.

    _setCheckMedia(self, value)

    source code 

    Property target used to set the check media flag. No validations, but we normalize the value to True or False.

    _setWarnMidnite(self, value)

    source code 

    Property target used to set the midnite warning flag. No validations, but we normalize the value to True or False.

    _setNoEject(self, value)

    source code 

    Property target used to set the no-eject flag. No validations, but we normalize the value to True or False.

    _setBlankBehavior(self, value)

    source code 

    Property target used to set blanking behavior configuration. If not None, the value must be a BlankBehavior object.

    Raises:
    • ValueError - If the value is not a BlankBehavior

    _setRefreshMediaDelay(self, value)

    source code 

    Property target used to set the refreshMediaDelay. The value must be an integer >= 0.

    Raises:
    • ValueError - If the value is not valid.

    _setEjectDelay(self, value)

    source code 

    Property target used to set the ejectDelay. The value must be an integer >= 0.

    Raises:
    • ValueError - If the value is not valid.
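    As an illustration of the property-target pattern used throughout this class, here is a simplified sketch (this is not the actual Cedar Backup implementation; the class and validation details are illustrative) showing how a validating _set method and a _get method are wired to a public attribute via the built-in property() function:

    ```python
    class _ExampleConfig:
        """Hypothetical sketch of the get/set property-target pattern."""

        def __init__(self):
            self._ejectDelay = None

        def _setEjectDelay(self, value):
            """Property target used to set the ejectDelay; must be an integer >= 0."""
            if value is None:
                self._ejectDelay = None
                return
            value = int(value)
            if value < 0:
                raise ValueError("Eject delay must be an integer >= 0.")
            self._ejectDelay = value

        def _getEjectDelay(self):
            """Property target used to get the ejectDelay."""
            return self._ejectDelay

        # property() wires the private targets to the public attribute name
        ejectDelay = property(_getEjectDelay, _setEjectDelay, None,
                              "Delay, in seconds, to add after ejecting media.")
    ```

    Assigning obj.ejectDelay = 5 then routes through _setEjectDelay, so invalid values raise ValueError at assignment time rather than later during a backup run.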

    Property Details [hide private]

    sourceDir

    Directory whose contents should be written to media.

    Get Method:
    _getSourceDir(self) - Property target used to get the source directory.
    Set Method:
    _setSourceDir(self, value) - Property target used to set the source directory.

    mediaType

    Type of the media (see notes above).

    Get Method:
    _getMediaType(self) - Property target used to get the media type.
    Set Method:
    _setMediaType(self, value) - Property target used to set the media type.

    deviceType

    Type of the device (optional, see notes above).

    Get Method:
    _getDeviceType(self) - Property target used to get the device type.
    Set Method:
    _setDeviceType(self, value) - Property target used to set the device type.

    devicePath

    Filesystem device name for writer device.

    Get Method:
    _getDevicePath(self) - Property target used to get the device path.
    Set Method:
    _setDevicePath(self, value) - Property target used to set the device path.

    deviceScsiId

    SCSI id for writer device (optional, see notes above).

    Get Method:
    _getDeviceScsiId(self) - Property target used to get the SCSI id.
    Set Method:
    _setDeviceScsiId(self, value) - Property target used to set the SCSI id.

    driveSpeed

    Speed of the drive.

    Get Method:
    _getDriveSpeed(self) - Property target used to get the drive speed.
    Set Method:
    _setDriveSpeed(self, value) - Property target used to set the drive speed.

    checkData

    Whether resulting image should be validated.

    Get Method:
    _getCheckData(self) - Property target used to get the check data flag.
    Set Method:
    _setCheckData(self, value) - Property target used to set the check data flag.

    checkMedia

    Whether media should be checked before being written to.

    Get Method:
    _getCheckMedia(self) - Property target used to get the check media flag.
    Set Method:
    _setCheckMedia(self, value) - Property target used to set the check media flag.

    warnMidnite

    Whether to generate warnings for crossing midnite.

    Get Method:
    _getWarnMidnite(self) - Property target used to get the midnite warning flag.
    Set Method:
    _setWarnMidnite(self, value) - Property target used to set the midnite warning flag.

    noEject

    Indicates that the writer device should not be ejected.

    Get Method:
    _getNoEject(self) - Property target used to get the no-eject flag.
    Set Method:
    _setNoEject(self, value) - Property target used to set the no-eject flag.

    blankBehavior

    Controls optimized blanking behavior.

    Get Method:
    _getBlankBehavior(self) - Property target used to get the blanking behavior configuration.
    Set Method:
    _setBlankBehavior(self, value) - Property target used to set blanking behavior configuration.

    refreshMediaDelay

    Delay, in seconds, to add after refreshing media.

    Get Method:
    _getRefreshMediaDelay(self) - Property target used to get the action refreshMediaDelay.
    Set Method:
    _setRefreshMediaDelay(self, value) - Property target used to set the refreshMediaDelay.

    ejectDelay

    Delay, in seconds, to add after ejecting media before closing the tray.

    Get Method:
    _getEjectDelay(self) - Property target used to get the action ejectDelay.
    Set Method:
    _setEjectDelay(self, value) - Property target used to set the ejectDelay.

    cdwriter

    Module cdwriter


    Classes

    CdWriter
    MediaCapacity
    MediaDefinition

    Variables

    CDRECORD_COMMAND
    EJECT_COMMAND
    MEDIA_CDRW_74
    MEDIA_CDRW_80
    MEDIA_CDR_74
    MEDIA_CDR_80
    MKISOFS_COMMAND
    __package__
    logger

    [hide private]

    CedarBackup3.action
    Package CedarBackup3 :: Module action
    [hide private]
    [frames] | [no frames]

    Module action

    source code

    Provides interface backwards compatibility.

    In Cedar Backup 2.10.0, a refactoring effort took place to reorganize the code for the standard actions. The code formerly in action.py was split into various other files in the CedarBackup3.actions package. This mostly-empty file remains to preserve the Cedar Backup library interface.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Variables [hide private]
      __package__ = 'CedarBackup3'
    CedarBackup3.xmlutil
    Package CedarBackup3 :: Module xmlutil
    [hide private]
    [frames] | [no frames]

    Module xmlutil

    source code

    Provides general XML-related functionality.

    What I'm trying to do here is abstract much of the functionality that directly accesses the DOM tree. This is not so much to "protect" the other code from the DOM, but to standardize the way it's used. It will also help extension authors write code that easily looks more like the rest of Cedar Backup.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Classes [hide private]
      Serializer
    XML serializer class.
    Functions [hide private]
     
    createInputDom(xmlData, name='cb_config')
    Creates a DOM tree based on reading an XML string.
    source code
     
    createOutputDom(name='cb_config')
    Creates a DOM tree used for writing an XML document.
    source code
     
    serializeDom(xmlDom, indent=3)
    Serializes a DOM tree and returns the result in a string.
    source code
     
    isElement(node)
    Returns True or False depending on whether the XML node is an element node.
    source code
     
    readChildren(parent, name)
    Returns a list of nodes with a given name immediately beneath the parent.
    source code
     
    readFirstChild(parent, name)
    Returns the first child with a given name immediately beneath the parent.
    source code
     
    readStringList(parent, name)
    Returns a list of the string contents associated with nodes with a given name immediately beneath the parent.
    source code
     
    readString(parent, name)
    Returns string contents of the first child with a given name immediately beneath the parent.
    source code
     
    readInteger(parent, name)
    Returns integer contents of the first child with a given name immediately beneath the parent.
    source code
     
    readBoolean(parent, name)
    Returns boolean contents of the first child with a given name immediately beneath the parent.
    source code
     
    addContainerNode(xmlDom, parentNode, nodeName)
    Adds a container node as the next child of a parent node.
    source code
     
    addStringNode(xmlDom, parentNode, nodeName, nodeValue)
    Adds a text node as the next child of a parent, to contain a string.
    source code
     
    addIntegerNode(xmlDom, parentNode, nodeName, nodeValue)
    Adds a text node as the next child of a parent, to contain an integer.
    source code
     
    addBooleanNode(xmlDom, parentNode, nodeName, nodeValue)
    Adds a text node as the next child of a parent, to contain a boolean.
    source code
     
    readLong(parent, name)
    Returns long integer contents of the first child with a given name immediately beneath the parent.
    source code
     
    readFloat(parent, name)
    Returns float contents of the first child with a given name immediately beneath the parent.
    source code
     
    addLongNode(xmlDom, parentNode, nodeName, nodeValue)
    Adds a text node as the next child of a parent, to contain a long integer.
    source code
     
    _encodeText(text, encoding)
    Safely encodes the passed-in text as a Unicode string, converting bytes to UTF-8 if necessary.
    source code
     
    _translateCDATAAttr(characters)
    Handles normalization and some intelligence about quoting.
    source code
     
    _translateCDATA(characters, encoding='UTF-8', prev_chars='', markupSafe=0) source code
    Variables [hide private]
      TRUE_BOOLEAN_VALUES = ['Y', 'y']
    List of boolean values in XML representing True.
      FALSE_BOOLEAN_VALUES = ['N', 'n']
    List of boolean values in XML representing False.
      VALID_BOOLEAN_VALUES = ['Y', 'y', 'N', 'n']
    List of valid boolean values in XML.
      logger = logging.getLogger("CedarBackup3.log.xml")
      __package__ = 'CedarBackup3'
    Function Details [hide private]

    createInputDom(xmlData, name='cb_config')

    source code 

    Creates a DOM tree based on reading an XML string.

    Parameters:
    • name - Assumed base name of the document (root node name).
    Returns:
    Tuple (xmlDom, parentNode) for the parsed document
    Raises:
    • ValueError - If the document can't be parsed.

    createOutputDom(name='cb_config')

    source code 

    Creates a DOM tree used for writing an XML document.

    Parameters:
    • name - Base name of the document (root node name).
    Returns:
    Tuple (xmlDom, parentNode) for the new document

    serializeDom(xmlDom, indent=3)

    source code 

    Serializes a DOM tree and returns the result in a string.

    Parameters:
    • xmlDom - XML DOM tree to serialize
    • indent - Number of spaces to indent, as an integer
    Returns:
    String form of DOM tree, pretty-printed.
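    The round trip these three helpers provide can be sketched using only the standard library's xml.dom.minidom (the real functions live in CedarBackup3.xmlutil and add validation and error handling on top of this):

    ```python
    from xml.dom import minidom, getDOMImplementation

    # "createInputDom" equivalent: parse an XML string into (dom, root node)
    xmlData = "<cb_config><option>value</option></cb_config>"
    xmlDom = minidom.parseString(xmlData)
    parentNode = xmlDom.documentElement        # root node, named cb_config

    # "createOutputDom" equivalent: an empty document ready for writing
    impl = getDOMImplementation()
    outputDom = impl.createDocument(None, "cb_config", None)

    # "serializeDom" equivalent: pretty-print the tree with a 3-space indent
    result = xmlDom.toprettyxml(indent="   ")
    ```

    Extension code normally parses configuration with createInputDom, reads values with the read* helpers below, and writes new documents through createOutputDom plus the add*Node helpers.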

    readChildren(parent, name)

    source code 

    Returns a list of nodes with a given name immediately beneath the parent.

    By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node.

    Underneath, we use the Python getElementsByTagName method, which is pretty cool, but which (surprisingly?) returns a list of all children with a given name below the parent, at any level. We just prune that list to include only children whose parentNode matches the passed-in parent.

    Parameters:
    • parent - Parent node to search beneath.
    • name - Name of nodes to search for.
    Returns:
    List of child nodes with correct parent, or an empty list if no matching nodes are found.
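    The pruning described above can be sketched as follows (a simplified stand-in for readChildren, not the actual implementation): getElementsByTagName returns matches at any depth, so we keep only nodes whose parentNode is the passed-in parent.

    ```python
    from xml.dom import minidom

    def readChildrenSketch(parent, name):
        """Keep only matches that are direct children of the parent."""
        return [entry for entry in parent.getElementsByTagName(name)
                if entry.parentNode is parent]

    dom = minidom.parseString("<a><b/><c><b/></c></a>")
    root = dom.documentElement
    direct = readChildrenSketch(root, "b")   # excludes the nested <b> under <c>
    ```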

    readFirstChild(parent, name)

    source code 

    Returns the first child with a given name immediately beneath the parent.

    By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node.

    Parameters:
    • parent - Parent node to search beneath.
    • name - Name of node to search for.
    Returns:
    First properly-named child of parent, or None if no matching nodes are found.

    readStringList(parent, name)

    source code 

    Returns a list of the string contents associated with nodes with a given name immediately beneath the parent.

    By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node.

    First, we find all of the nodes using readChildren, and then we retrieve the "string contents" of each of those nodes. The returned list has one entry per matching node. We assume that string contents of a given node belong to the first TEXT_NODE child of that node. Nodes which have no TEXT_NODE children are not represented in the returned list.

    Parameters:
    • parent - Parent node to search beneath.
    • name - Name of node to search for.
    Returns:
    List of strings as described above, or None if no matching nodes are found.
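    The "first TEXT_NODE child" convention can be illustrated with a small sketch (the helper name here is illustrative, not part of the module's API):

    ```python
    from xml.dom import minidom

    def firstTextData(node):
        """Return the data of the node's first TEXT_NODE child, or None."""
        for child in node.childNodes:
            if child.nodeType == child.TEXT_NODE:
                return child.data
        return None

    dom = minidom.parseString("<dirs><path>/opt/backup</path><path></path></dirs>")
    nodes = dom.documentElement.getElementsByTagName("path")
    contents = [firstTextData(entry) for entry in nodes]
    # the empty <path></path> element has no TEXT_NODE child, so it yields None;
    # readStringList would simply omit such nodes from its result
    ```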

    readString(parent, name)

    source code 

    Returns string contents of the first child with a given name immediately beneath the parent.

    By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node. We assume that string contents of a given node belong to the first TEXT_NODE child of that node.

    Parameters:
    • parent - Parent node to search beneath.
    • name - Name of node to search for.
    Returns:
    String contents of node or None if no matching nodes are found.

    readInteger(parent, name)

    source code 

    Returns integer contents of the first child with a given name immediately beneath the parent.

    By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node.

    Parameters:
    • parent - Parent node to search beneath.
    • name - Name of node to search for.
    Returns:
    Integer contents of node or None if no matching nodes are found.
    Raises:
    • ValueError - If the string at the location can't be converted to an integer.

    readBoolean(parent, name)

    source code 

    Returns boolean contents of the first child with a given name immediately beneath the parent.

    By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node.

    The string value of the node must be one of the values in VALID_BOOLEAN_VALUES.

    Parameters:
    • parent - Parent node to search beneath.
    • name - Name of node to search for.
    Returns:
    Boolean contents of node or None if no matching nodes are found.
    Raises:
    • ValueError - If the string at the location can't be converted to a boolean.
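    The Y/N convention can be sketched using the value lists shown in the Variables section above (the conversion function itself is illustrative):

    ```python
    TRUE_BOOLEAN_VALUES = ['Y', 'y']
    FALSE_BOOLEAN_VALUES = ['N', 'n']
    VALID_BOOLEAN_VALUES = TRUE_BOOLEAN_VALUES + FALSE_BOOLEAN_VALUES

    def toBoolean(string):
        """Convert an XML Y/N string to a boolean, per the lists above."""
        if string in TRUE_BOOLEAN_VALUES:
            return True
        if string in FALSE_BOOLEAN_VALUES:
            return False
        raise ValueError("Boolean values must be one of %s." % VALID_BOOLEAN_VALUES)
    ```

    Note that strings like "true" or "1" are not accepted; only the four values in VALID_BOOLEAN_VALUES are valid.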

    addContainerNode(xmlDom, parentNode, nodeName)

    source code 

    Adds a container node as the next child of a parent node.

    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent node to create child for.
    • nodeName - Name of the new container node.
    Returns:
    Reference to the newly-created node.

    addStringNode(xmlDom, parentNode, nodeName, nodeValue)

    source code 

    Adds a text node as the next child of a parent, to contain a string.

    If the nodeValue is None, then the node will be created, but will be empty (i.e. will contain no text node child).

    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent node to create child for.
    • nodeName - Name of the new container node.
    • nodeValue - The value to put into the node.
    Returns:
    Reference to the newly-created node.

    addIntegerNode(xmlDom, parentNode, nodeName, nodeValue)

    source code 

    Adds a text node as the next child of a parent, to contain an integer.

    If the nodeValue is None, then the node will be created, but will be empty (i.e. will contain no text node child).

    The integer will be converted to a string using "%d". The result will be added to the document via addStringNode.

    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent node to create child for.
    • nodeName - Name of the new container node.
    • nodeValue - The value to put into the node.
    Returns:
    Reference to the newly-created node.

    addBooleanNode(xmlDom, parentNode, nodeName, nodeValue)

    source code 

    Adds a text node as the next child of a parent, to contain a boolean.

    If the nodeValue is None, then the node will be created, but will be empty (i.e. will contain no text node child).

    Boolean True, or anything else interpreted as True by Python, will be converted to a string "Y". Anything else will be converted to a string "N". The result is added to the document via addStringNode.

    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent node to create child for.
    • nodeName - Name of the new container node.
    • nodeValue - The value to put into the node.
    Returns:
    Reference to the newly-created node.

    readLong(parent, name)

    source code 

    Returns long integer contents of the first child with a given name immediately beneath the parent.

    By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node.

    Parameters:
    • parent - Parent node to search beneath.
    • name - Name of node to search for.
    Returns:
    Long integer contents of node or None if no matching nodes are found.
    Raises:
    • ValueError - If the string at the location can't be converted to an integer.

    readFloat(parent, name)

    source code 

    Returns float contents of the first child with a given name immediately beneath the parent.

    By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node.

    Parameters:
    • parent - Parent node to search beneath.
    • name - Name of node to search for.
    Returns:
    Float contents of node or None if no matching nodes are found.
    Raises:
    • ValueError - If the string at the location can't be converted to a float value.

    addLongNode(xmlDom, parentNode, nodeName, nodeValue)

    source code 

    Adds a text node as the next child of a parent, to contain a long integer.

    If the nodeValue is None, then the node will be created, but will be empty (i.e. will contain no text node child).

    The integer will be converted to a string using "%d". The result will be added to the document via addStringNode.

    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent node to create child for.
    • nodeName - Name of the new container node.
    • nodeValue - The value to put into the node.
    Returns:
    Reference to the newly-created node.

    _translateCDATAAttr(characters)

    source code 

    Handles normalization and some intelligence about quoting.

    Copyright: This code, prior to customization, was part of the PyXML codebase, and before that was part of the 4DOM suite developed by Fourthought, Inc. In its original form, it was Copyright (c) 2000 Fourthought Inc, USA; All Rights Reserved.

    _translateCDATA(characters, encoding='UTF-8', prev_chars='', markupSafe=0)

    source code 

    Copyright: This code, prior to customization, was part of the PyXML codebase, and before that was part of the 4DOM suite developed by Fourthought, Inc. In its original form, it was Copyright (c) 2000 Fourthought Inc, USA; All Rights Reserved.


    constants

    Module constants


    Variables

    COLLECT_INDICATOR
    DIGEST_EXTENSION
    DIR_TIME_FORMAT
    INDICATOR_PATTERN
    STAGE_INDICATOR
    STORE_INDICATOR
    __package__

    [hide private]

    CedarBackup3.filesystem.PurgeItemList
    Package CedarBackup3 :: Module filesystem :: Class PurgeItemList
    [hide private]
    [frames] | [no frames]

    Class PurgeItemList

    source code

    object --+        
             |        
          list --+    
                 |    
    FilesystemList --+
                     |
                    PurgeItemList
    

    List of files and directories to be purged.

    A PurgeItemList is a FilesystemList containing a list of files and directories to be purged. On top of the generic functionality provided by FilesystemList, this class adds functionality to remove items that are too young to be purged, and to actually remove each item in the list from the filesystem.

    The other main difference is that when you add a directory's contents to a purge item list, the directory itself is not added to the list. This way, if someone asks to purge within /opt/backup/collect, that directory doesn't get removed once all of the files within it are gone.

    Instance Methods [hide private]
    new empty list
    __init__(self)
    Initializes a list with no configured exclusions.
    source code
     
    addDirContents(self, path, recursive=True, addSelf=True, linkDepth=0, dereference=False)
    Adds the contents of a directory to the list.
    source code
     
    removeYoungFiles(self, daysOld)
    Removes from the list files younger than a certain age (in days).
    source code
     
    purgeItems(self)
    Purges all items in the list.
    source code

    Inherited from FilesystemList: addDir, addFile, normalize, removeDirs, removeFiles, removeInvalid, removeLinks, removeMatch, verify

    Inherited from list: __add__, __contains__, __delitem__, __delslice__, __eq__, __ge__, __getattribute__, __getitem__, __getslice__, __gt__, __iadd__, __imul__, __iter__, __le__, __len__, __lt__, __mul__, __ne__, __new__, __repr__, __reversed__, __rmul__, __setitem__, __setslice__, __sizeof__, append, count, extend, index, insert, pop, remove, reverse, sort

    Inherited from object: __delattr__, __format__, __reduce__, __reduce_ex__, __setattr__, __str__, __subclasshook__

    Class Variables [hide private]

    Inherited from list: __hash__

    Properties [hide private]

    Inherited from FilesystemList: excludeBasenamePatterns, excludeDirs, excludeFiles, excludeLinks, excludePaths, excludePatterns, ignoreFile

    Inherited from object: __class__

    Method Details [hide private]

    __init__(self)
    (Constructor)

    source code 

    Initializes a list with no configured exclusions.

    Returns: new empty list
    Overrides: object.__init__

    addDirContents(self, path, recursive=True, addSelf=True, linkDepth=0, dereference=False)

    source code 

    Adds the contents of a directory to the list.

    The path must exist and must be a directory or a link to a directory. The contents of the directory (but not the directory path itself) will be recursively added to the list, subject to any exclusions that are in place. If you only want the directory's immediate contents to be added, then pass in recursive=False.

    Parameters:
    • path (String representing a path on disk) - Directory path whose contents should be added to the list
    • recursive (Boolean value) - Indicates whether directory contents should be added recursively.
    • addSelf - Ignored in this subclass.
    • linkDepth (Integer value, where zero means not to follow any soft links) - Depth of soft links that should be followed
    • dereference (Boolean value) - Indicates whether soft links, if followed, should be dereferenced
    Returns:
    Number of items recursively added to the list
    Raises:
    • ValueError - If path is not a directory or does not exist.
    • ValueError - If the path could not be encoded properly.
    Overrides: FilesystemList.addDirContents
    Notes:
    • If a directory's absolute path matches an exclude pattern or path, or if the directory contains the configured ignore file, then the directory and all of its contents will be recursively excluded from the list.
    • If the passed-in directory happens to be a soft link, it will be recursed. However, the linkDepth parameter controls whether any soft links within the directory will be recursed. The link depth is the maximum depth of the tree at which soft links should be followed. So, a depth of 0 does not follow any soft links, a depth of 1 follows only links within the passed-in directory, a depth of 2 follows the links at the next level down, etc.
    • Any invalid soft links (i.e. soft links that point to non-existent items) will be silently ignored.
    • The excludeDirs flag only controls whether any given soft link path itself is added to the list once it has been discovered. It does not modify any behavior related to directory recursion.
    • The excludeDirs flag only controls whether any given directory path itself is added to the list once it has been discovered. It does not modify any behavior related to directory recursion.
    • If you call this method on a link to a directory that link will never be dereferenced (it may, however, be followed).

    removeYoungFiles(self, daysOld)

    source code 

    Removes from the list files younger than a certain age (in days).

    Any file whose "age" in days is less than (<) the value of the daysOld parameter will be removed from the list so that it will not be purged later when purgeItems is called. Directories and soft links will be ignored.

    The "age" of a file is the amount of time since the file was last used, per the most recent of the file's st_atime and st_mtime values.

    Parameters:
    • daysOld (Integer value >= 0.) - Minimum age of files that are to be kept in the list.
    Returns:
    Number of entries removed

    Note: Some people find the "sense" of this method confusing or "backwards". Keep in mind that this method is used to remove items from the list, not from the filesystem! It removes from the list those items that you would not want to purge because they are too young. For example, passing in a daysOld of zero (0) would remove no files from the list, which would result in all of the files being purged later. I would be happy to make a synonym of this method with an easier-to-understand "sense", if someone can suggest one.
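    The "age" rule described above can be sketched as follows (the function name is illustrative, not the actual implementation): age is measured from the most recent of st_atime and st_mtime, and is clamped so it can never be negative (see the v3.1.4 Changelog entry).

    ```python
    import os
    import time

    def fileAgeInWholeDays(path):
        """Age in whole days since the file was last used (atime or mtime)."""
        stat = os.stat(path)
        lastUse = max(stat.st_atime, stat.st_mtime)   # most recent use wins
        ageInDays = int((time.time() - lastUse) // (60 * 60 * 24))
        return max(0, ageInDays)   # clamp so the age can never be negative

    import tempfile
    with tempfile.NamedTemporaryFile() as handle:
        age = fileAgeInWholeDays(handle.name)   # brand-new file: 0 whole days
    ```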

    purgeItems(self)

    source code 

    Purges all items in the list.

    Every item in the list will be purged. Directories in the list will not be purged recursively, and hence will only be removed if they are empty. Errors will be ignored.

    To facilitate easy removal of directories that will end up being empty, the delete process happens in two passes: files first (including soft links), then directories.

    Returns:
    Tuple containing count of (files, dirs) removed
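    The two-pass behavior can be sketched as follows (a simplified illustration, not the actual implementation): files and soft links go first, then directories, with errors ignored, so a directory is removed only if the first pass left it empty.

    ```python
    import os

    def purgeSketch(entries):
        """Two-pass purge: files and links first, then (empty) directories."""
        files, dirs = 0, 0
        for entry in entries:                    # pass 1: files and soft links
            if os.path.islink(entry) or os.path.isfile(entry):
                try:
                    os.remove(entry)
                    files += 1
                except OSError:
                    pass                         # errors are ignored
        for entry in entries:                    # pass 2: directories
            if os.path.isdir(entry):
                try:
                    os.rmdir(entry)              # fails unless directory is empty
                    dirs += 1
                except OSError:
                    pass
        return (files, dirs)
    ```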

    tools

    Module tools


    Variables


    [hide private]

    CedarBackup3.extend.subversion
    Package CedarBackup3 :: Package extend :: Module subversion
    [hide private]
    [frames] | [no frames]

    Module subversion

    source code

    Provides an extension to back up Subversion repositories.

    This is a Cedar Backup extension used to back up Subversion repositories via the Cedar Backup command line. Each Subversion repository can be backed up using the same collect modes allowed for filesystems in the standard Cedar Backup collect action: weekly, daily, incremental.

    This extension requires a new configuration section <subversion> and is intended to be run either immediately before or immediately after the standard collect action. Aside from its own configuration, it requires the options and collect configuration sections in the standard Cedar Backup configuration file.

    There are two different kinds of Subversion repositories at this writing: BDB (Berkeley Database) and FSFS (a "filesystem within a filesystem"). Although the repository type can be specified in configuration, that information is just kept around for reference. It doesn't affect the backup. Both kinds of repositories are backed up in the same way, using svnadmin dump in an incremental mode.

    It turns out that FSFS repositories can also be backed up just like any other filesystem directory. If you would rather do that, then use the normal collect action. This is probably simpler, although it carries its own advantages and disadvantages (plus you will have to be careful to exclude the working directories Subversion uses when building an update to commit). Check the Subversion documentation for more information.
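    The incremental svnadmin dump invocation described above might look like the sketch below (illustrative only; the real extension builds its command from SVNADMIN_COMMAND and adds compression and revision bookkeeping on top):

    ```python
    def buildDumpArgs(repositoryPath, startRevision=None, endRevision=None):
        """Build an incremental 'svnadmin dump' command line."""
        args = ["svnadmin", "dump", "--quiet"]
        if startRevision is not None and endRevision is not None:
            args.extend(["--revision", "%d:%d" % (startRevision, endRevision)])
            args.append("--incremental")   # emit only changes since startRevision
        args.append(repositoryPath)
        return args

    # The resulting dump stream would be redirected to a (possibly
    # compressed) backup file on disk.
    command = buildDumpArgs("/var/svn/repo", 100, 250)
    ```

    With no revision range, the same command produces a full dump, which is how a fresh full backup would be taken.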


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Classes [hide private]
      RepositoryDir
    Class representing Subversion repository directory.
      Repository
    Class representing generic Subversion repository configuration.
      SubversionConfig
    Class representing Subversion configuration.
      LocalConfig
    Class representing this extension's configuration document.
      BDBRepository
    Class representing Subversion BDB (Berkeley Database) repository configuration.
      FSFSRepository
    Class representing Subversion FSFS repository configuration.
    Functions [hide private]
     
    executeAction(configPath, options, config)
    Executes the Subversion backup action.
    source code
     
    _getCollectMode(local, repository)
    Gets the collect mode that should be used for a repository.
    source code
     
    _getCompressMode(local, repository)
    Gets the compress mode that should be used for a repository.
    source code
     
    _getRevisionPath(config, repository)
    Gets the path to the revision file associated with a repository.
    source code
     
    _getBackupPath(config, repositoryPath, compressMode, startRevision, endRevision)
    Gets the backup file path (including correct extension) associated with a repository.
    source code
     
    _getRepositoryPaths(repositoryDir)
    Gets a list of child repository paths within a repository directory.
    source code
     
    _getExclusions(repositoryDir)
    Gets exclusions (file and patterns) associated with a repository directory.
    source code
     
    _backupRepository(config, local, todayIsStart, fullBackup, repository)
    Backs up an individual Subversion repository.
    source code
     
    _getOutputFile(backupPath, compressMode)
    Opens the output file used for saving the Subversion dump.
    source code
     
    _loadLastRevision(revisionPath)
    Loads the indicated revision file from disk into an integer.
    source code
     
    _writeLastRevision(config, revisionPath, endRevision)
    Writes the end revision to the indicated revision file on disk.
    source code
     
    backupRepository(repositoryPath, backupFile, startRevision=None, endRevision=None)
    Backs up an individual Subversion repository.
    source code
     
    getYoungestRevision(repositoryPath)
    Gets the youngest (newest) revision in a Subversion repository using svnlook.
    source code
     
    backupBDBRepository(repositoryPath, backupFile, startRevision=None, endRevision=None)
    Backs up an individual Subversion BDB repository.
    source code
     
    backupFSFSRepository(repositoryPath, backupFile, startRevision=None, endRevision=None)
    Backs up an individual Subversion FSFS repository.
    source code
    Variables [hide private]
      logger = logging.getLogger("CedarBackup3.log.extend.subversion")
      SVNLOOK_COMMAND = ['svnlook']
      SVNADMIN_COMMAND = ['svnadmin']
      REVISION_PATH_EXTENSION = 'svnlast'
      __package__ = 'CedarBackup3.extend'
    Function Details

    executeAction(configPath, options, config)


    Executes the Subversion backup action.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If a backup could not be written for some reason.

    _getCollectMode(local, repository)


    Gets the collect mode that should be used for a repository. The repository's own collect mode is used if set; otherwise the value is taken from the subversion section.

    Parameters:
    • local - LocalConfig object.
    • repository - Repository object.
    Returns:
    Collect mode to use.

    _getCompressMode(local, repository)


    Gets the compress mode that should be used for a repository. The repository's own compress mode is used if set; otherwise the value is taken from the subversion section.

    Parameters:
    • local - LocalConfig object.
    • repository - Repository object.
    Returns:
    Compress mode to use.
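    The fallback described above can be sketched as follows. The attribute names (C{compressMode}, C{subversion}) and the stand-in config class are illustrative assumptions, not the module's actual API:

    ```python
    class _Cfg:
        """Minimal stand-in for the real LocalConfig/Repository objects (illustrative only)."""
        def __init__(self, **kw):
            self.__dict__.update(kw)

    def getCompressMode(local, repository):
        # Hypothetical sketch of the documented fallback: prefer the repository's
        # own compress mode when set, otherwise use the subversion section's value.
        if repository.compressMode is not None:
            return repository.compressMode
        return local.subversion.compressMode
    ```

    The same pattern applies to the collect mode lookup in _getCollectMode().
    
    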

    _getRevisionPath(config, repository)


    Gets the path to the revision file associated with a repository.

    Parameters:
    • config - Config object.
    • repository - Repository object.
    Returns:
    Absolute path to the revision file associated with the repository.

    _getBackupPath(config, repositoryPath, compressMode, startRevision, endRevision)


    Gets the backup file path (including correct extension) associated with a repository.

    Parameters:
    • config - Config object.
    • repositoryPath - Path to the indicated repository
    • compressMode - Compress mode to use for this repository.
    • startRevision - Starting repository revision.
    • endRevision - Ending repository revision.
    Returns:
    Absolute path to the backup file associated with the repository.

    _getRepositoryPaths(repositoryDir)


    Gets a list of child repository paths within a repository directory.

    Parameters:
    • repositoryDir - Repository directory object.
    Returns:
    List of child repository paths within the repository directory.

    _getExclusions(repositoryDir)


    Gets exclusions (files and patterns) associated with a repository directory.

    The returned files value is a list of absolute paths to be excluded from the backup for a given directory. It is derived from the repository directory's relative exclude paths.

    The returned patterns value is a list of patterns to be excluded from the backup for a given directory. It is derived from the repository directory's list of patterns.

    Parameters:
    • repositoryDir - Repository directory object.
    Returns:
    Tuple (files, patterns) indicating what to exclude.
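    The derivation described above can be sketched like this; the attribute names on the stand-in repository directory object are assumptions for illustration only:

    ```python
    import os

    class _RepositoryDir:
        """Minimal stand-in for the real repository directory object (illustrative)."""
        def __init__(self, directoryPath, relativeExcludePaths=None, excludePatterns=None):
            self.directoryPath = directoryPath
            self.relativeExcludePaths = relativeExcludePaths
            self.excludePatterns = excludePatterns

    def getExclusions(repositoryDir):
        # Hypothetical sketch: turn the directory's relative exclude paths into
        # absolute paths, and pass the exclude patterns through unchanged.
        files = [os.path.join(repositoryDir.directoryPath, p)
                 for p in (repositoryDir.relativeExcludePaths or [])]
        patterns = list(repositoryDir.excludePatterns or [])
        return (files, patterns)
    ```
    
    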

    _backupRepository(config, local, todayIsStart, fullBackup, repository)


    Backs up an individual Subversion repository.

    This internal method wraps the public methods and adds some functionality to work better with the extended action itself.

    Parameters:
    • config - Cedar Backup configuration.
    • local - Local configuration
    • todayIsStart - Indicates whether today is start of week
    • fullBackup - Full backup flag
    • repository - Repository to operate on
    Raises:
    • ValueError - If some value is missing or invalid.
    • IOError - If there is a problem executing the Subversion dump.

    _getOutputFile(backupPath, compressMode)


    Opens the output file used for saving the Subversion dump.

    If the compress mode is "gzip", we'll open a GzipFile, and if the compress mode is "bzip2", we'll open a BZ2File. Otherwise, we'll just return an object from the normal open() function.

    Parameters:
    • backupPath - Path to file to open.
    • compressMode - Compress mode of file ("none", "gzip", "bzip2").
    Returns:
    Output file object, opened in binary mode for use with executeCommand()
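    The dispatch described above amounts to a few lines with the standard library; this is a sketch with an illustrative function name, not the module's actual implementation:

    ```python
    import bz2
    import gzip

    def openOutputFile(backupPath, compressMode):
        # Sketch of the compress-mode dispatch: GzipFile for "gzip",
        # BZ2File for "bzip2", plain binary open() otherwise.  All three
        # are opened in binary write mode, as required by executeCommand().
        if compressMode == "gzip":
            return gzip.GzipFile(backupPath, "wb")
        elif compressMode == "bzip2":
            return bz2.BZ2File(backupPath, "wb")
        return open(backupPath, "wb")
    ```
    
    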

    _loadLastRevision(revisionPath)


    Loads the indicated revision file from disk into an integer.

    If the revision file cannot be loaded (either because it does not exist or for some other reason), a revision of -1 is returned, and the condition is logged. This errs on the side of backing up too much: callers are expected to add 1 to the returned revision, so no backups are duplicated.

    Parameters:
    • revisionPath - Path to the revision file on disk.
    Returns:
    Integer representing last backed-up revision, -1 on error or if none can be read.
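    The error handling described above can be sketched as follows. Note this sketch assumes a plain-text file holding an integer; the actual on-disk format used by the module is not shown in this summary (the changelog mentions pickle) and may differ:

    ```python
    def loadLastRevision(revisionPath):
        # Sketch of the documented behavior: any failure to read or parse the
        # revision file yields -1 (a real implementation would also log it),
        # so callers adding 1 start from revision 0 and never skip backups.
        try:
            with open(revisionPath, "r") as f:
                return int(f.read().strip())
        except (OSError, ValueError):
            return -1
    ```
    
    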

    _writeLastRevision(config, revisionPath, endRevision)


    Writes the end revision to the indicated revision file on disk.

    If we can't write the revision file successfully for any reason, we'll log the condition but won't throw an exception.

    Parameters:
    • config - Config object.
    • revisionPath - Path to the revision file on disk.
    • endRevision - Last revision backed up on this run.

    backupRepository(repositoryPath, backupFile, startRevision=None, endRevision=None)


    Backs up an individual Subversion repository.

    The starting and ending revision values control an incremental backup. If the starting revision is not passed in, then revision zero (the start of the repository) is assumed. If the ending revision is not passed in, then the youngest revision in the database will be used as the endpoint.

    The backup data will be written into the passed-in backup file. Normally, this would be an object as returned from open(), but it is possible to use something like a GzipFile to write compressed output. The caller is responsible for closing the passed-in backup file.

    Parameters:
    • repositoryPath (String path representing Subversion repository on disk.) - Path to Subversion repository to back up
    • backupFile (Python file object as from open().) - Python file object to use for writing backup.
    • startRevision (Integer value >= 0.) - Starting repository revision to back up (for incremental backups)
    • endRevision (Integer value >= 0.) - Ending repository revision to back up (for incremental backups)
    Raises:
    • ValueError - If some value is missing or invalid.
    • IOError - If there is a problem executing the Subversion dump.
    Notes:
    • This function should either be run as root or as the owner of the Subversion repository.
    • It is apparently not a good idea to interrupt this function. Sometimes, this leaves the repository in a "wedged" state, which requires recovery using svnadmin recover.
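    The revision defaulting described above (start falls back to 0, end falls back to the youngest revision) can be sketched by building the dump command line. The exact svnadmin arguments used by the module are an assumption here, though `-r`/`--revision` and `--incremental` are standard svnadmin dump options:

    ```python
    def buildDumpCommand(repositoryPath, youngestRevision, startRevision=None, endRevision=None):
        # Sketch of the documented defaults: if no start revision is passed,
        # revision zero is assumed; if no end revision is passed, the youngest
        # revision (as from getYoungestRevision) is used as the endpoint.
        start = 0 if startRevision is None else startRevision
        end = youngestRevision if endRevision is None else endRevision
        return ["svnadmin", "dump", repositoryPath,
                "--revision", "%d:%d" % (start, end), "--incremental"]
    ```
    
    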

    getYoungestRevision(repositoryPath)


    Gets the youngest (newest) revision in a Subversion repository using svnlook.

    Parameters:
    • repositoryPath (String path representing Subversion repository on disk.) - Path to Subversion repository to look in.
    Returns:
    Youngest revision as an integer.
    Raises:
    • ValueError - If there is a problem parsing the svnlook output.
    • IOError - If there is a problem executing the svnlook command.

    Note: This function should either be run as root or as the owner of the Subversion repository.
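    `svnlook youngest <repo>` prints a single line containing the youngest revision number, so the parsing step described above reduces to the following sketch (the helper name is illustrative, not the module's):

    ```python
    def parseYoungestRevision(output):
        # Sketch: convert the single-line svnlook output to an integer,
        # raising ValueError on unparseable output as documented above.
        try:
            return int(output.strip())
        except ValueError:
            raise ValueError("Unable to parse svnlook output: %r" % output)
    ```
    
    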

    backupBDBRepository(repositoryPath, backupFile, startRevision=None, endRevision=None)


    Backs up an individual Subversion BDB repository. This function is deprecated. Use backupRepository instead.

    backupFSFSRepository(repositoryPath, backupFile, startRevision=None, endRevision=None)


    Backs up an individual Subversion FSFS repository. This function is deprecated. Use backupRepository instead.


    CedarBackup3-3.1.6/doc/interface/toc-CedarBackup3.extend.amazons3-module.html

    Module amazons3


    Classes

    AmazonS3Config
    LocalConfig

    Functions

    executeAction

    Variables

    AWS_COMMAND
    STORE_INDICATOR
    SU_COMMAND
    __package__
    logger

    CedarBackup3-3.1.6/doc/interface/toc-CedarBackup3.extend-module.html

    Module extend


    Variables


    CedarBackup3-3.1.6/doc/interface/CedarBackup3.tools.amazons3-pysrc.html

    Source Code for Module CedarBackup3.tools.amazons3

       1  # -*- coding: iso-8859-1 -*- 
       2  # vim: set ft=python ts=3 sw=3 expandtab: 
       3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
       4  # 
       5  #              C E D A R 
       6  #          S O L U T I O N S       "Software done right." 
       7  #           S O F T W A R E 
       8  # 
       9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      10  # 
      11  # Copyright (c) 2014,2015 Kenneth J. Pronovici. 
      12  # All rights reserved. 
      13  # 
      14  # This program is free software; you can redistribute it and/or 
      15  # modify it under the terms of the GNU General Public License, 
      16  # Version 2, as published by the Free Software Foundation. 
      17  # 
      18  # This program is distributed in the hope that it will be useful, 
      19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
      20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
      21  # 
      22  # Copies of the GNU General Public License are available from 
      23  # the Free Software Foundation website, http://www.gnu.org/. 
      24  # 
      25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      26  # 
      27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
      28  # Language : Python 3 (>= 3.4) 
      29  # Project  : Cedar Backup, release 3 
      30  # Purpose  : Cedar Backup tool to synchronize an Amazon S3 bucket. 
      31  # 
      32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      33   
      34  ######################################################################## 
      35  # Notes 
      36  ######################################################################## 
      37   
      38  """ 
       39  Synchronizes a local directory with an Amazon S3 bucket. 
      40   
      41  No configuration is required; all necessary information is taken from the 
      42  command-line.  The only thing configuration would help with is the path 
      43  resolver interface, and it doesn't seem worth it to require configuration just 
      44  to get that. 
      45   
      46  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
      47  """ 
      48   
      49  ######################################################################## 
      50  # Imported modules and constants 
      51  ######################################################################## 
      52   
      53  # System modules 
      54  import sys 
      55  import os 
      56  import logging 
      57  import getopt 
      58  import json 
      59  import warnings 
      60  from functools import total_ordering 
      61  from pathlib import Path 
      62  import chardet 
      63   
      64  # Cedar Backup modules 
      65  from CedarBackup3.release import AUTHOR, EMAIL, VERSION, DATE, COPYRIGHT 
      66  from CedarBackup3.filesystem import FilesystemList 
      67  from CedarBackup3.cli import setupLogging, DEFAULT_LOGFILE, DEFAULT_OWNERSHIP, DEFAULT_MODE 
      68  from CedarBackup3.util import Diagnostics, splitCommandLine, encodePath 
      69  from CedarBackup3.util import executeCommand 
      70   
      71   
      72  ######################################################################## 
      73  # Module-wide constants and variables 
      74  ######################################################################## 
      75   
      76  logger = logging.getLogger("CedarBackup3.log.tools.amazons3") 
      77   
      78  AWS_COMMAND   = [ "aws" ] 
      79   
      80  SHORT_SWITCHES     = "hVbql:o:m:OdsDvw" 
      81  LONG_SWITCHES      = [ 'help', 'version', 'verbose', 'quiet', 
      82                         'logfile=', 'owner=', 'mode=', 
      83                         'output', 'debug', 'stack', 'diagnostics', 
      84                         'verifyOnly', 'ignoreWarnings', ] 
    
      85   
      86   
      87  #######################################################################
      88  # Options class
      89  #######################################################################
      90   
      91  @total_ordering
      92  class Options(object):
      93   
      94     ######################
      95     # Class documentation
      96     ######################
      97   
      98     """
      99     Class representing command-line options for the cback3-amazons3-sync script.
     100   
     101     The C{Options} class is a Python object representation of the command-line
     102     options of the cback3-amazons3-sync script.
     103   
     104     The object representation is two-way: a command line string or a list of
     105     command line arguments can be used to create an C{Options} object, and then
     106     changes to the object can be propagated back to a list of command-line
     107     arguments or to a command-line string.  An C{Options} object can even be
     108     created from scratch programmatically (if you have a need for that).
     109   
     110     There are two main levels of validation in the C{Options} class.  The first
     111     is field-level validation.  Field-level validation comes into play when a
     112     given field in an object is assigned to or updated.  We use Python's
     113     C{property} functionality to enforce specific validations on field values,
     114     and in some places we even use customized list classes to enforce
     115     validations on list members.  You should expect to catch a C{ValueError}
     116     exception when making assignments to fields if you are programmatically
     117     filling an object.
     118   
     119     The second level of validation is post-completion validation.  Certain
     120     validations don't make sense until an object representation of options is
     121     fully "complete".  We don't want these validations to apply all of the time,
     122     because it would make building up a valid object from scratch a real pain.
     123     For instance, we might have to do things in the right order to keep from
     124     throwing exceptions, etc.
     125   
     126     All of these post-completion validations are encapsulated in the
     127     L{Options.validate} method.  This method can be called at any time by a
     128     client, and will always be called immediately after creating a C{Options}
     129     object from a command line and before exporting a C{Options} object back to
     130     a command line.  This way, we get acceptable ease-of-use but we also don't
     131     accept or emit invalid command lines.
     132   
     133     @note: Lists within this class are "unordered" for equality comparisons.
     134   
     135     @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__
     136     """
     137   
     138     ##############
     139     # Constructor
     140     ##############
     141   
    142 - def __init__(self, argumentList=None, argumentString=None, validate=True):
    143 """ 144 Initializes an options object. 145 146 If you initialize the object without passing either C{argumentList} or 147 C{argumentString}, the object will be empty and will be invalid until it 148 is filled in properly. 149 150 No reference to the original arguments is saved off by this class. Once 151 the data has been parsed (successfully or not) this original information 152 is discarded. 153 154 The argument list is assumed to be a list of arguments, not including the 155 name of the command, something like C{sys.argv[1:]}. If you pass 156 C{sys.argv} instead, things are not going to work. 157 158 The argument string will be parsed into an argument list by the 159 L{util.splitCommandLine} function (see the documentation for that 160 function for some important notes about its limitations). There is an 161 assumption that the resulting list will be equivalent to C{sys.argv[1:]}, 162 just like C{argumentList}. 163 164 Unless the C{validate} argument is C{False}, the L{Options.validate} 165 method will be called (with its default arguments) after successfully 166 parsing any passed-in command line. This validation ensures that 167 appropriate actions, etc. have been specified. Keep in mind that even if 168 C{validate} is C{False}, it might not be possible to parse the passed-in 169 command line, so an exception might still be raised. 170 171 @note: The command line format is specified by the L{_usage} function. 172 Call L{_usage} to see a usage statement for the cback3-amazons3-sync script. 173 174 @note: It is strongly suggested that the C{validate} option always be set 175 to C{True} (the default) unless there is a specific need to read in 176 invalid command line arguments. 177 178 @param argumentList: Command line for a program. 179 @type argumentList: List of arguments, i.e. C{sys.argv} 180 181 @param argumentString: Command line for a program. 182 @type argumentString: String, i.e. 
"cback3-amazons3-sync --verbose stage store" 183 184 @param validate: Validate the command line after parsing it. 185 @type validate: Boolean true/false. 186 187 @raise getopt.GetoptError: If the command-line arguments could not be parsed. 188 @raise ValueError: If the command-line arguments are invalid. 189 """ 190 self._help = False 191 self._version = False 192 self._verbose = False 193 self._quiet = False 194 self._logfile = None 195 self._owner = None 196 self._mode = None 197 self._output = False 198 self._debug = False 199 self._stacktrace = False 200 self._diagnostics = False 201 self._verifyOnly = False 202 self._ignoreWarnings = False 203 self._sourceDir = None 204 self._s3BucketUrl = None 205 if argumentList is not None and argumentString is not None: 206 raise ValueError("Use either argumentList or argumentString, but not both.") 207 if argumentString is not None: 208 argumentList = splitCommandLine(argumentString) 209 if argumentList is not None: 210 self._parseArgumentList(argumentList) 211 if validate: 212 self.validate()
     213   
     214   
     215     #########################
     216     # String representations
     217     #########################
     218   
     219     def __repr__(self):
     220        """
     221        Official string representation for class instance.
     222        """
     223        return self.buildArgumentString(validate=False)
     224   
     225     def __str__(self):
     226        """
     227        Informal string representation for class instance.
     228        """
     229        return self.__repr__()
     230   
     231   
     232     #############################
     233     # Standard comparison method
     234     #############################
     235   
     236     def __eq__(self, other):
     237        """Equals operator, implemented in terms of original Python 2 compare operator."""
     238        return self.__cmp__(other) == 0
     239   
     240     def __lt__(self, other):
     241        """Less-than operator, implemented in terms of original Python 2 compare operator."""
     242        return self.__cmp__(other) < 0
     243   
     244     def __gt__(self, other):
     245        """Greater-than operator, implemented in terms of original Python 2 compare operator."""
     246        return self.__cmp__(other) > 0
    244 - def __gt__(self, other):
    245 """Greater-than operator, iplemented in terms of original Python 2 compare operator.""" 246 return self.__cmp__(other) > 0
    247
    248 - def __cmp__(self, other):
    249 """ 250 Original Python 2 comparison operator. 251 Lists within this class are "unordered" for equality comparisons. 252 @param other: Other object to compare to. 253 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 254 """ 255 if other is None: 256 return 1 257 if self.help != other.help: 258 if self.help < other.help: 259 return -1 260 else: 261 return 1 262 if self.version != other.version: 263 if self.version < other.version: 264 return -1 265 else: 266 return 1 267 if self.verbose != other.verbose: 268 if self.verbose < other.verbose: 269 return -1 270 else: 271 return 1 272 if self.quiet != other.quiet: 273 if self.quiet < other.quiet: 274 return -1 275 else: 276 return 1 277 if self.logfile != other.logfile: 278 if str(self.logfile or "") < str(other.logfile or ""): 279 return -1 280 else: 281 return 1 282 if self.owner != other.owner: 283 if str(self.owner or "") < str(other.owner or ""): 284 return -1 285 else: 286 return 1 287 if self.mode != other.mode: 288 if int(self.mode or 0) < int(other.mode or 0): 289 return -1 290 else: 291 return 1 292 if self.output != other.output: 293 if self.output < other.output: 294 return -1 295 else: 296 return 1 297 if self.debug != other.debug: 298 if self.debug < other.debug: 299 return -1 300 else: 301 return 1 302 if self.stacktrace != other.stacktrace: 303 if self.stacktrace < other.stacktrace: 304 return -1 305 else: 306 return 1 307 if self.diagnostics != other.diagnostics: 308 if self.diagnostics < other.diagnostics: 309 return -1 310 else: 311 return 1 312 if self.verifyOnly != other.verifyOnly: 313 if self.verifyOnly < other.verifyOnly: 314 return -1 315 else: 316 return 1 317 if self.ignoreWarnings != other.ignoreWarnings: 318 if self.ignoreWarnings < other.ignoreWarnings: 319 return -1 320 else: 321 return 1 322 if self.sourceDir != other.sourceDir: 323 if str(self.sourceDir or "") < str(other.sourceDir or ""): 324 return -1 325 else: 326 return 1 327 if self.s3BucketUrl != 
other.s3BucketUrl: 328 if str(self.s3BucketUrl or "") < str(other.s3BucketUrl or ""): 329 return -1 330 else: 331 return 1 332 return 0
     333   
     334   
     335     #############
     336     # Properties
     337     #############
     338   
    339 - def _setHelp(self, value):
    340 """ 341 Property target used to set the help flag. 342 No validations, but we normalize the value to C{True} or C{False}. 343 """ 344 if value: 345 self._help = True 346 else: 347 self._help = False
    348
    349 - def _getHelp(self):
    350 """ 351 Property target used to get the help flag. 352 """ 353 return self._help
    354
    355 - def _setVersion(self, value):
    356 """ 357 Property target used to set the version flag. 358 No validations, but we normalize the value to C{True} or C{False}. 359 """ 360 if value: 361 self._version = True 362 else: 363 self._version = False
    364
    365 - def _getVersion(self):
    366 """ 367 Property target used to get the version flag. 368 """ 369 return self._version
    370
    371 - def _setVerbose(self, value):
    372 """ 373 Property target used to set the verbose flag. 374 No validations, but we normalize the value to C{True} or C{False}. 375 """ 376 if value: 377 self._verbose = True 378 else: 379 self._verbose = False
    380
    381 - def _getVerbose(self):
    382 """ 383 Property target used to get the verbose flag. 384 """ 385 return self._verbose
    386
    387 - def _setQuiet(self, value):
    388 """ 389 Property target used to set the quiet flag. 390 No validations, but we normalize the value to C{True} or C{False}. 391 """ 392 if value: 393 self._quiet = True 394 else: 395 self._quiet = False
    396
    397 - def _getQuiet(self):
    398 """ 399 Property target used to get the quiet flag. 400 """ 401 return self._quiet
    402
    403 - def _setLogfile(self, value):
    404 """ 405 Property target used to set the logfile parameter. 406 @raise ValueError: If the value cannot be encoded properly. 407 """ 408 if value is not None: 409 if len(value) < 1: 410 raise ValueError("The logfile parameter must be a non-empty string.") 411 self._logfile = encodePath(value)
    412
    413 - def _getLogfile(self):
    414 """ 415 Property target used to get the logfile parameter. 416 """ 417 return self._logfile
    418
    419 - def _setOwner(self, value):
    420 """ 421 Property target used to set the owner parameter. 422 If not C{None}, the owner must be a C{(user,group)} tuple or list. 423 Strings (and inherited children of strings) are explicitly disallowed. 424 The value will be normalized to a tuple. 425 @raise ValueError: If the value is not valid. 426 """ 427 if value is None: 428 self._owner = None 429 else: 430 if isinstance(value, str): 431 raise ValueError("Must specify user and group tuple for owner parameter.") 432 if len(value) != 2: 433 raise ValueError("Must specify user and group tuple for owner parameter.") 434 if len(value[0]) < 1 or len(value[1]) < 1: 435 raise ValueError("User and group tuple values must be non-empty strings.") 436 self._owner = (value[0], value[1])
    437
    438 - def _getOwner(self):
    439 """ 440 Property target used to get the owner parameter. 441 The parameter is a tuple of C{(user, group)}. 442 """ 443 return self._owner
    444
    445 - def _setMode(self, value):
    446 """ 447 Property target used to set the mode parameter. 448 """ 449 if value is None: 450 self._mode = None 451 else: 452 try: 453 if isinstance(value, str): 454 value = int(value, 8) 455 else: 456 value = int(value) 457 except TypeError: 458 raise ValueError("Mode must be an octal integer >= 0, i.e. 644.") 459 if value < 0: 460 raise ValueError("Mode must be an octal integer >= 0. i.e. 644.") 461 self._mode = value
    462
    463 - def _getMode(self):
    464 """ 465 Property target used to get the mode parameter. 466 """ 467 return self._mode
    468
    469 - def _setOutput(self, value):
    470 """ 471 Property target used to set the output flag. 472 No validations, but we normalize the value to C{True} or C{False}. 473 """ 474 if value: 475 self._output = True 476 else: 477 self._output = False
    478
    479 - def _getOutput(self):
    480 """ 481 Property target used to get the output flag. 482 """ 483 return self._output
    484
    485 - def _setDebug(self, value):
    486 """ 487 Property target used to set the debug flag. 488 No validations, but we normalize the value to C{True} or C{False}. 489 """ 490 if value: 491 self._debug = True 492 else: 493 self._debug = False
    494
    495 - def _getDebug(self):
    496 """ 497 Property target used to get the debug flag. 498 """ 499 return self._debug
    500
    501 - def _setStacktrace(self, value):
    502 """ 503 Property target used to set the stacktrace flag. 504 No validations, but we normalize the value to C{True} or C{False}. 505 """ 506 if value: 507 self._stacktrace = True 508 else: 509 self._stacktrace = False
    510
    511 - def _getStacktrace(self):
    512 """ 513 Property target used to get the stacktrace flag. 514 """ 515 return self._stacktrace
    516
    517 - def _setDiagnostics(self, value):
    518 """ 519 Property target used to set the diagnostics flag. 520 No validations, but we normalize the value to C{True} or C{False}. 521 """ 522 if value: 523 self._diagnostics = True 524 else: 525 self._diagnostics = False
    526
    527 - def _getDiagnostics(self):
    528 """ 529 Property target used to get the diagnostics flag. 530 """ 531 return self._diagnostics
    532
    533 - def _setVerifyOnly(self, value):
    534 """ 535 Property target used to set the verifyOnly flag. 536 No validations, but we normalize the value to C{True} or C{False}. 537 """ 538 if value: 539 self._verifyOnly = True 540 else: 541 self._verifyOnly = False
    542
    543 - def _getVerifyOnly(self):
    544 """ 545 Property target used to get the verifyOnly flag. 546 """ 547 return self._verifyOnly
    548
    549 - def _setIgnoreWarnings(self, value):
    550 """ 551 Property target used to set the ignoreWarnings flag. 552 No validations, but we normalize the value to C{True} or C{False}. 553 """ 554 if value: 555 self._ignoreWarnings = True 556 else: 557 self._ignoreWarnings = False
    558
    559 - def _getIgnoreWarnings(self):
    560 """ 561 Property target used to get the ignoreWarnings flag. 562 """ 563 return self._ignoreWarnings
    564
    565 - def _setSourceDir(self, value):
    566 """ 567 Property target used to set the sourceDir parameter. 568 """ 569 if value is not None: 570 if len(value) < 1: 571 raise ValueError("The sourceDir parameter must be a non-empty string.") 572 self._sourceDir = value
    573
    574 - def _getSourceDir(self):
    575 """ 576 Property target used to get the sourceDir parameter. 577 """ 578 return self._sourceDir
    579
    580 - def _setS3BucketUrl(self, value):
    581 """ 582 Property target used to set the s3BucketUrl parameter. 583 """ 584 if value is not None: 585 if len(value) < 1: 586 raise ValueError("The s3BucketUrl parameter must be a non-empty string.") 587 self._s3BucketUrl = value
    588
    589 - def _getS3BucketUrl(self):
    590 """ 591 Property target used to get the s3BucketUrl parameter. 592 """ 593 return self._s3BucketUrl
    594 595 help = property(_getHelp, _setHelp, None, "Command-line help (C{-h,--help}) flag.") 596 version = property(_getVersion, _setVersion, None, "Command-line version (C{-V,--version}) flag.") 597 verbose = property(_getVerbose, _setVerbose, None, "Command-line verbose (C{-b,--verbose}) flag.") 598 quiet = property(_getQuiet, _setQuiet, None, "Command-line quiet (C{-q,--quiet}) flag.") 599 logfile = property(_getLogfile, _setLogfile, None, "Command-line logfile (C{-l,--logfile}) parameter.") 600 owner = property(_getOwner, _setOwner, None, "Command-line owner (C{-o,--owner}) parameter, as tuple C{(user,group)}.") 601 mode = property(_getMode, _setMode, None, "Command-line mode (C{-m,--mode}) parameter.") 602 output = property(_getOutput, _setOutput, None, "Command-line output (C{-O,--output}) flag.") 603 debug = property(_getDebug, _setDebug, None, "Command-line debug (C{-d,--debug}) flag.") 604 stacktrace = property(_getStacktrace, _setStacktrace, None, "Command-line stacktrace (C{-s,--stack}) flag.") 605 diagnostics = property(_getDiagnostics, _setDiagnostics, None, "Command-line diagnostics (C{-D,--diagnostics}) flag.") 606 verifyOnly = property(_getVerifyOnly, _setVerifyOnly, None, "Command-line verifyOnly (C{-v,--verifyOnly}) flag.") 607 ignoreWarnings = property(_getIgnoreWarnings, _setIgnoreWarnings, None, "Command-line ignoreWarnings (C{-w,--ignoreWarnings}) flag.") 608 sourceDir = property(_getSourceDir, _setSourceDir, None, "Command-line sourceDir, source of sync.") 609 s3BucketUrl = property(_getS3BucketUrl, _setS3BucketUrl, None, "Command-line s3BucketUrl, target of sync.") 610 611 612 ################## 613 # Utility methods 614 ################## 615
    616 - def validate(self):
    617 """ 618 Validates command-line options represented by the object. 619 620 Unless C{--help} or C{--version} are supplied, at least one action must 621 be specified. Other validations (as for allowed values for particular 622 options) will be taken care of at assignment time by the properties 623 functionality. 624 625 @note: The command line format is specified by the L{_usage} function. 626 Call L{_usage} to see a usage statement for the cback3-amazons3-sync script. 627 628 @raise ValueError: If one of the validations fails. 629 """ 630 if not self.help and not self.version and not self.diagnostics: 631 if self.sourceDir is None or self.s3BucketUrl is None: 632 raise ValueError("Source directory and S3 bucket URL are both required.")
    633
    def buildArgumentList(self, validate=True):
        """
        Extracts options into a list of command line arguments.

        The original order of the various arguments (if, indeed, the object was
        initialized with a command-line) is not preserved in this generated
        argument list.  Besides that, the argument list is normalized to use the
        long option names (i.e. --version rather than -V).  The resulting list
        will be suitable for passing back to the constructor in the
        C{argumentList} parameter.  Unlike L{buildArgumentString}, string
        arguments are not quoted here, because there is no need for it.

        Unless the C{validate} parameter is C{False}, the L{Options.validate}
        method will be called (with its default arguments) against the
        options before extracting the command line.  If the options are not valid,
        then an argument list will not be extracted.

        @note: It is strongly suggested that the C{validate} option always be set
        to C{True} (the default) unless there is a specific need to extract an
        invalid command line.

        @param validate: Validate the options before extracting the command line.
        @type validate: Boolean true/false.

        @return: List representation of command-line arguments.
        @raise ValueError: If options within the object are invalid.
        """
        if validate:
            self.validate()
        argumentList = []
        if self._help:
            argumentList.append("--help")
        if self.version:
            argumentList.append("--version")
        if self.verbose:
            argumentList.append("--verbose")
        if self.quiet:
            argumentList.append("--quiet")
        if self.logfile is not None:
            argumentList.append("--logfile")
            argumentList.append(self.logfile)
        if self.owner is not None:
            argumentList.append("--owner")
            argumentList.append("%s:%s" % (self.owner[0], self.owner[1]))
        if self.mode is not None:
            argumentList.append("--mode")
            argumentList.append("%o" % self.mode)
        if self.output:
            argumentList.append("--output")
        if self.debug:
            argumentList.append("--debug")
        if self.stacktrace:
            argumentList.append("--stack")
        if self.diagnostics:
            argumentList.append("--diagnostics")
        if self.verifyOnly:
            argumentList.append("--verifyOnly")
        if self.ignoreWarnings:
            argumentList.append("--ignoreWarnings")
        if self.sourceDir is not None:
            argumentList.append(self.sourceDir)
        if self.s3BucketUrl is not None:
            argumentList.append(self.s3BucketUrl)
        return argumentList

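The normalization performed above (booleans become long switches, positionals go last) can be sketched with a handful of fields.  This is a hedged, self-contained sketch, not the real `Options` class; the field names mirror a subset of its properties.

```python
# Minimal sketch of buildArgumentList() normalization: boolean options
# always come back as long switches, option values follow their switch,
# and positional arguments land at the end of the list.
def build_argument_list(verbose=False, quiet=False, logfile=None,
                        sourceDir=None, s3BucketUrl=None):
    args = []
    if verbose:
        args.append("--verbose")
    if quiet:
        args.append("--quiet")
    if logfile is not None:
        args.extend(["--logfile", logfile])
    if sourceDir is not None:
        args.append(sourceDir)
    if s3BucketUrl is not None:
        args.append(s3BucketUrl)
    return args
```

Because the list is rebuilt from the object's state, the original ordering of the parsed command line is not preserved, exactly as the docstring warns.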
    def buildArgumentString(self, validate=True):
        """
        Extracts options into a string of command-line arguments.

        The original order of the various arguments (if, indeed, the object was
        initialized with a command-line) is not preserved in this generated
        argument string.  Besides that, the argument string is normalized to use
        the long option names (i.e. --version rather than -V) and to quote all
        string arguments with double quotes (C{"}).  The resulting string will be
        suitable for passing back to the constructor in the C{argumentString}
        parameter.

        Unless the C{validate} parameter is C{False}, the L{Options.validate}
        method will be called (with its default arguments) against the options
        before extracting the command line.  If the options are not valid, then
        an argument string will not be extracted.

        @note: It is strongly suggested that the C{validate} option always be set
        to C{True} (the default) unless there is a specific need to extract an
        invalid command line.

        @param validate: Validate the options before extracting the command line.
        @type validate: Boolean true/false.

        @return: String representation of command-line arguments.
        @raise ValueError: If options within the object are invalid.
        """
        if validate:
            self.validate()
        argumentString = ""
        if self._help:
            argumentString += "--help "
        if self.version:
            argumentString += "--version "
        if self.verbose:
            argumentString += "--verbose "
        if self.quiet:
            argumentString += "--quiet "
        if self.logfile is not None:
            argumentString += "--logfile \"%s\" " % self.logfile
        if self.owner is not None:
            argumentString += "--owner \"%s:%s\" " % (self.owner[0], self.owner[1])
        if self.mode is not None:
            argumentString += "--mode %o " % self.mode
        if self.output:
            argumentString += "--output "
        if self.debug:
            argumentString += "--debug "
        if self.stacktrace:
            argumentString += "--stack "
        if self.diagnostics:
            argumentString += "--diagnostics "
        if self.verifyOnly:
            argumentString += "--verifyOnly "
        if self.ignoreWarnings:
            argumentString += "--ignoreWarnings "
        if self.sourceDir is not None:
            argumentString += "\"%s\" " % self.sourceDir
        if self.s3BucketUrl is not None:
            argumentString += "\"%s\" " % self.s3BucketUrl
        return argumentString

    def _parseArgumentList(self, argumentList):
        """
        Internal method to parse a list of command-line arguments.

        Most of the validation we do here has to do with whether the arguments
        can be parsed and whether any values which exist are valid.  We don't do
        any validation as to whether required elements exist or whether elements
        exist in the proper combination (instead, that's the job of the
        L{validate} method).

        For any of the options which supply parameters, if the option is
        duplicated with long and short switches (i.e. C{-l} and a C{--logfile})
        then the long switch is used.  If the same option is duplicated with the
        same switch (long or short), then the last entry on the command line is
        used.

        @param argumentList: List of arguments to a command.
        @type argumentList: List of arguments to a command, i.e. C{sys.argv[1:]}

        @raise ValueError: If the argument list cannot be successfully parsed.
        """
        switches = { }
        opts, remaining = getopt.getopt(argumentList, SHORT_SWITCHES, LONG_SWITCHES)
        for o, a in opts:  # push the switches into a hash
            switches[o] = a
        if "-h" in switches or "--help" in switches:
            self.help = True
        if "-V" in switches or "--version" in switches:
            self.version = True
        if "-b" in switches or "--verbose" in switches:
            self.verbose = True
        if "-q" in switches or "--quiet" in switches:
            self.quiet = True
        if "-l" in switches:
            self.logfile = switches["-l"]
        if "--logfile" in switches:
            self.logfile = switches["--logfile"]
        if "-o" in switches:
            self.owner = switches["-o"].split(":", 1)
        if "--owner" in switches:
            self.owner = switches["--owner"].split(":", 1)
        if "-m" in switches:
            self.mode = switches["-m"]
        if "--mode" in switches:
            self.mode = switches["--mode"]
        if "-O" in switches or "--output" in switches:
            self.output = True
        if "-d" in switches or "--debug" in switches:
            self.debug = True
        if "-s" in switches or "--stack" in switches:
            self.stacktrace = True
        if "-D" in switches or "--diagnostics" in switches:
            self.diagnostics = True
        if "-v" in switches or "--verifyOnly" in switches:
            self.verifyOnly = True
        if "-w" in switches or "--ignoreWarnings" in switches:
            self.ignoreWarnings = True
        try:
            (self.sourceDir, self.s3BucketUrl) = remaining
        except ValueError:
            pass

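The two precedence rules documented above (long switch beats short, last duplicate wins) follow directly from how the parsed options are pushed into the hash and then read back.  The sketch below demonstrates this with `getopt` for just the logfile option; the switch strings are a subset of the real `SHORT_SWITCHES`/`LONG_SWITCHES`.

```python
import getopt

# Demonstrates the precedence rules in _parseArgumentList(): later
# duplicates overwrite earlier ones in the switches hash, and checking
# "--logfile" after "-l" makes the long form win.
def parse_logfile(argument_list):
    switches = {}
    opts, remaining = getopt.getopt(argument_list, "l:", ["logfile="])
    for o, a in opts:      # last entry for a given switch wins
        switches[o] = a
    logfile = None
    if "-l" in switches:
        logfile = switches["-l"]
    if "--logfile" in switches:   # long switch takes precedence
        logfile = switches["--logfile"]
    return logfile, remaining
```

Note that `getopt` stops parsing at the first non-option argument, which is why the two positional arguments must come after all switches.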

#######################################################################
# Public functions
#######################################################################

#################
# cli() function
#################

def cli():
    """
    Implements the command-line interface for the C{cback3-amazons3-sync} script.

    Essentially, this is the "main routine" for the cback3-amazons3-sync script.  It does
    all of the argument processing for the script, and then also implements the
    tool functionality.

    This function looks pretty similar to C{CedarBackup3.cli.cli()}.  It's not
    easy to refactor this code to make it reusable and also readable, so I've
    decided to just live with the duplication.

    A different error code is returned for each type of failure:

       - C{1}: The Python interpreter version is < 3.4
       - C{2}: Error processing command-line arguments
       - C{3}: Error configuring logging
       - C{5}: Backup was interrupted with a CTRL-C or similar
       - C{6}: Error executing other parts of the script

    @note: This script uses print rather than logging to the INFO level, because
    it is interactive.  Underlying Cedar Backup functionality uses the logging
    mechanism exclusively.

    @return: Error code as described above.
    """
    try:
        if list(map(int, [sys.version_info[0], sys.version_info[1]])) < [3, 4]:
            sys.stderr.write("Python 3 version 3.4 or greater required.\n")
            return 1
    except:
        # sys.version_info isn't available before 2.0
        sys.stderr.write("Python 3 version 3.4 or greater required.\n")
        return 1

    try:
        options = Options(argumentList=sys.argv[1:])
    except Exception as e:
        _usage()
        sys.stderr.write(" *** Error: %s\n" % e)
        return 2

    if options.help:
        _usage()
        return 0
    if options.version:
        _version()
        return 0
    if options.diagnostics:
        _diagnostics()
        return 0

    if options.stacktrace:
        logfile = setupLogging(options)
    else:
        try:
            logfile = setupLogging(options)
        except Exception as e:
            sys.stderr.write("Error setting up logging: %s\n" % e)
            return 3

    logger.info("Cedar Backup Amazon S3 sync run started.")
    logger.info("Options were [%s]", options)
    logger.info("Logfile is [%s]", logfile)
    Diagnostics().logDiagnostics(method=logger.info)

    if options.stacktrace:
        _executeAction(options)
    else:
        try:
            _executeAction(options)
        except KeyboardInterrupt:
            logger.error("Backup interrupted.")
            logger.info("Cedar Backup Amazon S3 sync run completed with status 5.")
            return 5
        except Exception as e:
            logger.error("Error executing backup: %s", e)
            logger.info("Cedar Backup Amazon S3 sync run completed with status 6.")
            return 6

    logger.info("Cedar Backup Amazon S3 sync run completed with status 0.")
    return 0


#######################################################################
# Utility functions
#######################################################################

####################
# _usage() function
####################

def _usage(fd=sys.stderr):
    """
    Prints usage information for the cback3-amazons3-sync script.
    @param fd: File descriptor used to print information.
    @note: The C{fd} is used rather than C{print} to facilitate unit testing.
    """
    fd.write("\n")
    fd.write(" Usage: cback3-amazons3-sync [switches] sourceDir s3bucketUrl\n")
    fd.write("\n")
    fd.write(" Cedar Backup Amazon S3 sync tool.\n")
    fd.write("\n")
    fd.write(" This Cedar Backup utility synchronizes a local directory to an Amazon S3\n")
    fd.write(" bucket.  After the sync is complete, a validation step is taken.  An\n")
    fd.write(" error is reported if the contents of the bucket do not match the\n")
    fd.write(" source directory, or if the indicated size for any file differs.\n")
    fd.write(" This tool is a wrapper over the AWS CLI command-line tool.\n")
    fd.write("\n")
    fd.write(" The following arguments are required:\n")
    fd.write("\n")
    fd.write("   sourceDir            The local source directory on disk (must exist)\n")
    fd.write("   s3BucketUrl          The URL to the target Amazon S3 bucket\n")
    fd.write("\n")
    fd.write(" The following switches are accepted:\n")
    fd.write("\n")
    fd.write("   -h, --help           Display this usage/help listing\n")
    fd.write("   -V, --version        Display version information\n")
    fd.write("   -b, --verbose        Print verbose output as well as logging to disk\n")
    fd.write("   -q, --quiet          Run quietly (display no output to the screen)\n")
    fd.write("   -l, --logfile        Path to logfile (default: %s)\n" % DEFAULT_LOGFILE)
    fd.write("   -o, --owner          Logfile ownership, user:group (default: %s:%s)\n" % (DEFAULT_OWNERSHIP[0], DEFAULT_OWNERSHIP[1]))
    fd.write("   -m, --mode           Octal logfile permissions mode (default: %o)\n" % DEFAULT_MODE)
    fd.write("   -O, --output         Record some sub-command (i.e. aws) output to the log\n")
    fd.write("   -d, --debug          Write debugging information to the log (implies --output)\n")
    fd.write("   -s, --stack          Dump Python stack trace instead of swallowing exceptions\n")  # exactly 80 characters in width!
    fd.write("   -D, --diagnostics    Print runtime diagnostics to the screen and exit\n")
    fd.write("   -v, --verifyOnly     Only verify the S3 bucket contents, do not make changes\n")
    fd.write("   -w, --ignoreWarnings Ignore warnings about problematic filename encodings\n")
    fd.write("\n")
    fd.write(" Typical usage would be something like:\n")
    fd.write("\n")
    fd.write("   cback3-amazons3-sync /home/myuser s3://example.com-backup/myuser\n")
    fd.write("\n")
    fd.write(" This will sync the contents of /home/myuser into the indicated bucket.\n")
    fd.write("\n")


######################
# _version() function
######################

def _version(fd=sys.stdout):
    """
    Prints version information for the cback3-amazons3-sync script.
    @param fd: File descriptor used to print information.
    @note: The C{fd} is used rather than C{print} to facilitate unit testing.
    """
    fd.write("\n")
    fd.write(" Cedar Backup Amazon S3 sync tool.\n")
    fd.write(" Included with Cedar Backup version %s, released %s.\n" % (VERSION, DATE))
    fd.write("\n")
    fd.write(" Copyright (c) %s %s <%s>.\n" % (COPYRIGHT, AUTHOR, EMAIL))
    fd.write(" See CREDITS for a list of included code and other contributors.\n")
    fd.write(" This is free software; there is NO warranty.  See the\n")
    fd.write(" GNU General Public License version 2 for copying conditions.\n")
    fd.write("\n")
    fd.write(" Use the --help option for usage information.\n")
    fd.write("\n")


##########################
# _diagnostics() function
##########################

def _diagnostics(fd=sys.stdout):
    """
    Prints runtime diagnostics information.
    @param fd: File descriptor used to print information.
    @note: The C{fd} is used rather than C{print} to facilitate unit testing.
    """
    fd.write("\n")
    fd.write("Diagnostics:\n")
    fd.write("\n")
    Diagnostics().printDiagnostics(fd=fd, prefix="   ")
    fd.write("\n")


############################
# _executeAction() function
############################

def _executeAction(options):
    """
    Implements the guts of the cback3-amazons3-sync tool.

    @param options: Program command-line options.
    @type options: Options object.

    @raise Exception: Under many generic error conditions
    """
    sourceFiles = _buildSourceFiles(options.sourceDir)
    if not options.ignoreWarnings:
        _checkSourceFiles(options.sourceDir, sourceFiles)
    if not options.verifyOnly:
        _synchronizeBucket(options.sourceDir, options.s3BucketUrl)
    _verifyBucketContents(options.sourceDir, sourceFiles, options.s3BucketUrl)


################################
# _buildSourceFiles() function
################################

def _buildSourceFiles(sourceDir):
    """
    Build a list of files in a source directory
    @param sourceDir: Local source directory
    @return: FilesystemList with contents of source directory
    """
    if not os.path.isdir(sourceDir):
        raise ValueError("Source directory does not exist on disk.")
    sourceFiles = FilesystemList()
    sourceFiles.addDirContents(sourceDir)
    return sourceFiles


###############################
# _checkSourceFiles() function
###############################

def _checkSourceFiles(sourceDir, sourceFiles):
    """
    Check source files, trying to guess which ones will have encoding problems.
    @param sourceDir: Local source directory
    @param sourceFiles: List of files in the source directory
    @raises ValueError: If a problem file is found
    @see U{http://opensourcehacker.com/2011/09/16/fix-linux-filename-encodings-with-python/}
    @see U{http://serverfault.com/questions/82821/how-to-tell-the-language-encoding-of-a-filename-on-linux}
    @see U{http://randysofia.com/2014/06/06/aws-cli-and-your-locale/}
    """
    with warnings.catch_warnings():
        encoding = Diagnostics().encoding

        # Note: this was difficult to fully test.  As of the original Python 2
        # implementation, I had a bunch of files on disk that had inconsistent
        # encodings, so I was able to prove that the check warned about these
        # files initially, and then didn't warn after I fixed them.  I didn't
        # save off those files for a unit test (ugh) so by the time of the Python
        # 3 conversion -- which is subtly different because of the different way
        # Python 3 handles unicode strings -- I had to contrive some tests.  I
        # think the tests I wrote are consistent with the earlier problems, and I
        # do get the same result for those tests in both CedarBackup 2 and Cedar
        # Backup 3.  However, I can't be certain the implementation is
        # equivalent.  If someone runs into a situation that this code doesn't
        # handle, you may need to revisit the implementation.

        failed = False
        for entry in sourceFiles:
            path = bytes(Path(entry))
            result = chardet.detect(path)
            source = path.decode(result["encoding"])
            try:
                target = path.decode(encoding)
                if source != target:
                    logger.error("Inconsistent encoding for [%s]: got %s, but need %s", source, result["encoding"], encoding)
                    failed = True
            except Exception:
                logger.error("Inconsistent encoding for [%s]: got %s, but need %s", source, result["encoding"], encoding)
                failed = True

        if not failed:
            logger.info("Completed checking source filename encoding (no problems found).")
        else:
            logger.error("Some filenames have inconsistent encodings and will likely cause sync problems.")
            logger.error("You may be able to fix this by setting a more sensible locale in your environment.")
            logger.error("Alternately, you can rename the problem files to be valid in the indicated locale.")
            logger.error("To ignore this warning and proceed anyway, use --ignoreWarnings")
            raise ValueError("Some filenames have inconsistent encodings and will likely cause sync problems.")

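The failure mode this check hunts for can be shown without chardet: a filename stored under one encoding does not necessarily decode under the locale's encoding.  This is a self-contained sketch using only the standard library; the function name and the UTF-8 locale are assumptions for illustration.

```python
# Sketch of the encoding-mismatch problem _checkSourceFiles() detects:
# bytes written under Latin-1 fail to decode under a UTF-8 locale.
def encoding_consistent(path_bytes, locale_encoding):
    """Return True if path_bytes decodes cleanly under locale_encoding."""
    try:
        path_bytes.decode(locale_encoding)
        return True
    except UnicodeDecodeError:
        return False

latin1_name = "résumé.txt".encode("iso-8859-1")  # b'r\xe9sum\xe9.txt'
utf8_name = "résumé.txt".encode("utf-8")
```

A file like `latin1_name` on a UTF-8 system is exactly the kind of entry that would trip the AWS CLI during sync, which is why the tool refuses to proceed without `--ignoreWarnings`.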

################################
# _synchronizeBucket() function
################################

def _synchronizeBucket(sourceDir, s3BucketUrl):
    """
    Synchronize a local directory to an Amazon S3 bucket.
    @param sourceDir: Local source directory
    @param s3BucketUrl: Target S3 bucket URL
    """
    logger.info("Synchronizing local source directory up to Amazon S3.")
    args = [ "s3", "sync", sourceDir, s3BucketUrl, "--delete", "--recursive", ]
    result = executeCommand(AWS_COMMAND, args, returnOutput=False)[0]
    if result != 0:
        raise IOError("Error [%d] calling AWS CLI synchronize bucket." % result)

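The call pattern above (run an external command, raise on a non-zero status) relies on Cedar Backup's `executeCommand()` utility.  As a hedged sketch of the same idea using only the standard library, with a stand-in command instead of the real `aws s3 sync` invocation:

```python
import subprocess
import sys

# Sketch of the run-and-check pattern used by _synchronizeBucket():
# invoke an external command and raise IOError on non-zero exit status.
# executeCommand() is a Cedar Backup utility, so subprocess stands in.
def run_and_check(command, args):
    result = subprocess.call([command] + args)
    if result != 0:
        raise IOError("Error [%d] calling command." % result)
    return result
```

The Python interpreter itself works as a harmless stand-in command, e.g. `run_and_check(sys.executable, ["-c", "pass"])`.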

###################################
# _verifyBucketContents() function
###################################

def _verifyBucketContents(sourceDir, sourceFiles, s3BucketUrl):
    """
    Verify that a source directory is equivalent to an Amazon S3 bucket.
    @param sourceDir: Local source directory
    @param sourceFiles: Filesystem list containing contents of source directory
    @param s3BucketUrl: Target S3 bucket URL
    """
    # As of this writing, the documentation for the S3 API that we're using
    # below says that up to 1000 elements at a time are returned, and that we
    # have to manually handle pagination by looking for the IsTruncated element.
    # However, in practice, this is not true.  I have been testing with
    # "aws-cli/1.4.4 Python/2.7.3 Linux/3.2.0-4-686-pae", installed through PIP.
    # No matter how many items exist in my bucket and prefix, I get back a
    # single JSON result.  I've tested with buckets containing nearly 6000
    # elements.
    #
    # If I turn on debugging, it's clear that underneath, something in the API
    # is executing multiple list-object requests against AWS, and stitching
    # results together to give me back the final JSON result.  The debug output
    # clearly includes multiple requests, and each XML response (except for the
    # final one) contains <IsTruncated>true</IsTruncated>.
    #
    # This feature is not mentioned in the official changelog for any of the
    # releases going back to 1.0.0.  It appears to happen in the botocore
    # library, but I'll admit I can't actually find the code that implements it.
    # For now, all I can do is rely on this behavior and hope that the
    # documentation is out-of-date.  I'm not going to write code that tries to
    # parse out IsTruncated if I can't actually test that code.

    (bucket, prefix) = s3BucketUrl.replace("s3://", "").split("/", 1)

    query = "Contents[].{Key: Key, Size: Size}"
    args = [ "s3api", "list-objects", "--bucket", bucket, "--prefix", prefix, "--query", query, ]
    (result, data) = executeCommand(AWS_COMMAND, args, returnOutput=True)
    if result != 0:
        raise IOError("Error [%d] calling AWS CLI verify bucket contents." % result)

    contents = { }
    for entry in json.loads("".join(data)):
        key = entry["Key"].replace(prefix, "")
        size = int(entry["Size"])
        contents[key] = size

    failed = False
    for entry in sourceFiles:
        if os.path.isfile(entry):
            key = entry.replace(sourceDir, "")
            size = int(os.stat(entry).st_size)
            if key not in contents:
                logger.error("File was apparently not uploaded: [%s]", entry)
                failed = True
            else:
                if size != contents[key]:
                    logger.error("File size differs [%s]: expected %s bytes but got %s bytes", entry, size, contents[key])
                    failed = True

    if not failed:
        logger.info("Completed verifying Amazon S3 bucket contents (no problems found).")
    else:
        logger.error("There were differences between source directory and target S3 bucket.")
        raise ValueError("There were differences between source directory and target S3 bucket.")
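The two pure-data steps in the verification (splitting the bucket URL into bucket and prefix, then diffing a key-to-size map against local expectations) can be sketched without any AWS calls.  The function names here are illustrative, not part of the real module:

```python
# Standalone sketch of the data handling in _verifyBucketContents():
# URL splitting plus a size comparison over {key: size} dictionaries.
def split_bucket_url(s3_bucket_url):
    """Split s3://bucket/prefix into (bucket, prefix)."""
    return tuple(s3_bucket_url.replace("s3://", "").split("/", 1))

def find_differences(local_sizes, bucket_contents):
    """Return keys missing from the bucket or present with the wrong size."""
    problems = []
    for key, size in local_sizes.items():
        if key not in bucket_contents:
            problems.append(key)       # apparently not uploaded
        elif bucket_contents[key] != size:
            problems.append(key)       # size differs
    return problems
```

In the real function, any non-empty difference list corresponds to `failed = True` and a ValueError.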

#########################################################################
# Main routine
#########################################################################

if __name__ == "__main__":
    sys.exit(cli())

CedarBackup3-3.1.6/doc/interface/CedarBackup3.extend.mbox.MboxConfig-class.html

CedarBackup3.extend.mbox.MboxConfig

    Class MboxConfig


    object --+
             |
            MboxConfig
    

    Class representing mbox configuration.

    Mbox configuration is used for backing up mbox email files.

    The following restrictions exist on data in this class:

    • The collect mode must be one of the values in VALID_COLLECT_MODES.
    • The compress mode must be one of the values in VALID_COMPRESS_MODES.
• The mboxFiles list must be a list of MboxFile objects.
• The mboxDirs list must be a list of MboxDir objects.

    For the mboxFiles and mboxDirs lists, validation is accomplished through the util.ObjectTypeList list implementation that overrides common list methods and transparently ensures that each element is of the proper type.

    Unlike collect configuration, no global exclusions are allowed on this level. We only allow relative exclusions at the mbox directory level. Also, there is no configured ignore file. This is because mbox directory backups are not recursive.


    Note: Lists within this class are "unordered" for equality comparisons.

Instance Methods

__init__(self, collectMode=None, compressMode=None, mboxFiles=None, mboxDirs=None)
    Constructor for the MboxConfig class.
__repr__(self)
    Official string representation for class instance.
__str__(self)
    Informal string representation for class instance.
__cmp__(self, other)
    Original Python 2 comparison operator.
__eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.
__lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.
__gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.
_setCollectMode(self, value)
    Property target used to set the collect mode.
_getCollectMode(self)
    Property target used to get the collect mode.
_setCompressMode(self, value)
    Property target used to set the compress mode.
_getCompressMode(self)
    Property target used to get the compress mode.
_setMboxFiles(self, value)
    Property target used to set the mboxFiles list.
_getMboxFiles(self)
    Property target used to get the mboxFiles list.
_setMboxDirs(self, value)
    Property target used to set the mboxDirs list.
_getMboxDirs(self)
    Property target used to get the mboxDirs list.
__ge__(x, y)
    x>=y
__le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties

collectMode
    Default collect mode.
compressMode
    Default compress mode.
mboxFiles
    List of mbox files to back up.
mboxDirs
    List of mbox directories to back up.

    Inherited from object: __class__

Method Details

    __init__(self, collectMode=None, compressMode=None, mboxFiles=None, mboxDirs=None)
    (Constructor)


    Constructor for the MboxConfig class.

    Parameters:
    • collectMode - Default collect mode.
    • compressMode - Default compress mode.
    • mboxFiles - List of mbox files to back up
    • mboxDirs - List of mbox directories to back up
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)


    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)


    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)


    Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setCollectMode(self, value)


    Property target used to set the collect mode. If not None, the mode must be one of the values in VALID_COLLECT_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setCompressMode(self, value)


    Property target used to set the compress mode. If not None, the mode must be one of the values in VALID_COMPRESS_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setMboxFiles(self, value)


    Property target used to set the mboxFiles list. Either the value must be None or each element must be an MboxFile.

    Raises:
    • ValueError - If the value is not an MboxFile

    _setMboxDirs(self, value)


    Property target used to set the mboxDirs list. Either the value must be None or each element must be an MboxDir.

    Raises:
    • ValueError - If the value is not an MboxDir

Property Details

    collectMode

    Default collect mode.

    Get Method:
    _getCollectMode(self) - Property target used to get the collect mode.
    Set Method:
    _setCollectMode(self, value) - Property target used to set the collect mode.

    compressMode

    Default compress mode.

    Get Method:
    _getCompressMode(self) - Property target used to get the compress mode.
    Set Method:
    _setCompressMode(self, value) - Property target used to set the compress mode.

    mboxFiles

    List of mbox files to back up.

    Get Method:
    _getMboxFiles(self) - Property target used to get the mboxFiles list.
    Set Method:
    _setMboxFiles(self, value) - Property target used to set the mboxFiles list.

    mboxDirs

    List of mbox directories to back up.

    Get Method:
    _getMboxDirs(self) - Property target used to get the mboxDirs list.
    Set Method:
    _setMboxDirs(self, value) - Property target used to set the mboxDirs list.

CedarBackup3-3.1.6/doc/interface/CedarBackup3.extend.amazons3.AmazonS3Config-class.html

CedarBackup3.extend.amazons3.AmazonS3Config

    Class AmazonS3Config


    object --+
             |
            AmazonS3Config
    

    Class representing Amazon S3 configuration.

    Amazon S3 configuration is used for storing backup data in Amazon's S3 cloud storage using the s3cmd tool.

    The following restrictions exist on data in this class:

    • The s3Bucket value must be a non-empty string
    • The encryptCommand value, if set, must be a non-empty string
    • The full backup size limit, if set, must be a ByteQuantity >= 0
    • The incremental backup size limit, if set, must be a ByteQuantity >= 0
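The size-limit restrictions above (`>= 0` when set, unset meaning "no limit") can be sketched as a simple validator.  This is illustrative only: in the real class the values are ByteQuantity objects from `CedarBackup3.util` (which, per the 3.1.0 changelog, can be compared against plain numeric values), so plain numbers stand in here and the function name is hypothetical.

```python
# Hedged sketch of the ">= 0" restriction on fullBackupSizeLimit and
# incrementalBackupSizeLimit: None means "no limit", anything else must
# be a non-negative quantity.
def set_size_limit(value):
    if value is not None and value < 0:
        raise ValueError("Size limit must be a quantity >= 0.")
    return value
```

For example, a 2.5 GB limit expressed in bytes passes validation, while any negative value raises ValueError.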
Instance Methods

__init__(self, warnMidnite=None, s3Bucket=None, encryptCommand=None, fullBackupSizeLimit=None, incrementalBackupSizeLimit=None)
    Constructor for the AmazonS3Config class.
__repr__(self)
    Official string representation for class instance.
__str__(self)
    Informal string representation for class instance.
__cmp__(self, other)
    Original Python 2 comparison operator.
__eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.
__lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.
__gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.
_setWarnMidnite(self, value)
    Property target used to set the midnite warning flag.
_getWarnMidnite(self)
    Property target used to get the midnite warning flag.
_setS3Bucket(self, value)
    Property target used to set the S3 bucket.
_getS3Bucket(self)
    Property target used to get the S3 bucket.
_setEncryptCommand(self, value)
    Property target used to set the encrypt command.
_getEncryptCommand(self)
    Property target used to get the encrypt command.
_setFullBackupSizeLimit(self, value)
    Property target used to set the full backup size limit.
_getFullBackupSizeLimit(self)
    Property target used to get the full backup size limit.
_setIncrementalBackupSizeLimit(self, value)
    Property target used to set the incremental backup size limit.
_getIncrementalBackupSizeLimit(self)
    Property target used to get the incremental backup size limit.
__ge__(x, y)
    x>=y
__le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties

warnMidnite
    Whether to generate warnings for crossing midnite.
s3Bucket
    Amazon S3 Bucket in which to store data
encryptCommand
    Command used to encrypt data before upload to S3
fullBackupSizeLimit
    Maximum size of a full backup, as a ByteQuantity
incrementalBackupSizeLimit
    Maximum size of an incremental backup, as a ByteQuantity

    Inherited from object: __class__

Method Details

    __init__(self, warnMidnite=None, s3Bucket=None, encryptCommand=None, fullBackupSizeLimit=None, incrementalBackupSizeLimit=None)
    (Constructor)

    source code 

    Constructor for the AmazonS3Config class.

    Parameters:
    • warnMidnite - Whether to generate warnings for crossing midnite.
    • s3Bucket - Name of the Amazon S3 bucket in which to store the data
    • encryptCommand - Command used to encrypt backup data before upload to S3
    • fullBackupSizeLimit - Maximum size of a full backup, a ByteQuantity
    • incrementalBackupSizeLimit - Maximum size of an incremental backup, a ByteQuantity
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setWarnMidnite(self, value)

    source code 

    Property target used to set the midnite warning flag. No validations, but we normalize the value to True or False.

    _setFullBackupSizeLimit(self, value)

    source code 

    Property target used to set the full backup size limit. The value must be an integer >= 0.

    Raises:
    • ValueError - If the value is not valid.

    _setIncrementalBackupSizeLimit(self, value)

    source code 

    Property target used to set the incremental backup size limit. The value must be an integer >= 0.

    Raises:
    • ValueError - If the value is not valid.

    Property Details [hide private]

    warnMidnite

    Whether to generate warnings for crossing midnite.

    Get Method:
    _getWarnMidnite(self) - Property target used to get the midnite warning flag.
    Set Method:
    _setWarnMidnite(self, value) - Property target used to set the midnite warning flag.

    s3Bucket

    Amazon S3 Bucket in which to store data

    Get Method:
    _getS3Bucket(self) - Property target used to get the S3 bucket.
    Set Method:
    _setS3Bucket(self, value) - Property target used to set the S3 bucket.

    encryptCommand

    Command used to encrypt data before upload to S3

    Get Method:
    _getEncryptCommand(self) - Property target used to get the encrypt command.
    Set Method:
    _setEncryptCommand(self, value) - Property target used to set the encrypt command.

    fullBackupSizeLimit

    Maximum size of a full backup, as a ByteQuantity

    Get Method:
    _getFullBackupSizeLimit(self) - Property target used to get the full backup size limit.
    Set Method:
    _setFullBackupSizeLimit(self, value) - Property target used to set the full backup size limit.

    incrementalBackupSizeLimit

    Maximum size of an incremental backup, as a ByteQuantity

    Get Method:
    _getIncrementalBackupSizeLimit(self) - Property target used to get the incremental backup size limit.
    Set Method:
    _setIncrementalBackupSizeLimit(self, value) - Property target used to set the incremental backup size limit.
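    All of the rich comparison methods listed above (__eq__, __lt__, __gt__, __le__, __ge__) are documented as thin wrappers around a Python 2-style __cmp__ method. A minimal standalone sketch of that pattern (illustrative only, not the actual CedarBackup3 implementation):

    ```python
    class Comparable:
        """Sketch: rich comparisons built on a Python 2-style __cmp__."""

        def __init__(self, value):
            self.value = value

        def __cmp__(self, other):
            """Return -1/0/1 depending on whether self is <, = or > other."""
            if other is None:
                return 1
            if self.value < other.value:
                return -1
            if self.value > other.value:
                return 1
            return 0

        # Each rich comparison delegates to __cmp__, as described above.
        def __eq__(self, other): return self.__cmp__(other) == 0
        def __lt__(self, other): return self.__cmp__(other) < 0
        def __gt__(self, other): return self.__cmp__(other) > 0
        def __le__(self, other): return self.__cmp__(other) <= 0
        def __ge__(self, other): return self.__cmp__(other) >= 0
    ```

    Keeping the ordering logic in one place is what made the v3.1.1 ByteQuantity comparison fixes tractable: only __cmp__ had to change.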

    Package CedarBackup3 :: Package writers :: Module cdwriter :: Class MediaDefinition

    Class MediaDefinition

    source code

    object --+
             |
            MediaDefinition
    

    Class encapsulating information about CD media definitions.

    The following media types are accepted:

    • MEDIA_CDR_74: 74-minute CD-R media (650 MB capacity)
    • MEDIA_CDRW_74: 74-minute CD-RW media (650 MB capacity)
    • MEDIA_CDR_80: 80-minute CD-R media (700 MB capacity)
    • MEDIA_CDRW_80: 80-minute CD-RW media (700 MB capacity)

    Note that all of the capacities associated with a media definition are in terms of ISO sectors (util.ISO_SECTOR_SIZE).

    Instance Methods [hide private]
     
    __init__(self, mediaType)
    Creates a media definition for the indicated media type.
    source code
     
    _setValues(self, mediaType)
    Sets values based on media type.
    source code
     
    _getMediaType(self)
    Property target used to get the media type value.
    source code
     
    _getRewritable(self)
    Property target used to get the rewritable flag value.
    source code
     
    _getInitialLeadIn(self)
    Property target used to get the initial lead-in value.
    source code
     
    _getLeadIn(self)
    Property target used to get the lead-in value.
    source code
     
    _getCapacity(self)
    Property target used to get the capacity value.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Properties [hide private]
      mediaType
    Configured media type.
      rewritable
    Boolean indicating whether the media is rewritable.
      initialLeadIn
    Initial lead-in required for first image written to media.
      leadIn
    Lead-in required on successive images written to media.
      capacity
    Total capacity of the media before any required lead-in.

    Inherited from object: __class__

    Method Details [hide private]

    __init__(self, mediaType)
    (Constructor)

    source code 

    Creates a media definition for the indicated media type.

    Parameters:
    • mediaType - Type of the media, as discussed above.
    Raises:
    • ValueError - If the media type is unknown or unsupported.
    Overrides: object.__init__

    _setValues(self, mediaType)

    source code 

    Sets values based on media type.

    Parameters:
    • mediaType - Type of the media, as discussed above.
    Raises:
    • ValueError - If the media type is unknown or unsupported.

    Property Details [hide private]

    mediaType

    Configured media type.

    Get Method:
    _getMediaType(self) - Property target used to get the media type value.

    rewritable

    Boolean indicating whether the media is rewritable.

    Get Method:
    _getRewritable(self) - Property target used to get the rewritable flag value.

    initialLeadIn

    Initial lead-in required for first image written to media.

    Get Method:
    _getInitialLeadIn(self) - Property target used to get the initial lead-in value.

    leadIn

    Lead-in required on successive images written to media.

    Get Method:
    _getLeadIn(self) - Property target used to get the lead-in value.

    capacity

    Total capacity of the media before any required lead-in.

    Get Method:
    _getCapacity(self) - Property target used to get the capacity value.
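    As an illustration, the media-type table above can be modeled as a simple lookup. This is a standalone sketch, not the real class, and the sector counts are assumptions derived from the stated 650 MB/700 MB capacities and a 2048-byte ISO sector; the actual constants may differ:

    ```python
    # Simplified stand-in for MediaDefinition: maps each documented media
    # type to (rewritable, capacity in ISO sectors).
    ISO_SECTOR_SIZE = 2048  # assumed value of util.ISO_SECTOR_SIZE

    _MEDIA = {
        "MEDIA_CDR_74":  (False, 650 * 1024 * 1024 // ISO_SECTOR_SIZE),
        "MEDIA_CDRW_74": (True,  650 * 1024 * 1024 // ISO_SECTOR_SIZE),
        "MEDIA_CDR_80":  (False, 700 * 1024 * 1024 // ISO_SECTOR_SIZE),
        "MEDIA_CDRW_80": (True,  700 * 1024 * 1024 // ISO_SECTOR_SIZE),
    }

    def media_definition(media_type):
        """Return (rewritable, capacity_in_sectors); ValueError if unknown."""
        try:
            return _MEDIA[media_type]
        except KeyError:
            raise ValueError("Unknown media type: %s" % media_type)
    ```

    This mirrors the documented behavior of the constructor: a known media type yields derived values, and an unknown or unsupported type raises ValueError.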

    Package CedarBackup3 :: Module config :: Class ActionDependencies

    Class ActionDependencies

    source code

    object --+
             |
            ActionDependencies
    

    Class representing dependencies associated with an extended action.

    Execution ordering for extended actions is done in one of two ways: either by using index values (lower index gets run first) or by having the extended action specify dependencies in terms of other named actions. This class encapsulates the dependency information for an extended action.

    The following restrictions exist on data in this class:

    • Any action name must be a non-empty string matching ACTION_NAME_REGEX
    Instance Methods [hide private]
     
    __init__(self, beforeList=None, afterList=None)
    Constructor for the ActionDependencies class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    __eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    _setBeforeList(self, value)
    Property target used to set the "run before" list.
    source code
     
    _getBeforeList(self)
    Property target used to get the "run before" list.
    source code
     
    _setAfterList(self, value)
    Property target used to set the "run after" list.
    source code
     
    _getAfterList(self)
    Property target used to get the "run after" list.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties [hide private]
      beforeList
    List of named actions that this action must be run before.
      afterList
    List of named actions that this action must be run after.

    Inherited from object: __class__

    Method Details [hide private]

    __init__(self, beforeList=None, afterList=None)
    (Constructor)

    source code 

    Constructor for the ActionDependencies class.

    Parameters:
    • beforeList - List of named actions that this action must be run before
    • afterList - List of named actions that this action must be run after
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setBeforeList(self, value)

    source code 

    Property target used to set the "run before" list. Either the value must be None or each element must be a string matching ACTION_NAME_REGEX.

    Raises:
    • ValueError - If the value does not match the regular expression.

    _setAfterList(self, value)

    source code 

    Property target used to set the "run after" list. Either the value must be None or each element must be a string matching ACTION_NAME_REGEX.

    Raises:
    • ValueError - If the value does not match the regular expression.

    Property Details [hide private]

    beforeList

    List of named actions that this action must be run before.

    Get Method:
    _getBeforeList(self) - Property target used to get the "run before" list.
    Set Method:
    _setBeforeList(self, value) - Property target used to set the "run before" list.

    afterList

    List of named actions that this action must be run after.

    Get Method:
    _getAfterList(self) - Property target used to get the "run after" list.
    Set Method:
    _setAfterList(self, value) - Property target used to set the "run after" list.

    Package CedarBackup3 :: Module cli :: Class _ActionSet

    Class _ActionSet

    source code

    object --+
             |
            _ActionSet
    

    Class representing a set of local actions to be executed.

    This class does four different things. First, it ensures that the actions specified on the command-line are sensible. The command-line can only list either built-in actions or extended actions specified in configuration. Also, certain actions (in NONCOMBINE_ACTIONS) cannot be combined with other actions.

    Second, the class enforces an execution order on the specified actions. Any time actions are combined on the command line (either built-in actions or extended actions), we must make sure they get executed in a sensible order.

    Third, the class ensures that any pre-action or post-action hooks are scheduled and executed appropriately. Hooks are configured by building a dictionary mapping between hook action name and command. Pre-action hooks are executed immediately before their associated action, and post-action hooks are executed immediately after their associated action.

    Finally, the class properly interleaves local and managed actions so that the same action gets executed first locally and then on managed peers.
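    The hook ordering described above — each pre-action hook immediately before its action, each post-action hook immediately after — can be sketched like this (illustrative only; the real class wraps this logic in _ActionItem objects):

    ```python
    def run_with_hooks(action_name, action_func, pre_hooks, post_hooks, log):
        """Execute one action, surrounded by any configured hooks.

        pre_hooks and post_hooks map action name to a list of hook
        commands; log records the execution order for inspection.
        """
        for hook in pre_hooks.get(action_name, []):
            log.append("pre-hook: %s" % hook)   # runs immediately before
        log.append("action: %s" % action_name)
        action_func()
        for hook in post_hooks.get(action_name, []):
            log.append("post-hook: %s" % hook)  # runs immediately after
        return log
    ```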

    Instance Methods [hide private]
     
    __init__(self, actions, extensions, options, peers, managed, local)
    Constructor for the _ActionSet class.
    source code
     
    executeActions(self, configPath, options, config)
    Executes all actions and extended actions, in the proper order.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Static Methods [hide private]
     
    _deriveExtensionNames(extensions)
    Builds a list of extended actions that are available in configuration.
    source code
     
    _buildHookMaps(hooks)
    Build two mappings from action name to configured ActionHook.
    source code
     
    _buildFunctionMap(extensions)
    Builds a mapping from named action to action function.
    source code
     
    _buildIndexMap(extensions)
    Builds a mapping from action name to proper execution index.
    source code
     
    _buildActionMap(managed, local, extensionNames, functionMap, indexMap, preHookMap, postHookMap, peerMap)
    Builds a mapping from action name to list of action items.
    source code
     
    _buildPeerMap(options, peers)
    Build a mapping from action name to list of remote peers.
    source code
     
    _deriveHooks(action, preHookDict, postHookDict)
    Derive pre- and post-action hooks, if any, associated with named action.
    source code
     
    _validateActions(actions, extensionNames)
    Validate that the set of specified actions is sensible.
    source code
     
    _buildActionSet(actions, actionMap)
    Build set of actions to be executed.
    source code
     
    _getRemoteUser(options, remotePeer)
    Gets the remote user associated with a remote peer.
    source code
     
    _getRshCommand(options, remotePeer)
    Gets the RSH command associated with a remote peer.
    source code
     
    _getCbackCommand(options, remotePeer)
    Gets the cback command associated with a remote peer.
    source code
     
    _getManagedActions(options, remotePeer)
    Gets the managed actions list associated with a remote peer.
    source code
    Properties [hide private]

    Inherited from object: __class__

    Method Details [hide private]

    __init__(self, actions, extensions, options, peers, managed, local)
    (Constructor)

    source code 

    Constructor for the _ActionSet class.

    This is kind of ugly, because the constructor has to set up a lot of data before being able to do anything useful. The following data structures are initialized based on the input:

    • extensionNames: List of extensions available in configuration
    • preHookMap: Mapping from action name to list of PreActionHook
    • postHookMap: Mapping from action name to list of PostActionHook
    • functionMap: Mapping from action name to Python function
    • indexMap: Mapping from action name to execution index
    • peerMap: Mapping from action name to set of RemotePeer
    • actionMap: Mapping from action name to _ActionItem

    Once these data structures are set up, the command line is validated to make sure only valid actions have been requested, and in a sensible combination. Then, all of the data is used to build self.actionSet, the set of action items to be executed by executeActions(). This list might contain either _ActionItem or _ManagedActionItem.

    Parameters:
    • actions - Names of actions specified on the command-line.
    • extensions - Extended action configuration (i.e. config.extensions)
    • options - Options configuration (i.e. config.options)
    • peers - Peers configuration (i.e. config.peers)
    • managed - Whether to include managed actions in the set
    • local - Whether to include local actions in the set
    Raises:
    • ValueError - If one of the specified actions is invalid.
    Overrides: object.__init__

    executeActions(self, configPath, options, config)

    source code 

    Executes all actions and extended actions, in the proper order.

    Each action (whether built-in or extension) is executed in an identical manner. The built-in actions will use only the options and config values. We also pass in the config path so that extension modules can re-parse configuration if they want to, to add in extra information.

    Parameters:
    • configPath - Path to configuration file on disk.
    • options - Command-line options to be passed to action functions.
    • config - Parsed configuration to be passed to action functions.
    Raises:
    • Exception - If there is a problem executing the actions.

    _deriveExtensionNames(extensions)
    Static Method

    source code 

    Builds a list of extended actions that are available in configuration.

    Parameters:
    • extensions - Extended action configuration (i.e. config.extensions)
    Returns:
    List of extended action names.

    _buildHookMaps(hooks)
    Static Method

    source code 

    Build two mappings from action name to configured ActionHook.

    Parameters:
    • hooks - List of pre- and post-action hooks (i.e. config.options.hooks)
    Returns:
    Tuple of (pre hook dictionary, post hook dictionary).

    _buildFunctionMap(extensions)
    Static Method

    source code 

    Builds a mapping from named action to action function.

    Parameters:
    • extensions - Extended action configuration (i.e. config.extensions)
    Returns:
    Dictionary mapping action to function.

    _buildIndexMap(extensions)
    Static Method

    source code 

    Builds a mapping from action name to proper execution index.

    If extensions configuration is None, or there are no configured extended actions, the ordering dictionary will only include the built-in actions and their standard indices.

    Otherwise, if the extensions order mode is None or "index", actions will be scheduled by explicit index; and if the extensions order mode is "dependency", actions will be scheduled using a dependency graph.

    Parameters:
    • extensions - Extended action configuration (i.e. config.extensions)
    Returns:
    Dictionary mapping action name to integer execution index.
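    For the "dependency" order mode, the scheduling problem is a topological sort. A sketch using Python's standard graphlib module (for illustration only — this is not necessarily how CedarBackup3 implements its dependency graph):

    ```python
    from graphlib import TopologicalSorter

    def schedule_by_dependency(dependencies):
        """Given {action: set of actions it must run after}, return an
        execution order in which every dependency runs first."""
        return list(TopologicalSorter(dependencies).static_order())
    ```

    For example, with store depending on stage and stage depending on collect, the resulting order always runs collect, then stage, then store.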

    _buildActionMap(managed, local, extensionNames, functionMap, indexMap, preHookMap, postHookMap, peerMap)
    Static Method

    source code 

    Builds a mapping from action name to list of action items.

    We build either _ActionItem or _ManagedActionItem objects here.

    In most cases, the mapping from action name to _ActionItem is 1:1. The exception is the "all" action, which is a special case. However, a list is returned in all cases, just for consistency later. Each _ActionItem will be created with a proper function reference and index value for execution ordering.

    The mapping from action name to _ManagedActionItem is always 1:1. Each managed action item contains a list of peers on which the action should be executed.

    Parameters:
    • managed - Whether to include managed actions in the set
    • local - Whether to include local actions in the set
    • extensionNames - List of valid extended action names
    • functionMap - Dictionary mapping action name to Python function
    • indexMap - Dictionary mapping action name to integer execution index
    • preHookMap - Dictionary mapping action name to pre hooks (if any) for the action
    • postHookMap - Dictionary mapping action name to post hooks (if any) for the action
    • peerMap - Dictionary mapping action name to list of remote peers on which to execute the action
    Returns:
    Dictionary mapping action name to list of _ActionItem objects.

    _buildPeerMap(options, peers)
    Static Method

    source code 

    Build a mapping from action name to list of remote peers.

    There will be one entry in the mapping for each managed action. If there are no managed peers, the mapping will be empty. Only managed actions will be listed in the mapping.

    Parameters:
    • options - Option configuration (i.e. config.options)
    • peers - Peers configuration (i.e. config.peers)

    _deriveHooks(action, preHookDict, postHookDict)
    Static Method

    source code 

    Derive pre- and post-action hooks, if any, associated with named action.

    Parameters:
    • action - Name of action to look up
    • preHookDict - Dictionary mapping pre-action hooks to action name
    • postHookDict - Dictionary mapping post-action hooks to action name
    Returns:
    Tuple (preHooks, postHooks) per mapping, with None values if there is no hook.

    _validateActions(actions, extensionNames)
    Static Method

    source code 

    Validate that the set of specified actions is sensible.

    Any specified action must either be a built-in action or must be among the extended actions defined in configuration. The actions from within NONCOMBINE_ACTIONS may not be combined with other actions.

    Parameters:
    • actions - Names of actions specified on the command-line.
    • extensionNames - Names of extensions specified in configuration.
    Raises:
    • ValueError - If one or more configured actions are not valid.

    _buildActionSet(actions, actionMap)
    Static Method

    source code 

    Build set of actions to be executed.

    The set of actions is built in the proper order, so executeActions can spin through the set without thinking about it. Since we've already validated that the set of actions is sensible, we don't take any precautions here to make sure things are combined properly. If the action is listed, it will be "scheduled" for execution.

    Parameters:
    • actions - Names of actions specified on the command-line.
    • actionMap - Dictionary mapping action name to _ActionItem object.
    Returns:
    Set of action items in proper order.

    _getRemoteUser(options, remotePeer)
    Static Method

    source code 

    Gets the remote user associated with a remote peer. Uses the peer's value if available, otherwise falls back to the options section.

    Parameters:
    • options - OptionsConfig object, as from config.options
    • remotePeer - Configuration-style remote peer object.
    Returns:
    Name of remote user associated with remote peer.

    _getRshCommand(options, remotePeer)
    Static Method

    source code 

    Gets the RSH command associated with a remote peer. Uses the peer's value if available, otherwise falls back to the options section.

    Parameters:
    • options - OptionsConfig object, as from config.options
    • remotePeer - Configuration-style remote peer object.
    Returns:
    RSH command associated with remote peer.

    _getCbackCommand(options, remotePeer)
    Static Method

    source code 

    Gets the cback command associated with a remote peer. Uses the peer's value if available, otherwise falls back to the options section.

    Parameters:
    • options - OptionsConfig object, as from config.options
    • remotePeer - Configuration-style remote peer object.
    Returns:
    cback command associated with remote peer.

    _getManagedActions(options, remotePeer)
    Static Method

    source code 

    Gets the managed actions list associated with a remote peer. Uses the peer's value if available, otherwise falls back to the options section.

    Parameters:
    • options - OptionsConfig object, as from config.options
    • remotePeer - Configuration-style remote peer object.
    Returns:
    Set of managed actions associated with remote peer.


    Module release


    Variables

    AUTHOR
    COPYRIGHT
    DATE
    EMAIL
    URL
    VERSION
    __package__


    Table of Contents


    Everything

    Modules

    CedarBackup3
    CedarBackup3.action
    CedarBackup3.actions
    CedarBackup3.actions.collect
    CedarBackup3.actions.constants
    CedarBackup3.actions.initialize
    CedarBackup3.actions.purge
    CedarBackup3.actions.rebuild
    CedarBackup3.actions.stage
    CedarBackup3.actions.store
    CedarBackup3.actions.util
    CedarBackup3.actions.validate
    CedarBackup3.cli
    CedarBackup3.config
    CedarBackup3.customize
    CedarBackup3.extend
    CedarBackup3.extend.amazons3
    CedarBackup3.extend.capacity
    CedarBackup3.extend.encrypt
    CedarBackup3.extend.mbox
    CedarBackup3.extend.mysql
    CedarBackup3.extend.postgresql
    CedarBackup3.extend.split
    CedarBackup3.extend.subversion
    CedarBackup3.extend.sysinfo
    CedarBackup3.filesystem
    CedarBackup3.image
    CedarBackup3.knapsack
    CedarBackup3.peer
    CedarBackup3.release
    CedarBackup3.testutil
    CedarBackup3.tools
    CedarBackup3.tools.amazons3
    CedarBackup3.tools.span
    CedarBackup3.util
    CedarBackup3.writer
    CedarBackup3.writers
    CedarBackup3.writers.cdwriter
    CedarBackup3.writers.dvdwriter
    CedarBackup3.writers.util
    CedarBackup3.xmlutil

    Package CedarBackup3 :: Package extend :: Module capacity :: Class CapacityConfig

    Class CapacityConfig

    source code

    object --+
             |
            CapacityConfig
    

    Class representing capacity configuration.

    The following restrictions exist on data in this class:

    • The maximum percentage utilized must be a PercentageQuantity
    • The minimum bytes remaining must be a ByteQuantity
    Instance Methods [hide private]
     
    __init__(self, maxPercentage=None, minBytes=None)
    Constructor for the CapacityConfig class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    __eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    _setMaxPercentage(self, value)
    Property target used to set the maxPercentage value.
    source code
     
    _getMaxPercentage(self)
    Property target used to get the maxPercentage value.
    source code
     
    _setMinBytes(self, value)
    Property target used to set the bytes remaining value.
    source code
     
    _getMinBytes(self)
    Property target used to get the bytes remaining value.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties [hide private]
      maxPercentage
    Maximum percentage of the media that may be utilized.
      minBytes
    Minimum number of free bytes that must be available.

    Inherited from object: __class__

    Method Details [hide private]

    __init__(self, maxPercentage=None, minBytes=None)
    (Constructor)

    source code 

    Constructor for the CapacityConfig class.

    Parameters:
    • maxPercentage - Maximum percentage of the media that may be utilized
    • minBytes - Minimum number of free bytes that must be available
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setMaxPercentage(self, value)

    source code 

    Property target used to set the maxPercentage value. If not None, the value must be a PercentageQuantity object.

    Raises:
    • ValueError - If the value is not a PercentageQuantity

    _setMinBytes(self, value)

    source code 

    Property target used to set the bytes remaining value. If not None, the value must be a ByteQuantity object.

    Raises:
    • ValueError - If the value is not a ByteQuantity

    Property Details [hide private]

    maxPercentage

    Maximum percentage of the media that may be utilized.

    Get Method:
    _getMaxPercentage(self) - Property target used to get the maxPercentage value.
    Set Method:
    _setMaxPercentage(self, value) - Property target used to set the maxPercentage value.

    minBytes

    Minimum number of free bytes that must be available.

    Get Method:
    _getMinBytes(self) - Property target used to get the bytes remaining value.
    Set Method:
    _setMinBytes(self, value) - Property target used to set the bytes remaining value.
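    As an illustration, the two limits combine like this: media is considered over capacity if either the utilized percentage exceeds maxPercentage or the free space falls below minBytes. A plain-number sketch of that check (the real class uses PercentageQuantity and ByteQuantity objects rather than bare numbers):

    ```python
    def capacity_exceeded(total_bytes, used_bytes,
                          max_percentage=None, min_bytes=None):
        """Return True if either configured capacity limit is violated.

        max_percentage: maximum percent of the media that may be utilized.
        min_bytes: minimum number of free bytes that must remain available.
        Either limit may be None, meaning "not configured".
        """
        free = total_bytes - used_bytes
        if max_percentage is not None:
            if (used_bytes / total_bytes) * 100.0 > max_percentage:
                return True
        if min_bytes is not None and free < min_bytes:
            return True
        return False
    ```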

    Package CedarBackup3 :: Package extend :: Module amazons3

    Source Code for Module CedarBackup3.extend.amazons3

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2014-2015 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python 3 (>= 3.4) 
     29  # Project  : Official Cedar Backup Extensions 
     30  # Purpose  : "Store" type extension that writes data to Amazon S3. 
     31  # 
     32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     33   
     34  ######################################################################## 
     35  # Module documentation 
     36  ######################################################################## 
     37   
     38  """ 
     39  Store-type extension that writes data to Amazon S3. 
     40   
     41  This extension requires a new configuration section <amazons3> and is intended 
     42  to be run immediately after the standard stage action, replacing the standard 
     43  store action.  Aside from its own configuration, it requires the options and 
     44  staging configuration sections in the standard Cedar Backup configuration file. 
     45  Since it is intended to replace the store action, it does not rely on any store 
     46  configuration. 
     47   
     48  The underlying functionality relies on the U{AWS CLI interface 
     49  <http://aws.amazon.com/documentation/cli/>}.  Before you use this extension, 
     50  you need to set up your Amazon S3 account and configure the AWS CLI connection 
     51  per Amazon's documentation.  The extension assumes that the backup is being 
     52  executed as root, and switches over to the configured backup user to 
     53  communicate with AWS.  So, make sure you configure AWS CLI as the backup user 
     54  and not root. 
     55   
     56  You can optionally configure Cedar Backup to encrypt data before sending it 
     57  to S3.  To do that, provide a complete command line using the C{${input}} and 
     58  C{${output}} variables to represent the original input file and the encrypted 
     59  output file.  This command will be executed as the backup user. 
     60   
     61  For instance, you can use something like this with GPG:: 
     62   
     63     /usr/bin/gpg -c --no-use-agent --batch --yes --passphrase-file /home/backup/.passphrase -o ${output} ${input} 
     64   
     65  The GPG mechanism depends on a strong passphrase for security.  One way to 
     66  generate a strong passphrase is using your system random number generator, i.e.:: 
     67   
     68     dd if=/dev/urandom count=20 bs=1 | xxd -ps 
     69   
     70  (See U{StackExchange <http://security.stackexchange.com/questions/14867/gpg-encryption-security>} 
     71  for more details about that advice.) If you decide to use encryption, make sure 
     72  you save off the passphrase in a safe place, so you can get at your backup data 
     73  later if you need to.  And obviously, make sure to set permissions on the 
     74  passphrase file so it can only be read by the backup user. 
     75   
     76  This extension was written for and tested on Linux.  It will throw an exception 
     77  if run on Windows. 
     78   
     79  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     80  """ 
     81   
     82  ######################################################################## 
     83  # Imported modules 
     84  ######################################################################## 
     85   
     86  # System modules 
     87  import sys 
     88  import os 
     89  import logging 
     90  import tempfile 
     91  import datetime 
     92  import json 
     93  import shutil 
     94  from functools import total_ordering 
     95   
     96  # Cedar Backup modules 
     97  from CedarBackup3.filesystem import FilesystemList, BackupFileList 
     98  from CedarBackup3.util import resolveCommand, executeCommand, isRunningAsRoot, changeOwnership, isStartOfWeek 
     99  from CedarBackup3.util import displayBytes, UNIT_BYTES 
    100  from CedarBackup3.xmlutil import createInputDom, addContainerNode, addBooleanNode, addStringNode 
    101  from CedarBackup3.xmlutil import readFirstChild, readString, readBoolean 
    102  from CedarBackup3.actions.util import writeIndicatorFile 
    103  from CedarBackup3.actions.constants import DIR_TIME_FORMAT, STAGE_INDICATOR 
    104  from CedarBackup3.config import ByteQuantity, readByteQuantity, addByteQuantityNode 
    105   
    106   
    107  ######################################################################## 
    108  # Module-wide constants and variables 
    109  ######################################################################## 
    110   
    111  logger = logging.getLogger("CedarBackup3.log.extend.amazons3") 
    112   
    113  SU_COMMAND    = [ "su" ] 
    114  AWS_COMMAND   = [ "aws" ] 
    115   
    116  STORE_INDICATOR = "cback.amazons3" 
    
    117 118 119 ######################################################################## 120 # AmazonS3Config class definition 121 ######################################################################## 122 123 @total_ordering 124 -class AmazonS3Config(object):
    125 126 """ 127 Class representing Amazon S3 configuration. 128 129 Amazon S3 configuration is used for storing backup data in Amazon's S3 cloud 130 storage using the AWS CLI. 131 132 The following restrictions exist on data in this class: 133 134 - The s3Bucket value must be a non-empty string 135 - The encryptCommand value, if set, must be a non-empty string 136 - The full backup size limit, if set, must be a ByteQuantity >= 0 137 - The incremental backup size limit, if set, must be a ByteQuantity >= 0 138 139 @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, 140 warnMidnite, s3Bucket 141 """ 142
    143 - def __init__(self, warnMidnite=None, s3Bucket=None, encryptCommand=None, 144 fullBackupSizeLimit=None, incrementalBackupSizeLimit=None):
    145 """ 146 Constructor for the C{AmazonS3Config} class. 147 148 @param warnMidnite: Whether to generate warnings for crossing midnite. 149 @param s3Bucket: Name of the Amazon S3 bucket in which to store the data 150 @param encryptCommand: Command used to encrypt backup data before upload to S3 151 @param fullBackupSizeLimit: Maximum size of a full backup, a ByteQuantity 152 @param incrementalBackupSizeLimit: Maximum size of an incremental backup, a ByteQuantity 153 154 @raise ValueError: If one of the values is invalid. 155 """ 156 self._warnMidnite = None 157 self._s3Bucket = None 158 self._encryptCommand = None 159 self._fullBackupSizeLimit = None 160 self._incrementalBackupSizeLimit = None 161 self.warnMidnite = warnMidnite 162 self.s3Bucket = s3Bucket 163 self.encryptCommand = encryptCommand 164 self.fullBackupSizeLimit = fullBackupSizeLimit 165 self.incrementalBackupSizeLimit = incrementalBackupSizeLimit
    166
    167 - def __repr__(self):
    168 """ 169 Official string representation for class instance. 170 """ 171 return "AmazonS3Config(%s, %s, %s, %s, %s)" % (self.warnMidnite, self.s3Bucket, self.encryptCommand, 172 self.fullBackupSizeLimit, self.incrementalBackupSizeLimit)
    173
    174 - def __str__(self):
    175 """ 176 Informal string representation for class instance. 177 """ 178 return self.__repr__()
    179
    180 - def __eq__(self, other):
    181 """Equals operator, iplemented in terms of original Python 2 compare operator.""" 182 return self.__cmp__(other) == 0
    183
    184 - def __lt__(self, other):
    185 """Less-than operator, iplemented in terms of original Python 2 compare operator.""" 186 return self.__cmp__(other) < 0
    187
    188 - def __gt__(self, other):
    189 """Greater-than operator, iplemented in terms of original Python 2 compare operator.""" 190 return self.__cmp__(other) > 0
    191
    192 - def __cmp__(self, other):
    193 """ 194 Original Python 2 comparison operator. 195 @param other: Other object to compare to. 196 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 197 """ 198 if other is None: 199 return 1 200 if self.warnMidnite != other.warnMidnite: 201 if self.warnMidnite < other.warnMidnite: 202 return -1 203 else: 204 return 1 205 if self.s3Bucket != other.s3Bucket: 206 if str(self.s3Bucket or "") < str(other.s3Bucket or ""): 207 return -1 208 else: 209 return 1 210 if self.encryptCommand != other.encryptCommand: 211 if str(self.encryptCommand or "") < str(other.encryptCommand or ""): 212 return -1 213 else: 214 return 1 215 if self.fullBackupSizeLimit != other.fullBackupSizeLimit: 216 if (self.fullBackupSizeLimit or ByteQuantity()) < (other.fullBackupSizeLimit or ByteQuantity()): 217 return -1 218 else: 219 return 1 220 if self.incrementalBackupSizeLimit != other.incrementalBackupSizeLimit: 221 if (self.incrementalBackupSizeLimit or ByteQuantity()) < (other.incrementalBackupSizeLimit or ByteQuantity()): 222 return -1 223 else: 224 return 1 225 return 0
    226
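The C{__cmp__} method above, together with C{@total_ordering}, is how the Python-2-style comparison is bridged to Python 3 rich comparisons. A compact, hypothetical illustration of the same pattern:

```python
from functools import total_ordering

@total_ordering
class ComparableConfig:
    """Hypothetical config class using the same __cmp__ bridging pattern."""
    def __init__(self, bucket=None):
        self.bucket = bucket

    def __cmp__(self, other):
        # Python-2-style comparison: -1/0/1, with None sorting first
        if other is None:
            return 1
        if self.bucket != other.bucket:
            return -1 if str(self.bucket or "") < str(other.bucket or "") else 1
        return 0

    def __eq__(self, other):
        return other is not None and self.__cmp__(other) == 0

    def __lt__(self, other):
        return self.__cmp__(other) < 0

# total_ordering derives <=, >, and >= from __eq__ and __lt__
print(ComparableConfig("a") < ComparableConfig("b"))  # -> True
```

Only C{__eq__} and C{__lt__} need to be written by hand; C{total_ordering} fills in the remaining rich-comparison methods.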
    227 - def _setWarnMidnite(self, value):
    228 """ 229 Property target used to set the midnite warning flag. 230 No validations, but we normalize the value to C{True} or C{False}. 231 """ 232 if value: 233 self._warnMidnite = True 234 else: 235 self._warnMidnite = False
    236
    237 - def _getWarnMidnite(self):
    238 """ 239 Property target used to get the midnite warning flag. 240 """ 241 return self._warnMidnite
    242
    243 - def _setS3Bucket(self, value):
    244 """ 245 Property target used to set the S3 bucket. 246 """ 247 if value is not None: 248 if len(value) < 1: 249 raise ValueError("S3 bucket must be non-empty string.") 250 self._s3Bucket = value
    251
    252 - def _getS3Bucket(self):
    253 """ 254 Property target used to get the S3 bucket. 255 """ 256 return self._s3Bucket
    257
    258 - def _setEncryptCommand(self, value):
    259 """ 260 Property target used to set the encrypt command. 261 """ 262 if value is not None: 263 if len(value) < 1: 264 raise ValueError("Encrypt command must be non-empty string.") 265 self._encryptCommand = value
    266
    267 - def _getEncryptCommand(self):
    268 """ 269 Property target used to get the encrypt command. 270 """ 271 return self._encryptCommand
    272
    273 - def _setFullBackupSizeLimit(self, value):
    274 """ 275 Property target used to set the full backup size limit. 276 The value must be an integer >= 0. 277 @raise ValueError: If the value is not valid. 278 """ 279 if value is None: 280 self._fullBackupSizeLimit = None 281 else: 282 if isinstance(value, ByteQuantity): 283 self._fullBackupSizeLimit = value 284 else: 285 self._fullBackupSizeLimit = ByteQuantity(value, UNIT_BYTES)
    286
    287 - def _getFullBackupSizeLimit(self):
    288 """ 289 Property target used to get the full backup size limit. 290 """ 291 return self._fullBackupSizeLimit
    292
    293 - def _setIncrementalBackupSizeLimit(self, value):
    294 """ 295 Property target used to set the incremental backup size limit. 296 The value must be an integer >= 0. 297 @raise ValueError: If the value is not valid. 298 """ 299 if value is None: 300 self._incrementalBackupSizeLimit = None 301 else: 302 if isinstance(value, ByteQuantity): 303 self._incrementalBackupSizeLimit = value 304 else: 305 self._incrementalBackupSizeLimit = ByteQuantity(value, UNIT_BYTES)
    306
    308 """ 309 Property target used to get the incremental backup size limit. 310 """ 311 return self._incrementalBackupSizeLimit
    312 313 warnMidnite = property(_getWarnMidnite, _setWarnMidnite, None, "Whether to generate warnings for crossing midnite.") 314 s3Bucket = property(_getS3Bucket, _setS3Bucket, None, doc="Amazon S3 Bucket in which to store data") 315 encryptCommand = property(_getEncryptCommand, _setEncryptCommand, None, doc="Command used to encrypt data before upload to S3") 316 fullBackupSizeLimit = property(_getFullBackupSizeLimit, _setFullBackupSizeLimit, None, 317 doc="Maximum size of a full backup, as a ByteQuantity") 318 incrementalBackupSizeLimit = property(_getIncrementalBackupSizeLimit, _setIncrementalBackupSizeLimit, None, 319 doc="Maximum size of an incremental backup, as a ByteQuantity")
    320
    321 322 ######################################################################## 323 # LocalConfig class definition 324 ######################################################################## 325 326 @total_ordering 327 -class LocalConfig(object):
    328 329 """ 330 Class representing this extension's configuration document. 331 332 This is not a general-purpose configuration object like the main Cedar 333 Backup configuration object. Instead, it just knows how to parse and emit 334 amazons3-specific configuration values. Third parties who need to read and 335 write configuration related to this extension should access it through the 336 constructor, C{validate} and C{addConfig} methods. 337 338 @note: Lists within this class are "unordered" for equality comparisons. 339 340 @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, 341 amazons3, validate, addConfig 342 """ 343
    344 - def __init__(self, xmlData=None, xmlPath=None, validate=True):
    345 """ 346 Initializes a configuration object. 347 348 If you initialize the object without passing either C{xmlData} or 349 C{xmlPath} then configuration will be empty and will be invalid until it 350 is filled in properly. 351 352 No reference to the original XML data or original path is saved off by 353 this class. Once the data has been parsed (successfully or not) this 354 original information is discarded. 355 356 Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} 357 method will be called (with its default arguments) against configuration 358 after successfully parsing any passed-in XML. Keep in mind that even if 359 C{validate} is C{False}, it might not be possible to parse the passed-in 360 XML document if lower-level validations fail. 361 362 @note: It is strongly suggested that the C{validate} option always be set 363 to C{True} (the default) unless there is a specific need to read in 364 invalid configuration from disk. 365 366 @param xmlData: XML data representing configuration. 367 @type xmlData: String data. 368 369 @param xmlPath: Path to an XML file on disk. 370 @type xmlPath: Absolute path to a file on disk. 371 372 @param validate: Validate the document after parsing it. 373 @type validate: Boolean true/false. 374 375 @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. 376 @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. 377 @raise ValueError: If the parsed configuration document is not valid. 378 """ 379 self._amazons3 = None 380 self.amazons3 = None 381 if xmlData is not None and xmlPath is not None: 382 raise ValueError("Use either xmlData or xmlPath, but not both.") 383 if xmlData is not None: 384 self._parseXmlData(xmlData) 385 if validate: 386 self.validate() 387 elif xmlPath is not None: 388 with open(xmlPath) as f: 389 xmlData = f.read() 390 self._parseXmlData(xmlData) 391 if validate: 392 self.validate()
    393
    394 - def __repr__(self):
    395 """ 396 Official string representation for class instance. 397 """ 398 return "LocalConfig(%s)" % (self.amazons3)
    399
    400 - def __str__(self):
    401 """ 402 Informal string representation for class instance. 403 """ 404 return self.__repr__()
    405
    406 - def __eq__(self, other):
    407 """Equals operator, iplemented in terms of original Python 2 compare operator.""" 408 return self.__cmp__(other) == 0
    409
    410 - def __lt__(self, other):
    411 """Less-than operator, iplemented in terms of original Python 2 compare operator.""" 412 return self.__cmp__(other) < 0
    413
    414 - def __gt__(self, other):
    415 """Greater-than operator, iplemented in terms of original Python 2 compare operator.""" 416 return self.__cmp__(other) > 0
    417
    418 - def __cmp__(self, other):
    419 """ 420 Original Python 2 comparison operator. 421 Lists within this class are "unordered" for equality comparisons. 422 @param other: Other object to compare to. 423 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 424 """ 425 if other is None: 426 return 1 427 if self.amazons3 != other.amazons3: 428 if self.amazons3 < other.amazons3: 429 return -1 430 else: 431 return 1 432 return 0
    433
    434 - def _setAmazonS3(self, value):
    435 """ 436 Property target used to set the amazons3 configuration value. 437 If not C{None}, the value must be a C{AmazonS3Config} object. 438 @raise ValueError: If the value is not a C{AmazonS3Config} 439 """ 440 if value is None: 441 self._amazons3 = None 442 else: 443 if not isinstance(value, AmazonS3Config): 444 raise ValueError("Value must be a C{AmazonS3Config} object.") 445 self._amazons3 = value
    446
    447 - def _getAmazonS3(self):
    448 """ 449 Property target used to get the amazons3 configuration value. 450 """ 451 return self._amazons3
    452 453 amazons3 = property(_getAmazonS3, _setAmazonS3, None, "AmazonS3 configuration in terms of a C{AmazonS3Config} object.") 454
    455 - def validate(self):
    456 """ 457 Validates configuration represented by the object. 458 459 AmazonS3 configuration must be filled in. Within that, the s3Bucket target must be filled in 460 461 @raise ValueError: If one of the validations fails. 462 """ 463 if self.amazons3 is None: 464 raise ValueError("AmazonS3 section is required.") 465 if self.amazons3.s3Bucket is None: 466 raise ValueError("AmazonS3 s3Bucket must be set.")
    467
    468 - def addConfig(self, xmlDom, parentNode):
    469 """ 470 Adds an <amazons3> configuration section as the next child of a parent. 471 472 Third parties should use this function to write configuration related to 473 this extension. 474 475 We add the following fields to the document:: 476 477 warnMidnite //cb_config/amazons3/warn_midnite 478 s3Bucket //cb_config/amazons3/s3_bucket 479 encryptCommand //cb_config/amazons3/encrypt 480 fullBackupSizeLimit //cb_config/amazons3/full_size_limit 481 incrementalBackupSizeLimit //cb_config/amazons3/incr_size_limit 482 483 @param xmlDom: DOM tree as from C{impl.createDocument()}. 484 @param parentNode: Parent that the section should be appended to. 485 """ 486 if self.amazons3 is not None: 487 sectionNode = addContainerNode(xmlDom, parentNode, "amazons3") 488 addBooleanNode(xmlDom, sectionNode, "warn_midnite", self.amazons3.warnMidnite) 489 addStringNode(xmlDom, sectionNode, "s3_bucket", self.amazons3.s3Bucket) 490 addStringNode(xmlDom, sectionNode, "encrypt", self.amazons3.encryptCommand) 491 addByteQuantityNode(xmlDom, sectionNode, "full_size_limit", self.amazons3.fullBackupSizeLimit) 492 addByteQuantityNode(xmlDom, sectionNode, "incr_size_limit", self.amazons3.incrementalBackupSizeLimit)
    493
    494 - def _parseXmlData(self, xmlData):
    495 """ 496 Internal method to parse an XML string into the object. 497 498 This method parses the XML document into a DOM tree (C{xmlDom}) and then 499 calls a static method to parse the amazons3 configuration section. 500 501 @param xmlData: XML data to be parsed 502 @type xmlData: String data 503 504 @raise ValueError: If the XML cannot be successfully parsed. 505 """ 506 (xmlDom, parentNode) = createInputDom(xmlData) 507 self._amazons3 = LocalConfig._parseAmazonS3(parentNode)
    508 509 @staticmethod
    510 - def _parseAmazonS3(parent):
    511 """ 512 Parses an amazons3 configuration section. 513 514 We read the following individual fields:: 515 516 warnMidnite //cb_config/amazons3/warn_midnite 517 s3Bucket //cb_config/amazons3/s3_bucket 518 encryptCommand //cb_config/amazons3/encrypt 519 fullBackupSizeLimit //cb_config/amazons3/full_size_limit 520 incrementalBackupSizeLimit //cb_config/amazons3/incr_size_limit 521 522 @param parent: Parent node to search beneath. 523 524 @return: C{AmazonS3Config} object or C{None} if the section does not exist. 525 @raise ValueError: If some filled-in value is invalid. 526 """ 527 amazons3 = None 528 section = readFirstChild(parent, "amazons3") 529 if section is not None: 530 amazons3 = AmazonS3Config() 531 amazons3.warnMidnite = readBoolean(section, "warn_midnite") 532 amazons3.s3Bucket = readString(section, "s3_bucket") 533 amazons3.encryptCommand = readString(section, "encrypt") 534 amazons3.fullBackupSizeLimit = readByteQuantity(section, "full_size_limit") 535 amazons3.incrementalBackupSizeLimit = readByteQuantity(section, "incr_size_limit") 536 return amazons3
    537
    538 539 ######################################################################## 540 # Public functions 541 ######################################################################## 542 543 ########################### 544 # executeAction() function 545 ########################### 546 547 -def executeAction(configPath, options, config):
    548 """ 549 Executes the amazons3 backup action. 550 551 @param configPath: Path to configuration file on disk. 552 @type configPath: String representing a path on disk. 553 554 @param options: Program command-line options. 555 @type options: Options object. 556 557 @param config: Program configuration. 558 @type config: Config object. 559 560 @raise ValueError: Under many generic error conditions 561 @raise IOError: If there are I/O problems reading or writing files 562 """ 563 logger.debug("Executing amazons3 extended action.") 564 if not isRunningAsRoot(): 565 logger.error("Error: the amazons3 extended action must be run as root.") 566 raise ValueError("The amazons3 extended action must be run as root.") 567 if sys.platform == "win32": 568 logger.error("Error: the amazons3 extended action is not supported on Windows.") 569 raise ValueError("The amazons3 extended action is not supported on Windows.") 570 if config.options is None or config.stage is None: 571 raise ValueError("Cedar Backup configuration is not properly filled in.") 572 local = LocalConfig(xmlPath=configPath) 573 stagingDirs = _findCorrectDailyDir(options, config, local) 574 _applySizeLimits(options, config, local, stagingDirs) 575 _writeToAmazonS3(config, local, stagingDirs) 576 _writeStoreIndicator(config, stagingDirs) 577 logger.info("Executed the amazons3 extended action successfully.")
    578
    579 580 ######################################################################## 581 # Private utility functions 582 ######################################################################## 583 584 ######################### 585 # _findCorrectDailyDir() 586 ######################### 587 588 -def _findCorrectDailyDir(options, config, local):
    589 """ 590 Finds the correct daily staging directory to be written to Amazon S3. 591 592 This is substantially similar to the same function in store.py. The 593 main difference is that it doesn't rely on store configuration at all. 594 595 @param options: Options object. 596 @param config: Config object. 597 @param local: Local config object. 598 599 @return: Correct staging dir, as a dict mapping directory to date suffix. 600 @raise IOError: If the staging directory cannot be found. 601 """ 602 oneDay = datetime.timedelta(days=1) 603 today = datetime.date.today() 604 yesterday = today - oneDay 605 tomorrow = today + oneDay 606 todayDate = today.strftime(DIR_TIME_FORMAT) 607 yesterdayDate = yesterday.strftime(DIR_TIME_FORMAT) 608 tomorrowDate = tomorrow.strftime(DIR_TIME_FORMAT) 609 todayPath = os.path.join(config.stage.targetDir, todayDate) 610 yesterdayPath = os.path.join(config.stage.targetDir, yesterdayDate) 611 tomorrowPath = os.path.join(config.stage.targetDir, tomorrowDate) 612 todayStageInd = os.path.join(todayPath, STAGE_INDICATOR) 613 yesterdayStageInd = os.path.join(yesterdayPath, STAGE_INDICATOR) 614 tomorrowStageInd = os.path.join(tomorrowPath, STAGE_INDICATOR) 615 todayStoreInd = os.path.join(todayPath, STORE_INDICATOR) 616 yesterdayStoreInd = os.path.join(yesterdayPath, STORE_INDICATOR) 617 tomorrowStoreInd = os.path.join(tomorrowPath, STORE_INDICATOR) 618 if options.full: 619 if os.path.isdir(todayPath) and os.path.exists(todayStageInd): 620 logger.info("Amazon S3 process will use current day's staging directory [%s]", todayPath) 621 return { todayPath:todayDate } 622 raise IOError("Unable to find staging directory to process (only tried today due to full option).") 623 else: 624 if os.path.isdir(todayPath) and os.path.exists(todayStageInd) and not os.path.exists(todayStoreInd): 625 logger.info("Amazon S3 process will use current day's staging directory [%s]", todayPath) 626 return { todayPath:todayDate } 627 elif os.path.isdir(yesterdayPath) 
and os.path.exists(yesterdayStageInd) and not os.path.exists(yesterdayStoreInd): 628 logger.info("Amazon S3 process will use previous day's staging directory [%s]", yesterdayPath) 629 if local.amazons3.warnMidnite: 630 logger.warning("Warning: Amazon S3 process crossed midnite boundary to find data.") 631 return { yesterdayPath:yesterdayDate } 632 elif os.path.isdir(tomorrowPath) and os.path.exists(tomorrowStageInd) and not os.path.exists(tomorrowStoreInd): 633 logger.info("Amazon S3 process will use next day's staging directory [%s]", tomorrowPath) 634 if local.amazons3.warnMidnite: 635 logger.warning("Warning: Amazon S3 process crossed midnite boundary to find data.") 636 return { tomorrowPath:tomorrowDate } 637 raise IOError("Unable to find unused staging directory to process (tried today, yesterday, tomorrow).")
    638
    639 640 ############################## 641 # _applySizeLimits() function 642 ############################## 643 644 -def _applySizeLimits(options, config, local, stagingDirs):
    645 """ 646 Apply size limits, throwing an exception if any limits are exceeded. 647 648 Size limits are optional. If a limit is set to None, it does not apply. 649 The full size limit applies if the full option is set or if today is the 650 start of the week. The incremental size limit applies otherwise. Limits 651 are applied to the total size of all the relevant staging directories. 652 653 @param options: Options object. 654 @param config: Config object. 655 @param local: Local config object. 656 @param stagingDirs: Dictionary mapping directory path to date suffix. 657 658 @raise ValueError: Under many generic error conditions 659 @raise ValueError: If a size limit has been exceeded 660 """ 661 if options.full or isStartOfWeek(config.options.startingDay): 662 logger.debug("Using Amazon S3 size limit for full backups.") 663 limit = local.amazons3.fullBackupSizeLimit 664 else: 665 logger.debug("Using Amazon S3 size limit for incremental backups.") 666 limit = local.amazons3.incrementalBackupSizeLimit 667 if limit is None: 668 logger.debug("No Amazon S3 size limit will be applied.") 669 else: 670 logger.debug("Amazon S3 size limit is: %s", limit) 671 contents = BackupFileList() 672 for stagingDir in stagingDirs: 673 contents.addDirContents(stagingDir) 674 total = contents.totalSize() 675 logger.debug("Amazon S3 backup size is: %s", displayBytes(total)) 676 if total > limit: 677 logger.error("Amazon S3 size limit exceeded: %s > %s", displayBytes(total), limit) 678 raise ValueError("Amazon S3 size limit exceeded: %s > %s" % (displayBytes(total), limit)) 679 else: 680 logger.info("Total size does not exceed Amazon S3 size limit, so backup can continue.")
    681
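Stripped of configuration plumbing, the limit check in C{_applySizeLimits()} reduces to a simple comparison where C{None} means "no limit". A hedged sketch, with plain integers standing in for C{ByteQuantity} (which, per v3.1.0, can be compared directly against numeric values):

```python
def apply_size_limit(total_bytes, limit_bytes):
    """Raise ValueError if a configured limit is exceeded; None means no limit."""
    if limit_bytes is None:
        return  # no limit configured, nothing to enforce
    if total_bytes > limit_bytes:
        raise ValueError("size limit exceeded: %d > %d" % (total_bytes, limit_bytes))

apply_size_limit(100, None)  # no limit applied
apply_size_limit(100, 200)   # under the limit, backup can continue
try:
    apply_size_limit(300, 200)
except ValueError as e:
    print(e)  # -> size limit exceeded: 300 > 200
```

Which of the two configured limits is passed in depends on whether a full backup is in effect (the C{--full} option or the start of the week).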
    682 683 ############################## 684 # _writeToAmazonS3() function 685 ############################## 686 687 -def _writeToAmazonS3(config, local, stagingDirs):
    688 """ 689 Writes the indicated staging directories to an Amazon S3 bucket. 690 691 Each of the staging directories listed in C{stagingDirs} will be written to 692 the configured Amazon S3 bucket from local configuration. The directories 693 will be placed into the image at the root by date, so staging directory 694 C{/opt/stage/2005/02/10} will be placed into the S3 bucket at C{/2005/02/10}. 695 If an encrypt commmand is provided, the files will be encrypted first. 696 697 @param config: Config object. 698 @param local: Local config object. 699 @param stagingDirs: Dictionary mapping directory path to date suffix. 700 701 @raise ValueError: Under many generic error conditions 702 @raise IOError: If there is a problem writing to Amazon S3 703 """ 704 for stagingDir in list(stagingDirs.keys()): 705 logger.debug("Storing stage directory to Amazon S3 [%s].", stagingDir) 706 dateSuffix = stagingDirs[stagingDir] 707 s3BucketUrl = "s3://%s/%s" % (local.amazons3.s3Bucket, dateSuffix) 708 logger.debug("S3 bucket URL is [%s]", s3BucketUrl) 709 _clearExistingBackup(config, s3BucketUrl) 710 if local.amazons3.encryptCommand is None: 711 logger.debug("Encryption is disabled; files will be uploaded in cleartext.") 712 _uploadStagingDir(config, stagingDir, s3BucketUrl) 713 _verifyUpload(config, stagingDir, s3BucketUrl) 714 else: 715 logger.debug("Encryption is enabled; files will be uploaded after being encrypted.") 716 encryptedDir = tempfile.mkdtemp(dir=config.options.workingDir) 717 changeOwnership(encryptedDir, config.options.backupUser, config.options.backupGroup) 718 try: 719 _encryptStagingDir(config, local, stagingDir, encryptedDir) 720 _uploadStagingDir(config, encryptedDir, s3BucketUrl) 721 _verifyUpload(config, encryptedDir, s3BucketUrl) 722 finally: 723 if os.path.exists(encryptedDir): 724 shutil.rmtree(encryptedDir)
    725
    726 727 ################################## 728 # _writeStoreIndicator() function 729 ################################## 730 731 -def _writeStoreIndicator(config, stagingDirs):
    732 """ 733 Writes a store indicator file into staging directories. 734 @param config: Config object. 735 @param stagingDirs: Dictionary mapping directory path to date suffix. 736 """ 737 for stagingDir in list(stagingDirs.keys()): 738 writeIndicatorFile(stagingDir, STORE_INDICATOR, 739 config.options.backupUser, 740 config.options.backupGroup)
    741
    742 743 ################################## 744 # _clearExistingBackup() function 745 ################################## 746 747 -def _clearExistingBackup(config, s3BucketUrl):
    748 """ 749 Clear any existing backup files for an S3 bucket URL. 750 @param config: Config object. 751 @param s3BucketUrl: S3 bucket URL associated with the staging directory 752 """ 753 suCommand = resolveCommand(SU_COMMAND) 754 awsCommand = resolveCommand(AWS_COMMAND) 755 actualCommand = "%s s3 rm --recursive %s/" % (awsCommand[0], s3BucketUrl) 756 result = executeCommand(suCommand, [config.options.backupUser, "-c", actualCommand])[0] 757 if result != 0: 758 raise IOError("Error [%d] calling AWS CLI to clear existing backup for [%s]." % (result, s3BucketUrl)) 759 logger.debug("Completed clearing any existing backup in S3 for [%s]", s3BucketUrl)
    760
    761 762 ############################### 763 # _uploadStagingDir() function 764 ############################### 765 766 -def _uploadStagingDir(config, stagingDir, s3BucketUrl):
    767 """ 768 Upload the contents of a staging directory out to the Amazon S3 cloud. 769 @param config: Config object. 770 @param stagingDir: Staging directory to upload 771 @param s3BucketUrl: S3 bucket URL associated with the staging directory 772 """ 773 suCommand = resolveCommand(SU_COMMAND) 774 awsCommand = resolveCommand(AWS_COMMAND) 775 actualCommand = "%s s3 cp --recursive %s/ %s/" % (awsCommand[0], stagingDir, s3BucketUrl) 776 result = executeCommand(suCommand, [config.options.backupUser, "-c", actualCommand])[0] 777 if result != 0: 778 raise IOError("Error [%d] calling AWS CLI to upload staging directory to [%s]." % (result, s3BucketUrl)) 779 logger.debug("Completed uploading staging dir [%s] to [%s]", stagingDir, s3BucketUrl)
    780
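C{_uploadStagingDir()} builds an C{aws s3 cp} command line and runs it via C{su -c} so it executes as the backup user rather than root. A standalone sketch of that command construction (the helper name is hypothetical; the real code resolves the C{su} and C{aws} paths with C{resolveCommand()} first):

```python
def build_upload_command(backup_user, staging_dir, s3_bucket_url):
    """Return the argv that would be handed to executeCommand():
    the aws invocation is wrapped in 'su <user> -c' so it runs as the backup user."""
    actual_command = "aws s3 cp --recursive %s/ %s/" % (staging_dir, s3_bucket_url)
    return ["su", backup_user, "-c", actual_command]

argv = build_upload_command("backup", "/opt/stage/2005/02/10", "s3://bucket/2005/02/10")
print(argv[-1])  # -> aws s3 cp --recursive /opt/stage/2005/02/10/ s3://bucket/2005/02/10/
```

The same wrapping pattern is used by C{_clearExistingBackup()} (with C{aws s3 rm --recursive}) and C{_verifyUpload()} (with C{aws s3api list-objects}).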
    781 782 ########################### 783 # _verifyUpload() function 784 ########################### 785 786 -def _verifyUpload(config, stagingDir, s3BucketUrl):
    787 """ 788 Verify that a staging directory was properly uploaded to the Amazon S3 cloud. 789 @param config: Config object. 790 @param stagingDir: Staging directory to verify 791 @param s3BucketUrl: S3 bucket URL associated with the staging directory 792 """ 793 (bucket, prefix) = s3BucketUrl.replace("s3://", "").split("/", 1) 794 suCommand = resolveCommand(SU_COMMAND) 795 awsCommand = resolveCommand(AWS_COMMAND) 796 query = "Contents[].{Key: Key, Size: Size}" 797 actualCommand = "%s s3api list-objects --bucket %s --prefix %s --query '%s'" % (awsCommand[0], bucket, prefix, query) 798 (result, data) = executeCommand(suCommand, [config.options.backupUser, "-c", actualCommand], returnOutput=True) 799 if result != 0: 800 raise IOError("Error [%d] calling AWS CLI verify upload to [%s]." % (result, s3BucketUrl)) 801 contents = { } 802 for entry in json.loads("".join(data)): 803 key = entry["Key"].replace(prefix, "") 804 size = int(entry["Size"]) 805 contents[key] = size 806 files = FilesystemList() 807 files.addDirContents(stagingDir) 808 for entry in files: 809 if os.path.isfile(entry): 810 key = entry.replace(stagingDir, "") 811 size = int(os.stat(entry).st_size) 812 if not key in contents: 813 raise IOError("File was apparently not uploaded: [%s]" % entry) 814 else: 815 if size != contents[key]: 816 raise IOError("File size differs [%s], expected %s bytes but got %s bytes" % (entry, size, contents[key])) 817 logger.debug("Completed verifying upload from [%s] to [%s].", stagingDir, s3BucketUrl)
    818
819
820  ################################
821  # _encryptStagingDir() function
822  ################################
823
824  def _encryptStagingDir(config, local, stagingDir, encryptedDir):
    825 """ 826 Encrypt a staging directory, creating a new directory in the process. 827 @param config: Config object. 828 @param stagingDir: Staging directory to use as source 829 @param encryptedDir: Target directory into which encrypted files should be written 830 """ 831 suCommand = resolveCommand(SU_COMMAND) 832 files = FilesystemList() 833 files.addDirContents(stagingDir) 834 for cleartext in files: 835 if os.path.isfile(cleartext): 836 encrypted = "%s%s" % (encryptedDir, cleartext.replace(stagingDir, "")) 837 if int(os.stat(cleartext).st_size) == 0: 838 with open(encrypted, 'a') as f: 839 f.close() # don't bother encrypting empty files 840 else: 841 actualCommand = local.amazons3.encryptCommand.replace("${input}", cleartext).replace("${output}", encrypted) 842 subdir = os.path.dirname(encrypted) 843 if not os.path.isdir(subdir): 844 os.makedirs(subdir) 845 changeOwnership(subdir, config.options.backupUser, config.options.backupGroup) 846 result = executeCommand(suCommand, [config.options.backupUser, "-c", actualCommand])[0] 847 if result != 0: 848 raise IOError("Error [%d] encrypting [%s]." % (result, cleartext)) 849 logger.debug("Completed encrypting staging directory [%s] into [%s]", stagingDir, encryptedDir)
    850
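Note how the configured encrypt command uses ${input} and ${output} placeholders, substituted per file. The substitution itself is a one-liner; a sketch (the gpg command in the usage note is a hypothetical example of a configured template, not mandated by the extension):

```python
def expand_encrypt_command(template, cleartext, encrypted):
    """Substitute ${input}/${output} placeholders in a configured encrypt
    command template, the same substitution the listing above performs."""
    return template.replace("${input}", cleartext).replace("${output}", encrypted)
```

For instance, with a template like `"gpg -e -r backup -o ${output} ${input}"`, expanding against a staged file produces the exact command line handed to `su -c`.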

CedarBackup3-3.1.6/doc/interface/toc-CedarBackup3.extend.postgresql-module.html

postgresql

    Module postgresql


    Classes

    LocalConfig
    PostgresqlConfig

    Functions

    backupDatabase
    executeAction

    Variables

    POSTGRESQLDUMPALL_COMMAND
    POSTGRESQLDUMP_COMMAND
    __package__
    logger

CedarBackup3-3.1.6/doc/interface/CedarBackup3.extend.sysinfo-pysrc.html

CedarBackup3.extend.sysinfo

    Source Code for Module CedarBackup3.extend.sysinfo

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2005,2010,2015 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python 3 (>= 3.4) 
     29  # Project  : Official Cedar Backup Extensions 
     30  # Purpose  : Provides an extension to save off important system recovery information. 
     31  # 
     32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     33   
     34  ######################################################################## 
     35  # Module documentation 
     36  ######################################################################## 
     37   
     38  """ 
     39  Provides an extension to save off important system recovery information. 
     40   
     41  This is a simple Cedar Backup extension used to save off important system 
     42  recovery information.  It saves off three types of information: 
     43   
     44     - Currently-installed Debian packages via C{dpkg --get-selections} 
     45     - Disk partition information via C{fdisk -l} 
     46     - System-wide mounted filesystem contents, via C{ls -laR} 
     47   
     48  The saved-off information is placed into the collect directory and is 
     49  compressed using C{bzip2} to save space. 
     50   
     51  This extension relies on the options and collect configurations in the standard 
     52  Cedar Backup configuration file, but requires no new configuration of its own. 
     53  No public functions other than the action are exposed since all of this is 
     54  pretty simple. 
     55   
     56  @note: If the C{dpkg} or C{fdisk} commands cannot be found in their normal 
     57  locations or executed by the current user, those steps will be skipped and a 
     58  note will be logged at the INFO level. 
     59   
     60  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     61  """ 
     62   
     63  ######################################################################## 
     64  # Imported modules 
     65  ######################################################################## 
     66   
     67  # System modules 
     68  import os 
     69  import logging 
     70  from bz2 import BZ2File 
     71   
     72  # Cedar Backup modules 
     73  from CedarBackup3.util import resolveCommand, executeCommand, changeOwnership 
     74   
     75   
     76  ######################################################################## 
     77  # Module-wide constants and variables 
     78  ######################################################################## 
     79   
     80  logger = logging.getLogger("CedarBackup3.log.extend.sysinfo") 
     81   
     82  DPKG_PATH      = "/usr/bin/dpkg" 
     83  FDISK_PATH     = "/sbin/fdisk" 
     84   
     85  DPKG_COMMAND   = [ DPKG_PATH, "--get-selections", ] 
     86  FDISK_COMMAND  = [ FDISK_PATH, "-l", ] 
     87  LS_COMMAND     = [ "ls", "-laR", "/", ] 
     88   
     89   
     90  ######################################################################## 
     91  # Public functions 
     92  ######################################################################## 
     93   
     94  ########################### 
     95  # executeAction() function 
     96  ########################### 
     97   
    
 98  def executeAction(configPath, options, config):
 99     """
100     Executes the sysinfo backup action.
101
102     @param configPath: Path to configuration file on disk.
103     @type configPath: String representing a path on disk.
104
105     @param options: Program command-line options.
106     @type options: Options object.
107
108     @param config: Program configuration.
109     @type config: Config object.
110
111     @raise ValueError: Under many generic error conditions
112     @raise IOError: If the backup process fails for some reason.
113     """
114     logger.debug("Executing sysinfo extended action.")
115     if config.options is None or config.collect is None:
116        raise ValueError("Cedar Backup configuration is not properly filled in.")
117     _dumpDebianPackages(config.collect.targetDir, config.options.backupUser, config.options.backupGroup)
118     _dumpPartitionTable(config.collect.targetDir, config.options.backupUser, config.options.backupGroup)
119     _dumpFilesystemContents(config.collect.targetDir, config.options.backupUser, config.options.backupGroup)
120     logger.info("Executed the sysinfo extended action successfully.")
    121
122  def _dumpDebianPackages(targetDir, backupUser, backupGroup, compress=True):
123     """
124     Dumps a list of currently installed Debian packages via C{dpkg}.
125     @param targetDir: Directory to write output file into.
126     @param backupUser: User which should own the resulting file.
127     @param backupGroup: Group which should own the resulting file.
128     @param compress: Indicates whether to compress the output file.
129     @raise IOError: If the dump fails for some reason.
130     """
131     if not os.path.exists(DPKG_PATH):
132        logger.info("Not executing Debian package dump since %s doesn't seem to exist.", DPKG_PATH)
133     elif not os.access(DPKG_PATH, os.X_OK):
134        logger.info("Not executing Debian package dump since %s cannot be executed.", DPKG_PATH)
135     else:
136        (outputFile, filename) = _getOutputFile(targetDir, "dpkg-selections", compress)
137        with outputFile:
138           command = resolveCommand(DPKG_COMMAND)
139           result = executeCommand(command, [], returnOutput=False, ignoreStderr=True, doNotLog=True, outputFile=outputFile)[0]
140           if result != 0:
141              raise IOError("Error [%d] executing Debian package dump." % result)
142        if not os.path.exists(filename):
143           raise IOError("File [%s] does not seem to exist after Debian package dump finished." % filename)
144        changeOwnership(filename, backupUser, backupGroup)
    145
146  def _dumpPartitionTable(targetDir, backupUser, backupGroup, compress=True):
147     """
148     Dumps information about the partition table via C{fdisk}.
149     @param targetDir: Directory to write output file into.
150     @param backupUser: User which should own the resulting file.
151     @param backupGroup: Group which should own the resulting file.
152     @param compress: Indicates whether to compress the output file.
153     @raise IOError: If the dump fails for some reason.
154     """
155     if not os.path.exists(FDISK_PATH):
156        logger.info("Not executing partition table dump since %s doesn't seem to exist.", FDISK_PATH)
157     elif not os.access(FDISK_PATH, os.X_OK):
158        logger.info("Not executing partition table dump since %s cannot be executed.", FDISK_PATH)
159     else:
160        (outputFile, filename) = _getOutputFile(targetDir, "fdisk-l", compress)
161        with outputFile:
162           command = resolveCommand(FDISK_COMMAND)
163           result = executeCommand(command, [], returnOutput=False, ignoreStderr=True, outputFile=outputFile)[0]
164           if result != 0:
165              raise IOError("Error [%d] executing partition table dump." % result)
166        if not os.path.exists(filename):
167           raise IOError("File [%s] does not seem to exist after partition table dump finished." % filename)
168        changeOwnership(filename, backupUser, backupGroup)
    169
170  def _dumpFilesystemContents(targetDir, backupUser, backupGroup, compress=True):
171     """
172     Dumps complete listing of filesystem contents via C{ls -laR}.
173     @param targetDir: Directory to write output file into.
174     @param backupUser: User which should own the resulting file.
175     @param backupGroup: Group which should own the resulting file.
176     @param compress: Indicates whether to compress the output file.
177     @raise IOError: If the dump fails for some reason.
178     """
179     (outputFile, filename) = _getOutputFile(targetDir, "ls-laR", compress)
180     with outputFile:
181        # Note: can't count on return status from 'ls', so we don't check it.
182        command = resolveCommand(LS_COMMAND)
183        executeCommand(command, [], returnOutput=False, ignoreStderr=True, doNotLog=True, outputFile=outputFile)
184     if not os.path.exists(filename):
185        raise IOError("File [%s] does not seem to exist after filesystem contents dump finished." % filename)
186     changeOwnership(filename, backupUser, backupGroup)
    187
188  def _getOutputFile(targetDir, name, compress=True):
189     """
190     Opens the output file used for saving a dump to the filesystem.
191
192     The filename will be C{name.txt} (or C{name.txt.bz2} if C{compress} is
193     C{True}), written in the target directory.
194
195     @param targetDir: Target directory to write file in.
196     @param name: Name of the file to create.
197     @param compress: Indicates whether to write compressed output.
198
199     @return: Tuple of (Output file object, filename), file opened in binary mode for use with executeCommand()
200     """
201     filename = os.path.join(targetDir, "%s.txt" % name)
202     if compress:
203        filename = "%s.bz2" % filename
204     logger.debug("Dump file will be [%s].", filename)
205     if compress:
206        outputFile = BZ2File(filename, "wb")
207     else:
208        outputFile = open(filename, "wb")
209     return (outputFile, filename)
    210

CedarBackup3-3.1.6/doc/interface/CedarBackup3.config.ReferenceConfig-class.html

CedarBackup3.config.ReferenceConfig

    Class ReferenceConfig

    source code

    object --+
             |
            ReferenceConfig
    

    Class representing a Cedar Backup reference configuration.

    The reference information is just used for saving off metadata about configuration and exists mostly for backwards-compatibility with Cedar Backup 1.x.

Instance Methods
     
    __init__(self, author=None, revision=None, description=None, generator=None)
    Constructor for the ReferenceConfig class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    __eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    _setAuthor(self, value)
    Property target used to set the author value.
    source code
     
    _getAuthor(self)
    Property target used to get the author value.
    source code
     
    _setRevision(self, value)
    Property target used to set the revision value.
    source code
     
    _getRevision(self)
    Property target used to get the revision value.
    source code
     
    _setDescription(self, value)
    Property target used to set the description value.
    source code
     
    _getDescription(self)
    Property target used to get the description value.
    source code
     
    _setGenerator(self, value)
    Property target used to set the generator value.
    source code
     
    _getGenerator(self)
    Property target used to get the generator value.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties
      author
    Author of the configuration file.
      revision
    Revision of the configuration file.
      description
    Description of the configuration file.
      generator
    Tool that generated the configuration file.

    Inherited from object: __class__

Method Details

    __init__(self, author=None, revision=None, description=None, generator=None)
    (Constructor)

    source code 

    Constructor for the ReferenceConfig class.

    Parameters:
    • author - Author of the configuration file.
    • revision - Revision of the configuration file.
    • description - Description of the configuration file.
    • generator - Tool that generated the configuration file.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setAuthor(self, value)

    source code 

    Property target used to set the author value. No validations.

    _setRevision(self, value)

    source code 

    Property target used to set the revision value. No validations.

    _setDescription(self, value)

    source code 

    Property target used to set the description value. No validations.

    _setGenerator(self, value)

    source code 

    Property target used to set the generator value. No validations.


Property Details

    author

    Author of the configuration file.

    Get Method:
    _getAuthor(self) - Property target used to get the author value.
    Set Method:
    _setAuthor(self, value) - Property target used to set the author value.

    revision

    Revision of the configuration file.

    Get Method:
    _getRevision(self) - Property target used to get the revision value.
    Set Method:
    _setRevision(self, value) - Property target used to set the revision value.

    description

    Description of the configuration file.

    Get Method:
    _getDescription(self) - Property target used to get the description value.
    Set Method:
    _setDescription(self, value) - Property target used to set the description value.

    generator

    Tool that generated the configuration file.

    Get Method:
    _getGenerator(self) - Property target used to get the generator value.
    Set Method:
    _setGenerator(self, value) - Property target used to set the generator value.

CedarBackup3-3.1.6/doc/interface/CedarBackup3.config-pysrc.html

CedarBackup3.config

    Source Code for Module CedarBackup3.config

       1  # -*- coding: iso-8859-1 -*- 
       2  # vim: set ft=python ts=3 sw=3 expandtab: 
       3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
       4  # 
       5  #              C E D A R 
       6  #          S O L U T I O N S       "Software done right." 
       7  #           S O F T W A R E 
       8  # 
       9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      10  # 
      11  # Copyright (c) 2004-2008,2010,2015 Kenneth J. Pronovici. 
      12  # All rights reserved. 
      13  # 
      14  # This program is free software; you can redistribute it and/or 
      15  # modify it under the terms of the GNU General Public License, 
      16  # Version 2, as published by the Free Software Foundation. 
      17  # 
      18  # This program is distributed in the hope that it will be useful, 
      19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
      20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
      21  # 
      22  # Copies of the GNU General Public License are available from 
      23  # the Free Software Foundation website, http://www.gnu.org/. 
      24  # 
      25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      26  # 
      27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
      28  # Language : Python 3 (>= 3.4) 
      29  # Project  : Cedar Backup, release 3 
      30  # Purpose  : Provides configuration-related objects. 
      31  # 
      32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      33   
      34  ######################################################################## 
      35  # Module documentation 
      36  ######################################################################## 
      37   
      38  """ 
      39  Provides configuration-related objects. 
      40   
      41  Summary 
      42  ======= 
      43   
      44     Cedar Backup stores all of its configuration in an XML document typically 
      45     called C{cback3.conf}.  The standard location for this document is in 
      46     C{/etc}, but users can specify a different location if they want to. 
      47   
      48     The C{Config} class is a Python object representation of a Cedar Backup XML 
      49     configuration file.  The representation is two-way: XML data can be used to 
      50     create a C{Config} object, and then changes to the object can be propagated 
      51     back to disk.  A C{Config} object can even be used to create a configuration 
      52     file from scratch programmatically. 
      53   
      54     The C{Config} class is intended to be the only Python-language interface to 
      55     Cedar Backup configuration on disk.  Cedar Backup will use the class as its 
      56     internal representation of configuration, and applications external to Cedar 
      57     Backup itself (such as a hypothetical third-party configuration tool written 
      58     in Python or a third party extension module) should also use the class when 
      59     they need to read and write configuration files. 
      60   
      61  Backwards Compatibility 
      62  ======================= 
      63   
      64     The configuration file format has changed between Cedar Backup 1.x and Cedar 
      65     Backup 2.x.  Any Cedar Backup 1.x configuration file is also a valid Cedar 
      66     Backup 2.x configuration file.  However, it doesn't work to go the other 
      67     direction, as the 2.x configuration files contain additional configuration that 
      68     is not accepted by older versions of the software. 
      69   
      70  XML Configuration Structure 
      71  =========================== 
      72   
      73     A C{Config} object can either be created "empty", or can be created based on 
      74     XML input (either in the form of a string or read in from a file on disk). 
      75     Generally speaking, the XML input I{must} result in a C{Config} object which 
      76     passes the validations laid out below in the I{Validation} section. 
      77   
      78     An XML configuration file is composed of seven sections: 
      79   
      80        - I{reference}: specifies reference information about the file (author, revision, etc) 
      81        - I{extensions}: specifies mappings to Cedar Backup extensions (external code) 
      82        - I{options}: specifies global configuration options 
      83        - I{peers}: specifies the set of peers in a master's backup pool 
      84        - I{collect}: specifies configuration related to the collect action 
      85        - I{stage}: specifies configuration related to the stage action 
      86        - I{store}: specifies configuration related to the store action 
      87        - I{purge}: specifies configuration related to the purge action 
      88   
      89     Each section is represented by an class in this module, and then the overall 
      90     C{Config} class is a composition of the various other classes. 
      91   
      92     Any configuration section that is missing in the XML document (or has not 
      93     been filled into an "empty" document) will just be set to C{None} in the 
      94     object representation.  The same goes for individual fields within each 
      95     configuration section.  Keep in mind that the document might not be 
      96     completely valid if some sections or fields aren't filled in - but that 
      97     won't matter until validation takes place (see the I{Validation} section 
      98     below). 
      99   
     100  Unicode vs. String Data 
     101  ======================= 
     102   
     103     By default, all string data that comes out of XML documents in Python is 
     104     unicode data (i.e. C{u"whatever"}).  This is fine for many things, but when 
     105     it comes to filesystem paths, it can cause us some problems.  We really want 
     106     strings to be encoded in the filesystem encoding rather than being unicode. 
     107     So, most elements in configuration which represent filesystem paths are 
     108     converted to plain strings using L{util.encodePath}.  The main exception is 
     109     the various C{absoluteExcludePath} and C{relativeExcludePath} lists.  These 
     110     are I{not} converted, because they are generally only used for filtering, 
     111     not for filesystem operations. 
     112   
     113  Validation 
     114  ========== 
     115   
     116     There are two main levels of validation in the C{Config} class and its 
     117     children.  The first is field-level validation.  Field-level validation 
     118     comes into play when a given field in an object is assigned to or updated. 
     119     We use Python's C{property} functionality to enforce specific validations on 
     120     field values, and in some places we even use customized list classes to 
     121     enforce validations on list members.  You should expect to catch a 
     122     C{ValueError} exception when making assignments to configuration class 
     123     fields. 
     124   
     125     The second level of validation is post-completion validation.  Certain 
     126     validations don't make sense until a document is fully "complete".  We don't 
     127     want these validations to apply all of the time, because it would make 
     128     building up a document from scratch a real pain.  For instance, we might 
     129     have to do things in the right order to keep from throwing exceptions, etc. 
     130   
     131     All of these post-completion validations are encapsulated in the 
     132     L{Config.validate} method.  This method can be called at any time by a 
     133     client, and will always be called immediately after creating a C{Config} 
     134     object from XML data and before exporting a C{Config} object to XML.  This 
     135     way, we get decent ease-of-use but we also don't accept or emit invalid 
     136     configuration files. 
     137   
     138     The L{Config.validate} implementation actually takes two passes to 
     139     completely validate a configuration document.  The first pass at validation 
     140     is to ensure that the proper sections are filled into the document.  There 
     141     are default requirements, but the caller has the opportunity to override 
     142     these defaults. 
     143   
     144     The second pass at validation ensures that any filled-in section contains 
     145     valid data.  Any section which is not set to C{None} is validated according 
     146     to the rules for that section (see below). 
     147   
     148     I{Reference Validations} 
     149   
     150     No validations. 
     151   
     152     I{Extensions Validations} 
     153   
     154     The list of actions may be either C{None} or an empty list C{[]} if desired. 
     155     Each extended action must include a name, a module and a function.  Then, an 
     156     extended action must include either an index or dependency information. 
     157     Which one is required depends on which order mode is configured. 
     158   
     159     I{Options Validations} 
     160   
     161     All fields must be filled in except the rsh command.  The rcp and rsh 
     162     commands are used as default values for all remote peers.  Remote peers can 
     163     also rely on the backup user as the default remote user name if they choose. 
     164   
     165     I{Peers Validations} 
     166   
     167     Local peers must be completely filled in, including both name and collect 
     168     directory.  Remote peers must also fill in the name and collect directory, 
     169     but can leave the remote user and rcp command unset.  In this case, the 
     170     remote user is assumed to match the backup user from the options section and 
     171     rcp command is taken directly from the options section. 
     172   
     173     I{Collect Validations} 
     174   
     175     The target directory must be filled in.  The collect mode, archive mode and 
     176     ignore file are all optional.  The list of absolute paths to exclude and 
     177     patterns to exclude may be either C{None} or an empty list C{[]} if desired. 
     178   
     179     Each collect directory entry must contain an absolute path to collect, and 
     180     then must either be able to take collect mode, archive mode and ignore file 
     181     configuration from the parent C{CollectConfig} object, or must set each 
     182     value on its own.  The list of absolute paths to exclude, relative paths to 
     183     exclude and patterns to exclude may be either C{None} or an empty list C{[]} 
     184     if desired.  Any list of absolute paths to exclude or patterns to exclude 
     185     will be combined with the same list in the C{CollectConfig} object to make 
     186     the complete list for a given directory. 
     187   
     188     I{Stage Validations} 
     189   
     190     The target directory must be filled in.  There must be at least one peer 
     191     (remote or local) between the two lists of peers.  A list with no entries 
     192     can be either C{None} or an empty list C{[]} if desired. 
     193   
     194     If a set of peers is provided, this configuration completely overrides 
     195     configuration in the peers configuration section, and the same validations 
     196     apply. 
     197   
     198     I{Store Validations} 
     199   
     200     The device type and drive speed are optional, and all other values are 
     201     required (missing booleans will be set to defaults, which is OK). 
     202   
     203     The image writer functionality in the C{writer} module is supposed to be 
     204     able to handle a device speed of C{None}.  Any caller which needs a "real" 
   (non-C{None}) value for the device type can use C{DEFAULT_DEVICE_TYPE},
   which is guaranteed to be sensible.

   I{Purge Validations}

   The list of purge directories may be either C{None} or an empty list C{[]}
   if desired.  All purge directories must contain a path and a retain days
   value.

@sort: ActionDependencies, ActionHook, PreActionHook, PostActionHook,
       ExtendedAction, CommandOverride, CollectFile, CollectDir, PurgeDir, LocalPeer,
       RemotePeer, ReferenceConfig, ExtensionsConfig, OptionsConfig, PeersConfig,
       CollectConfig, StageConfig, StoreConfig, PurgeConfig, Config,
       DEFAULT_DEVICE_TYPE, DEFAULT_MEDIA_TYPE,
       VALID_DEVICE_TYPES, VALID_MEDIA_TYPES,
       VALID_COLLECT_MODES, VALID_ARCHIVE_MODES,
       VALID_ORDER_MODES

@var DEFAULT_DEVICE_TYPE: The default device type.
@var DEFAULT_MEDIA_TYPE: The default media type.
@var VALID_DEVICE_TYPES: List of valid device types.
@var VALID_MEDIA_TYPES: List of valid media types.
@var VALID_COLLECT_MODES: List of valid collect modes.
@var VALID_COMPRESS_MODES: List of valid compress modes.
@var VALID_ARCHIVE_MODES: List of valid archive modes.
@var VALID_ORDER_MODES: List of valid extension order modes.

@author: Kenneth J. Pronovici <pronovic@ieee.org>
"""

########################################################################
# Imported modules
########################################################################

# System modules
import os
import re
import logging
from functools import total_ordering

# Cedar Backup modules
from CedarBackup3.writers.util import validateScsiId, validateDriveSpeed
from CedarBackup3.util import UnorderedList, AbsolutePathList, ObjectTypeList, parseCommaSeparatedString
from CedarBackup3.util import RegexMatchList, RegexList, encodePath, checkUnique
from CedarBackup3.util import convertSize, displayBytes, UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES
from CedarBackup3.xmlutil import isElement, readChildren, readFirstChild
from CedarBackup3.xmlutil import readStringList, readString, readInteger, readBoolean
from CedarBackup3.xmlutil import addContainerNode, addStringNode, addIntegerNode, addBooleanNode
from CedarBackup3.xmlutil import createInputDom, createOutputDom, serializeDom


########################################################################
# Module-wide constants and variables
########################################################################

logger = logging.getLogger("CedarBackup3.log.config")

DEFAULT_DEVICE_TYPE   = "cdwriter"
DEFAULT_MEDIA_TYPE    = "cdrw-74"

VALID_DEVICE_TYPES    = [ "cdwriter", "dvdwriter", ]
VALID_CD_MEDIA_TYPES  = [ "cdr-74", "cdrw-74", "cdr-80", "cdrw-80", ]
VALID_DVD_MEDIA_TYPES = [ "dvd+r", "dvd+rw", ]
VALID_MEDIA_TYPES     = VALID_CD_MEDIA_TYPES + VALID_DVD_MEDIA_TYPES
VALID_COLLECT_MODES   = [ "daily", "weekly", "incr", ]
VALID_ARCHIVE_MODES   = [ "tar", "targz", "tarbz2", ]
VALID_COMPRESS_MODES  = [ "none", "gzip", "bzip2", ]
VALID_ORDER_MODES     = [ "index", "dependency", ]
VALID_BLANK_MODES     = [ "daily", "weekly", ]
VALID_BYTE_UNITS      = [ UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES, ]
VALID_FAILURE_MODES   = [ "none", "all", "daily", "weekly", ]

REWRITABLE_MEDIA_TYPES = [ "cdrw-74", "cdrw-80", "dvd+rw", ]

ACTION_NAME_REGEX     = r"^[a-z0-9]*$"
    
########################################################################
# ByteQuantity class definition
########################################################################

@total_ordering
class ByteQuantity(object):

   """
   Class representing a byte quantity.

   A byte quantity has both a quantity and a byte-related unit.  Units are
   maintained using the constants from util.py.  If no units are provided,
   C{UNIT_BYTES} is assumed.

   The quantity is maintained internally as a string so that issues of
   precision can be avoided.  It really isn't possible to store a floating
   point number here while being able to losslessly translate back and forth
   between XML and object representations.  (Perhaps the Python 2.4 Decimal
   class would have been an option, but I originally wanted to stay compatible
   with Python 2.3.)

   Even though the quantity is maintained as a string, the string must be a
   valid positive floating point number.  Technically, any floating point
   string format supported by Python is allowable.  However, it does not make
   sense to have a negative quantity of bytes in this context.

   @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__,
          quantity, units, bytes
   """

   def __init__(self, quantity=None, units=None):
      """
      Constructor for the C{ByteQuantity} class.

      @param quantity: Quantity of bytes, something interpretable as a float
      @param units: Unit of bytes, one of VALID_BYTE_UNITS

      @raise ValueError: If one of the values is invalid.
      """
      self._quantity = None
      self._units = None
      self.quantity = quantity
      self.units = units

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "ByteQuantity(%s, %s)" % (self.quantity, self.units)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return "%s" % displayBytes(self.bytes)

   def __eq__(self, other):
      """Equals operator, implemented in terms of Python 2-style compare operator."""
      return self.__cmp__(other) == 0

   def __lt__(self, other):
      """Less-than operator, implemented in terms of Python 2-style compare operator."""
      return self.__cmp__(other) < 0

   def __gt__(self, other):
      """Greater-than operator, implemented in terms of Python 2-style compare operator."""
      return self.__cmp__(other) > 0

   def __cmp__(self, other):
      """
      Python 2-style comparison operator.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      elif isinstance(other, ByteQuantity):
         if self.bytes != other.bytes:
            if self.bytes < other.bytes:
               return -1
            else:
               return 1
         return 0
      else:
         return self.__cmp__(ByteQuantity(other, UNIT_BYTES))  # will fail if other can't be converted to float

   def _setQuantity(self, value):
      """
      Property target used to set the quantity.
      The value must be interpretable as a float if it is not None.
      @raise ValueError: If the value is an empty string.
      @raise ValueError: If the value is not a valid floating point number
      @raise ValueError: If the value is less than zero
      """
      if value is None:
         self._quantity = None
      else:
         try:
            floatValue = float(value)  # allow integer, float, string, etc.
         except (TypeError, ValueError):
            raise ValueError("Quantity must be interpretable as a float")
         if floatValue < 0.0:
            raise ValueError("Quantity cannot be negative.")
         self._quantity = str(value)  # keep around string

   def _getQuantity(self):
      """
      Property target used to get the quantity.
      """
      return self._quantity

   def _setUnits(self, value):
      """
      Property target used to set the units value.
      If not C{None}, the units value must be one of the values in L{VALID_BYTE_UNITS}.
      @raise ValueError: If the value is not valid.
      """
      if value is None:
         self._units = UNIT_BYTES
      else:
         if value not in VALID_BYTE_UNITS:
            raise ValueError("Units value must be one of %s." % VALID_BYTE_UNITS)
         self._units = value

   def _getUnits(self):
      """
      Property target used to get the units value.
      """
      return self._units

   def _getBytes(self):
      """
      Property target used to return the byte quantity as a floating point number.
      If there is no quantity set, then a value of 0.0 is returned.
      """
      if self.quantity is not None and self.units is not None:
         return convertSize(self.quantity, self.units, UNIT_BYTES)
      return 0.0

   quantity = property(_getQuantity, _setQuantity, None, doc="Byte quantity, as a string")
   units = property(_getUnits, _setUnits, None, doc="Units for byte quantity, for instance UNIT_BYTES")
   bytes = property(_getBytes, None, None, doc="Byte quantity, as a floating point number.")
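# The class above emulates Python 2's __cmp__ protocol under Python 3 by pairing
# @total_ordering with hand-written __eq__/__lt__.  The sketch below is a minimal,
# self-contained illustration of that bridge; the Quantity class and its numBytes
# parameter are invented for the example and are not part of Cedar Backup.

```python
from functools import total_ordering

@total_ordering
class Quantity(object):
    """Illustrative stand-in for ByteQuantity's comparison bridge."""

    def __init__(self, numBytes):
        self.bytes = float(numBytes)

    def _cmp(self, other):
        # Python 2-style -1/0/1 compare: None always sorts first, and plain
        # numbers are coerced to Quantity, mirroring ByteQuantity's behavior.
        if other is None:
            return 1
        if not isinstance(other, Quantity):
            other = Quantity(other)
        return (self.bytes > other.bytes) - (self.bytes < other.bytes)

    def __eq__(self, other):
        return self._cmp(other) == 0

    def __lt__(self, other):
        return self._cmp(other) < 0
```

# With only __eq__ and __lt__ defined, @total_ordering derives __le__, __gt__,
# and __ge__, which is why ByteQuantity can get away with the small set above.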


########################################################################
# ActionDependencies class definition
########################################################################

@total_ordering
class ActionDependencies(object):

   """
   Class representing dependencies associated with an extended action.

   Execution ordering for extended actions is done in one of two ways: either by using
   index values (lower index gets run first) or by having the extended action specify
   dependencies in terms of other named actions.  This class encapsulates the dependency
   information for an extended action.

   The following restrictions exist on data in this class:

      - Any action name must be a non-empty string matching C{ACTION_NAME_REGEX}

   @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__,
          beforeList, afterList
   """

   def __init__(self, beforeList=None, afterList=None):
      """
      Constructor for the C{ActionDependencies} class.

      @param beforeList: List of named actions that this action must be run before
      @param afterList: List of named actions that this action must be run after

      @raise ValueError: If one of the values is invalid.
      """
      self._beforeList = None
      self._afterList = None
      self.beforeList = beforeList
      self.afterList = afterList

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "ActionDependencies(%s, %s)" % (self.beforeList, self.afterList)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __eq__(self, other):
      """Equals operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) == 0

   def __lt__(self, other):
      """Less-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) < 0

   def __gt__(self, other):
      """Greater-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) > 0

   def __cmp__(self, other):
      """
      Original Python 2 comparison operator.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.beforeList != other.beforeList:
         if self.beforeList < other.beforeList:
            return -1
         else:
            return 1
      if self.afterList != other.afterList:
         if self.afterList < other.afterList:
            return -1
         else:
            return 1
      return 0

   def _setBeforeList(self, value):
      """
      Property target used to set the "run before" list.
      Either the value must be C{None} or each element must be a string matching ACTION_NAME_REGEX.
      @raise ValueError: If the value does not match the regular expression.
      """
      if value is None:
         self._beforeList = None
      else:
         saved = self._beforeList
         try:
            self._beforeList = RegexMatchList(ACTION_NAME_REGEX, emptyAllowed=False, prefix="Action name")
            self._beforeList.extend(value)
         except Exception as e:
            self._beforeList = saved
            raise e

   def _getBeforeList(self):
      """
      Property target used to get the "run before" list.
      """
      return self._beforeList

   def _setAfterList(self, value):
      """
      Property target used to set the "run after" list.
      Either the value must be C{None} or each element must be a string matching ACTION_NAME_REGEX.
      @raise ValueError: If the value does not match the regular expression.
      """
      if value is None:
         self._afterList = None
      else:
         saved = self._afterList
         try:
            self._afterList = RegexMatchList(ACTION_NAME_REGEX, emptyAllowed=False, prefix="Action name")
            self._afterList.extend(value)
         except Exception as e:
            self._afterList = saved
            raise e

   def _getAfterList(self):
      """
      Property target used to get the "run after" list.
      """
      return self._afterList

   beforeList = property(_getBeforeList, _setBeforeList, None, "List of named actions that this action must be run before.")
   afterList = property(_getAfterList, _setAfterList, None, "List of named actions that this action must be run after.")
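# Dependency-based ordering of the kind beforeList/afterList describes ultimately
# reduces to a topological sort over named actions.  The sketch below illustrates
# that idea with the standard library's graphlib (Python 3.9+); the action names
# and the "after" mapping are invented for illustration and are not Cedar
# Backup's actual dependency resolver.

```python
from graphlib import TopologicalSorter

# Map each action to the actions it must run after (its "afterList").
after = {
    "collect": [],
    "encrypt": ["collect"],  # hypothetical extension action
    "store": ["encrypt"],
}

# static_order() yields each action only after all of its predecessors.
order = list(TopologicalSorter(after).static_order())
```

# A cycle in the dependency graph (action A after B, B after A) would raise
# graphlib.CycleError here, which is the failure mode a resolver must handle.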


########################################################################
# ActionHook class definition
########################################################################

@total_ordering
class ActionHook(object):

   """
   Class representing a hook associated with an action.

   A hook associated with an action is a shell command to be executed either
   before or after a named action is executed.

   The following restrictions exist on data in this class:

      - The action name must be a non-empty string matching C{ACTION_NAME_REGEX}
      - The shell command must be a non-empty string.

   The internal C{before} and C{after} instance variables are always set to
   False in this parent class.

   @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, action,
          command, before, after
   """

   def __init__(self, action=None, command=None):
      """
      Constructor for the C{ActionHook} class.

      @param action: Action this hook is associated with
      @param command: Shell command to execute

      @raise ValueError: If one of the values is invalid.
      """
      self._action = None
      self._command = None
      self._before = False
      self._after = False
      self.action = action
      self.command = command

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "ActionHook(%s, %s, %s, %s)" % (self.action, self.command, self.before, self.after)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __eq__(self, other):
      """Equals operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) == 0

   def __lt__(self, other):
      """Less-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) < 0

   def __gt__(self, other):
      """Greater-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) > 0

   def __cmp__(self, other):
      """
      Original Python 2 comparison operator.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.action != other.action:
         if str(self.action or "") < str(other.action or ""):
            return -1
         else:
            return 1
      if self.command != other.command:
         if str(self.command or "") < str(other.command or ""):
            return -1
         else:
            return 1
      if self.before != other.before:
         if self.before < other.before:
            return -1
         else:
            return 1
      if self.after != other.after:
         if self.after < other.after:
            return -1
         else:
            return 1
      return 0

   def _setAction(self, value):
      """
      Property target used to set the action name.
      The value must be a non-empty string if it is not C{None}.
      It must also consist only of lower-case letters and digits.
      @raise ValueError: If the value is an empty string.
      """
      pattern = re.compile(ACTION_NAME_REGEX)
      if value is not None:
         if len(value) < 1:
            raise ValueError("The action name must be a non-empty string.")
         if not pattern.search(value):
            raise ValueError("The action name must consist of only lower-case letters and digits.")
      self._action = value

   def _getAction(self):
      """
      Property target used to get the action name.
      """
      return self._action

   def _setCommand(self, value):
      """
      Property target used to set the command.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The command must be a non-empty string.")
      self._command = value

   def _getCommand(self):
      """
      Property target used to get the command.
      """
      return self._command

   def _getBefore(self):
      """
      Property target used to get the before flag.
      """
      return self._before

   def _getAfter(self):
      """
      Property target used to get the after flag.
      """
      return self._after

   action = property(_getAction, _setAction, None, "Action this hook is associated with.")
   command = property(_getCommand, _setCommand, None, "Shell command to execute.")
   before = property(_getBefore, None, None, "Indicates whether command should be executed before action.")
   after = property(_getAfter, None, None, "Indicates whether command should be executed after action.")

@total_ordering
class PreActionHook(ActionHook):

   """
   Class representing a pre-action hook associated with an action.

   A hook associated with an action is a shell command to be executed either
   before or after a named action is executed.  In this case, a pre-action hook
   is executed before the named action.

   The following restrictions exist on data in this class:

      - The action name must be a non-empty string consisting of lower-case letters and digits.
      - The shell command must be a non-empty string.

   The internal C{before} instance variable is always set to True in this
   class.

   @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, action,
          command, before, after
   """

   def __init__(self, action=None, command=None):
      """
      Constructor for the C{PreActionHook} class.

      @param action: Action this hook is associated with
      @param command: Shell command to execute

      @raise ValueError: If one of the values is invalid.
      """
      ActionHook.__init__(self, action, command)
      self._before = True

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "PreActionHook(%s, %s, %s, %s)" % (self.action, self.command, self.before, self.after)

@total_ordering
class PostActionHook(ActionHook):

   """
   Class representing a post-action hook associated with an action.

   A hook associated with an action is a shell command to be executed either
   before or after a named action is executed.  In this case, a post-action hook
   is executed after the named action.

   The following restrictions exist on data in this class:

      - The action name must be a non-empty string consisting of lower-case letters and digits.
      - The shell command must be a non-empty string.

   The internal C{after} instance variable is always set to True in this
   class.

   @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, action,
          command, before, after
   """

   def __init__(self, action=None, command=None):
      """
      Constructor for the C{PostActionHook} class.

      @param action: Action this hook is associated with
      @param command: Shell command to execute

      @raise ValueError: If one of the values is invalid.
      """
      ActionHook.__init__(self, action, command)
      self._after = True

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "PostActionHook(%s, %s, %s, %s)" % (self.action, self.command, self.before, self.after)
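# To make the before/after flags concrete, the sketch below shows how pre- and
# post-hooks bracket a named action.  The runWithHooks dispatcher and the plain
# dictionaries are invented for illustration: Cedar Backup's real hook execution
# lives in its action machinery, not here.

```python
def runWithHooks(actionName, hooks, action):
    """Run pre-hooks, then the action, then post-hooks; return an ordered trace."""
    trace = []
    for hook in hooks:
        # Hooks flagged "before" fire ahead of the action.
        if hook["action"] == actionName and hook["before"]:
            trace.append("pre:%s" % hook["command"])
    trace.append("action:%s" % actionName)
    action()
    for hook in hooks:
        # Hooks flagged "after" fire once the action completes.
        if hook["action"] == actionName and hook["after"]:
            trace.append("post:%s" % hook["command"])
    return trace

# Hypothetical hooks mirroring PreActionHook/PostActionHook flag settings.
hooks = [
    {"action": "collect", "command": "mount /backup", "before": True, "after": False},
    {"action": "collect", "command": "umount /backup", "before": False, "after": True},
]
trace = runWithHooks("collect", hooks, lambda: None)
```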


########################################################################
# BlankBehavior class definition
########################################################################

@total_ordering
class BlankBehavior(object):

   """
   Class representing optimized store-action media blanking behavior.

   The following restrictions exist on data in this class:

      - The blanking mode must be one of the values in L{VALID_BLANK_MODES}
      - The blanking factor must be a positive floating point number

   @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__,
          blankMode, blankFactor
   """

   def __init__(self, blankMode=None, blankFactor=None):
      """
      Constructor for the C{BlankBehavior} class.

      @param blankMode: Blanking mode
      @param blankFactor: Blanking factor

      @raise ValueError: If one of the values is invalid.
      """
      self._blankMode = None
      self._blankFactor = None
      self.blankMode = blankMode
      self.blankFactor = blankFactor

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "BlankBehavior(%s, %s)" % (self.blankMode, self.blankFactor)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __eq__(self, other):
      """Equals operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) == 0

   def __lt__(self, other):
      """Less-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) < 0

   def __gt__(self, other):
      """Greater-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) > 0

   def __cmp__(self, other):
      """
      Original Python 2 comparison operator.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.blankMode != other.blankMode:
         if str(self.blankMode or "") < str(other.blankMode or ""):
            return -1
         else:
            return 1
      if self.blankFactor != other.blankFactor:
         if float(self.blankFactor or 0.0) < float(other.blankFactor or 0.0):
            return -1
         else:
            return 1
      return 0

   def _setBlankMode(self, value):
      """
      Property target used to set the blanking mode.
      The value must be one of L{VALID_BLANK_MODES}.
      @raise ValueError: If the value is not valid.
      """
      if value is not None:
         if value not in VALID_BLANK_MODES:
            raise ValueError("Blanking mode must be one of %s." % VALID_BLANK_MODES)
      self._blankMode = value

   def _getBlankMode(self):
      """
      Property target used to get the blanking mode.
      """
      return self._blankMode

   def _setBlankFactor(self, value):
      """
      Property target used to set the blanking factor.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      @raise ValueError: If the value is not a valid floating point number
      @raise ValueError: If the value is less than zero
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("Blanking factor must be a non-empty string.")
         floatValue = float(value)
         if floatValue < 0.0:
            raise ValueError("Blanking factor cannot be negative.")
      self._blankFactor = value  # keep around string

   def _getBlankFactor(self):
      """
      Property target used to get the blanking factor.
      """
      return self._blankFactor

   blankMode = property(_getBlankMode, _setBlankMode, None, "Blanking mode")
   blankFactor = property(_getBlankFactor, _setBlankFactor, None, "Blanking factor")


########################################################################
# ExtendedAction class definition
########################################################################

@total_ordering
class ExtendedAction(object):

   """
   Class representing an extended action.

   Essentially, an extended action needs to allow the following to happen::

      exec("from %s import %s" % (module, function))
      exec("%s(action, configPath)" % function)

   The following restrictions exist on data in this class:

      - The action name must be a non-empty string consisting of lower-case letters and digits.
      - The module must be a non-empty string and a valid Python identifier.
      - The function must be a non-empty string and a valid Python identifier.
      - If set, the index must be a non-negative integer.
      - If set, the dependencies attribute must be an C{ActionDependencies} object.

   @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, name,
          module, function, index, dependencies
   """

   def __init__(self, name=None, module=None, function=None, index=None, dependencies=None):
      """
      Constructor for the C{ExtendedAction} class.

      @param name: Name of the extended action
      @param module: Name of the module containing the extended action function
      @param function: Name of the extended action function
      @param index: Index of action, used for execution ordering
      @param dependencies: Dependencies for action, used for execution ordering

      @raise ValueError: If one of the values is invalid.
      """
      self._name = None
      self._module = None
      self._function = None
      self._index = None
      self._dependencies = None
      self.name = name
      self.module = module
      self.function = function
      self.index = index
      self.dependencies = dependencies

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "ExtendedAction(%s, %s, %s, %s, %s)" % (self.name, self.module, self.function, self.index, self.dependencies)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __eq__(self, other):
      """Equals operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) == 0

   def __lt__(self, other):
      """Less-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) < 0

   def __gt__(self, other):
      """Greater-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) > 0

   def __cmp__(self, other):
      """
      Original Python 2 comparison operator.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.name != other.name:
         if str(self.name or "") < str(other.name or ""):
            return -1
         else:
            return 1
      if self.module != other.module:
         if str(self.module or "") < str(other.module or ""):
            return -1
         else:
            return 1
      if self.function != other.function:
         if str(self.function or "") < str(other.function or ""):
            return -1
         else:
            return 1
      if self.index != other.index:
         if int(self.index or 0) < int(other.index or 0):
            return -1
         else:
            return 1
      if self.dependencies != other.dependencies:
         if self.dependencies < other.dependencies:
            return -1
         else:
            return 1
      return 0

   def _setName(self, value):
      """
      Property target used to set the action name.
      The value must be a non-empty string if it is not C{None}.
      It must also consist only of lower-case letters and digits.
      @raise ValueError: If the value is an empty string.
      """
      pattern = re.compile(ACTION_NAME_REGEX)
      if value is not None:
         if len(value) < 1:
            raise ValueError("The action name must be a non-empty string.")
         if not pattern.search(value):
            raise ValueError("The action name must consist of only lower-case letters and digits.")
      self._name = value

   def _getName(self):
      """
      Property target used to get the action name.
      """
      return self._name

   def _setModule(self, value):
      """
      Property target used to set the module name.
      The value must be a non-empty string if it is not C{None}.
      It must also be a valid Python identifier.
      @raise ValueError: If the value is an empty string.
      """
      pattern = re.compile(r"^([A-Za-z_][A-Za-z0-9_]*)(\.[A-Za-z_][A-Za-z0-9_]*)*$")
      if value is not None:
         if len(value) < 1:
            raise ValueError("The module name must be a non-empty string.")
         if not pattern.search(value):
            raise ValueError("The module name must be a valid Python identifier.")
      self._module = value

   def _getModule(self):
      """
      Property target used to get the module name.
      """
      return self._module

   def _setFunction(self, value):
      """
      Property target used to set the function name.
      The value must be a non-empty string if it is not C{None}.
      It must also be a valid Python identifier.
      @raise ValueError: If the value is an empty string.
      """
      pattern = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")
      if value is not None:
         if len(value) < 1:
            raise ValueError("The function name must be a non-empty string.")
         if not pattern.search(value):
            raise ValueError("The function name must be a valid Python identifier.")
      self._function = value

   def _getFunction(self):
      """
      Property target used to get the function name.
      """
      return self._function

   def _setIndex(self, value):
      """
      Property target used to set the action index.
      The value must be an integer >= 0.
      @raise ValueError: If the value is not valid.
      """
      if value is None:
         self._index = None
      else:
         try:
            value = int(value)
         except (TypeError, ValueError):
            raise ValueError("Action index value must be an integer >= 0.")
         if value < 0:
            raise ValueError("Action index value must be an integer >= 0.")
         self._index = value

   def _getIndex(self):
      """
      Property target used to get the action index.
      """
      return self._index

   def _setDependencies(self, value):
      """
      Property target used to set the action dependencies information.
      If not C{None}, the value must be an C{ActionDependencies} object.
      @raise ValueError: If the value is not an C{ActionDependencies} object.
      """
      if value is None:
         self._dependencies = None
      else:
         if not isinstance(value, ActionDependencies):
            raise ValueError("Value must be an C{ActionDependencies} object.")
         self._dependencies = value

   def _getDependencies(self):
      """
      Property target used to get action dependencies information.
      """
      return self._dependencies

   name = property(_getName, _setName, None, "Name of the extended action.")
   module = property(_getModule, _setModule, None, "Name of the module containing the extended action function.")
   function = property(_getFunction, _setFunction, None, "Name of the extended action function.")
   index = property(_getIndex, _setIndex, None, "Index of action, used for execution ordering.")
   dependencies = property(_getDependencies, _setDependencies, None, "Dependencies for action, used for execution ordering.")
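# The exec() idiom in the ExtendedAction docstring can be expressed more safely
# with importlib, which avoids evaluating strings as code.  This is a hedged
# sketch of the equivalent lookup, not Cedar Backup's actual extension loader;
# loadActionFunction is an invented name, and math.sqrt stands in for a real
# extension module's entry point.

```python
import importlib

def loadActionFunction(moduleName, functionName):
    """Resolve moduleName.functionName to a callable, like the exec() pattern."""
    module = importlib.import_module(moduleName)   # "from <module> import ..." step
    return getattr(module, functionName)           # "<function>(...)" lookup step

# Demonstrate with a stdlib module standing in for an extension module.
func = loadActionFunction("math", "sqrt")
```

# A real loader would wrap import_module/getattr failures and report them as
# configuration errors, since both raise if the name does not resolve.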
########################################################################
# CommandOverride class definition
########################################################################

@total_ordering
class CommandOverride(object):

   """
   Class representing a piece of Cedar Backup command override configuration.

   The following restrictions exist on data in this class:

      - The absolute path must be absolute

   @note: Lists within this class are "unordered" for equality comparisons.

   @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__,
          command, absolutePath
   """

   def __init__(self, command=None, absolutePath=None):
      """
      Constructor for the C{CommandOverride} class.

      @param command: Name of command to be overridden.
      @param absolutePath: Absolute path of the overridden command.

      @raise ValueError: If one of the values is invalid.
      """
      self._command = None
      self._absolutePath = None
      self.command = command
      self.absolutePath = absolutePath

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "CommandOverride(%s, %s)" % (self.command, self.absolutePath)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __eq__(self, other):
      """Equals operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) == 0

   def __lt__(self, other):
      """Less-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) < 0

   def __gt__(self, other):
      """Greater-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) > 0

   def __cmp__(self, other):
      """
      Original Python 2 comparison operator.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.command != other.command:
         if str(self.command or "") < str(other.command or ""):
            return -1
         else:
            return 1
      if self.absolutePath != other.absolutePath:
         if str(self.absolutePath or "") < str(other.absolutePath or ""):
            return -1
         else:
            return 1
      return 0

   def _setCommand(self, value):
      """
      Property target used to set the command.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The command must be a non-empty string.")
      self._command = value

   def _getCommand(self):
      """
      Property target used to get the command.
      """
      return self._command

   def _setAbsolutePath(self, value):
      """
      Property target used to set the absolute path.
      The value must be an absolute path if it is not C{None}.
      It does not have to exist on disk at the time of assignment.
      @raise ValueError: If the value is not an absolute path.
      @raise ValueError: If the value cannot be encoded properly.
      """
      if value is not None:
         if not os.path.isabs(value):
            raise ValueError("Not an absolute path: [%s]" % value)
      self._absolutePath = encodePath(value)

   def _getAbsolutePath(self):
      """
      Property target used to get the absolute path.
      """
      return self._absolutePath

   command = property(_getCommand, _setCommand, None, doc="Name of command to be overridden.")
   absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, doc="Absolute path of the overridden command.")

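Each config class keeps its original Python 2 `__cmp__` and layers `__eq__`, `__lt__`, and `__gt__` on top of it, letting `@total_ordering` derive the remaining rich comparisons. A minimal standalone sketch of that shim, including the None-safe `str(x or "")` field comparison (the `Pair` class and its fields are illustrative, not part of Cedar Backup):

```python
from functools import total_ordering

@total_ordering
class Pair(object):
   """Illustrative sketch of the __cmp__ shim used by the config classes."""

   def __init__(self, a=None, b=None):
      self.a = a
      self.b = b

   def __eq__(self, other):
      return self.__cmp__(other) == 0

   def __lt__(self, other):
      return self.__cmp__(other) < 0

   def __cmp__(self, other):
      # None sorts before any instance; None fields compare as "" via str(x or "")
      if other is None:
         return 1
      if self.a != other.a:
         return -1 if str(self.a or "") < str(other.a or "") else 1
      if self.b != other.b:
         return -1 if str(self.b or "") < str(other.b or "") else 1
      return 0
```

`@total_ordering` needs only `__eq__` plus one ordering method to synthesize the rest, which is why the explicit `__gt__` in the classes above is redundant but harmless.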
########################################################################
# CollectFile class definition
########################################################################

@total_ordering
class CollectFile(object):

   """
   Class representing a Cedar Backup collect file.

   The following restrictions exist on data in this class:

      - Absolute paths must be absolute
      - The collect mode must be one of the values in L{VALID_COLLECT_MODES}.
      - The archive mode must be one of the values in L{VALID_ARCHIVE_MODES}.

   @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__,
          absolutePath, collectMode, archiveMode
   """

   def __init__(self, absolutePath=None, collectMode=None, archiveMode=None):
      """
      Constructor for the C{CollectFile} class.

      @param absolutePath: Absolute path of the file to collect.
      @param collectMode: Overridden collect mode for this file.
      @param archiveMode: Overridden archive mode for this file.

      @raise ValueError: If one of the values is invalid.
      """
      self._absolutePath = None
      self._collectMode = None
      self._archiveMode = None
      self.absolutePath = absolutePath
      self.collectMode = collectMode
      self.archiveMode = archiveMode

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "CollectFile(%s, %s, %s)" % (self.absolutePath, self.collectMode, self.archiveMode)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __eq__(self, other):
      """Equals operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) == 0

   def __lt__(self, other):
      """Less-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) < 0

   def __gt__(self, other):
      """Greater-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) > 0

   def __cmp__(self, other):
      """
      Original Python 2 comparison operator.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.absolutePath != other.absolutePath:
         if str(self.absolutePath or "") < str(other.absolutePath or ""):
            return -1
         else:
            return 1
      if self.collectMode != other.collectMode:
         if str(self.collectMode or "") < str(other.collectMode or ""):
            return -1
         else:
            return 1
      if self.archiveMode != other.archiveMode:
         if str(self.archiveMode or "") < str(other.archiveMode or ""):
            return -1
         else:
            return 1
      return 0

   def _setAbsolutePath(self, value):
      """
      Property target used to set the absolute path.
      The value must be an absolute path if it is not C{None}.
      It does not have to exist on disk at the time of assignment.
      @raise ValueError: If the value is not an absolute path.
      @raise ValueError: If the value cannot be encoded properly.
      """
      if value is not None:
         if not os.path.isabs(value):
            raise ValueError("Not an absolute path: [%s]" % value)
      self._absolutePath = encodePath(value)

   def _getAbsolutePath(self):
      """
      Property target used to get the absolute path.
      """
      return self._absolutePath

   def _setCollectMode(self, value):
      """
      Property target used to set the collect mode.
      If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}.
      @raise ValueError: If the value is not valid.
      """
      if value is not None:
         if value not in VALID_COLLECT_MODES:
            raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES)
      self._collectMode = value

   def _getCollectMode(self):
      """
      Property target used to get the collect mode.
      """
      return self._collectMode

   def _setArchiveMode(self, value):
      """
      Property target used to set the archive mode.
      If not C{None}, the mode must be one of the values in L{VALID_ARCHIVE_MODES}.
      @raise ValueError: If the value is not valid.
      """
      if value is not None:
         if value not in VALID_ARCHIVE_MODES:
            raise ValueError("Archive mode must be one of %s." % VALID_ARCHIVE_MODES)
      self._archiveMode = value

   def _getArchiveMode(self):
      """
      Property target used to get the archive mode.
      """
      return self._archiveMode

   absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, doc="Absolute path of the file to collect.")
   collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this file.")
   archiveMode = property(_getArchiveMode, _setArchiveMode, None, doc="Overridden archive mode for this file.")

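The mode setters validate by simple membership in a module-level list of allowed values. A minimal standalone sketch (the `ModeHolder` class and the placeholder `VALID_MODES` list are illustrative; the real C{VALID_COLLECT_MODES} is defined elsewhere in config.py):

```python
VALID_MODES = ["daily", "weekly", "incr"]  # placeholder values for illustration

class ModeHolder(object):
   """Illustrative sketch of the membership-validation setter."""

   def __init__(self, collectMode=None):
      self._collectMode = None
      self.collectMode = collectMode

   def _setCollectMode(self, value):
      # None is allowed; anything else must be a known mode
      if value is not None:
         if value not in VALID_MODES:
            raise ValueError("Collect mode must be one of %s." % VALID_MODES)
      self._collectMode = value

   def _getCollectMode(self):
      return self._collectMode

   collectMode = property(_getCollectMode, _setCollectMode, None, doc="Collect mode.")
```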
########################################################################
# CollectDir class definition
########################################################################

@total_ordering
class CollectDir(object):

   """
   Class representing a Cedar Backup collect directory.

   The following restrictions exist on data in this class:

      - Absolute paths must be absolute
      - The collect mode must be one of the values in L{VALID_COLLECT_MODES}.
      - The archive mode must be one of the values in L{VALID_ARCHIVE_MODES}.
      - The ignore file must be a non-empty string.

   For the C{absoluteExcludePaths} list, validation is accomplished through the
   L{util.AbsolutePathList} list implementation that overrides common list
   methods and transparently does the absolute path validation for us.

   @note: Lists within this class are "unordered" for equality comparisons.

   @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, absolutePath, collectMode,
          archiveMode, ignoreFile, linkDepth, dereference, absoluteExcludePaths,
          relativeExcludePaths, excludePatterns
   """

   def __init__(self, absolutePath=None, collectMode=None, archiveMode=None, ignoreFile=None,
                absoluteExcludePaths=None, relativeExcludePaths=None, excludePatterns=None,
                linkDepth=None, dereference=False, recursionLevel=None):
      """
      Constructor for the C{CollectDir} class.

      @param absolutePath: Absolute path of the directory to collect.
      @param collectMode: Overridden collect mode for this directory.
      @param archiveMode: Overridden archive mode for this directory.
      @param ignoreFile: Overridden ignore file name for this directory.
      @param linkDepth: Maximum depth at which soft links should be followed.
      @param dereference: Whether to dereference links that are followed.
      @param absoluteExcludePaths: List of absolute paths to exclude.
      @param relativeExcludePaths: List of relative paths to exclude.
      @param excludePatterns: List of regular expression patterns to exclude.

      @raise ValueError: If one of the values is invalid.
      """
      self._absolutePath = None
      self._collectMode = None
      self._archiveMode = None
      self._ignoreFile = None
      self._linkDepth = None
      self._dereference = None
      self._recursionLevel = None
      self._absoluteExcludePaths = None
      self._relativeExcludePaths = None
      self._excludePatterns = None
      self.absolutePath = absolutePath
      self.collectMode = collectMode
      self.archiveMode = archiveMode
      self.ignoreFile = ignoreFile
      self.linkDepth = linkDepth
      self.dereference = dereference
      self.recursionLevel = recursionLevel
      self.absoluteExcludePaths = absoluteExcludePaths
      self.relativeExcludePaths = relativeExcludePaths
      self.excludePatterns = excludePatterns

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "CollectDir(%s, %s, %s, %s, %s, %s, %s, %s, %s, %s)" % (self.absolutePath, self.collectMode,
                                                                     self.archiveMode, self.ignoreFile,
                                                                     self.absoluteExcludePaths,
                                                                     self.relativeExcludePaths,
                                                                     self.excludePatterns,
                                                                     self.linkDepth, self.dereference,
                                                                     self.recursionLevel)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __eq__(self, other):
      """Equals operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) == 0

   def __lt__(self, other):
      """Less-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) < 0

   def __gt__(self, other):
      """Greater-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) > 0

   def __cmp__(self, other):
      """
      Original Python 2 comparison operator.
      Lists within this class are "unordered" for equality comparisons.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.absolutePath != other.absolutePath:
         if str(self.absolutePath or "") < str(other.absolutePath or ""):
            return -1
         else:
            return 1
      if self.collectMode != other.collectMode:
         if str(self.collectMode or "") < str(other.collectMode or ""):
            return -1
         else:
            return 1
      if self.archiveMode != other.archiveMode:
         if str(self.archiveMode or "") < str(other.archiveMode or ""):
            return -1
         else:
            return 1
      if self.ignoreFile != other.ignoreFile:
         if str(self.ignoreFile or "") < str(other.ignoreFile or ""):
            return -1
         else:
            return 1
      if self.linkDepth != other.linkDepth:
         if int(self.linkDepth or 0) < int(other.linkDepth or 0):
            return -1
         else:
            return 1
      if self.dereference != other.dereference:
         if self.dereference < other.dereference:
            return -1
         else:
            return 1
      if self.recursionLevel != other.recursionLevel:
         if int(self.recursionLevel or 0) < int(other.recursionLevel or 0):
            return -1
         else:
            return 1
      if self.absoluteExcludePaths != other.absoluteExcludePaths:
         if self.absoluteExcludePaths < other.absoluteExcludePaths:
            return -1
         else:
            return 1
      if self.relativeExcludePaths != other.relativeExcludePaths:
         if self.relativeExcludePaths < other.relativeExcludePaths:
            return -1
         else:
            return 1
      if self.excludePatterns != other.excludePatterns:
         if self.excludePatterns < other.excludePatterns:
            return -1
         else:
            return 1
      return 0

   def _setAbsolutePath(self, value):
      """
      Property target used to set the absolute path.
      The value must be an absolute path if it is not C{None}.
      It does not have to exist on disk at the time of assignment.
      @raise ValueError: If the value is not an absolute path.
      @raise ValueError: If the value cannot be encoded properly.
      """
      if value is not None:
         if not os.path.isabs(value):
            raise ValueError("Not an absolute path: [%s]" % value)
      self._absolutePath = encodePath(value)

   def _getAbsolutePath(self):
      """
      Property target used to get the absolute path.
      """
      return self._absolutePath

   def _setCollectMode(self, value):
      """
      Property target used to set the collect mode.
      If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}.
      @raise ValueError: If the value is not valid.
      """
      if value is not None:
         if value not in VALID_COLLECT_MODES:
            raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES)
      self._collectMode = value

   def _getCollectMode(self):
      """
      Property target used to get the collect mode.
      """
      return self._collectMode

   def _setArchiveMode(self, value):
      """
      Property target used to set the archive mode.
      If not C{None}, the mode must be one of the values in L{VALID_ARCHIVE_MODES}.
      @raise ValueError: If the value is not valid.
      """
      if value is not None:
         if value not in VALID_ARCHIVE_MODES:
            raise ValueError("Archive mode must be one of %s." % VALID_ARCHIVE_MODES)
      self._archiveMode = value

   def _getArchiveMode(self):
      """
      Property target used to get the archive mode.
      """
      return self._archiveMode

   def _setIgnoreFile(self, value):
      """
      Property target used to set the ignore file.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The ignore file must be a non-empty string.")
      self._ignoreFile = value

   def _getIgnoreFile(self):
      """
      Property target used to get the ignore file.
      """
      return self._ignoreFile

   def _setLinkDepth(self, value):
      """
      Property target used to set the link depth.
      The value must be an integer >= 0.
      @raise ValueError: If the value is not valid.
      """
      if value is None:
         self._linkDepth = None
      else:
         try:
            value = int(value)
         except (TypeError, ValueError):
            raise ValueError("Link depth value must be an integer >= 0.")
         if value < 0:
            raise ValueError("Link depth value must be an integer >= 0.")
         self._linkDepth = value

   def _getLinkDepth(self):
      """
      Property target used to get the action linkDepth.
      """
      return self._linkDepth

   def _setDereference(self, value):
      """
      Property target used to set the dereference flag.
      No validations, but we normalize the value to C{True} or C{False}.
      """
      if value:
         self._dereference = True
      else:
         self._dereference = False

   def _getDereference(self):
      """
      Property target used to get the dereference flag.
      """
      return self._dereference

   def _setRecursionLevel(self, value):
      """
      Property target used to set the recursionLevel.
      The value must be an integer.
      @raise ValueError: If the value is not valid.
      """
      if value is None:
         self._recursionLevel = None
      else:
         try:
            value = int(value)
         except (TypeError, ValueError):
            raise ValueError("Recursion level value must be an integer.")
         self._recursionLevel = value

   def _getRecursionLevel(self):
      """
      Property target used to get the action recursionLevel.
      """
      return self._recursionLevel

   def _setAbsoluteExcludePaths(self, value):
      """
      Property target used to set the absolute exclude paths list.
      Either the value must be C{None} or each element must be an absolute path.
      Elements do not have to exist on disk at the time of assignment.
      @raise ValueError: If the value is not an absolute path.
      """
      if value is None:
         self._absoluteExcludePaths = None
      else:
         try:
            saved = self._absoluteExcludePaths
            self._absoluteExcludePaths = AbsolutePathList()
            self._absoluteExcludePaths.extend(value)
         except Exception as e:
            self._absoluteExcludePaths = saved
            raise e

   def _getAbsoluteExcludePaths(self):
      """
      Property target used to get the absolute exclude paths list.
      """
      return self._absoluteExcludePaths

   def _setRelativeExcludePaths(self, value):
      """
      Property target used to set the relative exclude paths list.
      Elements do not have to exist on disk at the time of assignment.
      """
      if value is None:
         self._relativeExcludePaths = None
      else:
         try:
            saved = self._relativeExcludePaths
            self._relativeExcludePaths = UnorderedList()
            self._relativeExcludePaths.extend(value)
         except Exception as e:
            self._relativeExcludePaths = saved
            raise e

   def _getRelativeExcludePaths(self):
      """
      Property target used to get the relative exclude paths list.
      """
      return self._relativeExcludePaths

   def _setExcludePatterns(self, value):
      """
      Property target used to set the exclude patterns list.
      """
      if value is None:
         self._excludePatterns = None
      else:
         try:
            saved = self._excludePatterns
            self._excludePatterns = RegexList()
            self._excludePatterns.extend(value)
         except Exception as e:
            self._excludePatterns = saved
            raise e

   def _getExcludePatterns(self):
      """
      Property target used to get the exclude patterns list.
      """
      return self._excludePatterns

   absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, doc="Absolute path of the directory to collect.")
   collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this directory.")
   archiveMode = property(_getArchiveMode, _setArchiveMode, None, doc="Overridden archive mode for this directory.")
   ignoreFile = property(_getIgnoreFile, _setIgnoreFile, None, doc="Overridden ignore file name for this directory.")
   linkDepth = property(_getLinkDepth, _setLinkDepth, None, doc="Maximum depth at which soft links should be followed.")
   dereference = property(_getDereference, _setDereference, None, doc="Whether to dereference links that are followed.")
   recursionLevel = property(_getRecursionLevel, _setRecursionLevel, None, "Recursion level to use for recursive directory collection.")
   absoluteExcludePaths = property(_getAbsoluteExcludePaths, _setAbsoluteExcludePaths, None, "List of absolute paths to exclude.")
   relativeExcludePaths = property(_getRelativeExcludePaths, _setRelativeExcludePaths, None, "List of relative paths to exclude.")
   excludePatterns = property(_getExcludePatterns, _setExcludePatterns, None, "List of regular expression patterns to exclude.")

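The list setters above use a save-and-restore pattern: the old list is stashed, a fresh validating list is built and extended, and on any failure the old value is reassigned before the exception propagates, so a bad assignment never leaves the object half-updated. A minimal standalone sketch (the `ExcludeHolder` class and the simplified `AbsolutePathList` stand-in are illustrative; the real validating list lives in util.py):

```python
import os.path

class AbsolutePathList(list):
   """Illustrative stand-in for util.AbsolutePathList: validates on extend()."""

   def extend(self, values):
      for value in values:
         if not os.path.isabs(value):
            raise ValueError("Not an absolute path: [%s]" % value)
      super(AbsolutePathList, self).extend(values)

class ExcludeHolder(object):
   """Sketch of the save-and-restore assignment used by the list setters."""

   def __init__(self):
      self._paths = None

   def _setPaths(self, value):
      if value is None:
         self._paths = None
      else:
         try:
            saved = self._paths                 # stash the old list
            self._paths = AbsolutePathList()
            self._paths.extend(value)           # may raise ValueError
         except Exception as e:
            self._paths = saved                 # roll back on failure
            raise e

   def _getPaths(self):
      return self._paths

   paths = property(_getPaths, _setPaths, None, doc="List of absolute paths to exclude.")

h = ExcludeHolder()
h.paths = ["/tmp/a", "/tmp/b"]
```

An assignment that fails validation leaves the previous list in place rather than an empty one.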
########################################################################
# PurgeDir class definition
########################################################################

@total_ordering
class PurgeDir(object):

   """
   Class representing a Cedar Backup purge directory.

   The following restrictions exist on data in this class:

      - The absolute path must be an absolute path
      - The retain days value must be an integer >= 0.

   @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, absolutePath, retainDays
   """

   def __init__(self, absolutePath=None, retainDays=None):
      """
      Constructor for the C{PurgeDir} class.

      @param absolutePath: Absolute path of the directory to be purged.
      @param retainDays: Number of days content within directory should be retained.

      @raise ValueError: If one of the values is invalid.
      """
      self._absolutePath = None
      self._retainDays = None
      self.absolutePath = absolutePath
      self.retainDays = retainDays

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "PurgeDir(%s, %s)" % (self.absolutePath, self.retainDays)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __eq__(self, other):
      """Equals operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) == 0

   def __lt__(self, other):
      """Less-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) < 0

   def __gt__(self, other):
      """Greater-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) > 0

   def __cmp__(self, other):
      """
      Original Python 2 comparison operator.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.absolutePath != other.absolutePath:
         if str(self.absolutePath or "") < str(other.absolutePath or ""):
            return -1
         else:
            return 1
      if self.retainDays != other.retainDays:
         if int(self.retainDays or 0) < int(other.retainDays or 0):
            return -1
         else:
            return 1
      return 0

   def _setAbsolutePath(self, value):
      """
      Property target used to set the absolute path.
      The value must be an absolute path if it is not C{None}.
      It does not have to exist on disk at the time of assignment.
      @raise ValueError: If the value is not an absolute path.
      @raise ValueError: If the value cannot be encoded properly.
      """
      if value is not None:
         if not os.path.isabs(value):
            raise ValueError("Not an absolute path: [%s]" % value)
      self._absolutePath = encodePath(value)

   def _getAbsolutePath(self):
      """
      Property target used to get the absolute path.
      """
      return self._absolutePath

   def _setRetainDays(self, value):
      """
      Property target used to set the retain days value.
      The value must be an integer >= 0.
      @raise ValueError: If the value is not valid.
      """
      if value is None:
         self._retainDays = None
      else:
         try:
            value = int(value)
         except (TypeError, ValueError):
            raise ValueError("Retain days value must be an integer >= 0.")
         if value < 0:
            raise ValueError("Retain days value must be an integer >= 0.")
         self._retainDays = value

   def _getRetainDays(self):
      """
      Property target used to get the retain days value.
      """
      return self._retainDays

   absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, "Absolute path of directory to purge.")
   retainDays = property(_getRetainDays, _setRetainDays, None, "Number of days content within directory should be retained.")

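The integer setters coerce with `int()` before range-checking. Note that `int(None)` raises C{TypeError} while `int("abc")` raises C{ValueError}, so a robust setter catches both and re-raises a uniform C{ValueError}. A minimal standalone sketch (the `RetainHolder` class is illustrative, not part of Cedar Backup):

```python
class RetainHolder(object):
   """Illustrative sketch of the integer-coercion setter used by PurgeDir."""

   def __init__(self, retainDays=None):
      self._retainDays = None
      self.retainDays = retainDays

   def _setRetainDays(self, value):
      if value is None:
         self._retainDays = None
      else:
         try:
            value = int(value)  # int([]) raises TypeError, int("abc") raises ValueError
         except (TypeError, ValueError):
            raise ValueError("Retain days value must be an integer >= 0.")
         if value < 0:
            raise ValueError("Retain days value must be an integer >= 0.")
         self._retainDays = value

   def _getRetainDays(self):
      return self._retainDays

   retainDays = property(_getRetainDays, _setRetainDays, None, doc="Days to retain.")
```

String values parsed from configuration XML, such as `"7"`, coerce cleanly, while junk input fails with a single predictable exception type.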
########################################################################
# LocalPeer class definition
########################################################################

@total_ordering
class LocalPeer(object):

   """
   Class representing a Cedar Backup peer.

   The following restrictions exist on data in this class:

      - The peer name must be a non-empty string.
      - The collect directory must be an absolute path.
      - The ignore failure mode must be one of the values in L{VALID_FAILURE_MODES}.

   @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, name, collectDir
   """

   def __init__(self, name=None, collectDir=None, ignoreFailureMode=None):
      """
      Constructor for the C{LocalPeer} class.

      @param name: Name of the peer, typically a valid hostname.
      @param collectDir: Collect directory to stage files from on peer.
      @param ignoreFailureMode: Ignore failure mode for peer.

      @raise ValueError: If one of the values is invalid.
      """
      self._name = None
      self._collectDir = None
      self._ignoreFailureMode = None
      self.name = name
      self.collectDir = collectDir
      self.ignoreFailureMode = ignoreFailureMode

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "LocalPeer(%s, %s, %s)" % (self.name, self.collectDir, self.ignoreFailureMode)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __eq__(self, other):
      """Equals operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) == 0

   def __lt__(self, other):
      """Less-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) < 0

   def __gt__(self, other):
      """Greater-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) > 0

   def __cmp__(self, other):
      """
      Original Python 2 comparison operator.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.name != other.name:
         if str(self.name or "") < str(other.name or ""):
            return -1
         else:
            return 1
      if self.collectDir != other.collectDir:
         if str(self.collectDir or "") < str(other.collectDir or ""):
            return -1
         else:
            return 1
      if self.ignoreFailureMode != other.ignoreFailureMode:
         if str(self.ignoreFailureMode or "") < str(other.ignoreFailureMode or ""):
            return -1
         else:
            return 1
      return 0

   def _setName(self, value):
      """
      Property target used to set the peer name.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The peer name must be a non-empty string.")
      self._name = value

   def _getName(self):
      """
      Property target used to get the peer name.
      """
      return self._name

   def _setCollectDir(self, value):
      """
      Property target used to set the collect directory.
      The value must be an absolute path if it is not C{None}.
      It does not have to exist on disk at the time of assignment.
      @raise ValueError: If the value is not an absolute path.
      @raise ValueError: If the value cannot be encoded properly.
      """
      if value is not None:
         if not os.path.isabs(value):
            raise ValueError("Collect directory must be an absolute path.")
      self._collectDir = encodePath(value)

   def _getCollectDir(self):
      """
      Property target used to get the collect directory.
      """
      return self._collectDir

   def _setIgnoreFailureMode(self, value):
      """
      Property target used to set the ignoreFailure mode.
      If not C{None}, the mode must be one of the values in L{VALID_FAILURE_MODES}.
      @raise ValueError: If the value is not valid.
      """
      if value is not None:
         if value not in VALID_FAILURE_MODES:
            raise ValueError("Ignore failure mode must be one of %s." % VALID_FAILURE_MODES)
      self._ignoreFailureMode = value

   def _getIgnoreFailureMode(self):
      """
      Property target used to get the ignoreFailure mode.
      """
      return self._ignoreFailureMode

   name = property(_getName, _setName, None, "Name of the peer, typically a valid hostname.")
   collectDir = property(_getCollectDir, _setCollectDir, None, "Collect directory to stage files from on peer.")
   ignoreFailureMode = property(_getIgnoreFailureMode, _setIgnoreFailureMode, None, "Ignore failure mode for peer.")

########################################################################
# RemotePeer class definition
########################################################################

@total_ordering
class RemotePeer(object):

   """
   Class representing a Cedar Backup peer.

   The following restrictions exist on data in this class:

      - The peer name must be a non-empty string.
      - The collect directory must be an absolute path.
      - The remote user must be a non-empty string.
      - The rcp command must be a non-empty string.
      - The rsh command must be a non-empty string.
      - The cback command must be a non-empty string.
      - Any managed action name must be a non-empty string matching C{ACTION_NAME_REGEX}.
      - The ignore failure mode must be one of the values in L{VALID_FAILURE_MODES}.

   @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, name, collectDir, remoteUser, rcpCommand
   """

   def __init__(self, name=None, collectDir=None, remoteUser=None,
                rcpCommand=None, rshCommand=None, cbackCommand=None,
                managed=False, managedActions=None, ignoreFailureMode=None):
      """
      Constructor for the C{RemotePeer} class.

      @param name: Name of the peer, must be a valid hostname.
      @param collectDir: Collect directory to stage files from on peer.
      @param remoteUser: Name of backup user on remote peer.
      @param rcpCommand: Overridden rcp-compatible copy command for peer.
      @param rshCommand: Overridden rsh-compatible remote shell command for peer.
      @param cbackCommand: Overridden cback-compatible command to use on remote peer.
      @param managed: Indicates whether this is a managed peer.
      @param managedActions: Overridden set of actions that are managed on the peer.
      @param ignoreFailureMode: Ignore failure mode for peer.

      @raise ValueError: If one of the values is invalid.
      """
      self._name = None
      self._collectDir = None
      self._remoteUser = None
      self._rcpCommand = None
      self._rshCommand = None
      self._cbackCommand = None
      self._managed = None
      self._managedActions = None
      self._ignoreFailureMode = None
      self.name = name
      self.collectDir = collectDir
      self.remoteUser = remoteUser
      self.rcpCommand = rcpCommand
      self.rshCommand = rshCommand
      self.cbackCommand = cbackCommand
      self.managed = managed
      self.managedActions = managedActions
      self.ignoreFailureMode = ignoreFailureMode

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "RemotePeer(%s, %s, %s, %s, %s, %s, %s, %s, %s)" % (self.name, self.collectDir, self.remoteUser,
                                                                 self.rcpCommand, self.rshCommand, self.cbackCommand,
                                                                 self.managed, self.managedActions, self.ignoreFailureMode)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __eq__(self, other):
      """Equals operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) == 0

   def __lt__(self, other):
      """Less-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) < 0

   def __gt__(self, other):
      """Greater-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) > 0

   def __cmp__(self, other):
      """
      Original Python 2 comparison operator.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.name != other.name:
         if str(self.name or "") < str(other.name or ""):
            return -1
         else:
            return 1
      if self.collectDir != other.collectDir:
         if str(self.collectDir or "") < str(other.collectDir or ""):
            return -1
         else:
            return 1
      if self.remoteUser != other.remoteUser:
         if str(self.remoteUser or "") < str(other.remoteUser or ""):
            return -1
         else:
            return 1
      if self.rcpCommand != other.rcpCommand:
         if str(self.rcpCommand or "") < str(other.rcpCommand or ""):
            return -1
         else:
            return 1
      if self.rshCommand != other.rshCommand:
         if str(self.rshCommand or "") < str(other.rshCommand or ""):
            return -1
         else:
            return 1
      if self.cbackCommand != other.cbackCommand:
         if str(self.cbackCommand or "") < str(other.cbackCommand or ""):
            return -1
         else:
            return 1
      if self.managed != other.managed:
         if str(self.managed or "") < str(other.managed or ""):
            return -1
         else:
            return 1
      if self.managedActions != other.managedActions:
         if self.managedActions < other.managedActions:
            return -1
         else:
            return 1
      if self.ignoreFailureMode != other.ignoreFailureMode:
         if str(self.ignoreFailureMode or "") < str(other.ignoreFailureMode or ""):
            return -1
         else:
            return 1
      return 0

   def _setName(self, value):
      """
      Property target used to set the peer name.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The peer name must be a non-empty string.")
      self._name = value

   def _getName(self):
      """
      Property target used to get the peer name.
      """
      return self._name

   def _setCollectDir(self, value):
      """
      Property target used to set the collect directory.
      The value must be an absolute path if it is not C{None}.
      It does not have to exist on disk at the time of assignment.
      @raise ValueError: If the value is not an absolute path.
      @raise ValueError: If the value cannot be encoded properly.
      """
      if value is not None:
         if not os.path.isabs(value):
            raise ValueError("Collect directory must be an absolute path.")
      self._collectDir = encodePath(value)

   def _getCollectDir(self):
      """
      Property target used to get the collect directory.
      """
      return self._collectDir

   def _setRemoteUser(self, value):
      """
      Property target used to set the remote user.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The remote user must be a non-empty string.")
      self._remoteUser = value

   def _getRemoteUser(self):
      """
      Property target used to get the remote user.
      """
      return self._remoteUser

   def _setRcpCommand(self, value):
      """
      Property target used to set the rcp command.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The rcp command must be a non-empty string.")
      self._rcpCommand = value

   def _getRcpCommand(self):
      """
      Property target used to get the rcp command.
      """
      return self._rcpCommand

   def _setRshCommand(self, value):
      """
      Property target used to set the rsh command.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The rsh command must be a non-empty string.")
      self._rshCommand = value

   def _getRshCommand(self):
      """
      Property target used to get the rsh command.
      """
      return self._rshCommand

   def _setCbackCommand(self, value):
      """
      Property target used to set the cback command.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The cback command must be a non-empty string.")
      self._cbackCommand = value

   def _getCbackCommand(self):
      """
      Property target used to get the cback command.
      """
      return self._cbackCommand

   def _setManaged(self, value):
      """
      Property target used to set the managed flag.
      No validations, but we normalize the value to C{True} or C{False}.
      """
      if value:
         self._managed = True
      else:
         self._managed = False

   def _getManaged(self):
      """
      Property target used to get the managed flag.
      """
      return self._managed

   def _setManagedActions(self, value):
      """
      Property target used to set the managed actions list.
      Elements do not have to exist on disk at the time of assignment.
      """
      if value is None:
         self._managedActions = None
      else:
         try:
            saved = self._managedActions
            self._managedActions = RegexMatchList(ACTION_NAME_REGEX, emptyAllowed=False, prefix="Action name")
            self._managedActions.extend(value)
         except Exception as e:
            self._managedActions = saved
            raise e

   def _getManagedActions(self):
      """
      Property target used to get the managed actions list.
      """
      return self._managedActions

   def _setIgnoreFailureMode(self, value):
      """
      Property target used to set the ignoreFailure mode.
      If not C{None}, the mode must be one of the values in L{VALID_FAILURE_MODES}.
      @raise ValueError: If the value is not valid.
      """
      if value is not None:
         if value not in VALID_FAILURE_MODES:
            raise ValueError("Ignore failure mode must be one of %s." % VALID_FAILURE_MODES)
      self._ignoreFailureMode = value

   def _getIgnoreFailureMode(self):
      """
      Property target used to get the ignoreFailure mode.
      """
      return self._ignoreFailureMode

   name = property(_getName, _setName, None, "Name of the peer, must be a valid hostname.")
   collectDir = property(_getCollectDir, _setCollectDir, None, "Collect directory to stage files from on peer.")
   remoteUser = property(_getRemoteUser, _setRemoteUser, None, "Name of backup user on remote peer.")
   rcpCommand = property(_getRcpCommand, _setRcpCommand, None, "Overridden rcp-compatible copy command for peer.")
   rshCommand = property(_getRshCommand, _setRshCommand, None, "Overridden rsh-compatible remote shell command for peer.")
   cbackCommand = property(_getCbackCommand, _setCbackCommand, None, "Overridden cback-compatible command to use on remote peer.")
   managed = property(_getManaged, _setManaged, None, "Indicates whether this is a managed peer.")
   managedActions = property(_getManagedActions, _setManagedActions, None, "Overridden set of actions that are managed on the peer.")
   ignoreFailureMode = property(_getIgnoreFailureMode, _setIgnoreFailureMode, None, "Ignore failure mode for peer.")

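The setter/getter pairs above all follow the same validating-property pattern: a private attribute, a `_setX` target that rejects invalid values, and a `property()` declaration tying them together. A minimal standalone sketch of that pattern (hypothetical `PeerSketch` class, not the CedarBackup3 API):

```python
import os.path


class PeerSketch(object):
   """Minimal illustration of the validating-property pattern used above."""

   def __init__(self, name=None, collectDir=None):
      self._name = None
      self._collectDir = None
      self.name = name               # assignment runs the validating setter
      self.collectDir = collectDir

   def _setName(self, value):
      if value is not None and len(value) < 1:
         raise ValueError("The peer name must be a non-empty string.")
      self._name = value

   def _getName(self):
      return self._name

   def _setCollectDir(self, value):
      if value is not None and not os.path.isabs(value):
         raise ValueError("Collect directory must be an absolute path.")
      self._collectDir = value

   def _getCollectDir(self):
      return self._collectDir

   name = property(_getName, _setName, None, "Name of the peer.")
   collectDir = property(_getCollectDir, _setCollectDir, None, "Collect directory.")
```

Because the constructor assigns through the public properties rather than the private attributes, invalid constructor arguments raise `ValueError` immediately, which is why every class in this module documents `@raise ValueError` on `__init__`.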
########################################################################
# ReferenceConfig class definition
########################################################################

@total_ordering
class ReferenceConfig(object):

   """
   Class representing a Cedar Backup reference configuration.

   The reference information is just used for saving off metadata about
   configuration and exists mostly for backwards-compatibility with Cedar
   Backup 1.x.

   @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, author, revision, description, generator
   """

   def __init__(self, author=None, revision=None, description=None, generator=None):
      """
      Constructor for the C{ReferenceConfig} class.

      @param author: Author of the configuration file.
      @param revision: Revision of the configuration file.
      @param description: Description of the configuration file.
      @param generator: Tool that generated the configuration file.
      """
      self._author = None
      self._revision = None
      self._description = None
      self._generator = None
      self.author = author
      self.revision = revision
      self.description = description
      self.generator = generator

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "ReferenceConfig(%s, %s, %s, %s)" % (self.author, self.revision, self.description, self.generator)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __eq__(self, other):
      """Equals operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) == 0

   def __lt__(self, other):
      """Less-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) < 0

   def __gt__(self, other):
      """Greater-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) > 0

   def __cmp__(self, other):
      """
      Original Python 2 comparison operator.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.author != other.author:
         if str(self.author or "") < str(other.author or ""):
            return -1
         else:
            return 1
      if self.revision != other.revision:
         if str(self.revision or "") < str(other.revision or ""):
            return -1
         else:
            return 1
      if self.description != other.description:
         if str(self.description or "") < str(other.description or ""):
            return -1
         else:
            return 1
      if self.generator != other.generator:
         if str(self.generator or "") < str(other.generator or ""):
            return -1
         else:
            return 1
      return 0

   def _setAuthor(self, value):
      """
      Property target used to set the author value.
      No validations.
      """
      self._author = value

   def _getAuthor(self):
      """
      Property target used to get the author value.
      """
      return self._author

   def _setRevision(self, value):
      """
      Property target used to set the revision value.
      No validations.
      """
      self._revision = value

   def _getRevision(self):
      """
      Property target used to get the revision value.
      """
      return self._revision

   def _setDescription(self, value):
      """
      Property target used to set the description value.
      No validations.
      """
      self._description = value

   def _getDescription(self):
      """
      Property target used to get the description value.
      """
      return self._description

   def _setGenerator(self, value):
      """
      Property target used to set the generator value.
      No validations.
      """
      self._generator = value

   def _getGenerator(self):
      """
      Property target used to get the generator value.
      """
      return self._generator

   author = property(_getAuthor, _setAuthor, None, "Author of the configuration file.")
   revision = property(_getRevision, _setRevision, None, "Revision of the configuration file.")
   description = property(_getDescription, _setDescription, None, "Description of the configuration file.")
   generator = property(_getGenerator, _setGenerator, None, "Tool that generated the configuration file.")

########################################################################
# ExtensionsConfig class definition
########################################################################

@total_ordering
class ExtensionsConfig(object):

   """
   Class representing Cedar Backup extensions configuration.

   Extensions configuration is used to specify "extended actions" implemented
   by code external to Cedar Backup.  For instance, a hypothetical third party
   might write extension code to collect database repository data.  If they
   write a properly-formatted extension function, they can use the extension
   configuration to map a command-line Cedar Backup action (i.e. "database")
   to their function.

   The following restrictions exist on data in this class:

      - If set, the order mode must be one of the values in C{VALID_ORDER_MODES}
      - The actions list must be a list of C{ExtendedAction} objects.

   @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, orderMode, actions
   """

   def __init__(self, actions=None, orderMode=None):
      """
      Constructor for the C{ExtensionsConfig} class.
      @param actions: List of extended actions
      @param orderMode: Order mode for extensions, to control execution ordering.
      """
      self._orderMode = None
      self._actions = None
      self.orderMode = orderMode
      self.actions = actions

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "ExtensionsConfig(%s, %s)" % (self.orderMode, self.actions)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __eq__(self, other):
      """Equals operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) == 0

   def __lt__(self, other):
      """Less-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) < 0

   def __gt__(self, other):
      """Greater-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) > 0

   def __cmp__(self, other):
      """
      Original Python 2 comparison operator.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.orderMode != other.orderMode:
         if str(self.orderMode or "") < str(other.orderMode or ""):
            return -1
         else:
            return 1
      if self.actions != other.actions:
         if self.actions < other.actions:
            return -1
         else:
            return 1
      return 0

   def _setOrderMode(self, value):
      """
      Property target used to set the order mode.
      The value must be one of L{VALID_ORDER_MODES}.
      @raise ValueError: If the value is not valid.
      """
      if value is not None:
         if value not in VALID_ORDER_MODES:
            raise ValueError("Order mode must be one of %s." % VALID_ORDER_MODES)
      self._orderMode = value

   def _getOrderMode(self):
      """
      Property target used to get the order mode.
      """
      return self._orderMode

   def _setActions(self, value):
      """
      Property target used to set the actions list.
      Either the value must be C{None} or each element must be an C{ExtendedAction}.
      @raise ValueError: If the value is not an C{ExtendedAction}
      """
      if value is None:
         self._actions = None
      else:
         try:
            saved = self._actions
            self._actions = ObjectTypeList(ExtendedAction, "ExtendedAction")
            self._actions.extend(value)
         except Exception as e:
            self._actions = saved
            raise e

   def _getActions(self):
      """
      Property target used to get the actions list.
      """
      return self._actions

   orderMode = property(_getOrderMode, _setOrderMode, None, "Order mode for extensions, to control execution ordering.")
   actions = property(_getActions, _setActions, None, "List of extended actions.")

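Every class in this module keeps its original Python 2 `__cmp__` and layers `__eq__`/`__lt__`/`__gt__` on top of it, letting `@total_ordering` fill in the remaining rich comparisons. A self-contained sketch of that porting technique (hypothetical `CmpSketch` class, not the CedarBackup3 API):

```python
from functools import total_ordering


@total_ordering
class CmpSketch(object):
   """Illustrates porting a Python 2 __cmp__ via functools.total_ordering."""

   def __init__(self, orderMode=None):
      self.orderMode = orderMode

   def __cmp__(self, other):
      # None sorts before any instance, matching the classes above
      if other is None:
         return 1
      if self.orderMode != other.orderMode:
         # str(x or "") makes None comparable to strings under Python 3,
         # which no longer allows None < "some string"
         if str(self.orderMode or "") < str(other.orderMode or ""):
            return -1
         else:
            return 1
      return 0

   def __eq__(self, other):
      """Equals operator, implemented in terms of __cmp__."""
      return self.__cmp__(other) == 0

   def __lt__(self, other):
      """Less-than operator, implemented in terms of __cmp__."""
      return self.__cmp__(other) < 0
```

With `__eq__` and `__lt__` defined, `total_ordering` derives `__le__`, `__gt__`, and `__ge__`, so instances sort and compare the same way the old `__cmp__`-based classes did.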
    2604 2605 ######################################################################## 2606 # OptionsConfig class definition 2607 ######################################################################## 2608 2609 @total_ordering 2610 -class OptionsConfig(object):
    2611 2612 """ 2613 Class representing a Cedar Backup global options configuration. 2614 2615 The options section is used to store global configuration options and 2616 defaults that can be applied to other sections. 2617 2618 The following restrictions exist on data in this class: 2619 2620 - The working directory must be an absolute path. 2621 - The starting day must be a day of the week in English, i.e. C{"monday"}, C{"tuesday"}, etc. 2622 - All of the other values must be non-empty strings if they are set to something other than C{None}. 2623 - The overrides list must be a list of C{CommandOverride} objects. 2624 - The hooks list must be a list of C{ActionHook} objects. 2625 - The cback command must be a non-empty string. 2626 - Any managed action name must be a non-empty string matching C{ACTION_NAME_REGEX} 2627 2628 @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, startingDay, workingDir, 2629 backupUser, backupGroup, rcpCommand, rshCommand, overrides 2630 """ 2631
    2632 - def __init__(self, startingDay=None, workingDir=None, backupUser=None, 2633 backupGroup=None, rcpCommand=None, overrides=None, 2634 hooks=None, rshCommand=None, cbackCommand=None, 2635 managedActions=None):
    2636 """ 2637 Constructor for the C{OptionsConfig} class. 2638 2639 @param startingDay: Day that starts the week. 2640 @param workingDir: Working (temporary) directory to use for backups. 2641 @param backupUser: Effective user that backups should run as. 2642 @param backupGroup: Effective group that backups should run as. 2643 @param rcpCommand: Default rcp-compatible copy command for staging. 2644 @param rshCommand: Default rsh-compatible command to use for remote shells. 2645 @param cbackCommand: Default cback-compatible command to use on managed remote peers. 2646 @param overrides: List of configured command path overrides, if any. 2647 @param hooks: List of configured pre- and post-action hooks. 2648 @param managedActions: Default set of actions that are managed on remote peers. 2649 2650 @raise ValueError: If one of the values is invalid. 2651 """ 2652 self._startingDay = None 2653 self._workingDir = None 2654 self._backupUser = None 2655 self._backupGroup = None 2656 self._rcpCommand = None 2657 self._rshCommand = None 2658 self._cbackCommand = None 2659 self._overrides = None 2660 self._hooks = None 2661 self._managedActions = None 2662 self.startingDay = startingDay 2663 self.workingDir = workingDir 2664 self.backupUser = backupUser 2665 self.backupGroup = backupGroup 2666 self.rcpCommand = rcpCommand 2667 self.rshCommand = rshCommand 2668 self.cbackCommand = cbackCommand 2669 self.overrides = overrides 2670 self.hooks = hooks 2671 self.managedActions = managedActions
    2672
    2673 - def __repr__(self):
    2674 """ 2675 Official string representation for class instance. 2676 """ 2677 return "OptionsConfig(%s, %s, %s, %s, %s, %s, %s, %s, %s, %s)" % (self.startingDay, self.workingDir, 2678 self.backupUser, self.backupGroup, 2679 self.rcpCommand, self.overrides, 2680 self.hooks, self.rshCommand, 2681 self.cbackCommand, self.managedActions)
    2682
    2683 - def __str__(self):
    2684 """ 2685 Informal string representation for class instance. 2686 """ 2687 return self.__repr__()
    2688
    2689 - def __eq__(self, other):
    2690 """Equals operator, implemented in terms of original Python 2 compare operator.""" 2691 return self.__cmp__(other) == 0
    2692
    2693 - def __lt__(self, other):
    2694 """Less-than operator, implemented in terms of original Python 2 compare operator.""" 2695 return self.__cmp__(other) < 0
    2696
    2697 - def __gt__(self, other):
    2698 """Greater-than operator, implemented in terms of original Python 2 compare operator.""" 2699 return self.__cmp__(other) > 0
    2700
    2701 - def __cmp__(self, other):
    2702 """ 2703 Original Python 2 comparison operator. 2704 @param other: Other object to compare to. 2705 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 2706 """ 2707 if other is None: 2708 return 1 2709 if self.startingDay != other.startingDay: 2710 if str(self.startingDay or "") < str(other.startingDay or ""): 2711 return -1 2712 else: 2713 return 1 2714 if self.workingDir != other.workingDir: 2715 if str(self.workingDir or "") < str(other.workingDir or ""): 2716 return -1 2717 else: 2718 return 1 2719 if self.backupUser != other.backupUser: 2720 if str(self.backupUser or "") < str(other.backupUser or ""): 2721 return -1 2722 else: 2723 return 1 2724 if self.backupGroup != other.backupGroup: 2725 if str(self.backupGroup or "") < str(other.backupGroup or ""): 2726 return -1 2727 else: 2728 return 1 2729 if self.rcpCommand != other.rcpCommand: 2730 if str(self.rcpCommand or "") < str(other.rcpCommand or ""): 2731 return -1 2732 else: 2733 return 1 2734 if self.rshCommand != other.rshCommand: 2735 if str(self.rshCommand or "") < str(other.rshCommand or ""): 2736 return -1 2737 else: 2738 return 1 2739 if self.cbackCommand != other.cbackCommand: 2740 if str(self.cbackCommand or "") < str(other.cbackCommand or ""): 2741 return -1 2742 else: 2743 return 1 2744 if self.overrides != other.overrides: 2745 if self.overrides < other.overrides: 2746 return -1 2747 else: 2748 return 1 2749 if self.hooks != other.hooks: 2750 if self.hooks < other.hooks: 2751 return -1 2752 else: 2753 return 1 2754 if self.managedActions != other.managedActions: 2755 if self.managedActions < other.managedActions: 2756 return -1 2757 else: 2758 return 1 2759 return 0
    2760
    2761 - def addOverride(self, command, absolutePath):
    2762 """ 2763 If no override currently exists for the command, add one. 2764 @param command: Name of command to be overridden. 2765 @param absolutePath: Absolute path of the overrridden command. 2766 """ 2767 override = CommandOverride(command, absolutePath) 2768 if self.overrides is None: 2769 self.overrides = [ override, ] 2770 else: 2771 exists = False 2772 for obj in self.overrides: 2773 if obj.command == override.command: 2774 exists = True 2775 break 2776 if not exists: 2777 self.overrides.append(override)
    2778
    2779 - def replaceOverride(self, command, absolutePath):
    2780 """ 2781 If override currently exists for the command, replace it; otherwise add it. 2782 @param command: Name of command to be overridden. 2783 @param absolutePath: Absolute path of the overrridden command. 2784 """ 2785 override = CommandOverride(command, absolutePath) 2786 if self.overrides is None: 2787 self.overrides = [ override, ] 2788 else: 2789 exists = False 2790 for obj in self.overrides: 2791 if obj.command == override.command: 2792 exists = True 2793 obj.absolutePath = override.absolutePath 2794 break 2795 if not exists: 2796 self.overrides.append(override)
    2797
    2798 - def _setStartingDay(self, value):
    2799 """ 2800 Property target used to set the starting day. 2801 If it is not C{None}, the value must be a valid English day of the week, 2802 one of C{"monday"}, C{"tuesday"}, C{"wednesday"}, etc. 2803 @raise ValueError: If the value is not a valid day of the week. 2804 """ 2805 if value is not None: 2806 if value not in ["monday", "tuesday", "wednesday", "thursday", "friday", "saturday", "sunday", ]: 2807 raise ValueError("Starting day must be an English day of the week, i.e. \"monday\".") 2808 self._startingDay = value
    2809
    2810 - def _getStartingDay(self):
    2811 """ 2812 Property target used to get the starting day. 2813 """ 2814 return self._startingDay
    2815
    2816 - def _setWorkingDir(self, value):
    2817 """ 2818 Property target used to set the working directory. 2819 The value must be an absolute path if it is not C{None}. 2820 It does not have to exist on disk at the time of assignment. 2821 @raise ValueError: If the value is not an absolute path. 2822 @raise ValueError: If the value cannot be encoded properly. 2823 """ 2824 if value is not None: 2825 if not os.path.isabs(value): 2826 raise ValueError("Working directory must be an absolute path.") 2827 self._workingDir = encodePath(value)
    2828
    2829 - def _getWorkingDir(self):
    2830 """ 2831 Property target used to get the working directory. 2832 """ 2833 return self._workingDir
    2834
    2835 - def _setBackupUser(self, value):
    2836 """ 2837 Property target used to set the backup user. 2838 The value must be a non-empty string if it is not C{None}. 2839 @raise ValueError: If the value is an empty string. 2840 """ 2841 if value is not None: 2842 if len(value) < 1: 2843 raise ValueError("Backup user must be a non-empty string.") 2844 self._backupUser = value
    2845
    2846 - def _getBackupUser(self):
    2847 """ 2848 Property target used to get the backup user. 2849 """ 2850 return self._backupUser
    2851
    2852 - def _setBackupGroup(self, value):
    2853 """ 2854 Property target used to set the backup group. 2855 The value must be a non-empty string if it is not C{None}. 2856 @raise ValueError: If the value is an empty string. 2857 """ 2858 if value is not None: 2859 if len(value) < 1: 2860 raise ValueError("Backup group must be a non-empty string.") 2861 self._backupGroup = value
    2862
    2863 - def _getBackupGroup(self):
    2864 """ 2865 Property target used to get the backup group. 2866 """ 2867 return self._backupGroup
    2868
    2869 - def _setRcpCommand(self, value):
    2870 """ 2871 Property target used to set the rcp command. 2872 The value must be a non-empty string if it is not C{None}. 2873 @raise ValueError: If the value is an empty string. 2874 """ 2875 if value is not None: 2876 if len(value) < 1: 2877 raise ValueError("The rcp command must be a non-empty string.") 2878 self._rcpCommand = value
    2879
    2880 - def _getRcpCommand(self):
    2881 """ 2882 Property target used to get the rcp command. 2883 """ 2884 return self._rcpCommand
    2885
    2886 - def _setRshCommand(self, value):
    2887 """ 2888 Property target used to set the rsh command. 2889 The value must be a non-empty string if it is not C{None}. 2890 @raise ValueError: If the value is an empty string. 2891 """ 2892 if value is not None: 2893 if len(value) < 1: 2894 raise ValueError("The rsh command must be a non-empty string.") 2895 self._rshCommand = value
    2896
    2897 - def _getRshCommand(self):
    2898 """ 2899 Property target used to get the rsh command. 2900 """ 2901 return self._rshCommand
    2902
    2903 - def _setCbackCommand(self, value):
    2904 """ 2905 Property target used to set the cback command. 2906 The value must be a non-empty string if it is not C{None}. 2907 @raise ValueError: If the value is an empty string. 2908 """ 2909 if value is not None: 2910 if len(value) < 1: 2911 raise ValueError("The cback command must be a non-empty string.") 2912 self._cbackCommand = value
    2913
    2914 - def _getCbackCommand(self):
        """
        Property target used to get the cback command.
        """
        return self._cbackCommand

    def _setOverrides(self, value):
        """
        Property target used to set the command path overrides list.
        Either the value must be C{None} or each element must be a C{CommandOverride}.
        @raise ValueError: If the value is not a C{CommandOverride}
        """
        if value is None:
            self._overrides = None
        else:
            try:
                saved = self._overrides
                self._overrides = ObjectTypeList(CommandOverride, "CommandOverride")
                self._overrides.extend(value)
            except Exception as e:
                self._overrides = saved
                raise e

    def _getOverrides(self):
        """
        Property target used to get the command path overrides list.
        """
        return self._overrides

    def _setHooks(self, value):
        """
        Property target used to set the pre- and post-action hooks list.
        Either the value must be C{None} or each element must be an C{ActionHook}.
        @raise ValueError: If the value is not an C{ActionHook}
        """
        if value is None:
            self._hooks = None
        else:
            try:
                saved = self._hooks
                self._hooks = ObjectTypeList(ActionHook, "ActionHook")
                self._hooks.extend(value)
            except Exception as e:
                self._hooks = saved
                raise e

    def _getHooks(self):
        """
        Property target used to get the pre- and post-action hooks list.
        """
        return self._hooks

    def _setManagedActions(self, value):
        """
        Property target used to set the managed actions list.
        Either the value must be C{None} or each element must be a valid action name.
        @raise ValueError: If the value is not a valid action name.
        """
        if value is None:
            self._managedActions = None
        else:
            try:
                saved = self._managedActions
                self._managedActions = RegexMatchList(ACTION_NAME_REGEX, emptyAllowed=False, prefix="Action name")
                self._managedActions.extend(value)
            except Exception as e:
                self._managedActions = saved
                raise e

    def _getManagedActions(self):
        """
        Property target used to get the managed actions list.
        """
        return self._managedActions

    startingDay = property(_getStartingDay, _setStartingDay, None, "Day that starts the week.")
    workingDir = property(_getWorkingDir, _setWorkingDir, None, "Working (temporary) directory to use for backups.")
    backupUser = property(_getBackupUser, _setBackupUser, None, "Effective user that backups should run as.")
    backupGroup = property(_getBackupGroup, _setBackupGroup, None, "Effective group that backups should run as.")
    rcpCommand = property(_getRcpCommand, _setRcpCommand, None, "Default rcp-compatible copy command for staging.")
    rshCommand = property(_getRshCommand, _setRshCommand, None, "Default rsh-compatible command to use for remote shells.")
    cbackCommand = property(_getCbackCommand, _setCbackCommand, None, "Default cback-compatible command to use on managed remote peers.")
    overrides = property(_getOverrides, _setOverrides, None, "List of configured command path overrides, if any.")
    hooks = property(_getHooks, _setHooks, None, "List of configured pre- and post-action hooks.")
    managedActions = property(_getManagedActions, _setManagedActions, None, "Default set of actions that are managed on remote peers.")

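```python
# The list-valued setters in this module (_setOverrides, _setHooks,
# _setManagedActions, and the peer-list setters below) all follow the same
# save-and-roll-back idiom: stash the old list, build a fresh validating
# list, and restore the old one if extend() rejects any element.  A minimal
# self-contained sketch of that idiom; SimpleTypeList and Holder are
# illustrative stand-ins for the real util.ObjectTypeList and config
# classes, not part of Cedar Backup itself.

class SimpleTypeList(list):
    """List that accepts only elements of a single type."""
    def __init__(self, objectType, objectName):
        super().__init__()
        self.objectType = objectType
        self.objectName = objectName
    def append(self, item):
        if not isinstance(item, self.objectType):
            raise ValueError("Item must be a %s." % self.objectName)
        super().append(item)
    def extend(self, items):
        for item in items:
            self.append(item)

class Holder(object):
    def __init__(self):
        self._overrides = None
    def _setOverrides(self, value):
        if value is None:
            self._overrides = None
        else:
            try:
                saved = self._overrides
                self._overrides = SimpleTypeList(int, "int")
                self._overrides.extend(value)
            except Exception as e:
                self._overrides = saved   # roll back on validation failure
                raise e
```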

########################################################################
# PeersConfig class definition
########################################################################

@total_ordering
class PeersConfig(object):

    """
    Class representing Cedar Backup global peer configuration.

    This section contains a list of local and remote peers in a master's backup
    pool.  The section is optional.  If a master does not define this section,
    then all peers are unmanaged, and the stage configuration section must
    explicitly list any peer that is to be staged.  If this section is
    configured, then peers may be managed or unmanaged, and the stage section
    peer configuration (if any) completely overrides this configuration.

    The following restrictions exist on data in this class:

       - The list of local peers must contain only C{LocalPeer} objects
       - The list of remote peers must contain only C{RemotePeer} objects

    @note: Lists within this class are "unordered" for equality comparisons.

    @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, localPeers, remotePeers
    """

    def __init__(self, localPeers=None, remotePeers=None):
        """
        Constructor for the C{PeersConfig} class.

        @param localPeers: List of local peers.
        @param remotePeers: List of remote peers.

        @raise ValueError: If one of the values is invalid.
        """
        self._localPeers = None
        self._remotePeers = None
        self.localPeers = localPeers
        self.remotePeers = remotePeers

    def __repr__(self):
        """
        Official string representation for class instance.
        """
        return "PeersConfig(%s, %s)" % (self.localPeers, self.remotePeers)

    def __str__(self):
        """
        Informal string representation for class instance.
        """
        return self.__repr__()

    def __eq__(self, other):
        """Equals operator, implemented in terms of original Python 2 compare operator."""
        return self.__cmp__(other) == 0

    def __lt__(self, other):
        """Less-than operator, implemented in terms of original Python 2 compare operator."""
        return self.__cmp__(other) < 0

    def __gt__(self, other):
        """Greater-than operator, implemented in terms of original Python 2 compare operator."""
        return self.__cmp__(other) > 0

    def __cmp__(self, other):
        """
        Original Python 2 comparison operator.
        Lists within this class are "unordered" for equality comparisons.
        @param other: Other object to compare to.
        @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
        """
        if other is None:
            return 1
        if self.localPeers != other.localPeers:
            if self.localPeers < other.localPeers:
                return -1
            else:
                return 1
        if self.remotePeers != other.remotePeers:
            if self.remotePeers < other.remotePeers:
                return -1
            else:
                return 1
        return 0

    def hasPeers(self):
        """
        Indicates whether any peers are filled into this object.
        @return: Boolean true if any local or remote peers are filled in, false otherwise.
        """
        return ((self.localPeers is not None and len(self.localPeers) > 0) or
                (self.remotePeers is not None and len(self.remotePeers) > 0))

    def _setLocalPeers(self, value):
        """
        Property target used to set the local peers list.
        Either the value must be C{None} or each element must be a C{LocalPeer}.
        @raise ValueError: If the value is not a C{LocalPeer}
        """
        if value is None:
            self._localPeers = None
        else:
            try:
                saved = self._localPeers
                self._localPeers = ObjectTypeList(LocalPeer, "LocalPeer")
                self._localPeers.extend(value)
            except Exception as e:
                self._localPeers = saved
                raise e

    def _getLocalPeers(self):
        """
        Property target used to get the local peers list.
        """
        return self._localPeers

    def _setRemotePeers(self, value):
        """
        Property target used to set the remote peers list.
        Either the value must be C{None} or each element must be a C{RemotePeer}.
        @raise ValueError: If the value is not a C{RemotePeer}
        """
        if value is None:
            self._remotePeers = None
        else:
            try:
                saved = self._remotePeers
                self._remotePeers = ObjectTypeList(RemotePeer, "RemotePeer")
                self._remotePeers.extend(value)
            except Exception as e:
                self._remotePeers = saved
                raise e

    def _getRemotePeers(self):
        """
        Property target used to get the remote peers list.
        """
        return self._remotePeers

    localPeers = property(_getLocalPeers, _setLocalPeers, None, "List of local peers.")
    remotePeers = property(_getRemotePeers, _setRemotePeers, None, "List of remote peers.")

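```python
# The config classes in this module keep a single Python 2 style __cmp__ and
# derive the rich comparisons from it, with functools.total_ordering filling
# in the remaining operators (__le__, __ge__, and so on).  A minimal
# self-contained sketch of the same pattern; Pair is a hypothetical
# two-field class used only for illustration.

from functools import total_ordering

@total_ordering
class Pair(object):
    def __init__(self, a=None, b=None):
        self.a = a
        self.b = b
    def __cmp__(self, other):
        # Compare field by field, exactly as the config classes do.
        if other is None:
            return 1
        if self.a != other.a:
            return -1 if self.a < other.a else 1
        if self.b != other.b:
            return -1 if self.b < other.b else 1
        return 0
    def __eq__(self, other):
        return self.__cmp__(other) == 0
    def __lt__(self, other):
        return self.__cmp__(other) < 0
```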

########################################################################
# CollectConfig class definition
########################################################################

@total_ordering
class CollectConfig(object):

    """
    Class representing a Cedar Backup collect configuration.

    The following restrictions exist on data in this class:

       - The target directory must be an absolute path.
       - The collect mode must be one of the values in L{VALID_COLLECT_MODES}.
       - The archive mode must be one of the values in L{VALID_ARCHIVE_MODES}.
       - The ignore file must be a non-empty string.
       - Each of the paths in C{absoluteExcludePaths} must be an absolute path
       - The collect file list must be a list of C{CollectFile} objects.
       - The collect directory list must be a list of C{CollectDir} objects.

    For the C{absoluteExcludePaths} list, validation is accomplished through the
    L{util.AbsolutePathList} list implementation that overrides common list
    methods and transparently does the absolute path validation for us.

    For the C{collectFiles} and C{collectDirs} lists, validation is accomplished
    through the L{util.ObjectTypeList} list implementation that overrides common
    list methods and transparently ensures that each element has an appropriate
    type.

    @note: Lists within this class are "unordered" for equality comparisons.

    @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, targetDir,
           collectMode, archiveMode, ignoreFile, absoluteExcludePaths,
           excludePatterns, collectFiles, collectDirs
    """

    def __init__(self, targetDir=None, collectMode=None, archiveMode=None, ignoreFile=None,
                 absoluteExcludePaths=None, excludePatterns=None, collectFiles=None,
                 collectDirs=None):
        """
        Constructor for the C{CollectConfig} class.

        @param targetDir: Directory to collect files into.
        @param collectMode: Default collect mode.
        @param archiveMode: Default archive mode for collect files.
        @param ignoreFile: Default ignore file name.
        @param absoluteExcludePaths: List of absolute paths to exclude.
        @param excludePatterns: List of regular expression patterns to exclude.
        @param collectFiles: List of collect files.
        @param collectDirs: List of collect directories.

        @raise ValueError: If one of the values is invalid.
        """
        self._targetDir = None
        self._collectMode = None
        self._archiveMode = None
        self._ignoreFile = None
        self._absoluteExcludePaths = None
        self._excludePatterns = None
        self._collectFiles = None
        self._collectDirs = None
        self.targetDir = targetDir
        self.collectMode = collectMode
        self.archiveMode = archiveMode
        self.ignoreFile = ignoreFile
        self.absoluteExcludePaths = absoluteExcludePaths
        self.excludePatterns = excludePatterns
        self.collectFiles = collectFiles
        self.collectDirs = collectDirs

    def __repr__(self):
        """
        Official string representation for class instance.
        """
        return "CollectConfig(%s, %s, %s, %s, %s, %s, %s, %s)" % (self.targetDir, self.collectMode, self.archiveMode,
                                                                  self.ignoreFile, self.absoluteExcludePaths,
                                                                  self.excludePatterns, self.collectFiles, self.collectDirs)

    def __str__(self):
        """
        Informal string representation for class instance.
        """
        return self.__repr__()

    def __eq__(self, other):
        """Equals operator, implemented in terms of original Python 2 compare operator."""
        return self.__cmp__(other) == 0

    def __lt__(self, other):
        """Less-than operator, implemented in terms of original Python 2 compare operator."""
        return self.__cmp__(other) < 0

    def __gt__(self, other):
        """Greater-than operator, implemented in terms of original Python 2 compare operator."""
        return self.__cmp__(other) > 0

    def __cmp__(self, other):
        """
        Original Python 2 comparison operator.
        Lists within this class are "unordered" for equality comparisons.
        @param other: Other object to compare to.
        @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
        """
        if other is None:
            return 1
        if self.targetDir != other.targetDir:
            if str(self.targetDir or "") < str(other.targetDir or ""):
                return -1
            else:
                return 1
        if self.collectMode != other.collectMode:
            if str(self.collectMode or "") < str(other.collectMode or ""):
                return -1
            else:
                return 1
        if self.archiveMode != other.archiveMode:
            if str(self.archiveMode or "") < str(other.archiveMode or ""):
                return -1
            else:
                return 1
        if self.ignoreFile != other.ignoreFile:
            if str(self.ignoreFile or "") < str(other.ignoreFile or ""):
                return -1
            else:
                return 1
        if self.absoluteExcludePaths != other.absoluteExcludePaths:
            if self.absoluteExcludePaths < other.absoluteExcludePaths:
                return -1
            else:
                return 1
        if self.excludePatterns != other.excludePatterns:
            if self.excludePatterns < other.excludePatterns:
                return -1
            else:
                return 1
        if self.collectFiles != other.collectFiles:
            if self.collectFiles < other.collectFiles:
                return -1
            else:
                return 1
        if self.collectDirs != other.collectDirs:
            if self.collectDirs < other.collectDirs:
                return -1
            else:
                return 1
        return 0

    def _setTargetDir(self, value):
        """
        Property target used to set the target directory.
        The value must be an absolute path if it is not C{None}.
        It does not have to exist on disk at the time of assignment.
        @raise ValueError: If the value is not an absolute path.
        @raise ValueError: If the value cannot be encoded properly.
        """
        if value is not None:
            if not os.path.isabs(value):
                raise ValueError("Target directory must be an absolute path.")
        self._targetDir = encodePath(value)

    def _getTargetDir(self):
        """
        Property target used to get the target directory.
        """
        return self._targetDir

    def _setCollectMode(self, value):
        """
        Property target used to set the collect mode.
        If not C{None}, the mode must be one of L{VALID_COLLECT_MODES}.
        @raise ValueError: If the value is not valid.
        """
        if value is not None:
            if value not in VALID_COLLECT_MODES:
                raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES)
        self._collectMode = value

    def _getCollectMode(self):
        """
        Property target used to get the collect mode.
        """
        return self._collectMode

    def _setArchiveMode(self, value):
        """
        Property target used to set the archive mode.
        If not C{None}, the mode must be one of L{VALID_ARCHIVE_MODES}.
        @raise ValueError: If the value is not valid.
        """
        if value is not None:
            if value not in VALID_ARCHIVE_MODES:
                raise ValueError("Archive mode must be one of %s." % VALID_ARCHIVE_MODES)
        self._archiveMode = value

    def _getArchiveMode(self):
        """
        Property target used to get the archive mode.
        """
        return self._archiveMode

    def _setIgnoreFile(self, value):
        """
        Property target used to set the ignore file.
        The value must be a non-empty string if it is not C{None}.
        @raise ValueError: If the value is an empty string.
        @raise ValueError: If the value cannot be encoded properly.
        """
        if value is not None:
            if len(value) < 1:
                raise ValueError("The ignore file must be a non-empty string.")
        self._ignoreFile = encodePath(value)

    def _getIgnoreFile(self):
        """
        Property target used to get the ignore file.
        """
        return self._ignoreFile

    def _setAbsoluteExcludePaths(self, value):
        """
        Property target used to set the absolute exclude paths list.
        Either the value must be C{None} or each element must be an absolute path.
        Elements do not have to exist on disk at the time of assignment.
        @raise ValueError: If the value is not an absolute path.
        """
        if value is None:
            self._absoluteExcludePaths = None
        else:
            try:
                saved = self._absoluteExcludePaths
                self._absoluteExcludePaths = AbsolutePathList()
                self._absoluteExcludePaths.extend(value)
            except Exception as e:
                self._absoluteExcludePaths = saved
                raise e

    def _getAbsoluteExcludePaths(self):
        """
        Property target used to get the absolute exclude paths list.
        """
        return self._absoluteExcludePaths

    def _setExcludePatterns(self, value):
        """
        Property target used to set the exclude patterns list.
        """
        if value is None:
            self._excludePatterns = None
        else:
            try:
                saved = self._excludePatterns
                self._excludePatterns = RegexList()
                self._excludePatterns.extend(value)
            except Exception as e:
                self._excludePatterns = saved
                raise e

    def _getExcludePatterns(self):
        """
        Property target used to get the exclude patterns list.
        """
        return self._excludePatterns

    def _setCollectFiles(self, value):
        """
        Property target used to set the collect files list.
        Either the value must be C{None} or each element must be a C{CollectFile}.
        @raise ValueError: If the value is not a C{CollectFile}
        """
        if value is None:
            self._collectFiles = None
        else:
            try:
                saved = self._collectFiles
                self._collectFiles = ObjectTypeList(CollectFile, "CollectFile")
                self._collectFiles.extend(value)
            except Exception as e:
                self._collectFiles = saved
                raise e

    def _getCollectFiles(self):
        """
        Property target used to get the collect files list.
        """
        return self._collectFiles

    def _setCollectDirs(self, value):
        """
        Property target used to set the collect dirs list.
        Either the value must be C{None} or each element must be a C{CollectDir}.
        @raise ValueError: If the value is not a C{CollectDir}
        """
        if value is None:
            self._collectDirs = None
        else:
            try:
                saved = self._collectDirs
                self._collectDirs = ObjectTypeList(CollectDir, "CollectDir")
                self._collectDirs.extend(value)
            except Exception as e:
                self._collectDirs = saved
                raise e

    def _getCollectDirs(self):
        """
        Property target used to get the collect dirs list.
        """
        return self._collectDirs

    targetDir = property(_getTargetDir, _setTargetDir, None, "Directory to collect files into.")
    collectMode = property(_getCollectMode, _setCollectMode, None, "Default collect mode.")
    archiveMode = property(_getArchiveMode, _setArchiveMode, None, "Default archive mode for collect files.")
    ignoreFile = property(_getIgnoreFile, _setIgnoreFile, None, "Default ignore file name.")
    absoluteExcludePaths = property(_getAbsoluteExcludePaths, _setAbsoluteExcludePaths, None, "List of absolute paths to exclude.")
    excludePatterns = property(_getExcludePatterns, _setExcludePatterns, None, "List of regular expression patterns to exclude.")
    collectFiles = property(_getCollectFiles, _setCollectFiles, None, "List of collect files.")
    collectDirs = property(_getCollectDirs, _setCollectDirs, None, "List of collect directories.")

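```python
# The path-valued properties in this module (targetDir, sourceDir,
# devicePath) share one validation rule: reject relative paths up front,
# but do not require the path to exist on disk yet.  A self-contained
# sketch of that check; checkTargetDir is a hypothetical helper written
# for illustration, not part of Cedar Backup.

import os

def checkTargetDir(value):
    """Return the value unchanged if it is None or an absolute path."""
    if value is not None:
        if not os.path.isabs(value):
            raise ValueError("Target directory must be an absolute path.")
    return value
```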

########################################################################
# StageConfig class definition
########################################################################

@total_ordering
class StageConfig(object):

    """
    Class representing a Cedar Backup stage configuration.

    The following restrictions exist on data in this class:

       - The target directory must be an absolute path
       - The list of local peers must contain only C{LocalPeer} objects
       - The list of remote peers must contain only C{RemotePeer} objects

    @note: Lists within this class are "unordered" for equality comparisons.

    @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, targetDir, localPeers, remotePeers
    """

    def __init__(self, targetDir=None, localPeers=None, remotePeers=None):
        """
        Constructor for the C{StageConfig} class.

        @param targetDir: Directory to stage files into, by peer name.
        @param localPeers: List of local peers.
        @param remotePeers: List of remote peers.

        @raise ValueError: If one of the values is invalid.
        """
        self._targetDir = None
        self._localPeers = None
        self._remotePeers = None
        self.targetDir = targetDir
        self.localPeers = localPeers
        self.remotePeers = remotePeers

    def __repr__(self):
        """
        Official string representation for class instance.
        """
        return "StageConfig(%s, %s, %s)" % (self.targetDir, self.localPeers, self.remotePeers)

    def __str__(self):
        """
        Informal string representation for class instance.
        """
        return self.__repr__()

    def __eq__(self, other):
        """Equals operator, implemented in terms of original Python 2 compare operator."""
        return self.__cmp__(other) == 0

    def __lt__(self, other):
        """Less-than operator, implemented in terms of original Python 2 compare operator."""
        return self.__cmp__(other) < 0

    def __gt__(self, other):
        """Greater-than operator, implemented in terms of original Python 2 compare operator."""
        return self.__cmp__(other) > 0

    def __cmp__(self, other):
        """
        Original Python 2 comparison operator.
        Lists within this class are "unordered" for equality comparisons.
        @param other: Other object to compare to.
        @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
        """
        if other is None:
            return 1
        if self.targetDir != other.targetDir:
            if str(self.targetDir or "") < str(other.targetDir or ""):
                return -1
            else:
                return 1
        if self.localPeers != other.localPeers:
            if self.localPeers < other.localPeers:
                return -1
            else:
                return 1
        if self.remotePeers != other.remotePeers:
            if self.remotePeers < other.remotePeers:
                return -1
            else:
                return 1
        return 0

    def hasPeers(self):
        """
        Indicates whether any peers are filled into this object.
        @return: Boolean true if any local or remote peers are filled in, false otherwise.
        """
        return ((self.localPeers is not None and len(self.localPeers) > 0) or
                (self.remotePeers is not None and len(self.remotePeers) > 0))

    def _setTargetDir(self, value):
        """
        Property target used to set the target directory.
        The value must be an absolute path if it is not C{None}.
        It does not have to exist on disk at the time of assignment.
        @raise ValueError: If the value is not an absolute path.
        @raise ValueError: If the value cannot be encoded properly.
        """
        if value is not None:
            if not os.path.isabs(value):
                raise ValueError("Target directory must be an absolute path.")
        self._targetDir = encodePath(value)

    def _getTargetDir(self):
        """
        Property target used to get the target directory.
        """
        return self._targetDir

    def _setLocalPeers(self, value):
        """
        Property target used to set the local peers list.
        Either the value must be C{None} or each element must be a C{LocalPeer}.
        @raise ValueError: If the value is not a C{LocalPeer}
        """
        if value is None:
            self._localPeers = None
        else:
            try:
                saved = self._localPeers
                self._localPeers = ObjectTypeList(LocalPeer, "LocalPeer")
                self._localPeers.extend(value)
            except Exception as e:
                self._localPeers = saved
                raise e

    def _getLocalPeers(self):
        """
        Property target used to get the local peers list.
        """
        return self._localPeers

    def _setRemotePeers(self, value):
        """
        Property target used to set the remote peers list.
        Either the value must be C{None} or each element must be a C{RemotePeer}.
        @raise ValueError: If the value is not a C{RemotePeer}
        """
        if value is None:
            self._remotePeers = None
        else:
            try:
                saved = self._remotePeers
                self._remotePeers = ObjectTypeList(RemotePeer, "RemotePeer")
                self._remotePeers.extend(value)
            except Exception as e:
                self._remotePeers = saved
                raise e

    def _getRemotePeers(self):
        """
        Property target used to get the remote peers list.
        """
        return self._remotePeers

    targetDir = property(_getTargetDir, _setTargetDir, None, "Directory to stage files into, by peer name.")
    localPeers = property(_getLocalPeers, _setLocalPeers, None, "List of local peers.")
    remotePeers = property(_getRemotePeers, _setRemotePeers, None, "List of remote peers.")


########################################################################
# StoreConfig class definition
########################################################################

@total_ordering
class StoreConfig(object):

    """
    Class representing a Cedar Backup store configuration.

    The following restrictions exist on data in this class:

       - The source directory must be an absolute path.
       - The media type must be one of the values in L{VALID_MEDIA_TYPES}.
       - The device type must be one of the values in L{VALID_DEVICE_TYPES}.
       - The device path must be an absolute path.
       - The SCSI id, if provided, must be in the form specified by L{validateScsiId}.
       - The drive speed must be an integer >= 1
       - The blanking behavior must be a C{BlankBehavior} object
       - The refresh media delay must be an integer >= 0
       - The eject delay must be an integer >= 0

    Note that although the blanking factor must be a positive floating point
    number, it is stored as a string.  This is done so that we can losslessly go
    back and forth between XML and object representations of configuration.

    @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, sourceDir,
           mediaType, deviceType, devicePath, deviceScsiId,
           driveSpeed, checkData, checkMedia, warnMidnite, noEject,
           blankBehavior, refreshMediaDelay, ejectDelay
    """

    def __init__(self, sourceDir=None, mediaType=None, deviceType=None,
                 devicePath=None, deviceScsiId=None, driveSpeed=None,
                 checkData=False, warnMidnite=False, noEject=False,
                 checkMedia=False, blankBehavior=None, refreshMediaDelay=None,
                 ejectDelay=None):
        """
        Constructor for the C{StoreConfig} class.

        @param sourceDir: Directory whose contents should be written to media.
        @param mediaType: Type of the media (see notes above).
        @param deviceType: Type of the device (optional, see notes above).
        @param devicePath: Filesystem device name for writer device, e.g. C{/dev/cdrw}.
        @param deviceScsiId: SCSI id for writer device, e.g. C{[<method>:]scsibus,target,lun}.
        @param driveSpeed: Speed of the drive, e.g. C{2} for a 2x drive.
        @param checkData: Whether resulting image should be validated.
        @param checkMedia: Whether media should be checked before being written to.
        @param warnMidnite: Whether to generate warnings for crossing midnite.
        @param noEject: Indicates that the writer device should not be ejected.
        @param blankBehavior: Controls optimized blanking behavior.
        @param refreshMediaDelay: Delay, in seconds, to add after refreshing media
        @param ejectDelay: Delay, in seconds, to add after ejecting media before closing the tray

        @raise ValueError: If one of the values is invalid.
        """
        self._sourceDir = None
        self._mediaType = None
        self._deviceType = None
        self._devicePath = None
        self._deviceScsiId = None
        self._driveSpeed = None
        self._checkData = None
        self._checkMedia = None
        self._warnMidnite = None
        self._noEject = None
        self._blankBehavior = None
        self._refreshMediaDelay = None
        self._ejectDelay = None
        self.sourceDir = sourceDir
        self.mediaType = mediaType
        self.deviceType = deviceType
        self.devicePath = devicePath
        self.deviceScsiId = deviceScsiId
        self.driveSpeed = driveSpeed
        self.checkData = checkData
        self.checkMedia = checkMedia
        self.warnMidnite = warnMidnite
        self.noEject = noEject
        self.blankBehavior = blankBehavior
        self.refreshMediaDelay = refreshMediaDelay
        self.ejectDelay = ejectDelay

    def __repr__(self):
        """
        Official string representation for class instance.
        """
        return "StoreConfig(%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)" % (
            self.sourceDir, self.mediaType, self.deviceType,
            self.devicePath, self.deviceScsiId, self.driveSpeed,
            self.checkData, self.warnMidnite, self.noEject,
            self.checkMedia, self.blankBehavior, self.refreshMediaDelay,
            self.ejectDelay)

    def __str__(self):
        """
        Informal string representation for class instance.
        """
        return self.__repr__()

    def __eq__(self, other):
        """Equals operator, implemented in terms of original Python 2 compare operator."""
        return self.__cmp__(other) == 0

    def __lt__(self, other):
        """Less-than operator, implemented in terms of original Python 2 compare operator."""
        return self.__cmp__(other) < 0

    def __gt__(self, other):
        """Greater-than operator, implemented in terms of original Python 2 compare operator."""
        return self.__cmp__(other) > 0

    def __cmp__(self, other):
        """
        Original Python 2 comparison operator.
        @param other: Other object to compare to.
        @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
        """
        if other is None:
            return 1
        if self.sourceDir != other.sourceDir:
            if str(self.sourceDir or "") < str(other.sourceDir or ""):
                return -1
            else:
                return 1
        if self.mediaType != other.mediaType:
            if str(self.mediaType or "") < str(other.mediaType or ""):
                return -1
            else:
                return 1
        if self.deviceType != other.deviceType:
            if str(self.deviceType or "") < str(other.deviceType or ""):
                return -1
            else:
                return 1
        if self.devicePath != other.devicePath:
            if str(self.devicePath or "") < str(other.devicePath or ""):
                return -1
            else:
                return 1
        if self.deviceScsiId != other.deviceScsiId:
            if str(self.deviceScsiId or "") < str(other.deviceScsiId or ""):
                return -1
            else:
                return 1
        if self.driveSpeed != other.driveSpeed:
            if str(self.driveSpeed or "") < str(other.driveSpeed or ""):
                return -1
            else:
                return 1
        if self.checkData != other.checkData:
            if self.checkData < other.checkData:
                return -1
            else:
                return 1
        if self.checkMedia != other.checkMedia:
            if self.checkMedia < other.checkMedia:
                return -1
            else:
                return 1
        if self.warnMidnite != other.warnMidnite:
            if self.warnMidnite < other.warnMidnite:
                return -1
            else:
                return 1
        if self.noEject != other.noEject:
            if self.noEject < other.noEject:
                return -1
            else:
                return 1
        if self.blankBehavior != other.blankBehavior:
            if str(self.blankBehavior or "") < str(other.blankBehavior or ""):
                return -1
            else:
                return 1
        if self.refreshMediaDelay != other.refreshMediaDelay:
            if int(self.refreshMediaDelay or 0) < int(other.refreshMediaDelay or 0):
                return -1
            else:
                return 1
        if self.ejectDelay != other.ejectDelay:
            if int(self.ejectDelay or 0) < int(other.ejectDelay or 0):
                return -1
            else:
                return 1
        return 0

    def _setSourceDir(self, value):
        """
        Property target used to set the source directory.
        The value must be an absolute path if it is not C{None}.
        It does not have to exist on disk at the time of assignment.
        @raise ValueError: If the value is not an absolute path.
        @raise ValueError: If the value cannot be encoded properly.
        """
        if value is not None:
            if not os.path.isabs(value):
                raise ValueError("Source directory must be an absolute path.")
        self._sourceDir = encodePath(value)

    def _getSourceDir(self):
        """
        Property target used to get the source directory.
        """
        return self._sourceDir

    def _setMediaType(self, value):
        """
        Property target used to set the media type.
        The value must be one of L{VALID_MEDIA_TYPES}.
        @raise ValueError: If the value is not valid.
        """
        if value is not None:
            if value not in VALID_MEDIA_TYPES:
                raise ValueError("Media type must be one of %s." % VALID_MEDIA_TYPES)
        self._mediaType = value

    def _getMediaType(self):
        """
        Property target used to get the media type.
        """
        return self._mediaType

    def _setDeviceType(self, value):
        """
        Property target used to set the device type.
        The value must be one of L{VALID_DEVICE_TYPES}.
        @raise ValueError: If the value is not valid.
        """
        if value is not None:
            if value not in VALID_DEVICE_TYPES:
                raise ValueError("Device type must be one of %s." % VALID_DEVICE_TYPES)
        self._deviceType = value

    def _getDeviceType(self):
        """
        Property target used to get the device type.
        """
        return self._deviceType

    def _setDevicePath(self, value):
        """
        Property target used to set the device path.
        The value must be an absolute path if it is not C{None}.
        It does not have to exist on disk at the time of assignment.
        @raise ValueError: If the value is not an absolute path.
        @raise ValueError: If the value cannot be encoded properly.
        """
        if value is not None:
            if not os.path.isabs(value):
                raise ValueError("Device path must be an absolute path.")
        self._devicePath = encodePath(value)

    def _getDevicePath(self):
        """
        Property target used to get the device path.
        """
        return self._devicePath

    def _setDeviceScsiId(self, value):
        """
        Property target used to set the SCSI id.
        The SCSI id must be valid per L{validateScsiId}.
        @raise ValueError: If the value is not valid.
        """
        if value is None:
            self._deviceScsiId = None
        else:
            self._deviceScsiId = validateScsiId(value)

    def _getDeviceScsiId(self):
        """
        Property target used to get the SCSI id.
        """
        return self._deviceScsiId

    def _setDriveSpeed(self, value):
        """
        Property target used to set the drive speed.
        The drive speed must be valid per L{validateDriveSpeed}.
        @raise ValueError: If the value is not valid.
        """
        self._driveSpeed = validateDriveSpeed(value)
    3912
    3913 - def _getDriveSpeed(self):
    3914 """ 3915 Property target used to get the drive speed. 3916 """ 3917 return self._driveSpeed
    3918
    3919 - def _setCheckData(self, value):
    3920 """ 3921 Property target used to set the check data flag. 3922 No validations, but we normalize the value to C{True} or C{False}. 3923 """ 3924 if value: 3925 self._checkData = True 3926 else: 3927 self._checkData = False
    3928
    3929 - def _getCheckData(self):
    3930 """ 3931 Property target used to get the check data flag. 3932 """ 3933 return self._checkData
    3934
    3935 - def _setCheckMedia(self, value):
    3936 """ 3937 Property target used to set the check media flag. 3938 No validations, but we normalize the value to C{True} or C{False}. 3939 """ 3940 if value: 3941 self._checkMedia = True 3942 else: 3943 self._checkMedia = False
    3944
    3945 - def _getCheckMedia(self):
    3946 """ 3947 Property target used to get the check media flag. 3948 """ 3949 return self._checkMedia
    3950
    3951 - def _setWarnMidnite(self, value):
    3952 """ 3953 Property target used to set the midnite warning flag. 3954 No validations, but we normalize the value to C{True} or C{False}. 3955 """ 3956 if value: 3957 self._warnMidnite = True 3958 else: 3959 self._warnMidnite = False
    3960
    3961 - def _getWarnMidnite(self):
    3962 """ 3963 Property target used to get the midnite warning flag. 3964 """ 3965 return self._warnMidnite
    3966
    3967 - def _setNoEject(self, value):
    3968 """ 3969 Property target used to set the no-eject flag. 3970 No validations, but we normalize the value to C{True} or C{False}. 3971 """ 3972 if value: 3973 self._noEject = True 3974 else: 3975 self._noEject = False
    3976
    3977 - def _getNoEject(self):
    3978 """ 3979 Property target used to get the no-eject flag. 3980 """ 3981 return self._noEject
    3982
    3983 - def _setBlankBehavior(self, value):
    3984 """ 3985 Property target used to set blanking behavior configuration. 3986 If not C{None}, the value must be a C{BlankBehavior} object. 3987 @raise ValueError: If the value is not a C{BlankBehavior} 3988 """ 3989 if value is None: 3990 self._blankBehavior = None 3991 else: 3992 if not isinstance(value, BlankBehavior): 3993 raise ValueError("Value must be a C{BlankBehavior} object.") 3994 self._blankBehavior = value
    3995
    3996 - def _getBlankBehavior(self):
    3997 """ 3998 Property target used to get the blanking behavior configuration. 3999 """ 4000 return self._blankBehavior
    4001
    4002 - def _setRefreshMediaDelay(self, value):
    4003 """ 4004 Property target used to set the refreshMediaDelay. 4005 The value must be an integer >= 0. 4006 @raise ValueError: If the value is not valid. 4007 """ 4008 if value is None: 4009 self._refreshMediaDelay = None 4010 else: 4011 try: 4012 value = int(value) 4013 except TypeError: 4014 raise ValueError("Action refreshMediaDelay value must be an integer >= 0.") 4015 if value < 0: 4016 raise ValueError("Action refreshMediaDelay value must be an integer >= 0.") 4017 if value == 0: 4018 value = None # normalize this out, since it's the default 4019 self._refreshMediaDelay = value
    4020
    4021 - def _getRefreshMediaDelay(self):
    4022 """ 4023 Property target used to get the action refreshMediaDelay. 4024 """ 4025 return self._refreshMediaDelay
    4026
    4027 - def _setEjectDelay(self, value):
    4028 """ 4029 Property target used to set the ejectDelay. 4030 The value must be an integer >= 0. 4031 @raise ValueError: If the value is not valid. 4032 """ 4033 if value is None: 4034 self._ejectDelay = None 4035 else: 4036 try: 4037 value = int(value) 4038 except TypeError: 4039 raise ValueError("Action ejectDelay value must be an integer >= 0.") 4040 if value < 0: 4041 raise ValueError("Action ejectDelay value must be an integer >= 0.") 4042 if value == 0: 4043 value = None # normalize this out, since it's the default 4044 self._ejectDelay = value
    4045
    4046 - def _getEjectDelay(self):
    4047 """ 4048 Property target used to get the action ejectDelay. 4049 """ 4050 return self._ejectDelay
    4051 4052 sourceDir = property(_getSourceDir, _setSourceDir, None, "Directory whose contents should be written to media.") 4053 mediaType = property(_getMediaType, _setMediaType, None, "Type of the media (see notes above).") 4054 deviceType = property(_getDeviceType, _setDeviceType, None, "Type of the device (optional, see notes above).") 4055 devicePath = property(_getDevicePath, _setDevicePath, None, "Filesystem device name for writer device.") 4056 deviceScsiId = property(_getDeviceScsiId, _setDeviceScsiId, None, "SCSI id for writer device (optional, see notes above).") 4057 driveSpeed = property(_getDriveSpeed, _setDriveSpeed, None, "Speed of the drive.") 4058 checkData = property(_getCheckData, _setCheckData, None, "Whether resulting image should be validated.") 4059 checkMedia = property(_getCheckMedia, _setCheckMedia, None, "Whether media should be checked before being written to.") 4060 warnMidnite = property(_getWarnMidnite, _setWarnMidnite, None, "Whether to generate warnings for crossing midnite.") 4061 noEject = property(_getNoEject, _setNoEject, None, "Indicates that the writer device should not be ejected.") 4062 blankBehavior = property(_getBlankBehavior, _setBlankBehavior, None, "Controls optimized blanking behavior.") 4063 refreshMediaDelay = property(_getRefreshMediaDelay, _setRefreshMediaDelay, None, "Delay, in seconds, to add after refreshing media.") 4064 ejectDelay = property(_getEjectDelay, _setEjectDelay, None, "Delay, in seconds, to add after ejecting media before closing the tray")
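Every field above follows the same idiom: a private attribute, a validating _setX target, a trivial _getX target, and a class-level property() tying them together. A minimal self-contained sketch of that idiom (PathHolder is a hypothetical class; the real setter also runs the value through encodePath(), omitted here):

```python
import os

class PathHolder(object):
    """Hypothetical class showing the _set/_get/property idiom used above."""

    def __init__(self, sourceDir=None):
        self._sourceDir = None
        self.sourceDir = sourceDir  # routes through the property setter below

    def _setSourceDir(self, value):
        # Absolute-path check mirrors _setSourceDir above; encodePath() omitted
        if value is not None and not os.path.isabs(value):
            raise ValueError("Source directory must be an absolute path.")
        self._sourceDir = value

    def _getSourceDir(self):
        return self._sourceDir

    sourceDir = property(_getSourceDir, _setSourceDir, None,
                         "Directory whose contents should be written to media.")
```

Because the constructor assigns via the property rather than the private attribute, validation applies uniformly whether the value arrives at construction time or later.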
    4065
    4066 4067 ######################################################################## 4068 # PurgeConfig class definition 4069 ######################################################################## 4070 4071 @total_ordering 4072 -class PurgeConfig(object):
    4073 4074 """ 4075 Class representing a Cedar Backup purge configuration. 4076 4077 The following restrictions exist on data in this class: 4078 4079 - The purge directory list must be a list of C{PurgeDir} objects. 4080 4081 For the C{purgeDirs} list, validation is accomplished through the 4082 L{util.ObjectTypeList} list implementation that overrides common list 4083 methods and transparently ensures that each element is a C{PurgeDir}. 4084 4085 @note: Lists within this class are "unordered" for equality comparisons. 4086 4087 @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, purgeDirs 4088 """ 4089
    4090 - def __init__(self, purgeDirs=None):
    4091 """ 4092 Constructor for the C{Purge} class. 4093 @param purgeDirs: List of purge directories. 4094 @raise ValueError: If one of the values is invalid. 4095 """ 4096 self._purgeDirs = None 4097 self.purgeDirs = purgeDirs
    4098
    4099 - def __repr__(self):
    4100 """ 4101 Official string representation for class instance. 4102 """ 4103 return "PurgeConfig(%s)" % self.purgeDirs
    4104
    4105 - def __str__(self):
    4106 """ 4107 Informal string representation for class instance. 4108 """ 4109 return self.__repr__()
    4110
    4111 - def __eq__(self, other):
    4112 """Equals operator, implemented in terms of original Python 2 compare operator.""" 4113 return self.__cmp__(other) == 0
    4114
    4115 - def __lt__(self, other):
    4116 """Less-than operator, implemented in terms of original Python 2 compare operator.""" 4117 return self.__cmp__(other) < 0
    4118
    4119 - def __gt__(self, other):
    4120 """Greater-than operator, implemented in terms of original Python 2 compare operator.""" 4121 return self.__cmp__(other) > 0
    4122
    4123 - def __cmp__(self, other):
    4124 """ 4125 Original Python 2 comparison operator. 4126 Lists within this class are "unordered" for equality comparisons. 4127 @param other: Other object to compare to. 4128 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 4129 """ 4130 if other is None: 4131 return 1 4132 if self.purgeDirs != other.purgeDirs: 4133 if self.purgeDirs < other.purgeDirs: 4134 return -1 4135 else: 4136 return 1 4137 return 0
    4138
    4139 - def _setPurgeDirs(self, value):
    4140 """ 4141 Property target used to set the purge dirs list. 4142 Either the value must be C{None} or each element must be a C{PurgeDir}. 4143 @raise ValueError: If the value is not a C{PurgeDir} 4144 """ 4145 if value is None: 4146 self._purgeDirs = None 4147 else: 4148 try: 4149 saved = self._purgeDirs 4150 self._purgeDirs = ObjectTypeList(PurgeDir, "PurgeDir") 4151 self._purgeDirs.extend(value) 4152 except Exception as e: 4153 self._purgeDirs = saved 4154 raise e
    4155
    4156 - def _getPurgeDirs(self):
    4157 """ 4158 Property target used to get the purge dirs list. 4159 """ 4160 return self._purgeDirs
    4161 4162 purgeDirs = property(_getPurgeDirs, _setPurgeDirs, None, "List of directories to purge.")
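L{util.ObjectTypeList} is what guarantees that every element assigned to purgeDirs is a PurgeDir. A simplified stand-in showing the idea (this sketch is an assumption about its behavior, not the real implementation):

```python
class TypedList(list):
    """Simplified stand-in for util.ObjectTypeList: a list subclass that
    rejects elements which are not instances of the configured type."""

    def __init__(self, objectType, objectName):
        super(TypedList, self).__init__()
        self._objectType = objectType
        self._objectName = objectName

    def append(self, item):
        if not isinstance(item, self._objectType):
            raise ValueError("Item must be a %s object." % self._objectName)
        super(TypedList, self).append(item)

    def extend(self, items):
        # Route through append() so every element is type-checked
        for item in items:
            self.append(item)
```

This is why _setPurgeDirs can simply extend() the new list and rely on a ValueError surfacing if any element has the wrong type, restoring the saved list on failure.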
    4163
    4164 4165 ######################################################################## 4166 # Config class definition 4167 ######################################################################## 4168 4169 @total_ordering 4170 -class Config(object):
    4171 4172 ###################### 4173 # Class documentation 4174 ###################### 4175 4176 """ 4177 Class representing a Cedar Backup XML configuration document. 4178 4179 The C{Config} class is a Python object representation of a Cedar Backup XML 4180 configuration file. It is intended to be the only Python-language interface 4181 to Cedar Backup configuration on disk for both Cedar Backup itself and for 4182 external applications. 4183 4184 The object representation is two-way: XML data can be used to create a 4185 C{Config} object, and then changes to the object can be propagated back to 4186 disk. A C{Config} object can even be used to create a configuration file 4187 from scratch programmatically. 4188 4189 This class and the classes it is composed from often use Python's 4190 C{property} construct to validate input and limit access to values. Some 4191 validations can only be done once a document is considered "complete" 4192 (see module notes for more details). 4193 4194 Assignments to the various instance variables must match the expected 4195 type, i.e. C{reference} must be a C{ReferenceConfig}. The internal check 4196 uses the built-in C{isinstance} function, so it should be OK to use 4197 subclasses if you want to. 4198 4199 If an instance variable is not set, its value will be C{None}. When an 4200 object is initialized without using an XML document, all of the values 4201 will be C{None}. Even when an object is initialized using XML, some of 4202 the values might be C{None} because not every section is required. 4203 4204 @note: Lists within this class are "unordered" for equality comparisons. 
4205 4206 @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, extractXml, validate, 4207 reference, extensions, options, collect, stage, store, purge, 4208 _getReference, _setReference, _getExtensions, _setExtensions, 4209 _getOptions, _setOptions, _getPeers, _setPeers, _getCollect, 4210 _setCollect, _getStage, _setStage, _getStore, _setStore, 4211 _getPurge, _setPurge 4212 """ 4213 4214 ############## 4215 # Constructor 4216 ############## 4217
    4218 - def __init__(self, xmlData=None, xmlPath=None, validate=True):
    4219 """ 4220 Initializes a configuration object. 4221 4222 If you initialize the object without passing either C{xmlData} or 4223 C{xmlPath}, then configuration will be empty and will be invalid until it 4224 is filled in properly. 4225 4226 No reference to the original XML data or original path is saved off by 4227 this class. Once the data has been parsed (successfully or not) this 4228 original information is discarded. 4229 4230 Unless the C{validate} argument is C{False}, the L{Config.validate} 4231 method will be called (with its default arguments) against configuration 4232 after successfully parsing any passed-in XML. Keep in mind that even if 4233 C{validate} is C{False}, it might not be possible to parse the passed-in 4234 XML document if lower-level validations fail. 4235 4236 @note: It is strongly suggested that the C{validate} option always be set 4237 to C{True} (the default) unless there is a specific need to read in 4238 invalid configuration from disk. 4239 4240 @param xmlData: XML data representing configuration. 4241 @type xmlData: String data. 4242 4243 @param xmlPath: Path to an XML file on disk. 4244 @type xmlPath: Absolute path to a file on disk. 4245 4246 @param validate: Validate the document after parsing it. 4247 @type validate: Boolean true/false. 4248 4249 @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. 4250 @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. 4251 @raise ValueError: If the parsed configuration document is not valid. 
4252 """ 4253 self._reference = None 4254 self._extensions = None 4255 self._options = None 4256 self._peers = None 4257 self._collect = None 4258 self._stage = None 4259 self._store = None 4260 self._purge = None 4261 self.reference = None 4262 self.extensions = None 4263 self.options = None 4264 self.peers = None 4265 self.collect = None 4266 self.stage = None 4267 self.store = None 4268 self.purge = None 4269 if xmlData is not None and xmlPath is not None: 4270 raise ValueError("Use either xmlData or xmlPath, but not both.") 4271 if xmlData is not None: 4272 self._parseXmlData(xmlData) 4273 if validate: 4274 self.validate() 4275 elif xmlPath is not None: 4276 with open(xmlPath) as f: 4277 xmlData = f.read() 4278 self._parseXmlData(xmlData) 4279 if validate: 4280 self.validate()
    4281 4282 4283 ######################### 4284 # String representations 4285 ######################### 4286
    4287 - def __repr__(self):
    4288 """ 4289 Official string representation for class instance. 4290 """ 4291 return "Config(%s, %s, %s, %s, %s, %s, %s, %s)" % (self.reference, self.extensions, self.options, 4292 self.peers, self.collect, self.stage, self.store, 4293 self.purge)
    4294
    4295 - def __str__(self):
    4296 """ 4297 Informal string representation for class instance. 4298 """ 4299 return self.__repr__()
    4300 4301 4302 ############################# 4303 # Standard comparison method 4304 ############################# 4305
    4306 - def __eq__(self, other):
    4307 """Equals operator, implemented in terms of original Python 2 compare operator.""" 4308 return self.__cmp__(other) == 0
    4309
    4310 - def __lt__(self, other):
    4311 """Less-than operator, implemented in terms of original Python 2 compare operator.""" 4312 return self.__cmp__(other) < 0
    4313
    4314 - def __gt__(self, other):
    4315 """Greater-than operator, implemented in terms of original Python 2 compare operator.""" 4316 return self.__cmp__(other) > 0
    4317
    4318 - def __cmp__(self, other):
    4319 """ 4320 Original Python 2 comparison operator. 4321 Lists within this class are "unordered" for equality comparisons. 4322 @param other: Other object to compare to. 4323 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 4324 """ 4325 if other is None: 4326 return 1 4327 if self.reference != other.reference: 4328 if self.reference < other.reference: 4329 return -1 4330 else: 4331 return 1 4332 if self.extensions != other.extensions: 4333 if self.extensions < other.extensions: 4334 return -1 4335 else: 4336 return 1 4337 if self.options != other.options: 4338 if self.options < other.options: 4339 return -1 4340 else: 4341 return 1 4342 if self.peers != other.peers: 4343 if self.peers < other.peers: 4344 return -1 4345 else: 4346 return 1 4347 if self.collect != other.collect: 4348 if self.collect < other.collect: 4349 return -1 4350 else: 4351 return 1 4352 if self.stage != other.stage: 4353 if self.stage < other.stage: 4354 return -1 4355 else: 4356 return 1 4357 if self.store != other.store: 4358 if self.store < other.store: 4359 return -1 4360 else: 4361 return 1 4362 if self.purge != other.purge: 4363 if self.purge < other.purge: 4364 return -1 4365 else: 4366 return 1 4367 return 0
    4368 4369 4370 ############# 4371 # Properties 4372 ############# 4373
    4374 - def _setReference(self, value):
    4375 """ 4376 Property target used to set the reference configuration value. 4377 If not C{None}, the value must be a C{ReferenceConfig} object. 4378 @raise ValueError: If the value is not a C{ReferenceConfig} 4379 """ 4380 if value is None: 4381 self._reference = None 4382 else: 4383 if not isinstance(value, ReferenceConfig): 4384 raise ValueError("Value must be a C{ReferenceConfig} object.") 4385 self._reference = value
    4386
    4387 - def _getReference(self):
    4388 """ 4389 Property target used to get the reference configuration value. 4390 """ 4391 return self._reference
    4392
    4393 - def _setExtensions(self, value):
    4394 """ 4395 Property target used to set the extensions configuration value. 4396 If not C{None}, the value must be a C{ExtensionsConfig} object. 4397 @raise ValueError: If the value is not a C{ExtensionsConfig} 4398 """ 4399 if value is None: 4400 self._extensions = None 4401 else: 4402 if not isinstance(value, ExtensionsConfig): 4403 raise ValueError("Value must be a C{ExtensionsConfig} object.") 4404 self._extensions = value
    4405
    4406 - def _getExtensions(self):
    4407 """ 4408 Property target used to get the extensions configuration value. 4409 """ 4410 return self._extensions
    4411
    4412 - def _setOptions(self, value):
    4413 """ 4414 Property target used to set the options configuration value. 4415 If not C{None}, the value must be an C{OptionsConfig} object. 4416 @raise ValueError: If the value is not a C{OptionsConfig} 4417 """ 4418 if value is None: 4419 self._options = None 4420 else: 4421 if not isinstance(value, OptionsConfig): 4422 raise ValueError("Value must be a C{OptionsConfig} object.") 4423 self._options = value
    4424
    4425 - def _getOptions(self):
    4426 """ 4427 Property target used to get the options configuration value. 4428 """ 4429 return self._options
    4430
    4431 - def _setPeers(self, value):
    4432 """ 4433 Property target used to set the peers configuration value. 4434 If not C{None}, the value must be an C{PeersConfig} object. 4435 @raise ValueError: If the value is not a C{PeersConfig} 4436 """ 4437 if value is None: 4438 self._peers = None 4439 else: 4440 if not isinstance(value, PeersConfig): 4441 raise ValueError("Value must be a C{PeersConfig} object.") 4442 self._peers = value
    4443
    4444 - def _getPeers(self):
    4445 """ 4446 Property target used to get the peers configuration value. 4447 """ 4448 return self._peers
    4449
    4450 - def _setCollect(self, value):
    4451 """ 4452 Property target used to set the collect configuration value. 4453 If not C{None}, the value must be a C{CollectConfig} object. 4454 @raise ValueError: If the value is not a C{CollectConfig} 4455 """ 4456 if value is None: 4457 self._collect = None 4458 else: 4459 if not isinstance(value, CollectConfig): 4460 raise ValueError("Value must be a C{CollectConfig} object.") 4461 self._collect = value
    4462
    4463 - def _getCollect(self):
    4464 """ 4465 Property target used to get the collect configuration value. 4466 """ 4467 return self._collect
    4468
    4469 - def _setStage(self, value):
    4470 """ 4471 Property target used to set the stage configuration value. 4472 If not C{None}, the value must be a C{StageConfig} object. 4473 @raise ValueError: If the value is not a C{StageConfig} 4474 """ 4475 if value is None: 4476 self._stage = None 4477 else: 4478 if not isinstance(value, StageConfig): 4479 raise ValueError("Value must be a C{StageConfig} object.") 4480 self._stage = value
    4481
    4482 - def _getStage(self):
    4483 """ 4484 Property target used to get the stage configuration value. 4485 """ 4486 return self._stage
    4487
    4488 - def _setStore(self, value):
    4489 """ 4490 Property target used to set the store configuration value. 4491 If not C{None}, the value must be a C{StoreConfig} object. 4492 @raise ValueError: If the value is not a C{StoreConfig} 4493 """ 4494 if value is None: 4495 self._store = None 4496 else: 4497 if not isinstance(value, StoreConfig): 4498 raise ValueError("Value must be a C{StoreConfig} object.") 4499 self._store = value
    4500
    4501 - def _getStore(self):
    4502 """ 4503 Property target used to get the store configuration value. 4504 """ 4505 return self._store
    4506
    4507 - def _setPurge(self, value):
    4508 """ 4509 Property target used to set the purge configuration value. 4510 If not C{None}, the value must be a C{PurgeConfig} object. 4511 @raise ValueError: If the value is not a C{PurgeConfig} 4512 """ 4513 if value is None: 4514 self._purge = None 4515 else: 4516 if not isinstance(value, PurgeConfig): 4517 raise ValueError("Value must be a C{PurgeConfig} object.") 4518 self._purge = value
    4519
    4520 - def _getPurge(self):
    4521 """ 4522 Property target used to get the purge configuration value. 4523 """ 4524 return self._purge
    4525 4526 reference = property(_getReference, _setReference, None, "Reference configuration in terms of a C{ReferenceConfig} object.") 4527 extensions = property(_getExtensions, _setExtensions, None, "Extensions configuration in terms of a C{ExtensionsConfig} object.") 4528 options = property(_getOptions, _setOptions, None, "Options configuration in terms of a C{OptionsConfig} object.") 4529 peers = property(_getPeers, _setPeers, None, "Peers configuration in terms of a C{PeersConfig} object.") 4530 collect = property(_getCollect, _setCollect, None, "Collect configuration in terms of a C{CollectConfig} object.") 4531 stage = property(_getStage, _setStage, None, "Stage configuration in terms of a C{StageConfig} object.") 4532 store = property(_getStore, _setStore, None, "Store configuration in terms of a C{StoreConfig} object.") 4533 purge = property(_getPurge, _setPurge, None, "Purge configuration in terms of a C{PurgeConfig} object.") 4534 4535 4536 ################# 4537 # Public methods 4538 ################# 4539
    4540 - def extractXml(self, xmlPath=None, validate=True):
    4541 """ 4542 Extracts configuration into an XML document. 4543 4544 If C{xmlPath} is not provided, then the XML document will be returned as 4545 a string. If C{xmlPath} is provided, then the XML document will be written 4546 to the file and C{None} will be returned. 4547 4548 Unless the C{validate} parameter is C{False}, the L{Config.validate} 4549 method will be called (with its default arguments) against the 4550 configuration before extracting the XML. If configuration is not valid, 4551 then an XML document will not be extracted. 4552 4553 @note: It is strongly suggested that the C{validate} option always be set 4554 to C{True} (the default) unless there is a specific need to write an 4555 invalid configuration file to disk. 4556 4557 @param xmlPath: Path to an XML file to create on disk. 4558 @type xmlPath: Absolute path to a file. 4559 4560 @param validate: Validate the document before extracting it. 4561 @type validate: Boolean true/false. 4562 4563 @return: XML string data or C{None} as described above. 4564 4565 @raise ValueError: If configuration within the object is not valid. 4566 @raise IOError: If there is an error writing to the file. 4567 @raise OSError: If there is an error writing to the file. 4568 """ 4569 if validate: 4570 self.validate() 4571 xmlData = self._extractXml() 4572 if xmlPath is not None: 4573 with open(xmlPath, "w") as f: 4574 f.write(xmlData) 4575 return None 4576 else: 4577 return xmlData
    4578
    4579 - def validate(self, requireOneAction=True, requireReference=False, requireExtensions=False, requireOptions=True, 4580 requireCollect=False, requireStage=False, requireStore=False, requirePurge=False, requirePeers=False):
    4581 """ 4582 Validates configuration represented by the object. 4583 4584 This method encapsulates all of the validations that should apply to a 4585 fully "complete" document but are not already taken care of by earlier 4586 validations. It also provides some extra convenience functionality which 4587 might be useful to some people. The process of validation is laid out in 4588 the I{Validation} section in the class notes (above). 4589 4590 @param requireOneAction: Require at least one of the collect, stage, store or purge sections. 4591 @param requireReference: Require the reference section. 4592 @param requireExtensions: Require the extensions section. 4593 @param requireOptions: Require the options section. 4594 @param requirePeers: Require the peers section. 4595 @param requireCollect: Require the collect section. 4596 @param requireStage: Require the stage section. 4597 @param requireStore: Require the store section. 4598 @param requirePurge: Require the purge section. 4599 4600 @raise ValueError: If one of the validations fails. 
4601 """ 4602 if requireOneAction and (self.collect, self.stage, self.store, self.purge) == (None, None, None, None): 4603 raise ValueError("At least one of the collect, stage, store and purge sections is required.") 4604 if requireReference and self.reference is None: 4605 raise ValueError("The reference is section is required.") 4606 if requireExtensions and self.extensions is None: 4607 raise ValueError("The extensions is section is required.") 4608 if requireOptions and self.options is None: 4609 raise ValueError("The options is section is required.") 4610 if requirePeers and self.peers is None: 4611 raise ValueError("The peers is section is required.") 4612 if requireCollect and self.collect is None: 4613 raise ValueError("The collect is section is required.") 4614 if requireStage and self.stage is None: 4615 raise ValueError("The stage is section is required.") 4616 if requireStore and self.store is None: 4617 raise ValueError("The store is section is required.") 4618 if requirePurge and self.purge is None: 4619 raise ValueError("The purge is section is required.") 4620 self._validateContents()
    4621 4622 4623 ##################################### 4624 # High-level methods for parsing XML 4625 ##################################### 4626
    4627 - def _parseXmlData(self, xmlData):
    4628 """ 4629 Internal method to parse an XML string into the object. 4630 4631 This method parses the XML document into a DOM tree (C{xmlDom}) and then 4632 calls individual static methods to parse each of the individual 4633 configuration sections. 4634 4635 Most of the validation we do here has to do with whether the document can 4636 be parsed and whether any values which exist are valid. We don't do much 4637 validation as to whether required elements actually exist unless we have 4638 to to make sense of the document (instead, that's the job of the 4639 L{validate} method). 4640 4641 @param xmlData: XML data to be parsed 4642 @type xmlData: String data 4643 4644 @raise ValueError: If the XML cannot be successfully parsed. 4645 """ 4646 (xmlDom, parentNode) = createInputDom(xmlData) 4647 self._reference = Config._parseReference(parentNode) 4648 self._extensions = Config._parseExtensions(parentNode) 4649 self._options = Config._parseOptions(parentNode) 4650 self._peers = Config._parsePeers(parentNode) 4651 self._collect = Config._parseCollect(parentNode) 4652 self._stage = Config._parseStage(parentNode) 4653 self._store = Config._parseStore(parentNode) 4654 self._purge = Config._parsePurge(parentNode)
    4655 4656 @staticmethod
    4657 - def _parseReference(parentNode):
    4658 """ 4659 Parses a reference configuration section. 4660 4661 We read the following fields:: 4662 4663 author //cb_config/reference/author 4664 revision //cb_config/reference/revision 4665 description //cb_config/reference/description 4666 generator //cb_config/reference/generator 4667 4668 @param parentNode: Parent node to search beneath. 4669 4670 @return: C{ReferenceConfig} object or C{None} if the section does not exist. 4671 @raise ValueError: If some filled-in value is invalid. 4672 """ 4673 reference = None 4674 sectionNode = readFirstChild(parentNode, "reference") 4675 if sectionNode is not None: 4676 reference = ReferenceConfig() 4677 reference.author = readString(sectionNode, "author") 4678 reference.revision = readString(sectionNode, "revision") 4679 reference.description = readString(sectionNode, "description") 4680 reference.generator = readString(sectionNode, "generator") 4681 return reference
    4682 4683 @staticmethod
    4684 - def _parseExtensions(parentNode):
    4685 """ 4686 Parses an extensions configuration section. 4687 4688 We read the following fields:: 4689 4690 orderMode //cb_config/extensions/order_mode 4691 4692 We also read groups of the following items, one list element per item:: 4693 4694 name //cb_config/extensions/action/name 4695 module //cb_config/extensions/action/module 4696 function //cb_config/extensions/action/function 4697 index //cb_config/extensions/action/index 4698 dependencies //cb_config/extensions/action/depends 4699 4700 The extended actions are parsed by L{_parseExtendedActions}. 4701 4702 @param parentNode: Parent node to search beneath. 4703 4704 @return: C{ExtensionsConfig} object or C{None} if the section does not exist. 4705 @raise ValueError: If some filled-in value is invalid. 4706 """ 4707 extensions = None 4708 sectionNode = readFirstChild(parentNode, "extensions") 4709 if sectionNode is not None: 4710 extensions = ExtensionsConfig() 4711 extensions.orderMode = readString(sectionNode, "order_mode") 4712 extensions.actions = Config._parseExtendedActions(sectionNode) 4713 return extensions
   @staticmethod
   def _parseOptions(parentNode):
      """
      Parses an options configuration section.

      We read the following fields::

         startingDay    //cb_config/options/starting_day
         workingDir     //cb_config/options/working_dir
         backupUser     //cb_config/options/backup_user
         backupGroup    //cb_config/options/backup_group
         rcpCommand     //cb_config/options/rcp_command
         rshCommand     //cb_config/options/rsh_command
         cbackCommand   //cb_config/options/cback_command
         managedActions //cb_config/options/managed_actions

      The list of managed actions is a comma-separated list of action names.

      We also read groups of the following items, one list element per item::

         overrides      //cb_config/options/override
         hooks          //cb_config/options/hook

      The overrides are parsed by L{_parseOverrides} and the hooks are parsed
      by L{_parseHooks}.

      @param parentNode: Parent node to search beneath.

      @return: C{OptionsConfig} object or C{None} if the section does not exist.
      @raise ValueError: If some filled-in value is invalid.
      """
      options = None
      sectionNode = readFirstChild(parentNode, "options")
      if sectionNode is not None:
         options = OptionsConfig()
         options.startingDay = readString(sectionNode, "starting_day")
         options.workingDir = readString(sectionNode, "working_dir")
         options.backupUser = readString(sectionNode, "backup_user")
         options.backupGroup = readString(sectionNode, "backup_group")
         options.rcpCommand = readString(sectionNode, "rcp_command")
         options.rshCommand = readString(sectionNode, "rsh_command")
         options.cbackCommand = readString(sectionNode, "cback_command")
         options.overrides = Config._parseOverrides(sectionNode)
         options.hooks = Config._parseHooks(sectionNode)
         managedActions = readString(sectionNode, "managed_actions")
         options.managedActions = parseCommaSeparatedString(managedActions)
      return options
   @staticmethod
   def _parsePeers(parentNode):
      """
      Parses a peers configuration section.

      We read groups of the following items, one list element per item::

         localPeers     //cb_config/peers/peer
         remotePeers    //cb_config/peers/peer

      The individual peer entries are parsed by L{_parsePeerList}.

      @param parentNode: Parent node to search beneath.

      @return: C{PeersConfig} object or C{None} if the section does not exist.
      @raise ValueError: If some filled-in value is invalid.
      """
      peers = None
      sectionNode = readFirstChild(parentNode, "peers")
      if sectionNode is not None:
         peers = PeersConfig()
         (peers.localPeers, peers.remotePeers) = Config._parsePeerList(sectionNode)
      return peers
   @staticmethod
   def _parseCollect(parentNode):
      """
      Parses a collect configuration section.

      We read the following individual fields::

         targetDir      //cb_config/collect/collect_dir
         collectMode    //cb_config/collect/collect_mode
         archiveMode    //cb_config/collect/archive_mode
         ignoreFile     //cb_config/collect/ignore_file

      We also read groups of the following items, one list element per item::

         absoluteExcludePaths    //cb_config/collect/exclude/abs_path
         excludePatterns         //cb_config/collect/exclude/pattern
         collectFiles            //cb_config/collect/file
         collectDirs             //cb_config/collect/dir

      The exclusions are parsed by L{_parseExclusions}, the collect files are
      parsed by L{_parseCollectFiles}, and the directories are parsed by
      L{_parseCollectDirs}.

      @param parentNode: Parent node to search beneath.

      @return: C{CollectConfig} object or C{None} if the section does not exist.
      @raise ValueError: If some filled-in value is invalid.
      """
      collect = None
      sectionNode = readFirstChild(parentNode, "collect")
      if sectionNode is not None:
         collect = CollectConfig()
         collect.targetDir = readString(sectionNode, "collect_dir")
         collect.collectMode = readString(sectionNode, "collect_mode")
         collect.archiveMode = readString(sectionNode, "archive_mode")
         collect.ignoreFile = readString(sectionNode, "ignore_file")
         (collect.absoluteExcludePaths, unused, collect.excludePatterns) = Config._parseExclusions(sectionNode)
         collect.collectFiles = Config._parseCollectFiles(sectionNode)
         collect.collectDirs = Config._parseCollectDirs(sectionNode)
      return collect
   @staticmethod
   def _parseStage(parentNode):
      """
      Parses a stage configuration section.

      We read the following individual fields::

         targetDir      //cb_config/stage/staging_dir

      We also read groups of the following items, one list element per item::

         localPeers     //cb_config/stage/peer
         remotePeers    //cb_config/stage/peer

      The individual peer entries are parsed by L{_parsePeerList}.

      @param parentNode: Parent node to search beneath.

      @return: C{StageConfig} object or C{None} if the section does not exist.
      @raise ValueError: If some filled-in value is invalid.
      """
      stage = None
      sectionNode = readFirstChild(parentNode, "stage")
      if sectionNode is not None:
         stage = StageConfig()
         stage.targetDir = readString(sectionNode, "staging_dir")
         (stage.localPeers, stage.remotePeers) = Config._parsePeerList(sectionNode)
      return stage
   @staticmethod
   def _parseStore(parentNode):
      """
      Parses a store configuration section.

      We read the following fields::

         sourceDir            //cb_config/store/source_dir
         mediaType            //cb_config/store/media_type
         deviceType           //cb_config/store/device_type
         devicePath           //cb_config/store/target_device
         deviceScsiId         //cb_config/store/target_scsi_id
         driveSpeed           //cb_config/store/drive_speed
         checkData            //cb_config/store/check_data
         checkMedia           //cb_config/store/check_media
         warnMidnite          //cb_config/store/warn_midnite
         noEject              //cb_config/store/no_eject
         refreshMediaDelay    //cb_config/store/refresh_media_delay
         ejectDelay           //cb_config/store/eject_delay

      Blanking behavior configuration is parsed by the C{_parseBlankBehavior}
      method.

      @param parentNode: Parent node to search beneath.

      @return: C{StoreConfig} object or C{None} if the section does not exist.
      @raise ValueError: If some filled-in value is invalid.
      """
      store = None
      sectionNode = readFirstChild(parentNode, "store")
      if sectionNode is not None:
         store = StoreConfig()
         store.sourceDir = readString(sectionNode, "source_dir")
         store.mediaType = readString(sectionNode, "media_type")
         store.deviceType = readString(sectionNode, "device_type")
         store.devicePath = readString(sectionNode, "target_device")
         store.deviceScsiId = readString(sectionNode, "target_scsi_id")
         store.driveSpeed = readInteger(sectionNode, "drive_speed")
         store.checkData = readBoolean(sectionNode, "check_data")
         store.checkMedia = readBoolean(sectionNode, "check_media")
         store.warnMidnite = readBoolean(sectionNode, "warn_midnite")
         store.noEject = readBoolean(sectionNode, "no_eject")
         store.blankBehavior = Config._parseBlankBehavior(sectionNode)
         store.refreshMediaDelay = readInteger(sectionNode, "refresh_media_delay")
         store.ejectDelay = readInteger(sectionNode, "eject_delay")
      return store
   @staticmethod
   def _parsePurge(parentNode):
      """
      Parses a purge configuration section.

      We read groups of the following items, one list element per item::

         purgeDirs      //cb_config/purge/dir

      The individual directory entries are parsed by L{_parsePurgeDirs}.

      @param parentNode: Parent node to search beneath.

      @return: C{PurgeConfig} object or C{None} if the section does not exist.
      @raise ValueError: If some filled-in value is invalid.
      """
      purge = None
      sectionNode = readFirstChild(parentNode, "purge")
      if sectionNode is not None:
         purge = PurgeConfig()
         purge.purgeDirs = Config._parsePurgeDirs(sectionNode)
      return purge
   @staticmethod
   def _parseExtendedActions(parentNode):
      """
      Reads extended actions data from immediately beneath the parent.

      We read the following individual fields from each extended action::

         name           name
         module         module
         function       function
         index          index
         dependencies   depends

      Dependency information is parsed by the C{_parseDependencies} method.

      @param parentNode: Parent node to search beneath.

      @return: List of extended actions.
      @raise ValueError: If the data at the location can't be read.
      """
      lst = []
      for entry in readChildren(parentNode, "action"):
         if isElement(entry):
            action = ExtendedAction()
            action.name = readString(entry, "name")
            action.module = readString(entry, "module")
            action.function = readString(entry, "function")
            action.index = readInteger(entry, "index")
            action.dependencies = Config._parseDependencies(entry)
            lst.append(action)
      if lst == []:
         lst = None
      return lst
   @staticmethod
   def _parseExclusions(parentNode):
      """
      Reads exclusions data from immediately beneath the parent.

      We read groups of the following items, one list element per item::

         absolute    exclude/abs_path
         relative    exclude/rel_path
         patterns    exclude/pattern

      If there are none of some pattern (i.e. no relative path items) then
      C{None} will be returned for that item in the tuple.

      This method can be used to parse exclusions on both the collect
      configuration level and on the collect directory level within collect
      configuration.

      @param parentNode: Parent node to search beneath.

      @return: Tuple of (absolute, relative, patterns) exclusions.
      """
      sectionNode = readFirstChild(parentNode, "exclude")
      if sectionNode is None:
         return (None, None, None)
      else:
         absolute = readStringList(sectionNode, "abs_path")
         relative = readStringList(sectionNode, "rel_path")
         patterns = readStringList(sectionNode, "pattern")
         return (absolute, relative, patterns)
   @staticmethod
   def _parseOverrides(parentNode):
      """
      Reads a list of C{CommandOverride} objects from immediately beneath the parent.

      We read the following individual fields::

         command        command
         absolutePath   abs_path

      @param parentNode: Parent node to search beneath.

      @return: List of C{CommandOverride} objects or C{None} if none are found.
      @raise ValueError: If some filled-in value is invalid.
      """
      lst = []
      for entry in readChildren(parentNode, "override"):
         if isElement(entry):
            override = CommandOverride()
            override.command = readString(entry, "command")
            override.absolutePath = readString(entry, "abs_path")
            lst.append(override)
      if lst == []:
         lst = None
      return lst
   @staticmethod
   #pylint: disable=R0204
   def _parseHooks(parentNode):
      """
      Reads a list of C{ActionHook} objects from immediately beneath the parent.

      We read the following individual fields::

         action         action
         command        command

      @param parentNode: Parent node to search beneath.

      @return: List of C{ActionHook} objects or C{None} if none are found.
      @raise ValueError: If some filled-in value is invalid.
      """
      lst = []
      for entry in readChildren(parentNode, "pre_action_hook"):
         if isElement(entry):
            hook = PreActionHook()
            hook.action = readString(entry, "action")
            hook.command = readString(entry, "command")
            lst.append(hook)
      for entry in readChildren(parentNode, "post_action_hook"):
         if isElement(entry):
            hook = PostActionHook()
            hook.action = readString(entry, "action")
            hook.command = readString(entry, "command")
            lst.append(hook)
      if lst == []:
         lst = None
      return lst
   @staticmethod
   def _parseCollectFiles(parentNode):
      """
      Reads a list of C{CollectFile} objects from immediately beneath the parent.

      We read the following individual fields::

         absolutePath   abs_path
         collectMode    mode I{or} collect_mode
         archiveMode    archive_mode

      The collect mode is a special case.  A plain C{mode} tag is accepted,
      but we prefer C{collect_mode} for consistency with the rest of the
      config file and to avoid confusion with the archive mode.  If both are
      provided, only C{mode} will be used.

      @param parentNode: Parent node to search beneath.

      @return: List of C{CollectFile} objects or C{None} if none are found.
      @raise ValueError: If some filled-in value is invalid.
      """
      lst = []
      for entry in readChildren(parentNode, "file"):
         if isElement(entry):
            cfile = CollectFile()
            cfile.absolutePath = readString(entry, "abs_path")
            cfile.collectMode = readString(entry, "mode")
            if cfile.collectMode is None:
               cfile.collectMode = readString(entry, "collect_mode")
            cfile.archiveMode = readString(entry, "archive_mode")
            lst.append(cfile)
      if lst == []:
         lst = None
      return lst
   @staticmethod
   def _parseCollectDirs(parentNode):
      """
      Reads a list of C{CollectDir} objects from immediately beneath the parent.

      We read the following individual fields::

         absolutePath      abs_path
         collectMode       mode I{or} collect_mode
         archiveMode       archive_mode
         ignoreFile        ignore_file
         linkDepth         link_depth
         dereference       dereference
         recursionLevel    recursion_level

      The collect mode is a special case.  Just a C{mode} tag is accepted for
      backwards compatibility, but we prefer C{collect_mode} for consistency
      with the rest of the config file and to avoid confusion with the archive
      mode.  If both are provided, only C{mode} will be used.

      We also read groups of the following items, one list element per item::

         absoluteExcludePaths    exclude/abs_path
         relativeExcludePaths    exclude/rel_path
         excludePatterns         exclude/pattern

      The exclusions are parsed by L{_parseExclusions}.

      @param parentNode: Parent node to search beneath.

      @return: List of C{CollectDir} objects or C{None} if none are found.
      @raise ValueError: If some filled-in value is invalid.
      """
      lst = []
      for entry in readChildren(parentNode, "dir"):
         if isElement(entry):
            cdir = CollectDir()
            cdir.absolutePath = readString(entry, "abs_path")
            cdir.collectMode = readString(entry, "mode")
            if cdir.collectMode is None:
               cdir.collectMode = readString(entry, "collect_mode")
            cdir.archiveMode = readString(entry, "archive_mode")
            cdir.ignoreFile = readString(entry, "ignore_file")
            cdir.linkDepth = readInteger(entry, "link_depth")
            cdir.dereference = readBoolean(entry, "dereference")
            cdir.recursionLevel = readInteger(entry, "recursion_level")
            (cdir.absoluteExcludePaths, cdir.relativeExcludePaths, cdir.excludePatterns) = Config._parseExclusions(entry)
            lst.append(cdir)
      if lst == []:
         lst = None
      return lst
   @staticmethod
   def _parsePurgeDirs(parentNode):
      """
      Reads a list of C{PurgeDir} objects from immediately beneath the parent.

      We read the following individual fields::

         absolutePath   <baseExpr>/abs_path
         retainDays     <baseExpr>/retain_days

      @param parentNode: Parent node to search beneath.

      @return: List of C{PurgeDir} objects or C{None} if none are found.
      @raise ValueError: If the data at the location can't be read.
      """
      lst = []
      for entry in readChildren(parentNode, "dir"):
         if isElement(entry):
            cdir = PurgeDir()
            cdir.absolutePath = readString(entry, "abs_path")
            cdir.retainDays = readInteger(entry, "retain_days")
            lst.append(cdir)
      if lst == []:
         lst = None
      return lst
   @staticmethod
   def _parsePeerList(parentNode):
      """
      Reads remote and local peer data from immediately beneath the parent.

      We read the following individual fields for both remote and local
      peers::

         name           name
         collectDir     collect_dir

      We also read the following individual fields for remote peers only::

         remoteUser        backup_user
         rcpCommand        rcp_command
         rshCommand        rsh_command
         cbackCommand      cback_command
         managed           managed
         managedActions    managed_actions

      Additionally, the value in the C{type} field is used to determine whether
      this entry is a remote peer.  If the type is C{"remote"}, it's a remote
      peer, and if the type is C{"local"}, it's a local peer.

      If there are none of one type of peer (i.e. no local peers) then C{None}
      will be returned for that item in the tuple.

      @param parentNode: Parent node to search beneath.

      @return: Tuple of (local, remote) peer lists.
      @raise ValueError: If the data at the location can't be read.
      """
      localPeers = []
      remotePeers = []
      for entry in readChildren(parentNode, "peer"):
         if isElement(entry):
            peerType = readString(entry, "type")
            if peerType == "local":
               localPeer = LocalPeer()
               localPeer.name = readString(entry, "name")
               localPeer.collectDir = readString(entry, "collect_dir")
               localPeer.ignoreFailureMode = readString(entry, "ignore_failures")
               localPeers.append(localPeer)
            elif peerType == "remote":
               remotePeer = RemotePeer()
               remotePeer.name = readString(entry, "name")
               remotePeer.collectDir = readString(entry, "collect_dir")
               remotePeer.remoteUser = readString(entry, "backup_user")
               remotePeer.rcpCommand = readString(entry, "rcp_command")
               remotePeer.rshCommand = readString(entry, "rsh_command")
               remotePeer.cbackCommand = readString(entry, "cback_command")
               remotePeer.ignoreFailureMode = readString(entry, "ignore_failures")
               remotePeer.managed = readBoolean(entry, "managed")
               managedActions = readString(entry, "managed_actions")
               remotePeer.managedActions = parseCommaSeparatedString(managedActions)
               remotePeers.append(remotePeer)
      if localPeers == []:
         localPeers = None
      if remotePeers == []:
         remotePeers = None
      return (localPeers, remotePeers)
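The type-based dispatch above can be seen in miniature with plain C{xml.dom.minidom}: every C{<peer>} carries a C{<type>} tag, and that value alone decides whether the entry is handled as local or remote.  This is only an illustrative sketch; the element names match the real config format, but the helper below is hypothetical.

```python
from xml.dom.minidom import parseString

# A tiny <peers> fragment with one local and one remote peer, in the same
# shape _parsePeerList expects (only the fields needed for dispatch).
doc = parseString(
    "<peers>"
    "<peer><name>alpha</name><type>local</type></peer>"
    "<peer><name>beta</name><type>remote</type></peer>"
    "</peers>")

def peer_type(peer):
    # Read the text content of the <type> child, as readString would.
    return peer.getElementsByTagName("type")[0].firstChild.data

types = [peer_type(p) for p in doc.getElementsByTagName("peer")]
```

Any other C{type} value is silently skipped by the real method, since neither branch of the C{if}/C{elif} matches.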
   @staticmethod
   def _parseDependencies(parentNode):
      """
      Reads extended action dependency information from a parent node.

      We read the following individual fields::

         runBefore   depends/run_before
         runAfter    depends/run_after

      Each of these fields is a comma-separated list of action names.

      The result is placed into an C{ActionDependencies} object.

      If the dependencies parent node does not exist, C{None} will be returned.
      Otherwise, an C{ActionDependencies} object will always be created, even
      if it does not contain any actual dependencies in it.

      @param parentNode: Parent node to search beneath.

      @return: C{ActionDependencies} object or C{None}.
      @raise ValueError: If the data at the location can't be read.
      """
      sectionNode = readFirstChild(parentNode, "depends")
      if sectionNode is None:
         return None
      else:
         runBefore = readString(sectionNode, "run_before")
         runAfter = readString(sectionNode, "run_after")
         beforeList = parseCommaSeparatedString(runBefore)
         afterList = parseCommaSeparatedString(runAfter)
         return ActionDependencies(beforeList, afterList)
   @staticmethod
   def _parseBlankBehavior(parentNode):
      """
      Reads a single C{BlankBehavior} object from immediately beneath the parent.

      We read the following individual fields::

         blankMode      blank_behavior/mode
         blankFactor    blank_behavior/factor

      @param parentNode: Parent node to search beneath.

      @return: C{BlankBehavior} object or C{None} if the section is not found.
      @raise ValueError: If some filled-in value is invalid.
      """
      blankBehavior = None
      sectionNode = readFirstChild(parentNode, "blank_behavior")
      if sectionNode is not None:
         blankBehavior = BlankBehavior()
         blankBehavior.blankMode = readString(sectionNode, "mode")
         blankBehavior.blankFactor = readString(sectionNode, "factor")
      return blankBehavior

   ########################################
   # High-level methods for generating XML
   ########################################

   def _extractXml(self):
      """
      Internal method to extract configuration into an XML string.

      This method assumes that the internal L{validate} method has been called
      prior to extracting the XML, if the caller cares.  No validation will be
      done internally.

      As a general rule, fields that are set to C{None} will be extracted into
      the document as empty tags.  The same goes for container tags that are
      filled based on lists - if the list is empty or C{None}, the container
      tag will be empty.
      """
      (xmlDom, parentNode) = createOutputDom()
      Config._addReference(xmlDom, parentNode, self.reference)
      Config._addExtensions(xmlDom, parentNode, self.extensions)
      Config._addOptions(xmlDom, parentNode, self.options)
      Config._addPeers(xmlDom, parentNode, self.peers)
      Config._addCollect(xmlDom, parentNode, self.collect)
      Config._addStage(xmlDom, parentNode, self.stage)
      Config._addStore(xmlDom, parentNode, self.store)
      Config._addPurge(xmlDom, parentNode, self.purge)
      xmlData = serializeDom(xmlDom)
      xmlDom.unlink()
      return xmlData
   @staticmethod
   def _addReference(xmlDom, parentNode, referenceConfig):
      """
      Adds a <reference> configuration section as the next child of a parent.

      We add the following fields to the document::

         author         //cb_config/reference/author
         revision       //cb_config/reference/revision
         description    //cb_config/reference/description
         generator      //cb_config/reference/generator

      If C{referenceConfig} is C{None}, then no container will be added.

      @param xmlDom: DOM tree as from L{createOutputDom}.
      @param parentNode: Parent that the section should be appended to.
      @param referenceConfig: Reference configuration section to be added to the document.
      """
      if referenceConfig is not None:
         sectionNode = addContainerNode(xmlDom, parentNode, "reference")
         addStringNode(xmlDom, sectionNode, "author", referenceConfig.author)
         addStringNode(xmlDom, sectionNode, "revision", referenceConfig.revision)
         addStringNode(xmlDom, sectionNode, "description", referenceConfig.description)
         addStringNode(xmlDom, sectionNode, "generator", referenceConfig.generator)
   @staticmethod
   def _addExtensions(xmlDom, parentNode, extensionsConfig):
      """
      Adds an <extensions> configuration section as the next child of a parent.

      We add the following fields to the document::

         order_mode     //cb_config/extensions/order_mode

      We also add groups of the following items, one list element per item::

         actions        //cb_config/extensions/action

      The extended action entries are added by L{_addExtendedAction}.

      If C{extensionsConfig} is C{None}, then no container will be added.

      @param xmlDom: DOM tree as from L{createOutputDom}.
      @param parentNode: Parent that the section should be appended to.
      @param extensionsConfig: Extensions configuration section to be added to the document.
      """
      if extensionsConfig is not None:
         sectionNode = addContainerNode(xmlDom, parentNode, "extensions")
         addStringNode(xmlDom, sectionNode, "order_mode", extensionsConfig.orderMode)
         if extensionsConfig.actions is not None:
            for action in extensionsConfig.actions:
               Config._addExtendedAction(xmlDom, sectionNode, action)
   @staticmethod
   def _addOptions(xmlDom, parentNode, optionsConfig):
      """
      Adds an <options> configuration section as the next child of a parent.

      We add the following fields to the document::

         startingDay    //cb_config/options/starting_day
         workingDir     //cb_config/options/working_dir
         backupUser     //cb_config/options/backup_user
         backupGroup    //cb_config/options/backup_group
         rcpCommand     //cb_config/options/rcp_command
         rshCommand     //cb_config/options/rsh_command
         cbackCommand   //cb_config/options/cback_command
         managedActions //cb_config/options/managed_actions

      We also add groups of the following items, one list element per item::

         overrides      //cb_config/options/override
         hooks          //cb_config/options/pre_action_hook
         hooks          //cb_config/options/post_action_hook

      The individual override items are added by L{_addOverride}.  The
      individual hook items are added by L{_addHook}.

      If C{optionsConfig} is C{None}, then no container will be added.

      @param xmlDom: DOM tree as from L{createOutputDom}.
      @param parentNode: Parent that the section should be appended to.
      @param optionsConfig: Options configuration section to be added to the document.
      """
      if optionsConfig is not None:
         sectionNode = addContainerNode(xmlDom, parentNode, "options")
         addStringNode(xmlDom, sectionNode, "starting_day", optionsConfig.startingDay)
         addStringNode(xmlDom, sectionNode, "working_dir", optionsConfig.workingDir)
         addStringNode(xmlDom, sectionNode, "backup_user", optionsConfig.backupUser)
         addStringNode(xmlDom, sectionNode, "backup_group", optionsConfig.backupGroup)
         addStringNode(xmlDom, sectionNode, "rcp_command", optionsConfig.rcpCommand)
         addStringNode(xmlDom, sectionNode, "rsh_command", optionsConfig.rshCommand)
         addStringNode(xmlDom, sectionNode, "cback_command", optionsConfig.cbackCommand)
         managedActions = Config._buildCommaSeparatedString(optionsConfig.managedActions)
         addStringNode(xmlDom, sectionNode, "managed_actions", managedActions)
         if optionsConfig.overrides is not None:
            for override in optionsConfig.overrides:
               Config._addOverride(xmlDom, sectionNode, override)
         if optionsConfig.hooks is not None:
            for hook in optionsConfig.hooks:
               Config._addHook(xmlDom, sectionNode, hook)
   @staticmethod
   def _addPeers(xmlDom, parentNode, peersConfig):
      """
      Adds a <peers> configuration section as the next child of a parent.

      We add groups of the following items, one list element per item::

         localPeers     //cb_config/peers/peer
         remotePeers    //cb_config/peers/peer

      The individual local and remote peer entries are added by
      L{_addLocalPeer} and L{_addRemotePeer}, respectively.

      If C{peersConfig} is C{None}, then no container will be added.

      @param xmlDom: DOM tree as from L{createOutputDom}.
      @param parentNode: Parent that the section should be appended to.
      @param peersConfig: Peers configuration section to be added to the document.
      """
      if peersConfig is not None:
         sectionNode = addContainerNode(xmlDom, parentNode, "peers")
         if peersConfig.localPeers is not None:
            for localPeer in peersConfig.localPeers:
               Config._addLocalPeer(xmlDom, sectionNode, localPeer)
         if peersConfig.remotePeers is not None:
            for remotePeer in peersConfig.remotePeers:
               Config._addRemotePeer(xmlDom, sectionNode, remotePeer)
   @staticmethod
   def _addCollect(xmlDom, parentNode, collectConfig):
      """
      Adds a <collect> configuration section as the next child of a parent.

      We add the following fields to the document::

         targetDir      //cb_config/collect/collect_dir
         collectMode    //cb_config/collect/collect_mode
         archiveMode    //cb_config/collect/archive_mode
         ignoreFile     //cb_config/collect/ignore_file

      We also add groups of the following items, one list element per item::

         absoluteExcludePaths    //cb_config/collect/exclude/abs_path
         excludePatterns         //cb_config/collect/exclude/pattern
         collectFiles            //cb_config/collect/file
         collectDirs             //cb_config/collect/dir

      The individual collect files are added by L{_addCollectFile} and
      individual collect directories are added by L{_addCollectDir}.

      If C{collectConfig} is C{None}, then no container will be added.

      @param xmlDom: DOM tree as from L{createOutputDom}.
      @param parentNode: Parent that the section should be appended to.
      @param collectConfig: Collect configuration section to be added to the document.
      """
      if collectConfig is not None:
         sectionNode = addContainerNode(xmlDom, parentNode, "collect")
         addStringNode(xmlDom, sectionNode, "collect_dir", collectConfig.targetDir)
         addStringNode(xmlDom, sectionNode, "collect_mode", collectConfig.collectMode)
         addStringNode(xmlDom, sectionNode, "archive_mode", collectConfig.archiveMode)
         addStringNode(xmlDom, sectionNode, "ignore_file", collectConfig.ignoreFile)
         if ((collectConfig.absoluteExcludePaths is not None and collectConfig.absoluteExcludePaths != []) or
             (collectConfig.excludePatterns is not None and collectConfig.excludePatterns != [])):
            excludeNode = addContainerNode(xmlDom, sectionNode, "exclude")
            if collectConfig.absoluteExcludePaths is not None:
               for absolutePath in collectConfig.absoluteExcludePaths:
                  addStringNode(xmlDom, excludeNode, "abs_path", absolutePath)
            if collectConfig.excludePatterns is not None:
               for pattern in collectConfig.excludePatterns:
                  addStringNode(xmlDom, excludeNode, "pattern", pattern)
         if collectConfig.collectFiles is not None:
            for collectFile in collectConfig.collectFiles:
               Config._addCollectFile(xmlDom, sectionNode, collectFile)
         if collectConfig.collectDirs is not None:
            for collectDir in collectConfig.collectDirs:
               Config._addCollectDir(xmlDom, sectionNode, collectDir)
   @staticmethod
   def _addStage(xmlDom, parentNode, stageConfig):
      """
      Adds a <stage> configuration section as the next child of a parent.

      We add the following fields to the document::

         targetDir      //cb_config/stage/staging_dir

      We also add groups of the following items, one list element per item::

         localPeers     //cb_config/stage/peer
         remotePeers    //cb_config/stage/peer

      The individual local and remote peer entries are added by
      L{_addLocalPeer} and L{_addRemotePeer}, respectively.

      If C{stageConfig} is C{None}, then no container will be added.

      @param xmlDom: DOM tree as from L{createOutputDom}.
      @param parentNode: Parent that the section should be appended to.
      @param stageConfig: Stage configuration section to be added to the document.
      """
      if stageConfig is not None:
         sectionNode = addContainerNode(xmlDom, parentNode, "stage")
         addStringNode(xmlDom, sectionNode, "staging_dir", stageConfig.targetDir)
         if stageConfig.localPeers is not None:
            for localPeer in stageConfig.localPeers:
               Config._addLocalPeer(xmlDom, sectionNode, localPeer)
         if stageConfig.remotePeers is not None:
            for remotePeer in stageConfig.remotePeers:
               Config._addRemotePeer(xmlDom, sectionNode, remotePeer)
   @staticmethod
   def _addStore(xmlDom, parentNode, storeConfig):
      """
      Adds a <store> configuration section as the next child of a parent.

      We add the following fields to the document::

         sourceDir            //cb_config/store/source_dir
         mediaType            //cb_config/store/media_type
         deviceType           //cb_config/store/device_type
         devicePath           //cb_config/store/target_device
         deviceScsiId         //cb_config/store/target_scsi_id
         driveSpeed           //cb_config/store/drive_speed
         checkData            //cb_config/store/check_data
         checkMedia           //cb_config/store/check_media
         warnMidnite          //cb_config/store/warn_midnite
         noEject              //cb_config/store/no_eject
         refreshMediaDelay    //cb_config/store/refresh_media_delay
         ejectDelay           //cb_config/store/eject_delay

      Blanking behavior configuration is added by the L{_addBlankBehavior}
      method.

      If C{storeConfig} is C{None}, then no container will be added.

      @param xmlDom: DOM tree as from L{createOutputDom}.
      @param parentNode: Parent that the section should be appended to.
      @param storeConfig: Store configuration section to be added to the document.
      """
      if storeConfig is not None:
         sectionNode = addContainerNode(xmlDom, parentNode, "store")
         addStringNode(xmlDom, sectionNode, "source_dir", storeConfig.sourceDir)
         addStringNode(xmlDom, sectionNode, "media_type", storeConfig.mediaType)
         addStringNode(xmlDom, sectionNode, "device_type", storeConfig.deviceType)
         addStringNode(xmlDom, sectionNode, "target_device", storeConfig.devicePath)
         addStringNode(xmlDom, sectionNode, "target_scsi_id", storeConfig.deviceScsiId)
         addIntegerNode(xmlDom, sectionNode, "drive_speed", storeConfig.driveSpeed)
         addBooleanNode(xmlDom, sectionNode, "check_data", storeConfig.checkData)
         addBooleanNode(xmlDom, sectionNode, "check_media", storeConfig.checkMedia)
         addBooleanNode(xmlDom, sectionNode, "warn_midnite", storeConfig.warnMidnite)
         addBooleanNode(xmlDom, sectionNode, "no_eject", storeConfig.noEject)
         addIntegerNode(xmlDom, sectionNode, "refresh_media_delay", storeConfig.refreshMediaDelay)
         addIntegerNode(xmlDom, sectionNode, "eject_delay", storeConfig.ejectDelay)
         Config._addBlankBehavior(xmlDom, sectionNode, storeConfig.blankBehavior)
    5579 5580 @staticmethod
def _addPurge(xmlDom, parentNode, purgeConfig):
   """
   Adds a <purge> configuration section as the next child of a parent.

   We add the following fields to the document::

      purgeDirs      //cb_config/purge/dir

   The individual directory entries are added by L{_addPurgeDir}.

   If C{purgeConfig} is C{None}, then no container will be added.

   @param xmlDom: DOM tree as from L{createOutputDom}.
   @param parentNode: Parent that the section should be appended to.
   @param purgeConfig: Purge configuration section to be added to the document.
   """
   if purgeConfig is not None:
      sectionNode = addContainerNode(xmlDom, parentNode, "purge")
      if purgeConfig.purgeDirs is not None:
         for purgeDir in purgeConfig.purgeDirs:
            Config._addPurgeDir(xmlDom, sectionNode, purgeDir)

@staticmethod
def _addExtendedAction(xmlDom, parentNode, action):
   """
   Adds an extended action container as the next child of a parent.

   We add the following fields to the document::

      name           action/name
      module         action/module
      function       action/function
      index          action/index
      dependencies   action/depends

   Dependencies are added by the L{_addDependencies} method.

   The <action> node itself is created as the next child of the parent node.
   This method only adds one action node. The parent must loop for each action
   in the C{ExtensionsConfig} object.

   If C{action} is C{None}, this method call will be a no-op.

   @param xmlDom: DOM tree as from L{createOutputDom}.
   @param parentNode: Parent that the section should be appended to.
   @param action: Extended action to be added to the document.
   """
   if action is not None:
      sectionNode = addContainerNode(xmlDom, parentNode, "action")
      addStringNode(xmlDom, sectionNode, "name", action.name)
      addStringNode(xmlDom, sectionNode, "module", action.module)
      addStringNode(xmlDom, sectionNode, "function", action.function)
      addIntegerNode(xmlDom, sectionNode, "index", action.index)
      Config._addDependencies(xmlDom, sectionNode, action.dependencies)

@staticmethod
def _addOverride(xmlDom, parentNode, override):
   """
   Adds a command override container as the next child of a parent.

   We add the following fields to the document::

      command        override/command
      absolutePath   override/abs_path

   The <override> node itself is created as the next child of the parent
   node. This method only adds one override node. The parent must loop for
   each override in the C{OptionsConfig} object.

   If C{override} is C{None}, this method call will be a no-op.

   @param xmlDom: DOM tree as from L{createOutputDom}.
   @param parentNode: Parent that the section should be appended to.
   @param override: Command override to be added to the document.
   """
   if override is not None:
      sectionNode = addContainerNode(xmlDom, parentNode, "override")
      addStringNode(xmlDom, sectionNode, "command", override.command)
      addStringNode(xmlDom, sectionNode, "abs_path", override.absolutePath)

@staticmethod
def _addHook(xmlDom, parentNode, hook):
   """
   Adds an action hook container as the next child of a parent.

   The behavior varies depending on the value of the C{before} and C{after}
   flags on the hook. If the C{before} flag is set, it's a pre-action hook,
   and we'll add the following fields::

      action         pre_action_hook/action
      command        pre_action_hook/command

   If the C{after} flag is set, it's a post-action hook, and we'll add the
   following fields::

      action         post_action_hook/action
      command        post_action_hook/command

   The <pre_action_hook> or <post_action_hook> node itself is created as the
   next child of the parent node. This method only adds one hook node. The
   parent must loop for each hook in the C{OptionsConfig} object.

   If C{hook} is C{None}, this method call will be a no-op.

   @param xmlDom: DOM tree as from L{createOutputDom}.
   @param parentNode: Parent that the section should be appended to.
   @param hook: Command hook to be added to the document.
   """
   if hook is not None:
      if hook.before:
         sectionNode = addContainerNode(xmlDom, parentNode, "pre_action_hook")
      else:
         sectionNode = addContainerNode(xmlDom, parentNode, "post_action_hook")
      addStringNode(xmlDom, sectionNode, "action", hook.action)
      addStringNode(xmlDom, sectionNode, "command", hook.command)

@staticmethod
def _addCollectFile(xmlDom, parentNode, collectFile):
   """
   Adds a collect file container as the next child of a parent.

   We add the following fields to the document::

      absolutePath   file/abs_path
      collectMode    file/collect_mode
      archiveMode    file/archive_mode

   Note that for consistency with collect directory handling we'll only emit
   the preferred C{collect_mode} tag.

   The <file> node itself is created as the next child of the parent node.
   This method only adds one collect file node. The parent must loop
   for each collect file in the C{CollectConfig} object.

   If C{collectFile} is C{None}, this method call will be a no-op.

   @param xmlDom: DOM tree as from L{createOutputDom}.
   @param parentNode: Parent that the section should be appended to.
   @param collectFile: Collect file to be added to the document.
   """
   if collectFile is not None:
      sectionNode = addContainerNode(xmlDom, parentNode, "file")
      addStringNode(xmlDom, sectionNode, "abs_path", collectFile.absolutePath)
      addStringNode(xmlDom, sectionNode, "collect_mode", collectFile.collectMode)
      addStringNode(xmlDom, sectionNode, "archive_mode", collectFile.archiveMode)

@staticmethod
def _addCollectDir(xmlDom, parentNode, collectDir):
   """
   Adds a collect directory container as the next child of a parent.

   We add the following fields to the document::

      absolutePath      dir/abs_path
      collectMode       dir/collect_mode
      archiveMode       dir/archive_mode
      ignoreFile        dir/ignore_file
      linkDepth         dir/link_depth
      dereference       dir/dereference
      recursionLevel    dir/recursion_level

   Note that an original XML document might have listed the collect mode
   using the C{mode} tag, since we accept both C{collect_mode} and C{mode}.
   However, here we'll only emit the preferred C{collect_mode} tag.

   We also add groups of the following items, one list element per item::

      absoluteExcludePaths    dir/exclude/abs_path
      relativeExcludePaths    dir/exclude/rel_path
      excludePatterns         dir/exclude/pattern

   The <dir> node itself is created as the next child of the parent node.
   This method only adds one collect directory node. The parent must loop
   for each collect directory in the C{CollectConfig} object.

   If C{collectDir} is C{None}, this method call will be a no-op.

   @param xmlDom: DOM tree as from L{createOutputDom}.
   @param parentNode: Parent that the section should be appended to.
   @param collectDir: Collect directory to be added to the document.
   """
   if collectDir is not None:
      sectionNode = addContainerNode(xmlDom, parentNode, "dir")
      addStringNode(xmlDom, sectionNode, "abs_path", collectDir.absolutePath)
      addStringNode(xmlDom, sectionNode, "collect_mode", collectDir.collectMode)
      addStringNode(xmlDom, sectionNode, "archive_mode", collectDir.archiveMode)
      addStringNode(xmlDom, sectionNode, "ignore_file", collectDir.ignoreFile)
      addIntegerNode(xmlDom, sectionNode, "link_depth", collectDir.linkDepth)
      addBooleanNode(xmlDom, sectionNode, "dereference", collectDir.dereference)
      addIntegerNode(xmlDom, sectionNode, "recursion_level", collectDir.recursionLevel)
      if ((collectDir.absoluteExcludePaths is not None and collectDir.absoluteExcludePaths != []) or
          (collectDir.relativeExcludePaths is not None and collectDir.relativeExcludePaths != []) or
          (collectDir.excludePatterns is not None and collectDir.excludePatterns != [])):
         excludeNode = addContainerNode(xmlDom, sectionNode, "exclude")
         if collectDir.absoluteExcludePaths is not None:
            for absolutePath in collectDir.absoluteExcludePaths:
               addStringNode(xmlDom, excludeNode, "abs_path", absolutePath)
         if collectDir.relativeExcludePaths is not None:
            for relativePath in collectDir.relativeExcludePaths:
               addStringNode(xmlDom, excludeNode, "rel_path", relativePath)
         if collectDir.excludePatterns is not None:
            for pattern in collectDir.excludePatterns:
               addStringNode(xmlDom, excludeNode, "pattern", pattern)

@staticmethod
def _addLocalPeer(xmlDom, parentNode, localPeer):
   """
   Adds a local peer container as the next child of a parent.

   We add the following fields to the document::

      name                peer/name
      collectDir          peer/collect_dir
      ignoreFailureMode   peer/ignore_failures

   Additionally, C{peer/type} is filled in with C{"local"}, since this is a
   local peer.

   The <peer> node itself is created as the next child of the parent node.
   This method only adds one peer node. The parent must loop for each peer
   in the C{StageConfig} object.

   If C{localPeer} is C{None}, this method call will be a no-op.

   @param xmlDom: DOM tree as from L{createOutputDom}.
   @param parentNode: Parent that the section should be appended to.
   @param localPeer: Local peer to be added to the document.
   """
   if localPeer is not None:
      sectionNode = addContainerNode(xmlDom, parentNode, "peer")
      addStringNode(xmlDom, sectionNode, "name", localPeer.name)
      addStringNode(xmlDom, sectionNode, "type", "local")
      addStringNode(xmlDom, sectionNode, "collect_dir", localPeer.collectDir)
      addStringNode(xmlDom, sectionNode, "ignore_failures", localPeer.ignoreFailureMode)

@staticmethod
def _addRemotePeer(xmlDom, parentNode, remotePeer):
   """
   Adds a remote peer container as the next child of a parent.

   We add the following fields to the document::

      name                peer/name
      collectDir          peer/collect_dir
      remoteUser          peer/backup_user
      rcpCommand          peer/rcp_command
      rshCommand          peer/rsh_command
      cbackCommand        peer/cback_command
      ignoreFailureMode   peer/ignore_failures
      managed             peer/managed
      managedActions      peer/managed_actions

   Additionally, C{peer/type} is filled in with C{"remote"}, since this is a
   remote peer.

   The <peer> node itself is created as the next child of the parent node.
   This method only adds one peer node. The parent must loop for each peer
   in the C{StageConfig} object.

   If C{remotePeer} is C{None}, this method call will be a no-op.

   @param xmlDom: DOM tree as from L{createOutputDom}.
   @param parentNode: Parent that the section should be appended to.
   @param remotePeer: Remote peer to be added to the document.
   """
   if remotePeer is not None:
      sectionNode = addContainerNode(xmlDom, parentNode, "peer")
      addStringNode(xmlDom, sectionNode, "name", remotePeer.name)
      addStringNode(xmlDom, sectionNode, "type", "remote")
      addStringNode(xmlDom, sectionNode, "collect_dir", remotePeer.collectDir)
      addStringNode(xmlDom, sectionNode, "backup_user", remotePeer.remoteUser)
      addStringNode(xmlDom, sectionNode, "rcp_command", remotePeer.rcpCommand)
      addStringNode(xmlDom, sectionNode, "rsh_command", remotePeer.rshCommand)
      addStringNode(xmlDom, sectionNode, "cback_command", remotePeer.cbackCommand)
      addStringNode(xmlDom, sectionNode, "ignore_failures", remotePeer.ignoreFailureMode)
      addBooleanNode(xmlDom, sectionNode, "managed", remotePeer.managed)
      managedActions = Config._buildCommaSeparatedString(remotePeer.managedActions)
      addStringNode(xmlDom, sectionNode, "managed_actions", managedActions)

@staticmethod
def _addPurgeDir(xmlDom, parentNode, purgeDir):
   """
   Adds a purge directory container as the next child of a parent.

   We add the following fields to the document::

      absolutePath   dir/abs_path
      retainDays     dir/retain_days

   The <dir> node itself is created as the next child of the parent node.
   This method only adds one purge directory node. The parent must loop for
   each purge directory in the C{PurgeConfig} object.

   If C{purgeDir} is C{None}, this method call will be a no-op.

   @param xmlDom: DOM tree as from L{createOutputDom}.
   @param parentNode: Parent that the section should be appended to.
   @param purgeDir: Purge directory to be added to the document.
   """
   if purgeDir is not None:
      sectionNode = addContainerNode(xmlDom, parentNode, "dir")
      addStringNode(xmlDom, sectionNode, "abs_path", purgeDir.absolutePath)
      addIntegerNode(xmlDom, sectionNode, "retain_days", purgeDir.retainDays)

@staticmethod
def _addDependencies(xmlDom, parentNode, dependencies):
   """
   Adds extended action dependencies to a parent node.

   We add the following fields to the document::

      runBefore   depends/run_before
      runAfter    depends/run_after

   If C{dependencies} is C{None}, this method call will be a no-op.

   @param xmlDom: DOM tree as from L{createOutputDom}.
   @param parentNode: Parent that the section should be appended to.
   @param dependencies: C{ActionDependencies} object to be added to the document.
   """
   if dependencies is not None:
      sectionNode = addContainerNode(xmlDom, parentNode, "depends")
      runBefore = Config._buildCommaSeparatedString(dependencies.beforeList)
      runAfter = Config._buildCommaSeparatedString(dependencies.afterList)
      addStringNode(xmlDom, sectionNode, "run_before", runBefore)
      addStringNode(xmlDom, sectionNode, "run_after", runAfter)

@staticmethod
def _buildCommaSeparatedString(valueList):
   """
   Creates a comma-separated string from a list of values.

   As a special case, if C{valueList} is C{None}, then C{None} will be
   returned.

   @param valueList: List of values to be placed into a string

   @return: Values from valueList as a comma-separated string.
   """
   if valueList is None:
      return None
   return ",".join(valueList)

@staticmethod
def _addBlankBehavior(xmlDom, parentNode, blankBehavior):
   """
   Adds a blanking behavior container as the next child of a parent.

   We add the following fields to the document::

      blankMode     blank_behavior/mode
      blankFactor   blank_behavior/factor

   The <blank_behavior> node itself is created as the next child of the
   parent node.

   If C{blankBehavior} is C{None}, this method call will be a no-op.

   @param xmlDom: DOM tree as from L{createOutputDom}.
   @param parentNode: Parent that the section should be appended to.
   @param blankBehavior: Blanking behavior to be added to the document.
   """
   if blankBehavior is not None:
      sectionNode = addContainerNode(xmlDom, parentNode, "blank_behavior")
      addStringNode(xmlDom, sectionNode, "mode", blankBehavior.blankMode)
      addStringNode(xmlDom, sectionNode, "factor", blankBehavior.blankFactor)


#################################################
# High-level methods used for validating content
#################################################
def _validateContents(self):
   """
   Validates configuration contents per rules discussed in module
   documentation.

   This is the second pass at validation. It ensures that any filled-in
   section contains valid data. Any section that is not set to C{None} is
   validated per the rules for that section, laid out in the module
   documentation (above).

   @raise ValueError: If configuration is invalid.
   """
   self._validateReference()
   self._validateExtensions()
   self._validateOptions()
   self._validatePeers()
   self._validateCollect()
   self._validateStage()
   self._validateStore()
   self._validatePurge()
def _validateReference(self):
   """
   Validates reference configuration.
   There are currently no reference-related validations.
   @raise ValueError: If reference configuration is invalid.
   """
   pass
def _validateExtensions(self):
   """
   Validates extensions configuration.

   The list of actions may be either C{None} or an empty list C{[]} if
   desired. Each extended action must include a name, a module, and a
   function.

   Then, if the order mode is None or "index", an index is required; and if
   the order mode is "dependency", dependency information is required.

   @raise ValueError: If extensions configuration is invalid.
   """
   if self.extensions is not None:
      if self.extensions.actions is not None:
         names = []
         for action in self.extensions.actions:
            if action.name is None:
               raise ValueError("Each extended action must set a name.")
            names.append(action.name)
            if action.module is None:
               raise ValueError("Each extended action must set a module.")
            if action.function is None:
               raise ValueError("Each extended action must set a function.")
            if self.extensions.orderMode is None or self.extensions.orderMode == "index":
               if action.index is None:
                  raise ValueError("Each extended action must set an index, based on order mode.")
            elif self.extensions.orderMode == "dependency":
               if action.dependencies is None:
                  raise ValueError("Each extended action must set dependency information, based on order mode.")
         checkUnique("Duplicate extension names exist:", names)
def _validateOptions(self):
   """
   Validates options configuration.

   All fields must be filled in except the rsh command. The rcp and rsh
   commands are used as default values for all remote peers. Remote peers
   can also rely on the backup user as the default remote user name if they
   choose.

   @raise ValueError: If options configuration is invalid.
   """
   if self.options is not None:
      if self.options.startingDay is None:
         raise ValueError("Options section starting day must be filled in.")
      if self.options.workingDir is None:
         raise ValueError("Options section working directory must be filled in.")
      if self.options.backupUser is None:
         raise ValueError("Options section backup user must be filled in.")
      if self.options.backupGroup is None:
         raise ValueError("Options section backup group must be filled in.")
      if self.options.rcpCommand is None:
         raise ValueError("Options section remote copy command must be filled in.")
def _validatePeers(self):
   """
   Validates peers configuration per rules in L{_validatePeerList}.
   @raise ValueError: If peers configuration is invalid.
   """
   if self.peers is not None:
      self._validatePeerList(self.peers.localPeers, self.peers.remotePeers)
def _validateCollect(self):
   """
   Validates collect configuration.

   The target directory must be filled in. The collect mode, archive mode,
   ignore file, and recursion level are all optional. The list of absolute
   paths to exclude and patterns to exclude may be either C{None} or an
   empty list C{[]} if desired.

   Each collect directory entry must contain an absolute path to collect,
   and then must either be able to take collect mode, archive mode and
   ignore file configuration from the parent C{CollectConfig} object, or
   must set each value on its own. The list of absolute paths to exclude,
   relative paths to exclude and patterns to exclude may be either C{None}
   or an empty list C{[]} if desired. Any list of absolute paths to exclude
   or patterns to exclude will be combined with the same list in the
   C{CollectConfig} object to make the complete list for a given directory.

   @raise ValueError: If collect configuration is invalid.
   """
   if self.collect is not None:
      if self.collect.targetDir is None:
         raise ValueError("Collect section target directory must be filled in.")
      if self.collect.collectFiles is not None:
         for collectFile in self.collect.collectFiles:
            if collectFile.absolutePath is None:
               raise ValueError("Each collect file must set an absolute path.")
            if self.collect.collectMode is None and collectFile.collectMode is None:
               raise ValueError("Collect mode must either be set in parent collect section or individual collect file.")
            if self.collect.archiveMode is None and collectFile.archiveMode is None:
               raise ValueError("Archive mode must either be set in parent collect section or individual collect file.")
      if self.collect.collectDirs is not None:
         for collectDir in self.collect.collectDirs:
            if collectDir.absolutePath is None:
               raise ValueError("Each collect directory must set an absolute path.")
            if self.collect.collectMode is None and collectDir.collectMode is None:
               raise ValueError("Collect mode must either be set in parent collect section or individual collect directory.")
            if self.collect.archiveMode is None and collectDir.archiveMode is None:
               raise ValueError("Archive mode must either be set in parent collect section or individual collect directory.")
            if self.collect.ignoreFile is None and collectDir.ignoreFile is None:
               raise ValueError("Ignore file must either be set in parent collect section or individual collect directory.")
            if (collectDir.linkDepth is None or collectDir.linkDepth < 1) and collectDir.dereference:
               raise ValueError("Dereference flag is only valid when a non-zero link depth is in use.")
def _validateStage(self):
   """
   Validates stage configuration.

   The target directory must be filled in, and the peers are
   also validated.

   Peers are only required in this section if the peers configuration
   section is not filled in. However, if any peers are filled in
   here, they override the peers configuration and must meet the
   validation criteria in L{_validatePeerList}.

   @raise ValueError: If stage configuration is invalid.
   """
   if self.stage is not None:
      if self.stage.targetDir is None:
         raise ValueError("Stage section target directory must be filled in.")
      if self.peers is None:
         # In this case, stage configuration is our only configuration and must be valid.
         self._validatePeerList(self.stage.localPeers, self.stage.remotePeers)
      else:
         # In this case, peers configuration is the default and stage configuration overrides.
         # Validation is only needed if stage configuration is actually filled in.
         if self.stage.hasPeers():
            self._validatePeerList(self.stage.localPeers, self.stage.remotePeers)
def _validateStore(self):
   """
   Validates store configuration.

   The device type, drive speed, and blanking behavior are optional. All
   other values are required. Missing booleans will be set to defaults.

   If blanking behavior is provided, then both a blanking mode and a
   blanking factor are required.

   The image writer functionality in the C{writer} module is supposed to be
   able to handle a device speed of C{None}.

   Any caller which needs a "real" (non-C{None}) value for the device type
   can use C{DEFAULT_DEVICE_TYPE}, which is guaranteed to be sensible.

   This is also where we make sure that the media type -- which is already a
   valid type -- matches up properly with the device type.

   @raise ValueError: If store configuration is invalid.
   """
   if self.store is not None:
      if self.store.sourceDir is None:
         raise ValueError("Store section source directory must be filled in.")
      if self.store.mediaType is None:
         raise ValueError("Store section media type must be filled in.")
      if self.store.devicePath is None:
         raise ValueError("Store section device path must be filled in.")
      if self.store.deviceType is None or self.store.deviceType == "cdwriter":
         if self.store.mediaType not in VALID_CD_MEDIA_TYPES:
            raise ValueError("Media type must match device type.")
      elif self.store.deviceType == "dvdwriter":
         if self.store.mediaType not in VALID_DVD_MEDIA_TYPES:
            raise ValueError("Media type must match device type.")
      if self.store.blankBehavior is not None:
         if self.store.blankBehavior.blankMode is None or self.store.blankBehavior.blankFactor is None:
            raise ValueError("If blanking behavior is provided, all values must be filled in.")
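The media/device cross-check in `_validateStore` can be exercised on its own. The media type lists below are illustrative assumptions, not the actual `VALID_CD_MEDIA_TYPES`/`VALID_DVD_MEDIA_TYPES` constants:

```python
# Illustrative lists only; the real constants are defined elsewhere in the module.
CD_MEDIA = ["cdr-74", "cdrw-74", "cdr-80", "cdrw-80"]
DVD_MEDIA = ["dvd+r", "dvd+rw"]

def check_media_matches_device(device_type, media_type):
    # A device type of None is treated as the default CD writer, as in _validateStore.
    if device_type in (None, "cdwriter"):
        valid = CD_MEDIA
    elif device_type == "dvdwriter":
        valid = DVD_MEDIA
    else:
        return  # unknown device types are not cross-checked in this sketch
    if media_type not in valid:
        raise ValueError("Media type must match device type.")

check_media_matches_device("dvdwriter", "dvd+rw")  # passes silently
```

The key behavior is that an unset device type still constrains the media type, because the default device is a CD writer.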
def _validatePurge(self):
   """
   Validates purge configuration.

   The list of purge directories may be either C{None} or an empty list
   C{[]} if desired. All purge directories must contain a path and a retain
   days value.

   @raise ValueError: If purge configuration is invalid.
   """
   if self.purge is not None:
      if self.purge.purgeDirs is not None:
         for purgeDir in self.purge.purgeDirs:
            if purgeDir.absolutePath is None:
               raise ValueError("Each purge directory must set an absolute path.")
            if purgeDir.retainDays is None:
               raise ValueError("Each purge directory must set a retain days value.")
def _validatePeerList(self, localPeers, remotePeers):
   """
   Validates the set of local and remote peers.

   Local peers must be completely filled in, including both name and collect
   directory. Remote peers must also fill in the name and collect
   directory, but can leave the remote user and rcp command unset. In this
   case, the remote user is assumed to match the backup user from the
   options section and rcp command is taken directly from the options
   section.

   @param localPeers: List of local peers
   @param remotePeers: List of remote peers

   @raise ValueError: If stage configuration is invalid.
   """
   if localPeers is None and remotePeers is None:
      raise ValueError("Peer list must contain at least one backup peer.")
   if localPeers is None and remotePeers is not None:
      if len(remotePeers) < 1:
         raise ValueError("Peer list must contain at least one backup peer.")
   elif localPeers is not None and remotePeers is None:
      if len(localPeers) < 1:
         raise ValueError("Peer list must contain at least one backup peer.")
   elif localPeers is not None and remotePeers is not None:
      if len(localPeers) + len(remotePeers) < 1:
         raise ValueError("Peer list must contain at least one backup peer.")
   names = []
   if localPeers is not None:
      for localPeer in localPeers:
         if localPeer.name is None:
            raise ValueError("Local peers must set a name.")
         names.append(localPeer.name)
         if localPeer.collectDir is None:
            raise ValueError("Local peers must set a collect directory.")
   if remotePeers is not None:
      for remotePeer in remotePeers:
         if remotePeer.name is None:
            raise ValueError("Remote peers must set a name.")
         names.append(remotePeer.name)
         if remotePeer.collectDir is None:
            raise ValueError("Remote peers must set a collect directory.")
         if (self.options is None or self.options.backupUser is None) and remotePeer.remoteUser is None:
            raise ValueError("Remote user must either be set in options section or individual remote peer.")
         if (self.options is None or self.options.rcpCommand is None) and remotePeer.rcpCommand is None:
            raise ValueError("Remote copy command must either be set in options section or individual remote peer.")
         if remotePeer.managed:
            if (self.options is None or self.options.rshCommand is None) and remotePeer.rshCommand is None:
               raise ValueError("Remote shell command must either be set in options section or individual remote peer.")
            if (self.options is None or self.options.cbackCommand is None) and remotePeer.cbackCommand is None:
               raise ValueError("Remote cback command must either be set in options section or individual remote peer.")
            if ((self.options is None or self.options.managedActions is None or len(self.options.managedActions) < 1)
                  and (remotePeer.managedActions is None or len(remotePeer.managedActions) < 1)):
               raise ValueError("Managed actions list must be set in options section or individual remote peer.")
   checkUnique("Duplicate peer names exist:", names)
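The four-branch emptiness check in `_validatePeerList` collapses to a single condition once `None` is normalized to an empty list. A hedged sketch of the equivalent core logic, using plain dicts in place of the real peer objects (an assumption for illustration):

```python
def validate_peer_names(local_peers, remote_peers):
    # Treat None the same as an empty list, then apply one combined check.
    peers = (local_peers or []) + (remote_peers or [])
    if len(peers) < 1:
        raise ValueError("Peer list must contain at least one backup peer.")
    names = [peer["name"] for peer in peers]
    if len(names) != len(set(names)):
        raise ValueError("Duplicate peer names exist.")

validate_peer_names([{"name": "local1"}], [{"name": "remote1"}])  # passes silently
```

This sketch covers only the list-shape and uniqueness rules; the per-peer fallback checks against the options section are omitted.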

########################################################################
# General utility functions
########################################################################

def readByteQuantity(parent, name):
   """
   Read a byte size value from an XML document.

   A byte size value is an interpreted string value. If the string value
   ends with "KB", "MB" or "GB", then the string before that is interpreted
   as kilobytes, megabytes or gigabytes. Otherwise, it is interpreted as
   bytes.

   @param parent: Parent node to search beneath.
   @param name: Name of node to search for.

   @return: ByteQuantity parsed from XML document
   """
   data = readString(parent, name)
   if data is None:
      return None
   data = data.strip()
   if data.endswith("KB"):
      quantity = data[0:data.rfind("KB")].strip()
      units = UNIT_KBYTES
   elif data.endswith("MB"):
      quantity = data[0:data.rfind("MB")].strip()
      units = UNIT_MBYTES
   elif data.endswith("GB"):
      quantity = data[0:data.rfind("GB")].strip()
      units = UNIT_GBYTES
   else:
      quantity = data.strip()
      units = UNIT_BYTES
   return ByteQuantity(quantity, units)
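The suffix handling in `readByteQuantity` can be exercised in isolation. This sketch uses integer placeholders for the `UNIT_*` constants and returns a `(quantity, units)` tuple instead of a `ByteQuantity` object — both are assumptions for illustration:

```python
# Placeholder unit constants; the real values live in CedarBackup3's util module.
UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES = 0, 1, 2, 3

def parse_byte_quantity(data):
    # Suffixes don't overlap, so a simple first-match scan is sufficient.
    if data is None:
        return None
    data = data.strip()
    for suffix, units in (("KB", UNIT_KBYTES), ("MB", UNIT_MBYTES), ("GB", UNIT_GBYTES)):
        if data.endswith(suffix):
            return (data[:-len(suffix)].strip(), units)
    return (data, UNIT_BYTES)

print(parse_byte_quantity("2.5 GB"))  # ('2.5', 3)
```

Note that the quantity stays a string at this stage; configurations like "2.5 GB" keep their fractional value intact until conversion.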
def addByteQuantityNode(xmlDom, parentNode, nodeName, byteQuantity):
   """
   Adds a text node as the next child of a parent, to contain a byte size.

   If the C{byteQuantity} is None, then the node will be created, but will
   be empty (i.e. will contain no text node child).

   The value is emitted using the units stored with the quantity. A
   quantity in kilobytes, megabytes or gigabytes is rendered with the
   matching suffix ("1.0 KB", "1.0 MB", "1.0 GB"); otherwise the raw byte
   count is rendered ("423413").

   @param xmlDom: DOM tree as from C{impl.createDocument()}.
   @param parentNode: Parent node to create child for.
   @param nodeName: Name of the new container node.
   @param byteQuantity: ByteQuantity object to put into the XML document

   @return: Reference to the newly-created node.
   """
   if byteQuantity is None:
      byteString = None
   elif byteQuantity.units == UNIT_KBYTES:
      byteString = "%s KB" % byteQuantity.quantity
   elif byteQuantity.units == UNIT_MBYTES:
      byteString = "%s MB" % byteQuantity.quantity
   elif byteQuantity.units == UNIT_GBYTES:
      byteString = "%s GB" % byteQuantity.quantity
   else:
      byteString = byteQuantity.quantity
   return addStringNode(xmlDom, parentNode, nodeName, byteString)
    6293
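The unit-suffix rule used by these functions can be exercised standalone. The sketch below is illustrative only: the function name `parse_byte_quantity` and its tuple return value are not part of the Cedar Backup API, which returns a ByteQuantity object built with the UNIT_* constants.

```python
# Minimal sketch of the "KB"/"MB"/"GB" suffix rule described above.
# Anything without a recognized suffix is treated as a plain byte count.

def parse_byte_quantity(data):
    """Split a size string like '2.5 GB' into a (quantity, unit) pair."""
    data = data.strip()
    for suffix in ("KB", "MB", "GB"):
        if data.endswith(suffix):
            return data[:data.rfind(suffix)].strip(), suffix
    return data, "bytes"  # no recognized suffix
```

For example, `parse_byte_quantity("2.5 GB")` yields the quantity `"2.5"` with unit `"GB"`, matching the "2.5 GB" configuration style mentioned in the changelog for version 3.1.0.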

CedarBackup3-3.1.6/doc/interface/CedarBackup3.writers-module.html
CedarBackup3.writers
    Package CedarBackup3 :: Package writers

    Package writers

    source code

    Cedar Backup writers.

This package consolidates all of the modules that implement "image writer" functionality, including utilities and specific writer implementations.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Submodules [hide private]

    Variables [hide private]
      __package__ = None
CedarBackup3-3.1.6/doc/interface/CedarBackup3.extend.mysql.MysqlConfig-class.html
CedarBackup3.extend.mysql.MysqlConfig
    Package CedarBackup3 :: Package extend :: Module mysql :: Class MysqlConfig

    Class MysqlConfig

    source code

    object --+
             |
            MysqlConfig
    

    Class representing MySQL configuration.

    The MySQL configuration information is used for backing up MySQL databases.

    The following restrictions exist on data in this class:

    • The compress mode must be one of the values in VALID_COMPRESS_MODES.
    • The 'all' flag must be 'Y' if no databases are defined.
    • The 'all' flag must be 'N' if any databases are defined.
    • Any values in the databases list must be strings.
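The cross-field rule on the 'all' flag can be sketched as a simple validation check. This is an illustrative stand-in only; the real class enforces these rules through its property setters and the Cedar Backup configuration validation code.

```python
# Illustrative sketch of the consistency rules listed above: back up all
# databases, or name specific ones, but not both and not neither.

def check_mysql_config(all_flag, databases):
    """Raise ValueError if the 'all' flag disagrees with the databases list."""
    if not databases and not all_flag:
        raise ValueError("The 'all' flag must be set if no databases are defined.")
    if databases and all_flag:
        raise ValueError("The 'all' flag must be unset if databases are defined.")
    if databases and not all(isinstance(d, str) for d in databases):
        raise ValueError("Any values in the databases list must be strings.")
```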
    Instance Methods [hide private]
     
    __init__(self, user=None, password=None, compressMode=None, all=None, databases=None)
    Constructor for the MysqlConfig class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    __eq__(self, other)
Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    _setUser(self, value)
    Property target used to set the user value.
    source code
     
    _getUser(self)
    Property target used to get the user value.
    source code
     
    _setPassword(self, value)
    Property target used to set the password value.
    source code
     
    _getPassword(self)
    Property target used to get the password value.
    source code
     
    _setCompressMode(self, value)
    Property target used to set the compress mode.
    source code
     
    _getCompressMode(self)
    Property target used to get the compress mode.
    source code
     
    _setAll(self, value)
    Property target used to set the 'all' flag.
    source code
     
    _getAll(self)
    Property target used to get the 'all' flag.
    source code
     
    _setDatabases(self, value)
    Property target used to set the databases list.
    source code
     
    _getDatabases(self)
    Property target used to get the databases list.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties [hide private]
      user
    User to execute backup as.
      password
    Password associated with user.
      all
    Indicates whether to back up all databases.
      databases
    List of databases to back up.
      compressMode
    Compress mode to be used for backed-up files.

    Inherited from object: __class__

    Method Details [hide private]

    __init__(self, user=None, password=None, compressMode=None, all=None, databases=None)
    (Constructor)

    source code 

    Constructor for the MysqlConfig class.

    Parameters:
    • user - User to execute backup as.
    • password - Password associated with user.
    • compressMode - Compress mode for backed-up files.
    • all - Indicates whether to back up all databases.
    • databases - List of databases to back up.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.
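The pattern behind these operators — one Python 2 style three-way `__cmp__` plus thin Python 3 rich-comparison wrappers — can be sketched as follows. This is a simplified illustration; the real configuration classes compare many fields, not a single value.

```python
# Sketch of the __cmp__-bridge pattern described above: a single three-way
# comparison, with the Python 3 rich-comparison operators defined on top.

class Comparable:
    def __init__(self, value):
        self.value = value

    def __cmp__(self, other):
        """Return -1/0/1 depending on whether self is <, = or > other."""
        if other is None:
            return 1
        if self.value < other.value:
            return -1
        if self.value > other.value:
            return 1
        return 0

    def __eq__(self, other): return self.__cmp__(other) == 0
    def __lt__(self, other): return self.__cmp__(other) < 0
    def __gt__(self, other): return self.__cmp__(other) > 0
    def __le__(self, other): return self.__cmp__(other) <= 0
    def __ge__(self, other): return self.__cmp__(other) >= 0
```

This keeps the field-by-field comparison logic in one place, which is why the ports from version 2 retained `__cmp__` rather than rewriting each operator separately.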

    _setCompressMode(self, value)

    source code 

    Property target used to set the compress mode. If not None, the mode must be one of the values in VALID_COMPRESS_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setAll(self, value)

    source code 

    Property target used to set the 'all' flag. No validations, but we normalize the value to True or False.

    _setDatabases(self, value)

    source code 

    Property target used to set the databases list. Either the value must be None or each element must be a string.

    Raises:
• ValueError - If any element in the list is not a string.

    Property Details [hide private]

    user

    User to execute backup as.

    Get Method:
    _getUser(self) - Property target used to get the user value.
    Set Method:
    _setUser(self, value) - Property target used to set the user value.

    password

    Password associated with user.

    Get Method:
    _getPassword(self) - Property target used to get the password value.
    Set Method:
    _setPassword(self, value) - Property target used to set the password value.

    all

    Indicates whether to back up all databases.

    Get Method:
    _getAll(self) - Property target used to get the 'all' flag.
    Set Method:
    _setAll(self, value) - Property target used to set the 'all' flag.

    databases

    List of databases to back up.

    Get Method:
    _getDatabases(self) - Property target used to get the databases list.
    Set Method:
    _setDatabases(self, value) - Property target used to set the databases list.

    compressMode

    Compress mode to be used for backed-up files.

    Get Method:
    _getCompressMode(self) - Property target used to get the compress mode.
    Set Method:
    _setCompressMode(self, value) - Property target used to set the compress mode.

CedarBackup3-3.1.6/doc/interface/CedarBackup3.util.Diagnostics-class.html
CedarBackup3.util.Diagnostics
    Package CedarBackup3 :: Module util :: Class Diagnostics

    Class Diagnostics

    source code

    object --+
             |
            Diagnostics
    

    Class holding runtime diagnostic information.

    Diagnostic information is information that is useful to get from users for debugging purposes. I'm consolidating it all here into one object.

    Instance Methods [hide private]
     
    __init__(self)
    Constructor for the Diagnostics class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    getValues(self)
    Get a map containing all of the diagnostic values.
    source code
     
    printDiagnostics(self, fd=sys.stdout, prefix='')
    Pretty-print diagnostic information to a file descriptor.
    source code
     
    logDiagnostics(self, method, prefix='')
    Pretty-print diagnostic information using a logger method.
    source code
     
    _buildDiagnosticLines(self, prefix='')
    Build a set of pretty-printed diagnostic lines.
    source code
     
    _getVersion(self)
    Property target to get the Cedar Backup version.
    source code
     
    _getInterpreter(self)
    Property target to get the Python interpreter version.
    source code
     
    _getEncoding(self)
    Property target to get the filesystem encoding.
    source code
     
    _getPlatform(self)
    Property target to get the operating system platform.
    source code
     
    _getLocale(self)
    Property target to get the default locale that is in effect.
    source code
     
    _getTimestamp(self)
    Property target to get a current date/time stamp.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Static Methods [hide private]
     
    _getMaxLength(values)
    Get the maximum length from among a list of strings.
    source code
    Properties [hide private]
      version
    Cedar Backup version.
      interpreter
    Python interpreter version.
      platform
    Platform identifying information.
      encoding
    Filesystem encoding that is in effect.
      locale
    Locale that is in effect.
      timestamp
    Current timestamp.

    Inherited from object: __class__

    Method Details [hide private]

    __init__(self)
    (Constructor)

    source code 

    Constructor for the Diagnostics class.

    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    getValues(self)

    source code 

    Get a map containing all of the diagnostic values.

    Returns:
    Map from diagnostic name to diagnostic value.

    printDiagnostics(self, fd=sys.stdout, prefix='')

    source code 

    Pretty-print diagnostic information to a file descriptor.

    Parameters:
    • fd - File descriptor used to print information.
    • prefix - Prefix string (if any) to place onto printed lines

    Note: The fd is used rather than print to facilitate unit testing.

    logDiagnostics(self, method, prefix='')

    source code 

    Pretty-print diagnostic information using a logger method.

    Parameters:
    • method - Logger method to use for logging (i.e. logger.info)
    • prefix - Prefix string (if any) to place onto printed lines

    _buildDiagnosticLines(self, prefix='')

    source code 

    Build a set of pretty-printed diagnostic lines.

    Parameters:
    • prefix - Prefix string (if any) to place onto printed lines
    Returns:
    List of strings, not terminated by newlines.
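The alignment implied by `_buildDiagnosticLines` and the `_getMaxLength` helper can be sketched like this. The function name, dictionary input, and exact layout below are illustrative, not the exact Cedar Backup output.

```python
# Sketch of the pretty-printing described above: pad each diagnostic name
# to the width of the longest name so the values line up in one column.

def build_diagnostic_lines(values, prefix=""):
    """Return aligned 'name: value' lines, not terminated by newlines."""
    max_len = max(len(name) for name in values)  # cf. _getMaxLength()
    return ["%s%s: %s" % (prefix, name.ljust(max_len), value)
            for name, value in values.items()]
```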

    Property Details [hide private]

    version

    Cedar Backup version.

    Get Method:
    _getVersion(self) - Property target to get the Cedar Backup version.

    interpreter

    Python interpreter version.

    Get Method:
    _getInterpreter(self) - Property target to get the Python interpreter version.

    platform

    Platform identifying information.

    Get Method:
    _getPlatform(self) - Property target to get the operating system platform.

    encoding

    Filesystem encoding that is in effect.

    Get Method:
    _getEncoding(self) - Property target to get the filesystem encoding.

    locale

    Locale that is in effect.

    Get Method:
    _getLocale(self) - Property target to get the default locale that is in effect.

    timestamp

    Current timestamp.

    Get Method:
    _getTimestamp(self) - Property target to get a current date/time stamp.

CedarBackup3-3.1.6/doc/interface/CedarBackup3.extend.subversion.RepositoryDir-class.html
CedarBackup3.extend.subversion.RepositoryDir
    Package CedarBackup3 :: Package extend :: Module subversion :: Class RepositoryDir

    Class RepositoryDir

    source code

    object --+
             |
            RepositoryDir
    

    Class representing Subversion repository directory.

    A repository directory is a directory that contains one or more Subversion repositories.

    The following restrictions exist on data in this class:

    • The directory path must be absolute.
    • The collect mode must be one of the values in VALID_COLLECT_MODES.
    • The compress mode must be one of the values in VALID_COMPRESS_MODES.

    The repository type value is kept around just for reference. It doesn't affect the behavior of the backup.

    Relative exclusions are allowed here. However, there is no configured ignore file, because repository dir backups are not recursive.

    Instance Methods [hide private]
     
    __init__(self, repositoryType=None, directoryPath=None, collectMode=None, compressMode=None, relativeExcludePaths=None, excludePatterns=None)
    Constructor for the RepositoryDir class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    __eq__(self, other)
Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    _setRepositoryType(self, value)
    Property target used to set the repository type.
    source code
     
    _getRepositoryType(self)
    Property target used to get the repository type.
    source code
     
    _setDirectoryPath(self, value)
    Property target used to set the directory path.
    source code
     
    _getDirectoryPath(self)
    Property target used to get the repository path.
    source code
     
    _setCollectMode(self, value)
    Property target used to set the collect mode.
    source code
     
    _getCollectMode(self)
    Property target used to get the collect mode.
    source code
     
    _setCompressMode(self, value)
    Property target used to set the compress mode.
    source code
     
    _getCompressMode(self)
    Property target used to get the compress mode.
    source code
     
    _setRelativeExcludePaths(self, value)
    Property target used to set the relative exclude paths list.
    source code
     
    _getRelativeExcludePaths(self)
    Property target used to get the relative exclude paths list.
    source code
     
    _setExcludePatterns(self, value)
    Property target used to set the exclude patterns list.
    source code
     
    _getExcludePatterns(self)
    Property target used to get the exclude patterns list.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties [hide private]
      directoryPath
    Absolute path of the Subversion parent directory.
      collectMode
    Overridden collect mode for this repository.
      compressMode
    Overridden compress mode for this repository.
      repositoryType
    Type of this repository, for reference.
      relativeExcludePaths
    List of relative paths to exclude.
      excludePatterns
    List of regular expression patterns to exclude.

    Inherited from object: __class__

    Method Details [hide private]

    __init__(self, repositoryType=None, directoryPath=None, collectMode=None, compressMode=None, relativeExcludePaths=None, excludePatterns=None)
    (Constructor)

    source code 

    Constructor for the RepositoryDir class.

    Parameters:
    • repositoryType - Type of repository, for reference
    • directoryPath - Absolute path of the Subversion parent directory
    • collectMode - Overridden collect mode for this directory.
    • compressMode - Overridden compression mode for this directory.
    • relativeExcludePaths - List of relative paths to exclude.
    • excludePatterns - List of regular expression patterns to exclude
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setRepositoryType(self, value)

    source code 

    Property target used to set the repository type. There is no validation; this value is kept around just for reference.

    _setDirectoryPath(self, value)

    source code 

    Property target used to set the directory path. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.
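The "property target" pattern these getters and setters describe — validation in the setter, a trivial getter, both wired up with `property()` — can be sketched like this. The class name and field below are illustrative; only the absolute-path check mirrors the documented rule.

```python
# Sketch of a validating property target: the setter rejects non-absolute
# paths (which need not exist on disk at assignment time, per the docs).
import os

class RepositoryDirSketch:
    def _setDirectoryPath(self, value):
        if value is not None and not os.path.isabs(value):
            raise ValueError("Directory path must be an absolute path.")
        self._directoryPath = value

    def _getDirectoryPath(self):
        return self._directoryPath

    directoryPath = property(_getDirectoryPath, _setDirectoryPath)
```

Centralizing validation in the setter means every assignment — from the constructor or from configuration parsing — passes through the same check.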

    _setCollectMode(self, value)

    source code 

    Property target used to set the collect mode. If not None, the mode must be one of the values in VALID_COLLECT_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setCompressMode(self, value)

    source code 

    Property target used to set the compress mode. If not None, the mode must be one of the values in VALID_COMPRESS_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setRelativeExcludePaths(self, value)

    source code 

    Property target used to set the relative exclude paths list. Elements do not have to exist on disk at the time of assignment.


    Property Details [hide private]

    directoryPath

    Absolute path of the Subversion parent directory.

    Get Method:
    _getDirectoryPath(self) - Property target used to get the repository path.
    Set Method:
    _setDirectoryPath(self, value) - Property target used to set the directory path.

    collectMode

    Overridden collect mode for this repository.

    Get Method:
    _getCollectMode(self) - Property target used to get the collect mode.
    Set Method:
    _setCollectMode(self, value) - Property target used to set the collect mode.

    compressMode

    Overridden compress mode for this repository.

    Get Method:
    _getCompressMode(self) - Property target used to get the compress mode.
    Set Method:
    _setCompressMode(self, value) - Property target used to set the compress mode.

    repositoryType

    Type of this repository, for reference.

    Get Method:
    _getRepositoryType(self) - Property target used to get the repository type.
    Set Method:
    _setRepositoryType(self, value) - Property target used to set the repository type.

    relativeExcludePaths

    List of relative paths to exclude.

    Get Method:
    _getRelativeExcludePaths(self) - Property target used to get the relative exclude paths list.
    Set Method:
    _setRelativeExcludePaths(self, value) - Property target used to set the relative exclude paths list.

    excludePatterns

    List of regular expression patterns to exclude.

    Get Method:
    _getExcludePatterns(self) - Property target used to get the exclude patterns list.
    Set Method:
    _setExcludePatterns(self, value) - Property target used to set the exclude patterns list.

CedarBackup3-3.1.6/doc/interface/CedarBackup3.config.ExtensionsConfig-class.html
CedarBackup3.config.ExtensionsConfig
    Package CedarBackup3 :: Module config :: Class ExtensionsConfig

    Class ExtensionsConfig

    source code

    object --+
             |
            ExtensionsConfig
    

    Class representing Cedar Backup extensions configuration.

    Extensions configuration is used to specify "extended actions" implemented by code external to Cedar Backup. For instance, a hypothetical third party might write extension code to collect database repository data. If they write a properly-formatted extension function, they can use the extension configuration to map a command-line Cedar Backup action (i.e. "database") to their function.

    The following restrictions exist on data in this class:

    • If set, the order mode must be one of the values in VALID_ORDER_MODES
    • The actions list must be a list of ExtendedAction objects.
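The action-to-function mapping described above can be sketched as a small dispatcher. The dictionary fields (`name`, `module`, `function`) follow the documented concept of an extended action, but this container and the dispatcher itself are simplified stand-ins, not the real ExtendedAction class or Cedar Backup's dispatch code.

```python
# Illustrative sketch: map a command-line action name (e.g. "database")
# to an externally-implemented function and invoke it.
import importlib

def run_extended_action(actions, name, config):
    """Look up an extended action by name and invoke its function."""
    for action in actions:
        if action["name"] == name:
            module = importlib.import_module(action["module"])
            function = getattr(module, action["function"])
            return function(config)
    raise KeyError("No extended action named %r" % name)
```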
    Instance Methods [hide private]
     
    __init__(self, actions=None, orderMode=None)
    Constructor for the ExtensionsConfig class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    __eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    _setOrderMode(self, value)
    Property target used to set the order mode.
    source code
     
    _getOrderMode(self)
    Property target used to get the order mode.
    source code
     
    _setActions(self, value)
    Property target used to set the actions list.
    source code
     
    _getActions(self)
    Property target used to get the actions list.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties [hide private]
      orderMode
    Order mode for extensions, to control execution ordering.
      actions
    List of extended actions.

    Inherited from object: __class__

    Method Details [hide private]

    __init__(self, actions=None, orderMode=None)
    (Constructor)

    source code 

    Constructor for the ExtensionsConfig class.

    Parameters:
• actions - List of extended actions
• orderMode - Order mode for extensions, to control execution ordering
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setOrderMode(self, value)

    source code 

    Property target used to set the order mode. The value must be one of VALID_ORDER_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setActions(self, value)

    source code 

    Property target used to set the actions list. Either the value must be None or each element must be an ExtendedAction.

    Raises:
• ValueError - If any element in the list is not an ExtendedAction.

    Property Details [hide private]

    orderMode

    Order mode for extensions, to control execution ordering.

    Get Method:
    _getOrderMode(self) - Property target used to get the order mode.
    Set Method:
    _setOrderMode(self, value) - Property target used to set the order mode.

    actions

    List of extended actions.

    Get Method:
    _getActions(self) - Property target used to get the actions list.
    Set Method:
    _setActions(self, value) - Property target used to set the actions list.

CedarBackup3-3.1.6/doc/interface/identifier-index.html
Identifier Index
     

    Identifier Index



CedarBackup3-3.1.6/doc/interface/CedarBackup3.config.PostActionHook-class.html
CedarBackup3.config.PostActionHook
    Package CedarBackup3 :: Module config :: Class PostActionHook

    Class PostActionHook

    source code

    object --+    
             |    
    ActionHook --+
                 |
                PostActionHook
    

Class representing a post-action hook associated with an action.

    A hook associated with an action is a shell command to be executed either before or after a named action is executed. In this case, a post-action hook is executed after the named action.

    The following restrictions exist on data in this class:

    • The action name must be a non-empty string consisting of lower-case letters and digits.
    • The shell command must be a non-empty string.

The internal after instance variable is always set to True in this class.
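The hook concept described here — an action name paired with a shell command, plus a flag saying whether the command runs before or after the action — can be sketched as follows. Class and field names mirror the documented properties, but this is a simplified illustration, not the real ActionHook hierarchy.

```python
# Sketch of the pre/post hook concept: validate the action name and
# command per the restrictions above, and derive 'after' from 'before'.
import re

class ActionHookSketch:
    def __init__(self, action, command, before):
        if not re.fullmatch(r"[a-z0-9]+", action or ""):
            raise ValueError("Action name must be lower-case letters and digits.")
        if not command:
            raise ValueError("Shell command must be a non-empty string.")
        self.action, self.command, self.before = action, command, before

    @property
    def after(self):
        return not self.before

class PostActionHookSketch(ActionHookSketch):
    def __init__(self, action, command):
        super().__init__(action, command, before=False)  # post-hook: runs after
```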

    Instance Methods [hide private]
     
    __init__(self, action=None, command=None)
    Constructor for the PostActionHook class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code

    Inherited from ActionHook: __str__, __cmp__, __eq__, __lt__, __gt__, __ge__, __le__

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties [hide private]

    Inherited from ActionHook: action, command, before, after

    Inherited from object: __class__

    Method Details [hide private]

    __init__(self, action=None, command=None)
    (Constructor)

    source code 

    Constructor for the PostActionHook class.

    Parameters:
    • action - Action this hook is associated with
    • command - Shell command to execute
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

CedarBackup3-3.1.6/doc/interface/toc-CedarBackup3.actions.initialize-module.html
initialize

    Module initialize


    Functions

    executeInitialize

    Variables

    __package__
    logger

CedarBackup3-3.1.6/doc/interface/toc-CedarBackup3.config-module.html
config

    Module config


    Classes

    ActionDependencies
    ActionHook
    BlankBehavior
    ByteQuantity
    CollectConfig
    CollectDir
    CollectFile
    CommandOverride
    Config
    ExtendedAction
    ExtensionsConfig
    LocalPeer
    OptionsConfig
    PeersConfig
    PostActionHook
    PreActionHook
    PurgeConfig
    PurgeDir
    ReferenceConfig
    RemotePeer
    StageConfig
    StoreConfig

    Functions

    addByteQuantityNode
    readByteQuantity

    Variables

    ACTION_NAME_REGEX
    DEFAULT_DEVICE_TYPE
    DEFAULT_MEDIA_TYPE
    REWRITABLE_MEDIA_TYPES
    VALID_ARCHIVE_MODES
    VALID_BLANK_MODES
    VALID_BYTE_UNITS
    VALID_CD_MEDIA_TYPES
    VALID_COLLECT_MODES
    VALID_COMPRESS_MODES
    VALID_DEVICE_TYPES
    VALID_DVD_MEDIA_TYPES
    VALID_FAILURE_MODES
    VALID_MEDIA_TYPES
    VALID_ORDER_MODES
    __package__
    logger

CedarBackup3-3.1.6/doc/interface/toc-CedarBackup3.extend.mbox-module.html
mbox

    Module mbox


    Classes

    LocalConfig
    MboxConfig
    MboxDir
    MboxFile

    Functions

    executeAction

    Variables

    GREPMAIL_COMMAND
    REVISION_PATH_EXTENSION
    __package__
    logger

CedarBackup3-3.1.6/doc/interface/CedarBackup3.tools-module.html
CedarBackup3.tools
    Package CedarBackup3 :: Package tools

    Package tools

    source code

    Official Cedar Backup Tools

    This package provides official Cedar Backup tools. Tools are things that feel a little like extensions, but don't fit the normal mold of extensions. For instance, they might not be intended to run from cron, or might need to interact dynamically with the user (i.e. accept user input).

    Tools are usually scripts that are run directly from the command line, just like the main cback3 script. Like the cback3 script, the majority of a tool is implemented in a .py module, and then the script just invokes the module's cli() function. The actual scripts for tools are distributed in the util/ directory.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Submodules [hide private]

    Variables [hide private]
      __package__ = None
CedarBackup3-3.1.6/doc/interface/CedarBackup3.testutil-module.html
CedarBackup3.testutil
    Package CedarBackup3 :: Module testutil

    Module testutil

    source code

    Provides unit-testing utilities.

    These utilities are kept here, separate from util.py, because they provide common functionality that I do not want exported "publicly" once Cedar Backup is installed on a system. They are only used for unit testing, and are only useful within the source tree.

    Many of these functions are in here because they are "good enough" for unit test work but are not robust enough to be real public functions. Others (like removedir) do what they are supposed to, but I don't want responsibility for making them available to others.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Functions [hide private]
     
    findResources(resources, dataDirs)
    Returns a dictionary of locations for various resources.
    source code
     
    commandAvailable(command)
    Indicates whether a command is available on $PATH somewhere.
    source code
     
    buildPath(components)
    Builds a complete path from a list of components.
    source code
     
    removedir(tree)
    Recursively removes an entire directory.
    source code
     
    extractTar(tmpdir, filepath)
    Extracts the indicated tar file to the indicated tmpdir.
    source code
     
    changeFileAge(filename, subtract=None)
    Changes a file age using the os.utime function.
    source code
     
    getMaskAsMode()
    Returns the user's current umask inverted to a mode.
    source code
     
    getLogin()
    Returns the name of the currently-logged in user.
    source code
     
    failUnlessAssignRaises(testCase, exception, obj, prop, value)
    Equivalent of failUnlessRaises, but used for property assignments instead.
    source code
     
    runningAsRoot()
    Returns boolean indicating whether the effective user id is root.
    source code
     
    platformDebian()
    Returns boolean indicating whether this is the Debian platform.
    source code
     
    platformMacOsX()
    Returns boolean indicating whether this is the Mac OS X platform.
    source code
     
    setupDebugLogger()
    Sets up a screen logger for debugging purposes.
    source code
     
    setupOverrides()
    Set up any platform-specific overrides that might be required.
    source code
     
    randomFilename(length, prefix=None, suffix=None)
    Generates a random filename with the given length.
    source code
     
    captureOutput(c)
    Captures the output (stdout, stderr) of a function or a method.
    source code
     
    _isPlatform(name)
    Returns boolean indicating whether we're running on the indicated platform.
    source code
     
    availableLocales()
    Returns a list of available locales on the system.
    source code
    Variables
      __package__ = 'CedarBackup3'
    Function Details

    findResources(resources, dataDirs)

    source code 

    Returns a dictionary of locations for various resources.

    Parameters:
    • resources - List of required resources.
    • dataDirs - List of data directories to search within for resources.
    Returns:
    Dictionary mapping resource name to resource path.
    Raises:
    • Exception - If some resource cannot be found.

    commandAvailable(command)

    source code 

    Indicates whether a command is available on $PATH somewhere. This should work on both Windows and UNIX platforms.

    Parameters:
    • command - Command to search for
    Returns:
    Boolean true/false depending on whether command is available.
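A portable check like the one described can be sketched with the standard library's shutil.which, which searches $PATH on both Windows and UNIX. The function name here is mine; this is an illustration, not the module's actual code.

```python
import shutil

def command_available(command):
    """Return True if the command can be found on $PATH (sketch, not the real implementation)."""
    return shutil.which(command) is not None
```

Note that shutil.which also accepts an absolute path, in which case it simply checks that the file exists and is executable.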

    buildPath(components)

    source code 

    Builds a complete path from a list of components. For instance, constructs "/a/b/c" from ["/a", "b", "c",].

    Parameters:
    • components - List of components.
    Returns:
    String path constructed from components.
    Raises:
    • ValueError - If a path cannot be encoded properly.

    removedir(tree)

    source code 

    Recursively removes an entire directory. This is basically taken from an example on python.org.

    Parameters:
    • tree - Directory tree to remove.
    Raises:
    • ValueError - If a path cannot be encoded properly.

    extractTar(tmpdir, filepath)

    source code 

    Extracts the indicated tar file to the indicated tmpdir.

    Parameters:
    • tmpdir - Temp directory to extract to.
    • filepath - Path to tarfile to extract.
    Raises:
    • ValueError - If a path cannot be encoded properly.

    changeFileAge(filename, subtract=None)

    source code 

    Changes a file age using the os.utime function.

    Parameters:
    • filename - File to operate on.
    • subtract - Number of seconds to subtract from the current time.
    Raises:
    • ValueError - If a path cannot be encoded properly.

    Note: Some platforms don't seem to be able to set an age precisely. As a result, whereas we might have intended to set an age of 86400 seconds, we actually get an age of 86399.375 seconds. When util.calculateFileAge() looks at the file, it calculates an age of 0.999992766204 days, which then gets truncated down to zero whole days. The tests get very confused. To work around this, I always subtract off one additional second as a fudge factor. That way, the file will be at least as old as requested later on.
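The behavior described above can be sketched with os.utime; the extra second is the fudge factor mentioned in the note. This is an illustrative rewrite under my own naming, not the module's exact code.

```python
import os
import time

def change_file_age(filename, subtract=None):
    """Push a file's access and modification times into the past (illustrative sketch)."""
    if subtract is None:
        os.utime(filename, None)  # reset both times to "now"
    else:
        # Subtract one extra second so the file is at least as old as requested,
        # even on platforms that set timestamps imprecisely.
        age = time.time() - subtract - 1
        os.utime(filename, (age, age))
```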

    getMaskAsMode()

    source code 

    Returns the user's current umask inverted to a mode. A mode is mostly a bitwise inversion of a mask, i.e. mask 002 is mode 775.

    Returns:
    Umask converted to a mode, as an integer.
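The mask-to-mode inversion is a bitwise complement restricted to the permission bits; a sketch follows (os.umask must be called twice, because the only way to read the umask is to set it).

```python
import os

def get_mask_as_mode():
    """Return the current umask inverted to a mode, e.g. mask 0o002 -> mode 0o775 (sketch)."""
    umask = os.umask(0o777)  # reading the umask requires setting a throwaway value
    os.umask(umask)          # immediately restore the original umask
    return (~umask) & 0o777  # bitwise inversion, masked down to the permission bits
```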

    getLogin()

    source code 

    Returns the name of the currently-logged in user. This might fail under some circumstances - but if it does, our tests would fail anyway.

    failUnlessAssignRaises(testCase, exception, obj, prop, value)

    source code 

    Equivalent of failUnlessRaises, but used for property assignments instead.

    It's nice to be able to use failUnlessRaises to check that a method call raises the exception that you expect. Unfortunately, this method can't be used to check Python property assignments, even though these property assignments are actually implemented underneath as methods.

    This function (which can be easily called by unit test classes) provides an easy way to wrap the assignment checks. It's not pretty, or as intuitive as the original check it's modeled on, but it does work.

    Let's assume you make this method call:

      testCase.failUnlessAssignRaises(ValueError, collectDir, "absolutePath", absolutePath)
    

    If you do this, a test case failure will be raised unless the assignment:

      collectDir.absolutePath = absolutePath
    

    fails with a ValueError exception. The failure message differentiates between the case where no exception was raised and the case where the wrong exception was raised.

    Parameters:
    • testCase - PyUnit test case object (i.e. self).
    • exception - Exception that is expected to be raised.
    • obj - Object whose property is to be assigned to.
    • prop - Name of the property, as a string.
    • value - Value that is to be assigned to the property.

    Note: Internally, the missed and instead variables are used rather than directly calling testCase.fail upon noticing a problem because the act of "failure" itself generates an exception that would be caught by the general except clause.

    See Also: unittest.TestCase.failUnlessRaises
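One plausible implementation of the pattern described above, including the missed/instead bookkeeping from the note. The naming is mine, and this is a sketch rather than the module's actual code.

```python
def fail_unless_assign_raises(test_case, exception, obj, prop, value):
    """Fail the test unless assigning obj.prop = value raises the expected exception (sketch)."""
    missed = False
    instead = None
    try:
        setattr(obj, prop, value)
        missed = True              # assignment succeeded: no exception at all
    except exception:
        pass                       # the expected exception was raised
    except Exception as e:
        instead = e                # some other exception was raised
    # fail() itself raises an exception, so it must be called outside the try block
    if missed:
        test_case.fail("Expected %s, but no exception was raised" % exception.__name__)
    if instead is not None:
        test_case.fail("Expected %s, but got %s instead" % (exception.__name__, type(instead).__name__))
```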

    setupDebugLogger()

    source code 

    Sets up a screen logger for debugging purposes.

    Normally, the CLI functionality configures the logger so that things get written to the right place. However, for debugging it's sometimes nice to just get everything -- debug information and output -- dumped to the screen. This function takes care of that.

    setupOverrides()

    source code 

    Set up any platform-specific overrides that might be required.

    When packages are built, this is done manually (hardcoded) in customize.py and the overrides are set up in cli.cli(). This way, no runtime checks need to be done. This is safe, because the package maintainer knows exactly which platform (Debian or not) the package is being built for.

    Unit tests are different, because they might be run anywhere. So, we attempt to make a guess about the platform using platformDebian(), and use that to set up the custom overrides so that platform-specific unit tests continue to work.

    randomFilename(length, prefix=None, suffix=None)

    source code 

    Generates a random filename with the given length.

    Parameters:
    • length - Length of filename.
    Returns:
    Random filename.
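A minimal sketch of such a generator; the character set is my assumption, and the real function may differ.

```python
import random
import string

def random_filename(length, prefix=None, suffix=None):
    """Build a random filename whose random portion has the given length (sketch)."""
    name = "".join(random.choice(string.ascii_letters) for _ in range(length))
    if prefix is not None:
        name = prefix + name
    if suffix is not None:
        name = name + suffix
    return name
```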

    captureOutput(c)

    source code 

    Captures the output (stdout, stderr) of a function or a method.

    Some of our functions don't do anything other than just print output. We need a way to test these functions (at least nominally) but we don't want any of the output spoiling the test suite output.

    This function just creates a dummy file descriptor that can be used as a target by the callable function, rather than stdout or stderr.

    Parameters:
    • c - Callable function or method.
    Returns:
    Output of function, as one big string.

    Note: This method assumes that the callable doesn't take any arguments besides the keyword argument fd, which specifies the file descriptor.
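The fd keyword-argument convention can be sketched with an in-memory buffer standing in for the dummy file descriptor; this is illustrative only, with my own naming.

```python
import io

def capture_output(c):
    """Call c with a substitute file object and return everything written to it (sketch)."""
    fd = io.StringIO()
    c(fd=fd)  # the callable must accept the keyword argument fd, per the note above
    return fd.getvalue()

def example_printer(fd=None):
    # A hypothetical function that does nothing but print output
    print("hello", file=fd)
```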

    _isPlatform(name)

    source code 

    Returns boolean indicating whether we're running on the indicated platform.

    Parameters:
    • name - Platform name to check, currently one of "windows" or "macosx"

    availableLocales()

    source code 

    Returns a list of available locales on the system.

    Returns:
    List of string locale names

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.util.PathResolverSingleton._Helper-class.html: CedarBackup3.util.PathResolverSingleton._Helper
    Package CedarBackup3 :: Module util :: Class PathResolverSingleton :: Class _Helper

    Class _Helper

    source code

    Helper class to provide a singleton factory method.

    Instance Methods
     
    __init__(self) source code
     
    __call__(self, *args, **kw) source code
    CedarBackup3-3.1.6/doc/interface/CedarBackup3.actions.constants-module.html: CedarBackup3.actions.constants
    Package CedarBackup3 :: Package actions :: Module constants

    Module constants

    source code

    Provides common constants used by standard actions.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Variables
      DIR_TIME_FORMAT = '%Y/%m/%d'
      DIGEST_EXTENSION = 'sha'
      INDICATOR_PATTERN = ['cback\\..*']
      COLLECT_INDICATOR = 'cback.collect'
      STAGE_INDICATOR = 'cback.stage'
      STORE_INDICATOR = 'cback.store'
      __package__ = None
    hash(x)
    CedarBackup3-3.1.6/doc/interface/CedarBackup3.config.ByteQuantity-class.html: CedarBackup3.config.ByteQuantity
    Package CedarBackup3 :: Module config :: Class ByteQuantity

    Class ByteQuantity

    source code

    object --+
             |
            ByteQuantity
    

    Class representing a byte quantity.

    A byte quantity has both a quantity and a byte-related unit. Units are maintained using the constants from util.py. If no units are provided, UNIT_BYTES is assumed.

    The quantity is maintained internally as a string so that issues of precision can be avoided. It really isn't possible to store a floating point number here while being able to losslessly translate back and forth between XML and object representations. (Perhaps the Python 2.4 Decimal class would have been an option, but I originally wanted to stay compatible with Python 2.3.)

    Even though the quantity is maintained as a string, the string must represent a valid positive floating point number. Technically, any floating point string format supported by Python is allowable. However, it does not make sense to have a negative quantity of bytes in this context.

    Instance Methods
     
    __init__(self, quantity=None, units=None)
    Constructor for the ByteQuantity class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Python 2-style comparison operator.
    source code
     
    __eq__(self, other)
    Equals operator, implemented in terms of Python 2-style compare operator.
    source code
     
    __lt__(self, other)
    Less-than operator, implemented in terms of Python 2-style compare operator.
    source code
     
    __gt__(self, other)
    Greater-than operator, implemented in terms of Python 2-style compare operator.
    source code
     
    _setQuantity(self, value)
    Property target used to set the quantity. The value must be interpretable as a float if it is not None.
    source code
     
    _getQuantity(self)
    Property target used to get the quantity.
    source code
     
    _setUnits(self, value)
    Property target used to set the units value.
    source code
     
    _getUnits(self)
    Property target used to get the units value.
    source code
     
    _getBytes(self)
    Property target used to return the byte quantity as a floating point number.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      quantity
    Byte quantity, as a string
      units
    Units for byte quantity, for instance UNIT_BYTES
      bytes
    Byte quantity, as a floating point number.

    Inherited from object: __class__

    Method Details

    __init__(self, quantity=None, units=None)
    (Constructor)

    source code 

    Constructor for the ByteQuantity class.

    Parameters:
    • quantity - Quantity of bytes, something interpretable as a float
    • units - Unit of bytes, one of VALID_BYTE_UNITS
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Python 2-style comparison operator.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setQuantity(self, value)

    source code 

    Property target used to set the quantity. The value must be interpretable as a float if it is not None.

    Raises:
    • ValueError - If the value is an empty string.
    • ValueError - If the value is not a valid floating point number
    • ValueError - If the value is less than zero

    _setUnits(self, value)

    source code 

    Property target used to set the units value. If not None, the units value must be one of the values in VALID_BYTE_UNITS.

    Raises:
    • ValueError - If the value is not valid.

    _getBytes(self)

    source code 

    Property target used to return the byte quantity as a floating point number. If there is no quantity set, then a value of 0.0 is returned.


    Property Details

    quantity

    Byte quantity, as a string

    Get Method:
    _getQuantity(self) - Property target used to get the quantity.
    Set Method:
    _setQuantity(self, value) - Property target used to set the quantity. The value must be interpretable as a float if it is not None.

    units

    Units for byte quantity, for instance UNIT_BYTES

    Get Method:
    _getUnits(self) - Property target used to get the units value.
    Set Method:
    _setUnits(self, value) - Property target used to set the units value.

    bytes

    Byte quantity, as a floating point number.

    Get Method:
    _getBytes(self) - Property target used to return the byte quantity as a floating point number.

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.knapsack-module.html: CedarBackup3.knapsack
    Package CedarBackup3 :: Module knapsack

    Module knapsack

    source code

    Provides the implementation for various knapsack algorithms.

    Knapsack algorithms are "fit" algorithms, used to take a set of "things" and decide on the optimal way to fit them into some container. The focus of this code is to fit files onto a disc, although the interface (in terms of item, item size and capacity size, with no units) is generic enough that it can be applied to items other than files.

    All of the algorithms implemented below assume that "optimal" means "use up as much of the disc's capacity as possible", but each produces slightly different results. For instance, the best fit and first fit algorithms tend to include fewer files than the worst fit and alternate fit algorithms, even if they use the disc space more efficiently.

    Usually, for a given set of circumstances, it will be obvious to a human which algorithm is the right one to use, based on trade-offs between number of files included and ideal space utilization. It's a little more difficult to do this programmatically. For Cedar Backup's purposes (i.e. trying to fit a small number of collect-directory tarfiles onto a disc), worst-fit is probably the best choice if the goal is to include as many of the collect directories as possible.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Functions
     
    firstFit(items, capacity)
    Implements the first-fit knapsack algorithm.
    source code
     
    bestFit(items, capacity)
    Implements the best-fit knapsack algorithm.
    source code
     
    worstFit(items, capacity)
    Implements the worst-fit knapsack algorithm.
    source code
     
    alternateFit(items, capacity)
    Implements the alternate-fit knapsack algorithm.
    source code
    Variables
      __package__ = None
    hash(x)
    Function Details

    firstFit(items, capacity)

    source code 

    Implements the first-fit knapsack algorithm.

    The first-fit algorithm proceeds through an unsorted list of items until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. This algorithm generally performs more poorly than the other algorithms both in terms of capacity utilization and item utilization, but can be as much as an order of magnitude faster on large lists of items because it doesn't require any sorting.

    The "size" values in the items and capacity arguments must be comparable, but they are unitless from the perspective of this function. Zero-sized items and capacity are considered degenerate cases. If capacity is zero, no items fit, period, even if the items list contains zero-sized items.

    The items dictionary is keyed on item, and each value tuple repeats that key. This may seem strange at first glance, but it makes it easy to sort the entries by key if needed.

    The function assumes that the list of items may be used destructively, if needed. This avoids the overhead of having the function make a copy of the list, if this is not required. Callers should pass items.copy() if they do not want their version of the list modified.

    The function returns a list of chosen items and the unitless amount of capacity used by the items.

    Parameters:
    • items (dictionary, keyed on item, of (item, size) tuples, item as string and size as integer) - Items to operate on
    • capacity (integer) - Capacity of container to fit to
    Returns:
    Tuple (items, used) as described above
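The first-fit behavior described above can be sketched as follows. The naming is mine, and the guard at the top implements the documented rule that nothing fits when capacity is zero.

```python
def first_fit(items, capacity):
    """First-fit knapsack: walk the unsorted items, discarding any that would overflow (sketch)."""
    if capacity == 0:
        return [], 0  # documented degenerate case: nothing fits, even zero-sized items
    included = []
    used = 0
    for key in list(items.keys()):
        item, size = items[key]
        if used + size <= capacity:
            included.append(item)
            used += size
        if used == capacity:
            break  # capacity met exactly
    return included, used
```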

    bestFit(items, capacity)

    source code 

    Implements the best-fit knapsack algorithm.

    The best-fit algorithm proceeds through a sorted list of items (sorted from largest to smallest) until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the minimum number of items possible in its search for optimal capacity utilization. For large lists of mixed-size items, it's not unusual to see the algorithm achieve 100% capacity utilization by including fewer than 1% of the items. Probably because it often has to look at fewer of the items before completing, it tends to be a little faster than the worst-fit or alternate-fit algorithms.

    The "size" values in the items and capacity arguments must be comparable, but they are unitless from the perspective of this function. Zero-sized items and capacity are considered degenerate cases. If capacity is zero, no items fit, period, even if the items list contains zero-sized items.

    The items dictionary is keyed on item, and each value tuple repeats that key. This may seem strange at first glance, but it makes it easy to sort the entries by key if needed.

    The function assumes that the list of items may be used destructively, if needed. This avoids the overhead of having the function make a copy of the list, if this is not required. Callers should pass items.copy() if they do not want their version of the list modified.

    The function returns a list of chosen items and the unitless amount of capacity used by the items.

    Parameters:
    • items (dictionary, keyed on item, of (item, size) tuples, item as string and size as integer) - Items to operate on
    • capacity (integer) - Capacity of container to fit to
    Returns:
    Tuple (items, used) as described above

    worstFit(items, capacity)

    source code 

    Implements the worst-fit knapsack algorithm.

    The worst-fit algorithm proceeds through a sorted list of items (sorted from smallest to largest) until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the maximum number of items possible in its search for optimal capacity utilization. It tends to be somewhat slower than either the best-fit or alternate-fit algorithm, probably because on average it has to look at more items before completing.

    The "size" values in the items and capacity arguments must be comparable, but they are unitless from the perspective of this function. Zero-sized items and capacity are considered degenerate cases. If capacity is zero, no items fit, period, even if the items list contains zero-sized items.

    The items dictionary is keyed on item, and each value tuple repeats that key. This may seem strange at first glance, but it makes it easy to sort the entries by key if needed.

    The function assumes that the list of items may be used destructively, if needed. This avoids the overhead of having the function make a copy of the list, if this is not required. Callers should pass items.copy() if they do not want their version of the list modified.

    The function returns a list of chosen items and the unitless amount of capacity used by the items.

    Parameters:
    • items (dictionary, keyed on item, of (item, size) tuples, item as string and size as integer) - Items to operate on
    • capacity (integer) - Capacity of container to fit to
    Returns:
    Tuple (items, used) as described above

    alternateFit(items, capacity)

    source code 

    Implements the alternate-fit knapsack algorithm.

    This algorithm (which I'm calling "alternate-fit" as in "alternate from one to the other") tries to balance small and large items to achieve better end-of-disk performance. Instead of just working one direction through a list, it alternately works from the start and end of a sorted list (sorted from smallest to largest), throwing away any item which causes capacity to be exceeded. The algorithm tends to be slower than the best-fit and first-fit algorithms, and slightly faster than the worst-fit algorithm, probably because of the number of items it considers on average before completing. It often achieves slightly better capacity utilization than the worst-fit algorithm, while including slightly fewer items.

    The "size" values in the items and capacity arguments must be comparable, but they are unitless from the perspective of this function. Zero-sized items and capacity are considered degenerate cases. If capacity is zero, no items fit, period, even if the items list contains zero-sized items.

    The items dictionary is keyed on item, and each value tuple repeats that key. This may seem strange at first glance, but it makes it easy to sort the entries by key if needed.

    The function assumes that the list of items may be used destructively, if needed. This avoids the overhead of having the function make a copy of the list, if this is not required. Callers should pass items.copy() if they do not want their version of the list modified.

    The function returns a list of chosen items and the unitless amount of capacity used by the items.

    Parameters:
    • items (dictionary, keyed on item, of (item, size) tuples, item as string and size as integer) - Items to operate on
    • capacity (integer) - Capacity of container to fit to
    Returns:
    Tuple (items, used) as described above
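The alternate-from-each-end strategy can be sketched with two indexes into a size-sorted list; again, the naming is mine and this is not the module's actual code.

```python
def alternate_fit(items, capacity):
    """Alternate-fit knapsack: alternate between smallest and largest remaining items (sketch)."""
    if capacity == 0:
        return [], 0  # documented degenerate case: nothing fits
    ordered = sorted(items.values(), key=lambda pair: pair[1])  # smallest to largest
    included, used = [], 0
    low, high = 0, len(ordered) - 1
    from_front = True  # start with the smallest item
    while low <= high and used < capacity:
        item, size = ordered[low] if from_front else ordered[high]
        if from_front:
            low += 1
        else:
            high -= 1
        if used + size <= capacity:
            included.append(item)  # keep it; otherwise it is simply thrown away
            used += size
        from_front = not from_front
    return included, used
```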

    CedarBackup3-3.1.6/doc/interface/toc-CedarBackup3.action-module.html: action

    Module action


    Variables

    __package__

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.extend.amazons3-module.html: CedarBackup3.extend.amazons3
    Package CedarBackup3 :: Package extend :: Module amazons3

    Module amazons3

    source code

    Store-type extension that writes data to Amazon S3.

    This extension requires a new configuration section <amazons3> and is intended to be run immediately after the standard stage action, replacing the standard store action. Aside from its own configuration, it requires the options and staging configuration sections in the standard Cedar Backup configuration file. Since it is intended to replace the store action, it does not rely on any store configuration.

    The underlying functionality relies on the AWS CLI interface. Before you use this extension, you need to set up your Amazon S3 account and configure the AWS CLI connection per Amazon's documentation. The extension assumes that the backup is being executed as root, and switches over to the configured backup user to communicate with AWS. So, make sure you configure AWS CLI as the backup user and not root.

    You can optionally configure Cedar Backup to encrypt data before sending it to S3. To do that, provide a complete command line using the ${input} and ${output} variables to represent the original input file and the encrypted output file. This command will be executed as the backup user.

    For instance, you can use something like this with GPG:

      /usr/bin/gpg -c --no-use-agent --batch --yes --passphrase-file /home/backup/.passphrase -o ${output} ${input}
    

    The GPG mechanism depends on a strong passphrase for security. One way to generate a strong passphrase is using your system random number generator, i.e.:

      dd if=/dev/urandom count=20 bs=1 | xxd -ps
    

    (See StackExchange for more details about that advice.) If you decide to use encryption, make sure you save off the passphrase in a safe place, so you can get at your backup data later if you need to. And obviously, make sure to set permissions on the passphrase file so it can only be read by the backup user.

    This extension was written for and tested on Linux. It will throw an exception if run on Windows.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Classes
      AmazonS3Config
    Class representing Amazon S3 configuration.
      LocalConfig
    Class representing this extension's configuration document.
    Functions
     
    executeAction(configPath, options, config)
    Executes the amazons3 backup action.
    source code
     
    _findCorrectDailyDir(options, config, local)
    Finds the correct daily staging directory to be written to Amazon S3.
    source code
     
    _applySizeLimits(options, config, local, stagingDirs)
    Apply size limits, throwing an exception if any limits are exceeded.
    source code
     
    _writeToAmazonS3(config, local, stagingDirs)
    Writes the indicated staging directories to an Amazon S3 bucket.
    source code
     
    _writeStoreIndicator(config, stagingDirs)
    Writes a store indicator file into staging directories.
    source code
     
    _clearExistingBackup(config, s3BucketUrl)
    Clear any existing backup files for an S3 bucket URL.
    source code
     
    _uploadStagingDir(config, stagingDir, s3BucketUrl)
    Upload the contents of a staging directory out to the Amazon S3 cloud.
    source code
     
    _verifyUpload(config, stagingDir, s3BucketUrl)
    Verify that a staging directory was properly uploaded to the Amazon S3 cloud.
    source code
     
    _encryptStagingDir(config, local, stagingDir, encryptedDir)
    Encrypt a staging directory, creating a new directory in the process.
    source code
    Variables
      logger = logging.getLogger("CedarBackup3.log.extend.amazons3")
      SU_COMMAND = ['su']
      AWS_COMMAND = ['aws']
      STORE_INDICATOR = 'cback.amazons3'
      __package__ = 'CedarBackup3.extend'
    Function Details

    executeAction(configPath, options, config)

    source code 

    Executes the amazons3 backup action.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If there are I/O problems reading or writing files

    _findCorrectDailyDir(options, config, local)

    source code 

    Finds the correct daily staging directory to be written to Amazon S3.

    This is substantially similar to the same function in store.py. The main difference is that it doesn't rely on store configuration at all.

    Parameters:
    • options - Options object.
    • config - Config object.
    • local - Local config object.
    Returns:
    Correct staging dir, as a dict mapping directory to date suffix.
    Raises:
    • IOError - If the staging directory cannot be found.

    _applySizeLimits(options, config, local, stagingDirs)

    source code 

    Apply size limits, throwing an exception if any limits are exceeded.

    Size limits are optional. If a limit is set to None, it does not apply. The full size limit applies if the full option is set or if today is the start of the week. The incremental size limit applies otherwise. Limits are applied to the total size of all the relevant staging directories.

    Parameters:
    • options - Options object.
    • config - Config object.
    • local - Local config object.
    • stagingDirs - Dictionary mapping directory path to date suffix.
    Raises:
    • ValueError - Under many generic error conditions
    • ValueError - If a size limit has been exceeded
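The limit-selection logic described above reduces to a couple of comparisons. A hedged sketch follows: the argument names are mine, and the real function works with options, config, and staging-directory objects rather than plain numbers.

```python
def apply_size_limits(total_size, full_backup, start_of_week, full_limit=None, incr_limit=None):
    """Raise ValueError if the applicable size limit is set and exceeded (sketch)."""
    # The full limit applies for a full backup or at the start of the week;
    # otherwise the incremental limit applies.  A limit of None never applies.
    limit = full_limit if (full_backup or start_of_week) else incr_limit
    if limit is not None and total_size > limit:
        raise ValueError("Size limit exceeded: %s > %s" % (total_size, limit))
```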

    _writeToAmazonS3(config, local, stagingDirs)

    source code 

    Writes the indicated staging directories to an Amazon S3 bucket.

    Each of the staging directories listed in stagingDirs will be written to the configured Amazon S3 bucket from local configuration. The directories will be placed into the image at the root by date, so staging directory /opt/stage/2005/02/10 will be placed into the S3 bucket at /2005/02/10. If an encrypt command is provided, the files will be encrypted first.

    Parameters:
    • config - Config object.
    • local - Local config object.
    • stagingDirs - Dictionary mapping directory path to date suffix.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If there is a problem writing to Amazon S3

    _writeStoreIndicator(config, stagingDirs)

    source code 

    Writes a store indicator file into staging directories.

    Parameters:
    • config - Config object.
    • stagingDirs - Dictionary mapping directory path to date suffix.

    _clearExistingBackup(config, s3BucketUrl)

    source code 

    Clear any existing backup files for an S3 bucket URL.

    Parameters:
    • config - Config object.
    • s3BucketUrl - S3 bucket URL associated with the staging directory

    _uploadStagingDir(config, stagingDir, s3BucketUrl)

    source code 

    Upload the contents of a staging directory out to the Amazon S3 cloud.

    Parameters:
    • config - Config object.
    • stagingDir - Staging directory to upload
    • s3BucketUrl - S3 bucket URL associated with the staging directory

    _verifyUpload(config, stagingDir, s3BucketUrl)

    source code 

    Verify that a staging directory was properly uploaded to the Amazon S3 cloud.

    Parameters:
    • config - Config object.
    • stagingDir - Staging directory to verify
    • s3BucketUrl - S3 bucket URL associated with the staging directory

    _encryptStagingDir(config, local, stagingDir, encryptedDir)

    source code 

    Encrypt a staging directory, creating a new directory in the process.

    Parameters:
    • config - Config object.
    • stagingDir - Staging directory to use as source
    • encryptedDir - Target directory into which encrypted files should be written

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.util.UnorderedList-class.html
    Package CedarBackup3 :: Module util :: Class UnorderedList

    Class UnorderedList

    source code

    object --+    
             |    
          list --+
                 |
                UnorderedList
    
    Known Subclasses:

    Class representing an "unordered list".

    An "unordered list" is a list in which only the contents matter, not the order in which the contents appear in the list.

    For instance, we might be keeping track of a set of paths in a list, because it's convenient to have them in that form. However, for comparison purposes, we would only care that the lists contain exactly the same contents, regardless of order.

    I have come up with two reasonable ways of doing this, plus a couple more that would work but would be a pain to implement. My first method is to copy and sort each list, comparing the sorted versions. This will only work if two lists with exactly the same members are guaranteed to sort in exactly the same order. The second way would be to create two Sets and then compare the sets. However, this would lose information about any duplicates in either list. I've decided to go with option #1 for now. I'll modify this code if I run into problems in the future.

    We override the original __eq__, __ne__, __ge__, __gt__, __le__ and __lt__ list methods to change the definition of the various comparison operators. In all cases, the comparison is changed to return the result of the original operation but instead comparing sorted lists. This is going to be quite a bit slower than a normal list, so you probably only want to use it on small lists.
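A minimal sketch of option #1 (comparing sorted copies) looks like this; it mirrors the documented behavior rather than the actual source:

```python
class UnorderedList(list):
    """List whose comparison operators ignore element order by
    comparing sorted copies of both operands."""

    def __eq__(self, other):
        return sorted(self) == sorted(other)

    def __ne__(self, other):
        return sorted(self) != sorted(other)

    def __lt__(self, other):
        return sorted(self) < sorted(other)

    # __gt__, __ge__ and __le__ follow the same pattern.

print(UnorderedList([3, 1, 2]) == UnorderedList([1, 2, 3]))  # True
```

Because sorted copies preserve duplicates, [1, 1, 2] and [1, 2, 2] still compare unequal, unlike a set-based comparison.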

    Instance Methods
     
    __eq__(self, other)
    Definition of == operator for this class.
    source code
     
    __ne__(self, other)
    Definition of != operator for this class.
    source code
     
    __ge__(self, other)
    Definition of ≥ operator for this class.
    source code
     
    __gt__(self, other)
    Definition of > operator for this class.
    source code
     
    __le__(self, other)
    Definition of ≤ operator for this class.
    source code
     
    __lt__(self, other)
    Definition of < operator for this class.
    source code

    Inherited from list: __add__, __contains__, __delitem__, __delslice__, __getattribute__, __getitem__, __getslice__, __iadd__, __imul__, __init__, __iter__, __len__, __mul__, __new__, __repr__, __reversed__, __rmul__, __setitem__, __setslice__, __sizeof__, append, count, extend, index, insert, pop, remove, reverse, sort

    Inherited from object: __delattr__, __format__, __reduce__, __reduce_ex__, __setattr__, __str__, __subclasshook__

    Static Methods
     
    mixedsort(value)
    Sort a list, making sure we don't blow up if the list happens to include mixed values.
    source code
     
    mixedkey(value)
    Provide a key for use by mixedsort().
    source code
    Class Variables

    Inherited from list: __hash__

    Properties

    Inherited from object: __class__

    Method Details

    __eq__(self, other)
    (Equality operator)

    source code 

    Definition of == operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    True/false depending on whether self == other.
    Overrides: list.__eq__

    __ne__(self, other)

    source code 

    Definition of != operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    True/false depending on whether self != other.
    Overrides: list.__ne__

    __ge__(self, other)
    (Greater-than-or-equals operator)

    source code 

    Definition of ≥ operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    True/false depending on whether self >= other.
    Overrides: list.__ge__

    __gt__(self, other)
    (Greater-than operator)

    source code 

    Definition of > operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    True/false depending on whether self > other.
    Overrides: list.__gt__

    __le__(self, other)
    (Less-than-or-equals operator)

    source code 

    Definition of ≤ operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    True/false depending on whether self <= other.
    Overrides: list.__le__

    __lt__(self, other)
    (Less-than operator)

    source code 

    Definition of < operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    True/false depending on whether self < other.
    Overrides: list.__lt__

    mixedsort(value)
    Static Method

    source code 

    Sort a list, making sure we don't blow up if the list happens to include mixed values.

    See Also: http://stackoverflow.com/questions/26575183/how-can-i-get-2-x-like-sorting-behaviour-in-python-3-x
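The usual trick for this (an assumption about the approach, suggested by the linked question, not taken from the source) is a key function that groups values by type before comparing, so unlike types are never compared directly:

```python
def mixedkey(value):
    # Group by type name first; Python 3 raises TypeError when unlike
    # types are compared, where Python 2 imposed an arbitrary order.
    return (type(value).__name__, value)

def mixedsort(value):
    return sorted(value, key=mixedkey)

print(mixedsort([3, "b", 1, "a"]))  # [1, 3, 'a', 'b']
```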


    CedarBackup3-3.1.6/doc/interface/CedarBackup3.actions.constants-pysrc.html
    Package CedarBackup3 :: Package actions :: Module constants

    Source Code for Module CedarBackup3.actions.constants

     1  # -*- coding: iso-8859-1 -*- 
     2  # vim: set ft=python ts=3 sw=3 expandtab: 
     3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     4  # 
     5  #              C E D A R 
     6  #          S O L U T I O N S       "Software done right." 
     7  #           S O F T W A R E 
     8  # 
     9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    10  # 
    11  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
    12  # Language : Python 3 (>= 3.4) 
    13  # Project  : Cedar Backup, release 3 
    14  # Purpose  : Provides common constants used by standard actions. 
    15  # 
    16  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    17   
    18  ######################################################################## 
    19  # Module documentation 
    20  ######################################################################## 
    21   
    22  """ 
    23  Provides common constants used by standard actions. 
    24  @sort: DIR_TIME_FORMAT, DIGEST_EXTENSION, INDICATOR_PATTERN, 
    25         COLLECT_INDICATOR, STAGE_INDICATOR, STORE_INDICATOR 
    26  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
    27  """ 
    28   
    29  ######################################################################## 
    30  # Module-wide constants and variables 
    31  ######################################################################## 
    32   
    33  DIR_TIME_FORMAT      = "%Y/%m/%d" 
    34  DIGEST_EXTENSION     = "sha" 
    35   
    36  INDICATOR_PATTERN    = [ r"cback\..*", ] 
    37  COLLECT_INDICATOR    = "cback.collect" 
    38  STAGE_INDICATOR      = "cback.stage" 
    39  STORE_INDICATOR      = "cback.store" 
    40   
    

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.extend.capacity.PercentageQuantity-class.html
    Package CedarBackup3 :: Package extend :: Module capacity :: Class PercentageQuantity

    Class PercentageQuantity

    source code

    object --+
             |
            PercentageQuantity
    

    Class representing a percentage quantity.

    The percentage is maintained internally as a string so that issues of precision can be avoided. It really isn't possible to store a floating point number here while being able to losslessly translate back and forth between XML and object representations. (Perhaps the Python 2.4 Decimal class would have been an option, but I originally wanted to stay compatible with Python 2.3.)

    Even though the quantity is maintained as a string, the string must represent a valid, positive floating point number. Technically, any floating point string format supported by Python is allowable. However, it does not make sense to have a negative percentage in this context.
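A stripped-down sketch of this contract (a hypothetical implementation, shown without the property plumbing of the real class):

```python
class PercentageQuantity:
    """Keep the quantity as a string so precision survives a round trip
    through XML; convert to float only on demand."""

    def __init__(self, quantity=None):
        if quantity is not None:
            value = float(quantity)  # must parse as a float; raises ValueError otherwise
            if value < 0.0:
                raise ValueError("Percentage quantity must not be negative.")
        self._quantity = quantity

    @property
    def quantity(self):
        return self._quantity  # the original string, precision intact

    @property
    def percentage(self):
        return 0.0 if self._quantity is None else float(self._quantity)

p = PercentageQuantity("99.9")
print(p.quantity, p.percentage)  # 99.9 99.9
```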

    Instance Methods
     
    __init__(self, quantity=None)
    Constructor for the PercentageQuantity class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    __eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    _setQuantity(self, value)
    Property target used to set the quantity. The value must be a non-empty string if it is not None.
    source code
     
    _getQuantity(self)
    Property target used to get the quantity.
    source code
     
    _getPercentage(self)
    Property target used to get the quantity as a floating point number.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      quantity
    Percentage value, as a string
      percentage
    Percentage value, as a floating point number.

    Inherited from object: __class__

    Method Details

    __init__(self, quantity=None)
    (Constructor)

    source code 

    Constructor for the PercentageQuantity class.

    Parameters:
    • quantity - Percentage quantity, as a string (i.e. "99.9" or "12")
    Raises:
    • ValueError - If the quantity value is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setQuantity(self, value)

    source code 

    Property target used to set the quantity. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.
    • ValueError - If the value is not a valid floating point number
    • ValueError - If the value is less than zero

    _getPercentage(self)

    source code 

    Property target used to get the quantity as a floating point number. If there is no quantity set, then a value of 0.0 is returned.


    Property Details

    quantity

    Percentage value, as a string

    Get Method:
    _getQuantity(self) - Property target used to get the quantity.
    Set Method:
    _setQuantity(self, value) - Property target used to set the quantity. The value must be a non-empty string if it is not None.

    percentage

    Percentage value, as a floating point number.

    Get Method:
    _getPercentage(self) - Property target used to get the quantity as a floating point number.

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.extend.mbox-module.html
    Package CedarBackup3 :: Package extend :: Module mbox

    Module mbox

    source code

    Provides an extension to back up mbox email files.

    Backing up email

    Email folders (often stored as mbox flatfiles) are not well-suited to being backed up with an incremental backup like the one offered by Cedar Backup. This is because mbox files often change on a daily basis, forcing the incremental backup process to back them up every day in order to avoid losing data. This can result in quite a bit of wasted space when backing up large folders. (Note that the alternative maildir format does not share this problem, since it typically uses one file per message.)

    One solution to this problem is to design a smarter incremental backup process, which backs up baseline content on the first day of the week, and then backs up only new messages added to that folder on every other day of the week. This way, the backup for any single day is only as large as the messages placed into the folder on that day. The backup isn't as "perfect" as the incremental backup process, because it doesn't preserve information about messages deleted from the backed-up folder. However, it should be much more space-efficient, and in a recovery situation, it seems better to restore too much data rather than too little.

    What is this extension?

    This is a Cedar Backup extension used to back up mbox email files via the Cedar Backup command line. Individual mbox files or directories containing mbox files can be backed up using the same collect modes allowed for filesystems in the standard Cedar Backup collect action: weekly, daily, incremental. It implements the "smart" incremental backup process discussed above, using functionality provided by the grepmail utility.

    This extension requires a new configuration section <mbox> and is intended to be run either immediately before or immediately after the standard collect action. Aside from its own configuration, it requires the options and collect configuration sections in the standard Cedar Backup configuration file.

    The mbox action is conceptually similar to the standard collect action, except that mbox directories are not collected recursively. This implies some configuration changes (i.e. there's no need for global exclusions or an ignore file). If you back up a directory, all of the mbox files in that directory are backed up into a single tar file using the indicated compression method.
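Conceptually, the daily (non-full) pass boils down to handing grepmail a "since" date. The sketch below shows the idea only; the exact command-line flags and both function names are assumptions, not taken from the extension's source:

```python
import subprocess

def grepmail_command(mbox_path, last_revision):
    # grepmail -d selects messages by date; everything newer than the
    # last revision becomes the day's backup content (assumed flags).
    date_arg = "since %s" % last_revision.strftime("%d %b %Y")
    return ["grepmail", "-d", date_arg, mbox_path]

def backup_new_messages(mbox_path, last_revision, output_path):
    with open(output_path, "wb") as out:  # binary mode, for use with executeCommand()
        subprocess.run(grepmail_command(mbox_path, last_revision),
                       stdout=out, check=True)
```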


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Classes
      MboxFile
    Class representing mbox file configuration.
      MboxDir
    Class representing mbox directory configuration.
      MboxConfig
    Class representing mbox configuration.
      LocalConfig
    Class representing this extension's configuration document.
    Functions
     
    executeAction(configPath, options, config)
    Executes the mbox backup action.
    source code
     
    _getCollectMode(local, item)
    Gets the collect mode that should be used for an mbox file or directory.
    source code
     
    _getCompressMode(local, item)
    Gets the compress mode that should be used for an mbox file or directory.
    source code
     
    _getRevisionPath(config, item)
    Gets the path to the revision file associated with a repository.
    source code
     
    _loadLastRevision(config, item, fullBackup, collectMode)
    Loads the last revision date for this item from disk and returns it.
    source code
     
    _writeNewRevision(config, item, newRevision)
    Writes new revision information to disk.
    source code
     
    _getExclusions(mboxDir)
    Gets exclusions (file and patterns) associated with an mbox directory.
    source code
     
    _getBackupPath(config, mboxPath, compressMode, newRevision, targetDir=None)
    Gets the backup file path (including correct extension) associated with an mbox path.
    source code
     
    _getTarfilePath(config, mboxPath, compressMode, newRevision)
    Gets the tarfile backup file path (including correct extension) associated with an mbox path.
    source code
     
    _getOutputFile(backupPath, compressMode)
    Opens the output file used for saving backup information.
    source code
     
    _backupMboxFile(config, absolutePath, fullBackup, collectMode, compressMode, lastRevision, newRevision, targetDir=None)
    Backs up an individual mbox file.
    source code
     
    _backupMboxDir(config, absolutePath, fullBackup, collectMode, compressMode, lastRevision, newRevision, excludePaths, excludePatterns)
    Backs up a directory containing mbox files.
    source code
    Variables
      logger = logging.getLogger("CedarBackup3.log.extend.mbox")
      GREPMAIL_COMMAND = ['grepmail']
      REVISION_PATH_EXTENSION = 'mboxlast'
      __package__ = 'CedarBackup3.extend'
    Function Details

    executeAction(configPath, options, config)

    source code 

    Executes the mbox backup action.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If a backup could not be written for some reason.

    _getCollectMode(local, item)

    source code 

    Gets the collect mode that should be used for an mbox file or directory. Use file- or directory-specific value if possible, otherwise take from mbox section.

    Parameters:
    • local - LocalConfig object.
    • item - Mbox file or directory
    Returns:
    Collect mode to use.

    _getCompressMode(local, item)

    source code 

    Gets the compress mode that should be used for an mbox file or directory. Use file- or directory-specific value if possible, otherwise take from mbox section.

    Parameters:
    • local - LocalConfig object.
    • item - Mbox file or directory
    Returns:
    Compress mode to use.

    _getRevisionPath(config, item)

    source code 

    Gets the path to the revision file associated with a repository.

    Parameters:
    • config - Cedar Backup configuration.
    • item - Mbox file or directory
    Returns:
    Absolute path to the revision file associated with the repository.

    _loadLastRevision(config, item, fullBackup, collectMode)

    source code 

    Loads the last revision date for this item from disk and returns it.

    If this is a full backup, or if the revision file cannot be loaded for some reason, then None is returned. This indicates that there is no previous revision, so the entire mail file or directory should be backed up.

    Parameters:
    • config - Cedar Backup configuration.
    • item - Mbox file or directory
    • fullBackup - Indicates whether this is a full backup
    • collectMode - Indicates the collect mode for this item
    Returns:
    Revision date as a datetime.datetime object or None.

    Note: We write the actual revision object to disk via pickle, so we don't deal with the datetime precision or format at all. Whatever's in the object is what we write.
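The documented fallback behavior can be sketched as follows; the function name and direct-path signature are hypothetical (the real code takes config and item and derives the revision path):

```python
import os
import pickle

def load_last_revision(revision_path, full_backup):
    # Full backup, or no readable revision file: return None so the
    # caller backs up the entire mail file or directory.
    if full_backup or not os.path.exists(revision_path):
        return None
    try:
        with open(revision_path, "rb") as f:
            return pickle.load(f)  # whatever datetime was pickled last time
    except Exception:
        return None
```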

    _writeNewRevision(config, item, newRevision)

    source code 

    Writes new revision information to disk.

    If we can't write the revision file successfully for any reason, we'll log the condition but won't throw an exception.

    Parameters:
    • config - Cedar Backup configuration.
    • item - Mbox file or directory
    • newRevision - Revision date as a datetime.datetime object.

    Note: We write the actual revision object to disk via pickle, so we don't deal with the datetime precision or format at all. Whatever's in the object is what we write.

    _getExclusions(mboxDir)

    source code 

    Gets exclusions (file and patterns) associated with an mbox directory.

    The returned files value is a list of absolute paths to be excluded from the backup for a given directory. It is derived from the mbox directory's relative exclude paths.

    The returned patterns value is a list of patterns to be excluded from the backup for a given directory. It is derived from the mbox directory's list of patterns.

    Parameters:
    • mboxDir - Mbox directory object.
    Returns:
    Tuple (files, patterns) indicating what to exclude.

    _getBackupPath(config, mboxPath, compressMode, newRevision, targetDir=None)

    source code 

    Gets the backup file path (including correct extension) associated with an mbox path.

    We assume that if the target directory is passed in, we're backing up a directory. Under these circumstances, we'll just use the basename of the individual path as the output file.

    Parameters:
    • config - Cedar Backup configuration.
    • mboxPath - Path to the indicated mbox file or directory
    • compressMode - Compress mode to use for this mbox path
    • newRevision - Revision this backup path represents
    • targetDir - Target directory in which the path should exist
    Returns:
    Absolute path to the backup file associated with the repository.

    Note: The backup path only contains the current date in YYYYMMDD format, but that's OK because the index information (stored elsewhere) is the actual date object.
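For illustration, a name built along these lines might look like the following; the exact filename format and the helper name are assumptions, not taken from the source:

```python
import datetime
import os

def backup_file_name(mbox_path, compress_mode, new_revision):
    # Basename plus the revision date in YYYYMMDD form, plus an
    # extension matching the compress mode (hypothetical format).
    extension = {"gzip": ".gz", "bzip2": ".bz2"}.get(compress_mode, "")
    return "%s.%s%s" % (os.path.basename(mbox_path),
                        new_revision.strftime("%Y%m%d"), extension)

print(backup_file_name("/var/mail/user", "gzip", datetime.date(2015, 8, 3)))
# user.20150803.gz
```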

    _getTarfilePath(config, mboxPath, compressMode, newRevision)

    source code 

    Gets the tarfile backup file path (including correct extension) associated with an mbox path.

    Along with the path, the tar archive mode is returned in a form that can be used with BackupFileList.generateTarfile.

    Parameters:
    • config - Cedar Backup configuration.
    • mboxPath - Path to the indicated mbox file or directory
    • compressMode - Compress mode to use for this mbox path
    • newRevision - Revision this backup path represents
    Returns:
    Tuple of (absolute path to tarfile, tar archive mode)

    Note: The tarfile path only contains the current date in YYYYMMDD format, but that's OK because the index information (stored elsewhere) is the actual date object.

    _getOutputFile(backupPath, compressMode)

    source code 

    Opens the output file used for saving backup information.

    If the compress mode is "gzip", we'll open a GzipFile, and if the compress mode is "bzip2", we'll open a BZ2File. Otherwise, we'll just return an object from the normal open() method.

    Parameters:
    • backupPath - Path to file to open.
    • compressMode - Compress mode of file ("none", "gzip", "bzip2").
    Returns:
    Output file object, opened in binary mode for use with executeCommand()
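The dispatch described above maps directly onto the standard library; a sketch under that assumption (the helper name is illustrative):

```python
import bz2
import gzip

def open_output_file(backup_path, compress_mode):
    # Dispatch on compress mode; default to a plain binary file so the
    # result is always usable with executeCommand().
    if compress_mode == "gzip":
        return gzip.GzipFile(backup_path, "wb")
    if compress_mode == "bzip2":
        return bz2.BZ2File(backup_path, "wb")
    return open(backup_path, "wb")
```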

    _backupMboxFile(config, absolutePath, fullBackup, collectMode, compressMode, lastRevision, newRevision, targetDir=None)

    source code 

    Backs up an individual mbox file.

    Parameters:
    • config - Cedar Backup configuration.
    • absolutePath - Path to mbox file to back up.
    • fullBackup - Indicates whether this should be a full backup.
    • collectMode - Indicates the collect mode for this item
    • compressMode - Compress mode of file ("none", "gzip", "bzip2")
    • lastRevision - Date of last backup as datetime.datetime
    • newRevision - Date of new (current) backup as datetime.datetime
    • targetDir - Target directory to write the backed-up file into
    Raises:
    • ValueError - If some value is missing or invalid.
    • IOError - If there is a problem backing up the mbox file.

    _backupMboxDir(config, absolutePath, fullBackup, collectMode, compressMode, lastRevision, newRevision, excludePaths, excludePatterns)

    source code 

    Backs up a directory containing mbox files.

    Parameters:
    • config - Cedar Backup configuration.
    • absolutePath - Path to mbox directory to back up.
    • fullBackup - Indicates whether this should be a full backup.
    • collectMode - Indicates the collect mode for this item
    • compressMode - Compress mode of file ("none", "gzip", "bzip2")
    • lastRevision - Date of last backup as datetime.datetime
    • newRevision - Date of new (current) backup as datetime.datetime
    • excludePaths - List of absolute paths to exclude.
    • excludePatterns - List of patterns to exclude.
    Raises:
    • ValueError - If some value is missing or invalid.
    • IOError - If there is a problem backing up the mbox file.

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.config.PurgeDir-class.html
    Package CedarBackup3 :: Module config :: Class PurgeDir

    Class PurgeDir

    source code

    object --+
             |
            PurgeDir
    

    Class representing a Cedar Backup purge directory.

    The following restrictions exist on data in this class:

    • The absolute path must be an absolute path
    • The retain days value must be an integer >= 0.
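PurgeDir only holds configuration; to make the two fields concrete, here is a hypothetical purge routine that applies them (the real purge logic lives in the purge action, not in this class):

```python
import os
import time

def purge(absolute_path, retain_days):
    if not os.path.isabs(absolute_path):
        raise ValueError("Path must be absolute.")
    if retain_days < 0:
        raise ValueError("Retain days must be an integer >= 0.")
    cutoff = time.time() - retain_days * 86400  # seconds per day
    for root, _, files in os.walk(absolute_path):
        for name in files:
            path = os.path.join(root, name)
            if os.path.getmtime(path) < cutoff:
                os.remove(path)  # older than the retention window
```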
    Instance Methods
     
    __init__(self, absolutePath=None, retainDays=None)
    Constructor for the PurgeDir class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    __eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    _setAbsolutePath(self, value)
    Property target used to set the absolute path.
    source code
     
    _getAbsolutePath(self)
    Property target used to get the absolute path.
    source code
     
    _setRetainDays(self, value)
    Property target used to set the retain days value.
    source code
     
    _getRetainDays(self)
    Property target used to get the retain days value.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      absolutePath
    Absolute path of directory to purge.
      retainDays
    Number of days content within directory should be retained.

    Inherited from object: __class__

    Method Details

    __init__(self, absolutePath=None, retainDays=None)
    (Constructor)

    source code 

    Constructor for the PurgeDir class.

    Parameters:
    • absolutePath - Absolute path of the directory to be purged.
    • retainDays - Number of days content within directory should be retained.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setAbsolutePath(self, value)

    source code 

    Property target used to set the absolute path. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setRetainDays(self, value)

    source code 

    Property target used to set the retain days value. The value must be an integer >= 0.

    Raises:
    • ValueError - If the value is not valid.

    Property Details

    absolutePath

    Absolute path of directory to purge.

    Get Method:
    _getAbsolutePath(self) - Property target used to get the absolute path.
    Set Method:
    _setAbsolutePath(self, value) - Property target used to set the absolute path.

    retainDays

    Number of days content within directory should be retained.

    Get Method:
    _getRetainDays(self) - Property target used to get the retain days value.
    Set Method:
    _setRetainDays(self, value) - Property target used to set the retain days value.

    CedarBackup3-3.1.6/doc/interface/toc-CedarBackup3.writers.util-module.html

    Module util


    Classes

    IsoImage

    Functions

    readMediaLabel
    validateDevice
    validateDriveSpeed
    validateScsiId

    Variables

    MKISOFS_COMMAND
    VOLNAME_COMMAND
    __package__
    logger

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.filesystem-pysrc.html
    Package CedarBackup3 :: Module filesystem
    [hide private]
    [frames] | no frames]

    Source Code for Module CedarBackup3.filesystem

       1  # -*- coding: iso-8859-1 -*- 
       2  # vim: set ft=python ts=3 sw=3 expandtab: 
       3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
       4  # 
       5  #              C E D A R 
       6  #          S O L U T I O N S       "Software done right." 
       7  #           S O F T W A R E 
       8  # 
       9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      10  # 
      11  # Copyright (c) 2004-2008,2010,2015 Kenneth J. Pronovici. 
      12  # All rights reserved. 
      13  # 
      14  # This program is free software; you can redistribute it and/or 
      15  # modify it under the terms of the GNU General Public License, 
      16  # Version 2, as published by the Free Software Foundation. 
      17  # 
      18  # This program is distributed in the hope that it will be useful, 
      19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
      20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
      21  # 
      22  # Copies of the GNU General Public License are available from 
      23  # the Free Software Foundation website, http://www.gnu.org/. 
      24  # 
      25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      26  # 
      27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
      28  # Language : Python 3 (>= 3.4) 
      29  # Project  : Cedar Backup, release 3 
      30  # Purpose  : Provides filesystem-related objects. 
      31  # 
      32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      33   
      34  ######################################################################## 
      35  # Module documentation 
      36  ######################################################################## 
      37   
      38  """ 
      39  Provides filesystem-related objects. 
      40  @sort: FilesystemList, BackupFileList, PurgeItemList 
      41  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
      42  """ 
      43   
      44   
      45  ######################################################################## 
      46  # Imported modules 
      47  ######################################################################## 
      48   
      49  # System modules 
      50  import os 
      51  import re 
      52  import math 
      53  import logging 
      54  import tarfile 
      55  import hashlib 
      56   
      57  # Cedar Backup modules 
      58  from CedarBackup3.knapsack import firstFit, bestFit, worstFit, alternateFit 
      59  from CedarBackup3.util import AbsolutePathList, UnorderedList, RegexList 
      60  from CedarBackup3.util import removeKeys, displayBytes, calculateFileAge, encodePath, dereferenceLink 
      61   
      62   
      63  ######################################################################## 
      64  # Module-wide variables 
      65  ######################################################################## 
      66   
      67  logger = logging.getLogger("CedarBackup3.log.filesystem") 
    
    68 69 70 ######################################################################## 71 # FilesystemList class definition 72 ######################################################################## 73 74 -class FilesystemList(list):
    75 76 ###################### 77 # Class documentation 78 ###################### 79 80 """ 81 Represents a list of filesystem items. 82 83 This is a generic class that represents a list of filesystem items. Callers 84 can add individual files or directories to the list, or can recursively add 85 the contents of a directory. The class also allows for up-front exclusions 86 in several forms (all files, all directories, all items matching a pattern, 87 all items whose basename matches a pattern, or all directories containing a 88 specific "ignore file"). Symbolic links are typically backed up 89 non-recursively, i.e. the link to a directory is backed up, but not the 90 contents of that link (we don't want to deal with recursive loops, etc.). 91 92 The custom methods such as L{addFile} will only add items if they exist on 93 the filesystem and do not match any exclusions that are already in place. 94 However, since a FilesystemList is a subclass of Python's standard list 95 class, callers can also add items to the list in the usual way, using 96 methods like C{append()} or C{insert()}. No validations apply to items 97 added to the list in this way; however, many list-manipulation methods deal 98 "gracefully" with items that don't exist in the filesystem, often by 99 ignoring them. 100 101 Once a list has been created, callers can remove individual items from the 102 list using standard methods like C{pop()} or C{remove()} or they can use 103 custom methods to remove specific types of entries or entries which match a 104 particular pattern. 105 106 @note: Regular expression patterns that apply to paths are assumed to be 107 bounded at front and back by the beginning and end of the string, i.e. they 108 are treated as if they begin with C{^} and end with C{$}. This is true 109 whether we are matching a complete path or a basename. 
110 111 @sort: __init__, addFile, addDir, addDirContents, removeFiles, removeDirs, 112 removeLinks, removeMatch, removeInvalid, normalize, 113 excludeFiles, excludeDirs, excludeLinks, excludePaths, 114 excludePatterns, excludeBasenamePatterns, ignoreFile 115 """ 116 117 118 ############## 119 # Constructor 120 ############## 121
    122 - def __init__(self):
    123 """Initializes a list with no configured exclusions.""" 124 list.__init__(self) 125 self._excludeFiles = False 126 self._excludeDirs = False 127 self._excludeLinks = False 128 self._excludePaths = None 129 self._excludePatterns = None 130 self._excludeBasenamePatterns = None 131 self._ignoreFile = None 132 self.excludeFiles = False 133 self.excludeLinks = False 134 self.excludeDirs = False 135 self.excludePaths = [] 136 self.excludePatterns = RegexList() 137 self.excludeBasenamePatterns = RegexList() 138 self.ignoreFile = None
    139 140 141 ############# 142 # Properties 143 ############# 144
    145 - def _setExcludeFiles(self, value):
    146 """ 147 Property target used to set the exclude files flag. 148 No validations, but we normalize the value to C{True} or C{False}. 149 """ 150 if value: 151 self._excludeFiles = True 152 else: 153 self._excludeFiles = False
    154
    155 - def _getExcludeFiles(self):
    156 """ 157 Property target used to get the exclude files flag. 158 """ 159 return self._excludeFiles
    160
    161 - def _setExcludeDirs(self, value):
    162 """ 163 Property target used to set the exclude directories flag. 164 No validations, but we normalize the value to C{True} or C{False}. 165 """ 166 if value: 167 self._excludeDirs = True 168 else: 169 self._excludeDirs = False
    170
    171 - def _getExcludeDirs(self):
    172 """ 173 Property target used to get the exclude directories flag. 174 """ 175 return self._excludeDirs
    176
    177 - def _setExcludeLinks(self, value):
    178 """ 179 Property target used to set the exclude soft links flag. 180 No validations, but we normalize the value to C{True} or C{False}. 181 """ 182 if value: 183 self._excludeLinks = True 184 else: 185 self._excludeLinks = False
    186
    187 - def _getExcludeLinks(self):
    188 """ 189 Property target used to get the exclude soft links flag. 190 """ 191 return self._excludeLinks
    192
    193 - def _setExcludePaths(self, value):
    194 """ 195 Property target used to set the exclude paths list. 196 A C{None} value is converted to an empty list. 197 Elements do not have to exist on disk at the time of assignment. 198 @raise ValueError: If any list element is not an absolute path. 199 """ 200 self._excludePaths = AbsolutePathList() 201 if value is not None: 202 self._excludePaths.extend(value)
    203
    204 - def _getExcludePaths(self):
    205 """ 206 Property target used to get the absolute exclude paths list. 207 """ 208 return self._excludePaths
    209
    210 - def _setExcludePatterns(self, value):
    211 """ 212 Property target used to set the exclude patterns list. 213 A C{None} value is converted to an empty list. 214 """ 215 self._excludePatterns = RegexList() 216 if value is not None: 217 self._excludePatterns.extend(value)
    218
    219 - def _getExcludePatterns(self):
    220 """ 221 Property target used to get the exclude patterns list. 222 """ 223 return self._excludePatterns
    224
    225 - def _setExcludeBasenamePatterns(self, value):
    226 """ 227 Property target used to set the exclude basename patterns list. 228 A C{None} value is converted to an empty list. 229 """ 230 self._excludeBasenamePatterns = RegexList() 231 if value is not None: 232 self._excludeBasenamePatterns.extend(value)
    233
    234 - def _getExcludeBasenamePatterns(self):
    235 """ 236 Property target used to get the exclude basename patterns list. 237 """ 238 return self._excludeBasenamePatterns
    239
    240 - def _setIgnoreFile(self, value):
    241 """ 242 Property target used to set the ignore file. 243 The value must be a non-empty string if it is not C{None}. 244 @raise ValueError: If the value is an empty string. 245 """ 246 if value is not None: 247 if len(value) < 1: 248 raise ValueError("The ignore file must be a non-empty string.") 249 self._ignoreFile = value
    250
    251 - def _getIgnoreFile(self):
    252 """ 253 Property target used to get the ignore file. 254 """ 255 return self._ignoreFile
    256 257 excludeFiles = property(_getExcludeFiles, _setExcludeFiles, None, "Boolean indicating whether files should be excluded.") 258 excludeDirs = property(_getExcludeDirs, _setExcludeDirs, None, "Boolean indicating whether directories should be excluded.") 259 excludeLinks = property(_getExcludeLinks, _setExcludeLinks, None, "Boolean indicating whether soft links should be excluded.") 260 excludePaths = property(_getExcludePaths, _setExcludePaths, None, "List of absolute paths to be excluded.") 261 excludePatterns = property(_getExcludePatterns, _setExcludePatterns, None, 262 "List of regular expression patterns (matching complete path) to be excluded.") 263 excludeBasenamePatterns = property(_getExcludeBasenamePatterns, _setExcludeBasenamePatterns, 264 None, "List of regular expression patterns (matching basename) to be excluded.") 265 ignoreFile = property(_getIgnoreFile, _setIgnoreFile, None, "Name of file which will cause directory contents to be ignored.") 266 267 268 ############## 269 # Add methods 270 ############## 271
    272 - def addFile(self, path):
    273 """ 274 Adds a file to the list. 275 276 The path must exist and must be a file or a link to an existing file. It 277 will be added to the list subject to any exclusions that are in place. 278 279 @param path: File path to be added to the list 280 @type path: String representing a path on disk 281 282 @return: Number of items added to the list. 283 284 @raise ValueError: If path is not a file or does not exist. 285 @raise ValueError: If the path could not be encoded properly. 286 """ 287 path = encodePath(path) 288 if not os.path.exists(path) or not os.path.isfile(path): 289 logger.debug("Path [%s] is not a file or does not exist on disk.", path) 290 raise ValueError("Path is not a file or does not exist on disk.") 291 if self.excludeLinks and os.path.islink(path): 292 logger.debug("Path [%s] is excluded based on excludeLinks.", path) 293 return 0 294 if self.excludeFiles: 295 logger.debug("Path [%s] is excluded based on excludeFiles.", path) 296 return 0 297 if path in self.excludePaths: 298 logger.debug("Path [%s] is excluded based on excludePaths.", path) 299 return 0 300 for pattern in self.excludePatterns: 301 pattern = encodePath(pattern) # use same encoding as filenames 302 if re.compile(r"^%s$" % pattern).match(path): # safe to assume all are valid due to RegexList 303 logger.debug("Path [%s] is excluded based on pattern [%s].", path, pattern) 304 return 0 305 for pattern in self.excludeBasenamePatterns: # safe to assume all are valid due to RegexList 306 pattern = encodePath(pattern) # use same encoding as filenames 307 if re.compile(r"^%s$" % pattern).match(os.path.basename(path)): 308 logger.debug("Path [%s] is excluded based on basename pattern [%s].", path, pattern) 309 return 0 310 self.append(path) 311 logger.debug("Added file to list: [%s]", path) 312 return 1
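The exclusion checks above compile each pattern as C{r"^%s$" % pattern}, so patterns are implicitly anchored at both ends, as the class docstring notes. A standalone sketch of that matching rule (the helper name is hypothetical, not part of the module):

```python
import re

def matches_bounded(pattern, path):
    # Exclusion patterns are anchored: the pattern must cover the whole
    # string, exactly as if it were written ^pattern$.
    return re.compile(r"^%s$" % pattern).match(path) is not None

# ".*\.log" covers the whole path, so it matches; a bare substring does not.
assert matches_bounded(r".*\.log", "/var/log/old.log")
assert not matches_bounded(r"old", "/var/log/old.log")
```

This is why an exclusion like C{"tmp"} excludes nothing under C{excludePatterns}: to exclude every path containing C{tmp}, the pattern must be written C{".*tmp.*"}.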
    313
    314 - def addDir(self, path):
    315 """ 316 Adds a directory to the list. 317 318 The path must exist and must be a directory or a link to an existing 319 directory. It will be added to the list subject to any exclusions that 320 are in place. The L{ignoreFile} does not apply to this method, only to 321 L{addDirContents}. 322 323 @param path: Directory path to be added to the list 324 @type path: String representing a path on disk 325 326 @return: Number of items added to the list. 327 328 @raise ValueError: If path is not a directory or does not exist. 329 @raise ValueError: If the path could not be encoded properly. 330 """ 331 path = encodePath(path) 332 path = normalizeDir(path) 333 if not os.path.exists(path) or not os.path.isdir(path): 334 logger.debug("Path [%s] is not a directory or does not exist on disk.", path) 335 raise ValueError("Path is not a directory or does not exist on disk.") 336 if self.excludeLinks and os.path.islink(path): 337 logger.debug("Path [%s] is excluded based on excludeLinks.", path) 338 return 0 339 if self.excludeDirs: 340 logger.debug("Path [%s] is excluded based on excludeDirs.", path) 341 return 0 342 if path in self.excludePaths: 343 logger.debug("Path [%s] is excluded based on excludePaths.", path) 344 return 0 345 for pattern in self.excludePatterns: # safe to assume all are valid due to RegexList 346 pattern = encodePath(pattern) # use same encoding as filenames 347 if re.compile(r"^%s$" % pattern).match(path): 348 logger.debug("Path [%s] is excluded based on pattern [%s].", path, pattern) 349 return 0 350 for pattern in self.excludeBasenamePatterns: # safe to assume all are valid due to RegexList 351 pattern = encodePath(pattern) # use same encoding as filenames 352 if re.compile(r"^%s$" % pattern).match(os.path.basename(path)): 353 logger.debug("Path [%s] is excluded based on basename pattern [%s].", path, pattern) 354 return 0 355 self.append(path) 356 logger.debug("Added directory to list: [%s]", path) 357 return 1
    358
    359 - def addDirContents(self, path, recursive=True, addSelf=True, linkDepth=0, dereference=False):
    360 """ 361 Adds the contents of a directory to the list. 362 363 The path must exist and must be a directory or a link to a directory. 364 The contents of the directory (as well as the directory path itself) will 365 be recursively added to the list, subject to any exclusions that are in 366 place. If you only want the directory and its immediate contents to be 367 added, then pass in C{recursive=False}. 368 369 @note: If a directory's absolute path matches an exclude pattern or path, 370 or if the directory contains the configured ignore file, then the 371 directory and all of its contents will be recursively excluded from the 372 list. 373 374 @note: If the passed-in directory happens to be a soft link, it will be 375 recursed. However, the linkDepth parameter controls whether any soft 376 links I{within} the directory will be recursed. The link depth is the 377 maximum depth of the tree at which soft links should be followed. So, a 378 depth of 0 does not follow any soft links, a depth of 1 follows only 379 links within the passed-in directory, a depth of 2 follows the links at 380 the next level down, etc. 381 382 @note: Any invalid soft links (i.e. soft links that point to 383 non-existent items) will be silently ignored. 384 385 @note: The L{excludeDirs} flag only controls whether any given directory 386 path itself is added to the list once it has been discovered. It does 387 I{not} modify any behavior related to directory recursion. 388 389 @note: If you call this method I{on a link to a directory} that link will 390 never be dereferenced (it may, however, be followed). 391 392 @param path: Directory path whose contents should be added to the list 393 @type path: String representing a path on disk 394 395 @param recursive: Indicates whether directory contents should be added recursively. 396 @type recursive: Boolean value 397 398 @param addSelf: Indicates whether the directory itself should be added to the list.
399 @type addSelf: Boolean value 400 401 @param linkDepth: Maximum depth of the tree at which soft links should be followed 402 @type linkDepth: Integer value, where zero means not to follow any soft links 403 404 @param dereference: Indicates whether soft links, if followed, should be dereferenced 405 @type dereference: Boolean value 406 407 @return: Number of items recursively added to the list 408 409 @raise ValueError: If path is not a directory or does not exist. 410 @raise ValueError: If the path could not be encoded properly. 411 """ 412 path = encodePath(path) 413 path = normalizeDir(path) 414 return self._addDirContentsInternal(path, addSelf, recursive, linkDepth, dereference)
    415
    416 - def _addDirContentsInternal(self, path, includePath=True, recursive=True, linkDepth=0, dereference=False):
    417 """ 418 Internal implementation of C{addDirContents}. 419 420 This internal implementation exists due to some refactoring. Basically, 421 some subclasses have a need to add the contents of a directory, but not 422 the directory itself. This is different from the standard C{FilesystemList} 423 behavior and actually ends up making a special case out of the first 424 call in the recursive chain. Since I don't want to expose the modified 425 interface, C{addDirContents} ends up being wholly implemented in terms 426 of this method. 427 428 The linkDepth parameter controls whether soft links are followed when we 429 are adding the contents recursively. Any recursive calls reduce the 430 value by one. If the value is zero or less, then soft links will just be 431 added as directories, but will not be followed. This means that links 432 are followed to a I{constant depth} starting from the top-most directory. 433 434 There is one difference between soft links and directories: soft links 435 that are added recursively are not placed into the list explicitly. This 436 is because if we do add the links recursively, the resulting tar file 437 gets a little confused (it has a link and a directory with the same 438 name). 439 440 @note: If you call this method I{on a link to a directory} that link will 441 never be dereferenced (it may, however, be followed). 442 443 @param path: Directory path whose contents should be added to the list. 444 @param includePath: Indicates whether to include the path as well as contents. 445 @param recursive: Indicates whether directory contents should be added recursively. 446 @param linkDepth: Depth of soft links that should be followed 447 @param dereference: Indicates whether soft links, if followed, should be dereferenced 448 449 @return: Number of items recursively added to the list 450 451 @raise ValueError: If path is not a directory or does not exist.
452 """ 453 added = 0 454 if not os.path.exists(path) or not os.path.isdir(path): 455 logger.debug("Path [%s] is not a directory or does not exist on disk.", path) 456 raise ValueError("Path is not a directory or does not exist on disk.") 457 if path in self.excludePaths: 458 logger.debug("Path [%s] is excluded based on excludePaths.", path) 459 return added 460 for pattern in self.excludePatterns: # safe to assume all are valid due to RegexList 461 pattern = encodePath(pattern) # use same encoding as filenames 462 if re.compile(r"^%s$" % pattern).match(path): 463 logger.debug("Path [%s] is excluded based on pattern [%s].", path, pattern) 464 return added 465 for pattern in self.excludeBasenamePatterns: # safe to assume all are valid due to RegexList 466 pattern = encodePath(pattern) # use same encoding as filenames 467 if re.compile(r"^%s$" % pattern).match(os.path.basename(path)): 468 logger.debug("Path [%s] is excluded based on basename pattern [%s].", path, pattern) 469 return added 470 if self.ignoreFile is not None and os.path.exists(os.path.join(path, self.ignoreFile)): 471 logger.debug("Path [%s] is excluded based on ignore file.", path) 472 return added 473 if includePath: 474 added += self.addDir(path) # could actually be excluded by addDir, yet 475 for entry in os.listdir(path): 476 entrypath = os.path.join(path, entry) 477 if os.path.isfile(entrypath): 478 if linkDepth > 0 and dereference: 479 derefpath = dereferenceLink(entrypath) 480 if derefpath != entrypath: 481 added += self.addFile(derefpath) 482 added += self.addFile(entrypath) 483 elif os.path.isdir(entrypath): 484 if os.path.islink(entrypath): 485 if recursive: 486 if linkDepth > 0: 487 newDepth = linkDepth - 1 488 if dereference: 489 derefpath = dereferenceLink(entrypath) 490 if derefpath != entrypath: 491 added += self._addDirContentsInternal(derefpath, True, recursive, newDepth, dereference) 492 added += self.addDir(entrypath) 493 else: 494 added += self._addDirContentsInternal(entrypath, 
False, recursive, newDepth, dereference) 495 else: 496 added += self.addDir(entrypath) 497 else: 498 added += self.addDir(entrypath) 499 else: 500 if recursive: 501 newDepth = linkDepth - 1 502 added += self._addDirContentsInternal(entrypath, True, recursive, newDepth, dereference) 503 else: 504 added += self.addDir(entrypath) 505 return added
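The constant-depth link following described above can be exercised in isolation. This standalone sketch (the helper is hypothetical, not part of the class) records symlinked directories without descending once the depth counter is exhausted, decrementing the counter on every recursive call just as C{_addDirContentsInternal} does:

```python
import os

def walk_dirs(path, link_depth):
    # Collect directory paths; a symlinked directory is recorded but only
    # followed while link_depth > 0, and every recursion decrements the depth.
    found = [path]
    for entry in sorted(os.listdir(path)):
        p = os.path.join(path, entry)
        if os.path.isdir(p):
            if os.path.islink(p) and link_depth <= 0:
                found.append(p)  # added as a directory, not followed
            else:
                found.extend(walk_dirs(p, link_depth - 1))
    return found
```

With C{link_depth=0} a symlinked directory appears in the result but none of its contents do; with C{link_depth=1} its immediate contents are walked as well.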
    506 507 508 ################# 509 # Remove methods 510 ################# 511
    512 - def removeFiles(self, pattern=None):
    513 """ 514 Removes file entries from the list. 515 516 If C{pattern} is not passed in or is C{None}, then all file entries will 517 be removed from the list. Otherwise, only those file entries matching 518 the pattern will be removed. Any entry which does not exist on disk 519 will be ignored (use L{removeInvalid} to purge those entries). 520 521 This method might be fairly slow for large lists, since it must check the 522 type of each item in the list. If you know ahead of time that you want 523 to exclude all files, then you will be better off setting L{excludeFiles} 524 to C{True} before adding items to the list. 525 526 @param pattern: Regular expression pattern representing entries to remove 527 528 @return: Number of entries removed 529 @raise ValueError: If the passed-in pattern is not a valid regular expression. 530 """ 531 removed = 0 532 if pattern is None: 533 for entry in self[:]: 534 if os.path.exists(entry) and os.path.isfile(entry): 535 self.remove(entry) 536 logger.debug("Removed path [%s] from list.", entry) 537 removed += 1 538 else: 539 try: 540 pattern = encodePath(pattern) # use same encoding as filenames 541 compiled = re.compile(pattern) 542 except re.error: 543 raise ValueError("Pattern is not a valid regular expression.") 544 for entry in self[:]: 545 if os.path.exists(entry) and os.path.isfile(entry): 546 if compiled.match(entry): 547 self.remove(entry) 548 logger.debug("Removed path [%s] from list.", entry) 549 removed += 1 550 logger.debug("Removed a total of %d entries.", removed) 551 return removed
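The removal loops above iterate over C{self[:]}, a copy of the list, because calling C{remove()} while iterating the list itself silently skips the element after each removal. A minimal sketch of the idiom (the helper name is hypothetical):

```python
def remove_matching(entries, predicate):
    # Iterate over a copy (entries[:]) so remove() can't shift elements
    # out from under the iterator and skip neighbors.
    removed = 0
    for entry in entries[:]:
        if predicate(entry):
            entries.remove(entry)
            removed += 1
    return removed

items = ["/a", "/b", "/c", "/d"]
assert remove_matching(items, lambda e: True) == 4 and items == []
```

Iterating C{items} directly in the same loop would remove only every other element.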
    552
    553 - def removeDirs(self, pattern=None):
    554 """ 555 Removes directory entries from the list. 556 557 If C{pattern} is not passed in or is C{None}, then all directory entries 558 will be removed from the list. Otherwise, only those directory entries 559 matching the pattern will be removed. Any entry which does not exist on 560 disk will be ignored (use L{removeInvalid} to purge those entries). 561 562 This method might be fairly slow for large lists, since it must check the 563 type of each item in the list. If you know ahead of time that you want 564 to exclude all directories, then you will be better off setting 565 L{excludeDirs} to C{True} before adding items to the list (note that this 566 will not prevent you from recursively adding the I{contents} of 567 directories). 568 569 @param pattern: Regular expression pattern representing entries to remove 570 571 @return: Number of entries removed 572 @raise ValueError: If the passed-in pattern is not a valid regular expression. 573 """ 574 removed = 0 575 if pattern is None: 576 for entry in self[:]: 577 if os.path.exists(entry) and os.path.isdir(entry): 578 self.remove(entry) 579 logger.debug("Removed path [%s] from list.", entry) 580 removed += 1 581 else: 582 try: 583 pattern = encodePath(pattern) # use same encoding as filenames 584 compiled = re.compile(pattern) 585 except re.error: 586 raise ValueError("Pattern is not a valid regular expression.") 587 for entry in self[:]: 588 if os.path.exists(entry) and os.path.isdir(entry): 589 if compiled.match(entry): 590 self.remove(entry) 591 logger.debug("Removed path [%s] from list based on pattern [%s].", entry, pattern) 592 removed += 1 593 logger.debug("Removed a total of %d entries.", removed) 594 return removed
    595
    596 - def removeLinks(self, pattern=None):
    597 """ 598 Removes soft link entries from the list. 599 600 If C{pattern} is not passed in or is C{None}, then all soft link entries 601 will be removed from the list. Otherwise, only those soft link entries 602 matching the pattern will be removed. Any entry which does not exist on 603 disk will be ignored (use L{removeInvalid} to purge those entries). 604 605 This method might be fairly slow for large lists, since it must check the 606 type of each item in the list. If you know ahead of time that you want 607 to exclude all soft links, then you will be better off setting 608 L{excludeLinks} to C{True} before adding items to the list. 609 610 @param pattern: Regular expression pattern representing entries to remove 611 612 @return: Number of entries removed 613 @raise ValueError: If the passed-in pattern is not a valid regular expression. 614 """ 615 removed = 0 616 if pattern is None: 617 for entry in self[:]: 618 if os.path.exists(entry) and os.path.islink(entry): 619 self.remove(entry) 620 logger.debug("Removed path [%s] from list.", entry) 621 removed += 1 622 else: 623 try: 624 pattern = encodePath(pattern) # use same encoding as filenames 625 compiled = re.compile(pattern) 626 except re.error: 627 raise ValueError("Pattern is not a valid regular expression.") 628 for entry in self[:]: 629 if os.path.exists(entry) and os.path.islink(entry): 630 if compiled.match(entry): 631 self.remove(entry) 632 logger.debug("Removed path [%s] from list.", entry) 633 removed += 1 634 logger.debug("Removed a total of %d entries.", removed) 635 return removed
    636
    637 - def removeMatch(self, pattern):
    638 """ 639 Removes from the list all entries matching a pattern. 640 641 This method removes from the list all entries which match the passed in 642 C{pattern}. Since there is no need to check the type of each entry, it 643 is faster to call this method than to call the L{removeFiles}, 644 L{removeDirs} or L{removeLinks} methods individually. If you know which 645 patterns you will want to remove ahead of time, you may be better off 646 setting L{excludePatterns} or L{excludeBasenamePatterns} before adding 647 items to the list. 648 649 @note: Unlike when using the exclude lists, the pattern here is I{not} 650 bounded at the front and the back of the string. You can use any pattern 651 you want. 652 653 @param pattern: Regular expression pattern representing entries to remove 654 655 @return: Number of entries removed. 656 @raise ValueError: If the passed-in pattern is not a valid regular expression. 657 """ 658 try: 659 pattern = encodePath(pattern) # use same encoding as filenames 660 compiled = re.compile(pattern) 661 except re.error: 662 raise ValueError("Pattern is not a valid regular expression.") 663 removed = 0 664 for entry in self[:]: 665 if compiled.match(entry): 666 self.remove(entry) 667 logger.debug("Removed path [%s] from list based on pattern [%s].", entry, pattern) 668 removed += 1 669 logger.debug("Removed a total of %d entries.", removed) 670 return removed
    671
    672 - def removeInvalid(self):
    673 """ 674 Removes from the list all entries that do not exist on disk. 675 676 This method removes from the list all entries which do not currently 677 exist on disk in some form. No attention is paid to whether the entries 678 are files or directories. 679 680 @return: Number of entries removed. 681 """ 682 removed = 0 683 for entry in self[:]: 684 if not os.path.exists(entry): 685 self.remove(entry) 686 logger.debug("Removed path [%s] from list.", entry) 687 removed += 1 688 logger.debug("Removed a total of %d entries.", removed) 689 return removed
    690 691 692 ################## 693 # Utility methods 694 ################## 695
    696 - def normalize(self):
    697 """Normalizes the list, ensuring that each entry is unique.""" 698 orig = len(self) 699 self.sort() 700 dups = list(filter(lambda x, self=self: self[x] == self[x+1], list(range(0, len(self) - 1)))) 701 items = list(map(lambda x, self=self: self[x], dups)) 702 list(map(self.remove, items)) 703 new = len(self) 704 logger.debug("Completed normalizing list; removed %d items (%d originally, %d now).", orig-new, orig, new)
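The filter/map chain in C{normalize} sorts the list so duplicates become adjacent, then removes one copy of each value that appears twice in a row. An equivalent standalone sketch (the function name is hypothetical, written only to illustrate the behavior):

```python
def normalize(entries):
    # Sort so duplicates become adjacent, then keep only the first of each run.
    entries.sort()
    deduped = []
    for item in entries:
        if not deduped or deduped[-1] != item:
            deduped.append(item)
    entries[:] = deduped  # mutate in place, like the list subclass does

paths = ["/b", "/a", "/b", "/a", "/c"]
normalize(paths)
assert paths == ["/a", "/b", "/c"]
```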
    705
    706 - def verify(self):
    707 """ 708 Verifies that all entries in the list exist on disk. 709 @return: C{True} if all entries exist, C{False} otherwise. 710 """ 711 for entry in self: 712 if not os.path.exists(entry): 713 logger.debug("Path [%s] is invalid; list is not valid.", entry) 714 return False 715 logger.debug("All entries in list are valid.") 716 return True
    717
    718 719 ######################################################################## 720 # SpanItem class definition 721 ######################################################################## 722 723 -class SpanItem(object): # pylint: disable=R0903
    724 """ 725 Item returned by L{BackupFileList.generateSpan}. 726 """
    727 - def __init__(self, fileList, size, capacity, utilization):
    728 """ 729 Create object. 730 @param fileList: List of files 731 @param size: Size (in bytes) of files @param capacity: Capacity (in bytes) available for the files 732 @param utilization: Utilization, as a percentage (0-100) 733 """ 734 self.fileList = fileList 735 self.size = size 736 self.capacity = capacity 737 self.utilization = utilization
    738
    739 740 ######################################################################## 741 # BackupFileList class definition 742 ######################################################################## 743 744 -class BackupFileList(FilesystemList): # pylint: disable=R0904
    745 746 ###################### 747 # Class documentation 748 ###################### 749 750 """ 751 List of files to be backed up. 752 753 A BackupFileList is a L{FilesystemList} containing a list of files to be 754 backed up. It only contains files, not directories (soft links are treated 755 like files). On top of the generic functionality provided by 756 L{FilesystemList}, this class adds functionality to keep a hash (checksum) 757 for each file in the list, and it also provides a method to calculate the 758 total size of the files in the list and a way to export the list into tar 759 form. 760 761 @sort: __init__, addDir, totalSize, generateSizeMap, generateDigestMap, 762 generateFitted, generateTarfile, removeUnchanged 763 """ 764 765 ############## 766 # Constructor 767 ############## 768
    769 - def __init__(self):
    770 """Initializes a list with no configured exclusions.""" 771 FilesystemList.__init__(self)
    772 773 774 ################################ 775 # Overridden superclass methods 776 ################################ 777
    778 - def addDir(self, path):
    779 """ 780 Adds a directory to the list. 781 782 Note that this class does not allow directories to be added by themselves 783 (a backup list contains only files). However, since links to directories 784 are technically files, we allow them to be added. 785 786 This method is implemented in terms of the superclass method, with one 787 additional validation: the superclass method is only called if the 788 passed-in path is both a directory and a link. All of the superclass's 789 existing validations and restrictions apply. 790 791 @param path: Directory path to be added to the list 792 @type path: String representing a path on disk 793 794 @return: Number of items added to the list. 795 796 @raise ValueError: If path is not a directory or does not exist. 797 @raise ValueError: If the path could not be encoded properly. 798 """ 799 path = encodePath(path) 800 path = normalizeDir(path) 801 if os.path.isdir(path) and not os.path.islink(path): 802 return 0 803 else: 804 return FilesystemList.addDir(self, path)
    805 806 807 ################## 808 # Utility methods 809 ################## 810
    811 - def totalSize(self):
    812 """ 813 Returns the total size of all files in the list. 814 Only files are counted. 815 Soft links that point at files are ignored. 816 Entries which do not exist on disk are ignored. 817 @return: Total size, in bytes 818 """ 819 total = 0.0 820 for entry in self: 821 if os.path.isfile(entry) and not os.path.islink(entry): 822 total += float(os.stat(entry).st_size) 823 return total
    824
    825 - def generateSizeMap(self):
    826 """ 827 Generates a mapping from file to file size in bytes. 828 The mapping does include soft links, which are listed with size zero. 829 Entries which do not exist on disk are ignored. 830 @return: Dictionary mapping file to file size 831 """ 832 table = { } 833 for entry in self: 834 if os.path.islink(entry): 835 table[entry] = 0.0 836 elif os.path.isfile(entry): 837 table[entry] = float(os.stat(entry).st_size) 838 return table
    839
    840 - def generateDigestMap(self, stripPrefix=None):
    841 """ 842 Generates a mapping from file to file digest. 843 844 Currently, the digest is an SHA-1 hash. In 845 the future, this might be a different kind of hash, but we guarantee that 846 the type of the hash will not change unless the library major version 847 number is bumped. 848 849 Entries which do not exist on disk are ignored. 850 851 Soft links are ignored. We would end up generating a digest for the file 852 that the soft link points at, which doesn't make any sense. 853 854 If C{stripPrefix} is passed in, then that prefix will be stripped from 855 each key when the map is generated. This can be useful in generating two 856 "relative" digest maps to be compared to one another. 857 858 @param stripPrefix: Common prefix to be stripped from paths 859 @type stripPrefix: String with any contents 860 861 @return: Dictionary mapping file to digest value 862 @see: L{removeUnchanged} 863 """ 864 table = { } 865 if stripPrefix is not None: 866 for entry in self: 867 if os.path.isfile(entry) and not os.path.islink(entry): 868 table[entry.replace(stripPrefix, "", 1)] = BackupFileList._generateDigest(entry) 869 else: 870 for entry in self: 871 if os.path.isfile(entry) and not os.path.islink(entry): 872 table[entry] = BackupFileList._generateDigest(entry) 873 return table
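The map keys are relativized with C{replace(stripPrefix, "", 1)}, which removes the first occurrence of the prefix wherever it appears in the path, not only at the front. When the prefix is known to be at the front, an anchored version behaves identically and is harder to misuse; this helper is hypothetical, shown only to illustrate the difference:

```python
def strip_prefix(path, prefix):
    # Drop the prefix only when the path actually starts with it.
    if prefix and path.startswith(prefix):
        return path[len(prefix):]
    return path

assert strip_prefix("/backup/etc/hosts", "/backup") == "/etc/hosts"
assert strip_prefix("/etc/backup/x", "/backup") == "/etc/backup/x"
```

By contrast, C{"/etc/backup/x".replace("/backup", "", 1)} yields C{"/etc/x"}, which is rarely what a caller comparing two "relative" digest maps intends.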
    874 875 @staticmethod
    876 - def _generateDigest(path):
    877 """ 878 Generates an SHA digest for a given file on disk. 879 880 The original code for this function used this simplistic implementation, 881 which requires reading the entire file into memory at once in order to 882 generate a digest value:: 883 884 sha.new(open(path).read()).hexdigest() 885 886 Not surprisingly, this isn't an optimal solution. The U{Simple file 887 hashing <http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/259109>} 888 Python Cookbook recipe describes how to incrementally generate a hash 889 value by reading in chunks of data rather than reading the file all at 890 once. The recipe relies on the C{update()} method of the various 891 Python hashing algorithms. 892 893 In my tests using a 110 MB file on CD, the original implementation 894 requires 111 seconds. This implementation requires only 40-45 seconds, 895 which is a pretty substantial speed-up. 896 897 Experience shows that reading in around 4kB (4096 bytes) at a time yields 898 the best performance. Smaller reads are quite a bit slower, and larger 899 reads don't make much of a difference. The 4kB number makes me a little 900 suspicious, and I think it might be related to the size of a filesystem 901 read at the hardware level. However, I've decided to just hardcode 4096 902 until I have evidence that shows it's worthwhile making the read size 903 configurable. 904 905 @param path: Path to generate digest for. 906 907 @return: ASCII-safe SHA digest for the file. 908 @raise OSError: If the file cannot be opened. 909 """ 910 # pylint: disable=C0103,E1101 911 s = hashlib.sha1() 912 with open(path, mode="rb") as f: 913 readBytes = 4096 # see notes above 914 while readBytes > 0: 915 readString = f.read(readBytes) 916 s.update(readString) 917 readBytes = len(readString) 918 digest = s.hexdigest() 919 logger.debug("Generated digest [%s] for file [%s].", digest, path) 920 return digest
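The incremental loop above can be checked against any file-like object: feeding the hash in chunks via C{update()} produces exactly the same digest as hashing the whole content at once. A standalone sketch of the same chunked pattern (names are illustrative, not part of the module):

```python
import hashlib
import io

def chunked_sha1(fileobj, chunk_size=4096):
    # Feed the hash in chunks so large files never sit in memory whole;
    # read() returns b"" at EOF, which terminates the loop.
    s = hashlib.sha1()
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:
            break
        s.update(chunk)
    return s.hexdigest()

data = b"x" * 10000  # spans multiple 4 kB chunks
assert chunked_sha1(io.BytesIO(data)) == hashlib.sha1(data).hexdigest()
```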
   def generateFitted(self, capacity, algorithm="worst_fit"):
    923 """ 924 Generates a list of items that fit in the indicated capacity. 925 926 Sometimes, callers would like to include every item in a list, but are 927 unable to because not all of the items fit in the space available. This 928 method returns a copy of the list, containing only the items that fit in 929 a given capacity. A copy is returned so that we don't lose any 930 information if for some reason the fitted list is unsatisfactory. 931 932 The fitting is done using the functions in the knapsack module. By 933 default, the first fit algorithm is used, but you can also choose 934 from best fit, worst fit and alternate fit. 935 936 @param capacity: Maximum capacity among the files in the new list 937 @type capacity: Integer, in bytes 938 939 @param algorithm: Knapsack (fit) algorithm to use 940 @type algorithm: One of "first_fit", "best_fit", "worst_fit", "alternate_fit" 941 942 @return: Copy of list with total size no larger than indicated capacity 943 @raise ValueError: If the algorithm is invalid. 944 """ 945 table = self._getKnapsackTable() 946 function = BackupFileList._getKnapsackFunction(algorithm) 947 return function(table, capacity)[0]
   def generateSpan(self, capacity, algorithm="worst_fit"):
    950 """ 951 Splits the list of items into sub-lists that fit in a given capacity. 952 953 Sometimes, callers need split to a backup file list into a set of smaller 954 lists. For instance, you could use this to "span" the files across a set 955 of discs. 956 957 The fitting is done using the functions in the knapsack module. By 958 default, the first fit algorithm is used, but you can also choose 959 from best fit, worst fit and alternate fit. 960 961 @note: If any of your items are larger than the capacity, then it won't 962 be possible to find a solution. In this case, a value error will be 963 raised. 964 965 @param capacity: Maximum capacity among the files in the new list 966 @type capacity: Integer, in bytes 967 968 @param algorithm: Knapsack (fit) algorithm to use 969 @type algorithm: One of "first_fit", "best_fit", "worst_fit", "alternate_fit" 970 971 @return: List of L{SpanItem} objects. 972 973 @raise ValueError: If the algorithm is invalid. 974 @raise ValueError: If it's not possible to fit some items 975 """ 976 spanItems = [] 977 function = BackupFileList._getKnapsackFunction(algorithm) 978 table = self._getKnapsackTable(capacity) 979 iteration = 0 980 while len(table) > 0: 981 iteration += 1 982 fit = function(table, capacity) 983 if len(fit[0]) == 0: 984 # Should never happen due to validations in _convertToKnapsackForm(), but let's be safe 985 raise ValueError("After iteration %d, unable to add any new items." % iteration) 986 removeKeys(table, fit[0]) 987 utilization = (float(fit[1])/float(capacity))*100.0 988 item = SpanItem(fit[0], fit[1], capacity, utilization) 989 spanItems.append(item) 990 return spanItems
   def _getKnapsackTable(self, capacity=None):
    993 """ 994 Converts the list into the form needed by the knapsack algorithms. 995 @return: Dictionary mapping file name to tuple of (file path, file size). 996 """ 997 table = { } 998 for entry in self: 999 if os.path.islink(entry): 1000 table[entry] = (entry, 0.0) 1001 elif os.path.isfile(entry): 1002 size = float(os.stat(entry).st_size) 1003 if capacity is not None: 1004 if size > capacity: 1005 raise ValueError("File [%s] cannot fit in capacity %s." % (entry, displayBytes(capacity))) 1006 table[entry] = (entry, size) 1007 return table
   @staticmethod
   def _getKnapsackFunction(algorithm):
    1011 """ 1012 Returns a reference to the function associated with an algorithm name. 1013 Algorithm name must be one of "first_fit", "best_fit", "worst_fit", "alternate_fit" 1014 @param algorithm: Name of the algorithm 1015 @return: Reference to knapsack function 1016 @raise ValueError: If the algorithm name is unknown. 1017 """ 1018 if algorithm == "first_fit": 1019 return firstFit 1020 elif algorithm == "best_fit": 1021 return bestFit 1022 elif algorithm == "worst_fit": 1023 return worstFit 1024 elif algorithm == "alternate_fit": 1025 return alternateFit 1026 else: 1027 raise ValueError("Algorithm [%s] is invalid." % algorithm)
   def generateTarfile(self, path, mode='tar', ignore=False, flat=False):
    1030 """ 1031 Creates a tar file containing the files in the list. 1032 1033 By default, this method will create uncompressed tar files. If you pass 1034 in mode C{'targz'}, then it will create gzipped tar files, and if you 1035 pass in mode C{'tarbz2'}, then it will create bzipped tar files. 1036 1037 The tar file will be created as a GNU tar archive, which enables extended 1038 file name lengths, etc. Since GNU tar is so prevalent, I've decided that 1039 the extra functionality out-weighs the disadvantage of not being 1040 "standard". 1041 1042 If you pass in C{flat=True}, then a "flat" archive will be created, and 1043 all of the files will be added to the root of the archive. So, the file 1044 C{/tmp/something/whatever.txt} would be added as just C{whatever.txt}. 1045 1046 By default, the whole method call fails if there are problems adding any 1047 of the files to the archive, resulting in an exception. Under these 1048 circumstances, callers are advised that they might want to call 1049 L{removeInvalid()} and then attempt to extract the tar file a second 1050 time, since the most common cause of failures is a missing file (a file 1051 that existed when the list was built, but is gone again by the time the 1052 tar file is built). 1053 1054 If you want to, you can pass in C{ignore=True}, and the method will 1055 ignore errors encountered when adding individual files to the archive 1056 (but not errors opening and closing the archive itself). 1057 1058 We'll always attempt to remove the tarfile from disk if an exception will 1059 be thrown. 1060 1061 @note: No validation is done as to whether the entries in the list are 1062 files, since only files or soft links should be in an object like this. 1063 However, to be safe, everything is explicitly added to the tar archive 1064 non-recursively so it's safe to include soft links to directories. 
1065 1066 @note: The Python C{tarfile} module, which is used internally here, is 1067 supposed to deal properly with long filenames and links. In my testing, 1068 I have found that it appears to be able to add long really long filenames 1069 to archives, but doesn't do a good job reading them back out, even out of 1070 an archive it created. Fortunately, all Cedar Backup does is add files 1071 to archives. 1072 1073 @param path: Path of tar file to create on disk 1074 @type path: String representing a path on disk 1075 1076 @param mode: Tar creation mode 1077 @type mode: One of either C{'tar'}, C{'targz'} or C{'tarbz2'} 1078 1079 @param ignore: Indicates whether to ignore certain errors. 1080 @type ignore: Boolean 1081 1082 @param flat: Creates "flat" archive by putting all items in root 1083 @type flat: Boolean 1084 1085 @raise ValueError: If mode is not valid 1086 @raise ValueError: If list is empty 1087 @raise ValueError: If the path could not be encoded properly. 1088 @raise TarError: If there is a problem creating the tar file 1089 """ 1090 # pylint: disable=E1101 1091 path = encodePath(path) 1092 if len(self) == 0: raise ValueError("Empty list cannot be used to generate tarfile.") 1093 if mode == 'tar': tarmode = "w:" 1094 elif mode == 'targz': tarmode = "w:gz" 1095 elif mode == 'tarbz2': tarmode = "w:bz2" 1096 else: raise ValueError("Mode [%s] is not valid." 
% mode) 1097 try: 1098 tar = tarfile.open(path, tarmode) 1099 try: 1100 tar.format = tarfile.GNU_FORMAT 1101 except AttributeError: 1102 tar.posix = False 1103 for entry in self: 1104 try: 1105 if flat: 1106 tar.add(entry, arcname=os.path.basename(entry), recursive=False) 1107 else: 1108 tar.add(entry, recursive=False) 1109 except tarfile.TarError as e: 1110 if not ignore: 1111 raise e 1112 logger.info("Unable to add file [%s]; going on anyway.", entry) 1113 except OSError as e: 1114 if not ignore: 1115 raise tarfile.TarError(e) 1116 logger.info("Unable to add file [%s]; going on anyway.", entry) 1117 tar.close() 1118 except tarfile.ReadError as e: 1119 try: tar.close() 1120 except: pass 1121 if os.path.exists(path): 1122 try: os.remove(path) 1123 except: pass 1124 raise tarfile.ReadError("Unable to open [%s]; maybe directory doesn't exist?" % path) 1125 except tarfile.TarError as e: 1126 try: tar.close() 1127 except: pass 1128 if os.path.exists(path): 1129 try: os.remove(path) 1130 except: pass 1131 raise e
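The standard-library C{tarfile} behavior this method relies on (GNU format, non-recursive adds, a flattened C{arcname}) can be demonstrated on its own; this sketch uses throwaway temporary paths and is not Cedar Backup's own code:

```python
import os
import tarfile
import tempfile

# Create a throwaway file to archive
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "whatever.txt")
with open(src, "wb") as f:
    f.write(b"some data\n")

# GNU format enables extended file name lengths; recursive=False mirrors the
# non-recursive adds above, and arcname flattens the stored path (flat=True)
archive = os.path.join(tmp, "backup.tar.gz")
with tarfile.open(archive, "w:gz", format=tarfile.GNU_FORMAT) as tar:
    tar.add(src, arcname=os.path.basename(src), recursive=False)

with tarfile.open(archive, "r:gz") as tar:
    names = tar.getnames()  # just the flattened entry name
```

The `"w:gz"` mode string corresponds to the method's C{'targz'} mode, and `"w:bz2"` to C{'tarbz2'}.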
   def removeUnchanged(self, digestMap, captureDigest=False):
    1134 """ 1135 Removes unchanged entries from the list. 1136 1137 This method relies on a digest map as returned from L{generateDigestMap}. 1138 For each entry in C{digestMap}, if the entry also exists in the current 1139 list I{and} the entry in the current list has the same digest value as in 1140 the map, the entry in the current list will be removed. 1141 1142 This method offers a convenient way for callers to filter unneeded 1143 entries from a list. The idea is that a caller will capture a digest map 1144 from C{generateDigestMap} at some point in time (perhaps the beginning of 1145 the week), and will save off that map using C{pickle} or some other 1146 method. Then, the caller could use this method sometime in the future to 1147 filter out any unchanged files based on the saved-off map. 1148 1149 If C{captureDigest} is passed-in as C{True}, then digest information will 1150 be captured for the entire list before the removal step occurs using the 1151 same rules as in L{generateDigestMap}. The check will involve a lookup 1152 into the complete digest map. 1153 1154 If C{captureDigest} is passed in as C{False}, we will only generate a 1155 digest value for files we actually need to check, and we'll ignore any 1156 entry in the list which isn't a file that currently exists on disk. 1157 1158 The return value varies depending on C{captureDigest}, as well. To 1159 preserve backwards compatibility, if C{captureDigest} is C{False}, then 1160 we'll just return a single value representing the number of entries 1161 removed. Otherwise, we'll return a tuple of C{(entries removed, digest 1162 map)}. The returned digest map will be in exactly the form returned by 1163 L{generateDigestMap}. 1164 1165 @note: For performance reasons, this method actually ends up rebuilding 1166 the list from scratch. First, we build a temporary dictionary containing 1167 all of the items from the original list. 
Then, we remove items as needed 1168 from the dictionary (which is faster than the equivalent operation on a 1169 list). Finally, we replace the contents of the current list based on the 1170 keys left in the dictionary. This should be transparent to the caller. 1171 1172 @param digestMap: Dictionary mapping file name to digest value. 1173 @type digestMap: Map as returned from L{generateDigestMap}. 1174 1175 @param captureDigest: Indicates that digest information should be captured. 1176 @type captureDigest: Boolean 1177 1178 @return: Results as discussed above (format varies based on arguments) 1179 """ 1180 if captureDigest: 1181 removed = 0 1182 table = {} 1183 captured = {} 1184 for entry in self: 1185 if os.path.isfile(entry) and not os.path.islink(entry): 1186 table[entry] = BackupFileList._generateDigest(entry) 1187 captured[entry] = table[entry] 1188 else: 1189 table[entry] = None 1190 for entry in list(digestMap.keys()): 1191 if entry in table: 1192 if table[entry] is not None: # equivalent to file/link check in other case 1193 digest = table[entry] 1194 if digest == digestMap[entry]: 1195 removed += 1 1196 del table[entry] 1197 logger.debug("Discarded unchanged file [%s].", entry) 1198 self[:] = list(table.keys()) 1199 return (removed, captured) 1200 else: 1201 removed = 0 1202 table = {} 1203 for entry in self: 1204 table[entry] = None 1205 for entry in list(digestMap.keys()): 1206 if entry in table: 1207 if os.path.isfile(entry) and not os.path.islink(entry): 1208 digest = BackupFileList._generateDigest(entry) 1209 if digest == digestMap[entry]: 1210 removed += 1 1211 del table[entry] 1212 logger.debug("Discarded unchanged file [%s].", entry) 1213 self[:] = list(table.keys()) 1214 return removed
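The capture-then-filter workflow described in the docstring can be sketched with plain dictionaries; C{digest_map} here is a hypothetical stand-in for C{generateDigestMap}, and the real implementation works on a C{BackupFileList}:

```python
import hashlib
import os
import pickle
import tempfile

def digest_map(paths):
    """Map each existing regular file to its SHA-1 digest, skipping links."""
    table = {}
    for p in paths:
        if os.path.isfile(p) and not os.path.islink(p):
            with open(p, "rb") as f:
                table[p] = hashlib.sha1(f.read()).hexdigest()
    return table

tmp = tempfile.mkdtemp()
a = os.path.join(tmp, "a.txt")
b = os.path.join(tmp, "b.txt")
for path in (a, b):
    with open(path, "wb") as f:
        f.write(b"original\n")

# Beginning of the week: capture and save off the digest map
saved = pickle.dumps(digest_map([a, b]), protocol=0)

# Later: one file changes; keep only entries whose digest differs
with open(b, "wb") as f:
    f.write(b"changed\n")
old = pickle.loads(saved)
current = [a, b]
changed = [p for p in current if digest_map([p]).get(p) != old.get(p)]
```

Only the modified file survives the filter, which is exactly the effect of removing unchanged entries from the backup list.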
########################################################################
# PurgeItemList class definition
########################################################################

class PurgeItemList(FilesystemList):  # pylint: disable=R0904
   ######################
   # Class documentation
   ######################

   """
   List of files and directories to be purged.

   A PurgeItemList is a L{FilesystemList} containing a list of files and
   directories to be purged.  On top of the generic functionality provided by
   L{FilesystemList}, this class adds functionality to remove items that are
   too young to be purged, and to actually remove each item in the list from
   the filesystem.

   The other main difference is that when you add a directory's contents to a
   purge item list, the directory itself is not added to the list.  This way,
   if someone asks to purge within C{/opt/backup/collect}, that directory
   doesn't get removed once all of the files within it are gone.
   """

   ##############
   # Constructor
   ##############
   def __init__(self):
    1247 """Initializes a list with no configured exclusions.""" 1248 FilesystemList.__init__(self)
   ##############
   # Add methods
   ##############
   def addDirContents(self, path, recursive=True, addSelf=True, linkDepth=0, dereference=False):
    1256 """ 1257 Adds the contents of a directory to the list. 1258 1259 The path must exist and must be a directory or a link to a directory. 1260 The contents of the directory (but I{not} the directory path itself) will 1261 be recursively added to the list, subject to any exclusions that are in 1262 place. If you only want the directory and its contents to be added, then 1263 pass in C{recursive=False}. 1264 1265 @note: If a directory's absolute path matches an exclude pattern or path, 1266 or if the directory contains the configured ignore file, then the 1267 directory and all of its contents will be recursively excluded from the 1268 list. 1269 1270 @note: If the passed-in directory happens to be a soft link, it will be 1271 recursed. However, the linkDepth parameter controls whether any soft 1272 links I{within} the directory will be recursed. The link depth is 1273 maximum depth of the tree at which soft links should be followed. So, a 1274 depth of 0 does not follow any soft links, a depth of 1 follows only 1275 links within the passed-in directory, a depth of 2 follows the links at 1276 the next level down, etc. 1277 1278 @note: Any invalid soft links (i.e. soft links that point to 1279 non-existent items) will be silently ignored. 1280 1281 @note: The L{excludeDirs} flag only controls whether any given soft link 1282 path itself is added to the list once it has been discovered. It does 1283 I{not} modify any behavior related to directory recursion. 1284 1285 @note: The L{excludeDirs} flag only controls whether any given directory 1286 path itself is added to the list once it has been discovered. It does 1287 I{not} modify any behavior related to directory recursion. 1288 1289 @note: If you call this method I{on a link to a directory} that link will 1290 never be dereferenced (it may, however, be followed). 
1291 1292 @param path: Directory path whose contents should be added to the list 1293 @type path: String representing a path on disk 1294 1295 @param recursive: Indicates whether directory contents should be added recursively. 1296 @type recursive: Boolean value 1297 1298 @param addSelf: Ignored in this subclass. 1299 1300 @param linkDepth: Depth of soft links that should be followed 1301 @type linkDepth: Integer value, where zero means not to follow any soft links 1302 1303 @param dereference: Indicates whether soft links, if followed, should be dereferenced 1304 @type dereference: Boolean value 1305 1306 @return: Number of items recursively added to the list 1307 1308 @raise ValueError: If path is not a directory or does not exist. 1309 @raise ValueError: If the path could not be encoded properly. 1310 """ 1311 path = encodePath(path) 1312 path = normalizeDir(path) 1313 return super(PurgeItemList, self)._addDirContentsInternal(path, False, recursive, linkDepth, dereference)
   ##################
   # Utility methods
   ##################
   def removeYoungFiles(self, daysOld):
    1321 """ 1322 Removes from the list files younger than a certain age (in days). 1323 1324 Any file whose "age" in days is less than (C{<}) the value of the 1325 C{daysOld} parameter will be removed from the list so that it will not be 1326 purged later when L{purgeItems} is called. Directories and soft links 1327 will be ignored. 1328 1329 The "age" of a file is the amount of time since the file was last used, 1330 per the most recent of the file's C{st_atime} and C{st_mtime} values. 1331 1332 @note: Some people find the "sense" of this method confusing or 1333 "backwards". Keep in mind that this method is used to remove items 1334 I{from the list}, not from the filesystem! It removes from the list 1335 those items that you would I{not} want to purge because they are too 1336 young. As an example, passing in C{daysOld} of zero (0) would remove 1337 from the list no files, which would result in purging all of the files 1338 later. I would be happy to make a synonym of this method with an 1339 easier-to-understand "sense", if someone can suggest one. 1340 1341 @param daysOld: Minimum age of files that are to be kept in the list. 1342 @type daysOld: Integer value >= 0. 1343 1344 @return: Number of entries removed 1345 """ 1346 removed = 0 1347 daysOld = int(daysOld) 1348 if daysOld < 0: 1349 raise ValueError("Days old value must be an integer >= 0.") 1350 for entry in self[:]: 1351 if os.path.isfile(entry) and not os.path.islink(entry): 1352 try: 1353 ageInDays = calculateFileAge(entry) 1354 ageInWholeDays = math.floor(ageInDays) 1355 if ageInWholeDays < 0: ageInWholeDays = 0 1356 if ageInWholeDays < daysOld: 1357 removed += 1 1358 self.remove(entry) 1359 except OSError: 1360 pass 1361 return removed
   def purgeItems(self):
    1364 """ 1365 Purges all items in the list. 1366 1367 Every item in the list will be purged. Directories in the list will 1368 I{not} be purged recursively, and hence will only be removed if they are 1369 empty. Errors will be ignored. 1370 1371 To faciliate easy removal of directories that will end up being empty, 1372 the delete process happens in two passes: files first (including soft 1373 links), then directories. 1374 1375 @return: Tuple containing count of (files, dirs) removed 1376 """ 1377 files = 0 1378 dirs = 0 1379 for entry in self: 1380 if os.path.exists(entry) and (os.path.isfile(entry) or os.path.islink(entry)): 1381 try: 1382 os.remove(entry) 1383 files += 1 1384 logger.debug("Purged file [%s].", entry) 1385 except OSError: 1386 pass 1387 for entry in self: 1388 if os.path.exists(entry) and os.path.isdir(entry) and not os.path.islink(entry): 1389 try: 1390 os.rmdir(entry) 1391 dirs += 1 1392 logger.debug("Purged empty directory [%s].", entry) 1393 except OSError: 1394 pass 1395 return (files, dirs)
########################################################################
# Public functions
########################################################################

##########################
# normalizeDir() function
##########################

def normalizeDir(path):
    1407 """ 1408 Normalizes a directory name. 1409 1410 For our purposes, a directory name is normalized by removing the trailing 1411 path separator, if any. This is important because we want directories to 1412 appear within lists in a consistent way, although from the user's 1413 perspective passing in C{/path/to/dir/} and C{/path/to/dir} are equivalent. 1414 1415 @param path: Path to be normalized. 1416 @type path: String representing a path on disk 1417 1418 @return: Normalized path, which should be equivalent to the original. 1419 """ 1420 if path != os.sep and path[-1:] == os.sep: 1421 return path[:-1] 1422 return path
#############################
# compareContents() function
#############################

def compareContents(path1, path2, verbose=False):
    1430 """ 1431 Compares the contents of two directories to see if they are equivalent. 1432 1433 The two directories are recursively compared. First, we check whether they 1434 contain exactly the same set of files. Then, we check to see every given 1435 file has exactly the same contents in both directories. 1436 1437 This is all relatively simple to implement through the magic of 1438 L{BackupFileList.generateDigestMap}, which knows how to strip a path prefix 1439 off the front of each entry in the mapping it generates. This makes our 1440 comparison as simple as creating a list for each path, then generating a 1441 digest map for each path and comparing the two. 1442 1443 If no exception is thrown, the two directories are considered identical. 1444 1445 If the C{verbose} flag is C{True}, then an alternate (but slower) method is 1446 used so that any thrown exception can indicate exactly which file caused the 1447 comparison to fail. The thrown C{ValueError} exception distinguishes 1448 between the directories containing different files, and containing the same 1449 files with differing content. 1450 1451 @note: Symlinks are I{not} followed for the purposes of this comparison. 1452 1453 @param path1: First path to compare. 1454 @type path1: String representing a path on disk 1455 1456 @param path2: First path to compare. 1457 @type path2: String representing a path on disk 1458 1459 @param verbose: Indicates whether a verbose response should be given. 1460 @type verbose: Boolean 1461 1462 @raise ValueError: If a directory doesn't exist or can't be read. 1463 @raise ValueError: If the two directories are not equivalent. 1464 @raise IOError: If there is an unusual problem reading the directories. 
1465 """ 1466 try: 1467 path1List = BackupFileList() 1468 path1List.addDirContents(path1) 1469 path1Digest = path1List.generateDigestMap(stripPrefix=normalizeDir(path1)) 1470 path2List = BackupFileList() 1471 path2List.addDirContents(path2) 1472 path2Digest = path2List.generateDigestMap(stripPrefix=normalizeDir(path2)) 1473 compareDigestMaps(path1Digest, path2Digest, verbose) 1474 except IOError as e: 1475 logger.error("I/O error encountered during consistency check.") 1476 raise e
def compareDigestMaps(digest1, digest2, verbose=False):
    1479 """ 1480 Compares two digest maps and throws an exception if they differ. 1481 1482 @param digest1: First digest to compare. 1483 @type digest1: Digest as returned from BackupFileList.generateDigestMap() 1484 1485 @param digest2: Second digest to compare. 1486 @type digest2: Digest as returned from BackupFileList.generateDigestMap() 1487 1488 @param verbose: Indicates whether a verbose response should be given. 1489 @type verbose: Boolean 1490 1491 @raise ValueError: If the two directories are not equivalent. 1492 """ 1493 if not verbose: 1494 if digest1 != digest2: 1495 raise ValueError("Consistency check failed.") 1496 else: 1497 list1 = UnorderedList(list(digest1.keys())) 1498 list2 = UnorderedList(list(digest2.keys())) 1499 if list1 != list2: 1500 raise ValueError("Directories contain a different set of files.") 1501 for key in list1: 1502 if digest1[key] != digest2[key]: 1503 raise ValueError("File contents for [%s] vary between directories." % key)

CedarBackup3-3.1.6/doc/interface/toc-CedarBackup3.filesystem-module.html

filesystem

    Module filesystem


    Classes

    BackupFileList
    FilesystemList
    PurgeItemList
    SpanItem

    Functions

    compareContents
    compareDigestMaps
    normalizeDir

    Variables

    __package__
    logger

CedarBackup3-3.1.6/doc/interface/CedarBackup3.config.PeersConfig-class.html

CedarBackup3.config.PeersConfig
    Package CedarBackup3 :: Module config :: Class PeersConfig

    Class PeersConfig


    object --+
             |
            PeersConfig
    

    Class representing Cedar Backup global peer configuration.

    This section contains a list of local and remote peers in a master's backup pool. The section is optional. If a master does not define this section, then all peers are unmanaged, and the stage configuration section must explicitly list any peer that is to be staged. If this section is configured, then peers may be managed or unmanaged, and the stage section peer configuration (if any) completely overrides this configuration.

    The following restrictions exist on data in this class:

    • The list of local peers must contain only LocalPeer objects
    • The list of remote peers must contain only RemotePeer objects

    Note: Lists within this class are "unordered" for equality comparisons.

Instance Methods

__init__(self, localPeers=None, remotePeers=None)
    Constructor for the PeersConfig class.

__repr__(self)
    Official string representation for class instance.

__str__(self)
    Informal string representation for class instance.

__cmp__(self, other)
    Original Python 2 comparison operator.

__eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.

__lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.

__gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.

hasPeers(self)
    Indicates whether any peers are filled into this object.

_setLocalPeers(self, value)
    Property target used to set the local peers list.

_getLocalPeers(self)
    Property target used to get the local peers list.

_setRemotePeers(self, value)
    Property target used to set the remote peers list.

_getRemotePeers(self)
    Property target used to get the remote peers list.

__ge__(x, y)
    x>=y

__le__(x, y)
    x<=y

Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties

localPeers
    List of local peers.
remotePeers
    List of remote peers.

Inherited from object: __class__

Method Details

__init__(self, localPeers=None, remotePeers=None) (Constructor)

    Constructor for the PeersConfig class.

    Parameters:
    • localPeers - List of local peers.
    • remotePeers - List of remote peers.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

__repr__(self) (Representation operator)

    Official string representation for class instance.

    Overrides: object.__repr__

__str__(self) (Informal representation operator)

    Informal string representation for class instance.

    Overrides: object.__str__

__cmp__(self, other) (Comparison operator)

    Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

hasPeers(self)

    Indicates whether any peers are filled into this object.

    Returns:
    Boolean true if any local or remote peers are filled in, false otherwise.

_setLocalPeers(self, value)

    Property target used to set the local peers list. Either the value must be None or each element must be a LocalPeer.

    Raises:
• ValueError - If the value is not a LocalPeer

_setRemotePeers(self, value)

    Property target used to set the remote peers list. Either the value must be None or each element must be a RemotePeer.

    Raises:
    • ValueError - If the value is not a RemotePeer

Property Details

    localPeers

    List of local peers.

    Get Method:
    _getLocalPeers(self) - Property target used to get the local peers list.
    Set Method:
    _setLocalPeers(self, value) - Property target used to set the local peers list.

    remotePeers

    List of remote peers.

    Get Method:
    _getRemotePeers(self) - Property target used to get the remote peers list.
    Set Method:
    _setRemotePeers(self, value) - Property target used to set the remote peers list.

CedarBackup3-3.1.6/doc/interface/CedarBackup3.util.PathResolverSingleton-class.html

CedarBackup3.util.PathResolverSingleton
    Package CedarBackup3 :: Module util :: Class PathResolverSingleton

    Class PathResolverSingleton


    object --+
             |
            PathResolverSingleton
    

    Singleton used for resolving executable paths.

    Various functions throughout Cedar Backup (including extensions) need a way to resolve the path of executables that they use. For instance, the image functionality needs to find the mkisofs executable, and the Subversion extension needs to find the svnlook executable. Cedar Backup's original behavior was to assume that the simple name ("svnlook" or whatever) was available on the caller's $PATH, and to fail otherwise. However, this turns out to be less than ideal, since for instance the root user might not always have executables like svnlook in its path.

One solution is to specify a path (either via an absolute path or some sort of path insertion or path appending mechanism) that would apply to the executeCommand() function. This is not difficult to implement, but it seems like kind of a "big hammer" solution. Besides that, it might also represent a security flaw (for instance, I prefer not to mess with root's $PATH on the application level if I don't have to).

    The alternative is to set up some sort of configuration for the path to certain executables, i.e. "find svnlook in /usr/local/bin/svnlook" or whatever. This PathResolverSingleton aims to provide a good solution to the mapping problem. Callers of all sorts (extensions or not) can get an instance of the singleton. Then, they call the lookup method to try and resolve the executable they are looking for. Through the lookup method, the caller can also specify a default to use if a mapping is not found. This way, with no real effort on the part of the caller, behavior can neatly degrade to something equivalent to the current behavior if there is no special mapping or if the singleton was never initialized in the first place.

    Even better, extensions automagically get access to the same resolver functionality, and they don't even need to understand how the mapping happens. All extension authors need to do is document what executables their code requires, and the standard resolver configuration section will meet their needs.

    The class should be initialized once through the constructor somewhere in the main routine. Then, the main routine should call the fill method to fill in the resolver's internal structures. Everyone else who needs to resolve a path will get an instance of the class using getInstance and will then just call the lookup method.
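    The lifecycle described above can be sketched as follows. This is an illustrative, self-contained re-creation of the interface (the real class lives in CedarBackup3.util and uses a _Helper factory class); the configured mapping values are made up for the example.

```python
class PathResolverSingleton(object):
    """Illustrative sketch of the path resolver singleton described above."""

    _instance = None  # holds a reference to the singleton

    @classmethod
    def getInstance(cls):
        """Factory method: create the singleton on first use, then reuse it."""
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def __init__(self):
        self._mapping = {}  # internal mapping from resource name to path

    def fill(self, mapping):
        """Fills in the internal mapping from name to resource."""
        self._mapping = dict(mapping)

    def lookup(self, name, default=None):
        """Resolves name to a path, degrading to the default if unmapped."""
        return self._mapping.get(name, default)


# Main routine: initialize once and fill in configured mappings (example path).
PathResolverSingleton.getInstance().fill({"svnlook": "/usr/local/bin/svnlook"})

# Callers anywhere else: resolve, falling back to the bare name if unmapped.
resolver = PathResolverSingleton.getInstance()
print(resolver.lookup("svnlook", "svnlook"))   # → /usr/local/bin/svnlook
print(resolver.lookup("mkisofs", "mkisofs"))   # → mkisofs (no mapping; degrades)
```

    If the singleton is never filled, every lookup simply returns its default, which reproduces the original "assume it's on $PATH" behavior.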

    Nested Classes
      _Helper
    Helper class to provide a singleton factory method.
    Instance Methods
     
    __init__(self)
    Singleton constructor, which just creates the singleton instance.
    lookup(self, name, default=None)
    Looks up name and returns the resolved path associated with the name.
    fill(self, mapping)
    Fills in the singleton's internal mapping from name to resource.

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Class Variables
      _instance = None
    Holds a reference to the singleton
      getInstance = _Helper()
    Instance Variables
      _mapping
    Internal mapping from resource name to path.
    Properties

    Inherited from object: __class__

    Method Details

    __init__(self)
    (Constructor)


    Singleton constructor, which just creates the singleton instance.

    Overrides: object.__init__

    lookup(self, name, default=None)


    Looks up name and returns the resolved path associated with the name.

    Parameters:
    • name - Name of the path resource to resolve.
    • default - Default to return if resource cannot be resolved.
    Returns:
    Resolved path associated with name, or default if name can't be resolved.

    fill(self, mapping)


    Fills in the singleton's internal mapping from name to resource.

    Parameters:
    • mapping (Dictionary mapping name to path, both as strings.) - Mapping from resource name to path.

    CedarBackup3-3.1.6/doc/interface/toc-CedarBackup3-module.html (CedarBackup3)

    Module CedarBackup3


    Variables


    CedarBackup3-3.1.6/doc/interface/toc-CedarBackup3.xmlutil-module.html (xmlutil)

    Module xmlutil


    Classes

    Serializer

    Functions

    addBooleanNode
    addContainerNode
    addIntegerNode
    addLongNode
    addStringNode
    createInputDom
    createOutputDom
    isElement
    readBoolean
    readChildren
    readFirstChild
    readFloat
    readInteger
    readLong
    readString
    readStringList
    serializeDom

    Variables

    FALSE_BOOLEAN_VALUES
    TRUE_BOOLEAN_VALUES
    VALID_BOOLEAN_VALUES
    __package__
    logger

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.writers.dvdwriter-module.html (CedarBackup3.writers.dvdwriter)
    Package CedarBackup3 :: Package writers :: Module dvdwriter

    Module dvdwriter


    Provides functionality related to DVD writer devices.


    Authors:
    Kenneth J. Pronovici <pronovic@ieee.org>, Dmitry Rutsky <rutsky@inbox.ru>
    Classes
      MediaDefinition
    Class encapsulating information about DVD media definitions.
      DvdWriter
    Class representing a device that knows how to write some kinds of DVD media.
      MediaCapacity
    Class encapsulating information about DVD media capacity.
      _ImageProperties
    Simple value object to hold image properties for DvdWriter.
    Variables
      MEDIA_DVDPLUSR = 1
    Constant representing DVD+R media.
      MEDIA_DVDPLUSRW = 2
    Constant representing DVD+RW media.
      logger = logging.getLogger("CedarBackup3.log.writers.dvdwriter")
      GROWISOFS_COMMAND = ['growisofs']
      EJECT_COMMAND = ['eject']
      __package__ = 'CedarBackup3.writers'
    CedarBackup3-3.1.6/doc/interface/toc-CedarBackup3.actions.store-module.html (store)

    Module store


    Functions

    consistencyCheck
    executeStore
    writeImage
    writeImageBlankSafe
    writeStoreIndicator

    Variables

    __package__
    logger

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.writers.util-pysrc.html (CedarBackup3.writers.util)
    Package CedarBackup3 :: Package writers :: Module util

    Source Code for Module CedarBackup3.writers.util

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2004-2007,2010,2015 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python 3 (>= 3.4) 
     29  # Project  : Cedar Backup, release 3 
     30  # Purpose  : Provides utilities related to image writers. 
     31  # 
     32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     33   
     34  ######################################################################## 
     35  # Module documentation 
     36  ######################################################################## 
     37   
     38  """ 
     39  Provides utilities related to image writers. 
     40  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     41  """ 
     42   
     43   
     44  ######################################################################## 
     45  # Imported modules 
     46  ######################################################################## 
     47   
     48  # System modules 
     49  import os 
     50  import re 
     51  import logging 
     52   
     53  # Cedar Backup modules 
     54  from CedarBackup3.util import resolveCommand, executeCommand 
     55  from CedarBackup3.util import convertSize, UNIT_BYTES, UNIT_SECTORS, encodePath 
     56   
     57   
     58  ######################################################################## 
     59  # Module-wide constants and variables 
     60  ######################################################################## 
     61   
     62  logger = logging.getLogger("CedarBackup3.log.writers.util") 
     63   
     64  MKISOFS_COMMAND      = [ "mkisofs", ] 
     65  VOLNAME_COMMAND      = [ "volname", ] 
    
########################################################################
# Functions used to portably validate certain kinds of values
########################################################################

############################
# validateDevice() function
############################

def validateDevice(device, unittest=False):
   """
   Validates a configured device.
   The device must be an absolute path, must exist, and must be writable.
   The unittest flag turns off validation of the device on disk.
   @param device: Filesystem device path.
   @param unittest: Indicates whether we're unit testing.
   @return: Device as a string, for instance C{"/dev/cdrw"}
   @raise ValueError: If the device value is invalid.
   @raise ValueError: If some path cannot be encoded properly.
   """
   if device is None:
      raise ValueError("Device must be filled in.")
   device = encodePath(device)
   if not os.path.isabs(device):
      raise ValueError("Backup device must be an absolute path.")
   if not unittest and not os.path.exists(device):
      raise ValueError("Backup device must exist on disk.")
   if not unittest and not os.access(device, os.W_OK):
      raise ValueError("Backup device is not writable by the current user.")
   return device
############################
# validateScsiId() function
############################

def validateScsiId(scsiId):
   """
   Validates a SCSI id string.
   SCSI id must be a string in the form C{[<method>:]scsibus,target,lun}.
   For Mac OS X (Darwin), we also accept the form C{IO.*Services[/N]}.
   @note: For consistency, if C{None} is passed in, C{None} will be returned.
   @param scsiId: SCSI id for the device.
   @return: SCSI id as a string, for instance C{"ATA:1,0,0"}
   @raise ValueError: If the SCSI id string is invalid.
   """
   if scsiId is not None:
      pattern = re.compile(r"^\s*(.*:)?\s*[0-9][0-9]*\s*,\s*[0-9][0-9]*\s*,\s*[0-9][0-9]*\s*$")
      if not pattern.search(scsiId):
         pattern = re.compile(r"^\s*IO.*Services(\/[0-9][0-9]*)?\s*$")
         if not pattern.search(scsiId):
            raise ValueError("SCSI id is not in a valid form.")
   return scsiId
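The two SCSI id forms accepted above can be exercised with a quick standalone sketch. This re-creates the two regular expressions outside the module for illustration; the example id strings are made up.

```python
import re

def isValidScsiId(scsiId):
    """Standalone check mirroring the two patterns used by validateScsiId."""
    # Standard form: [<method>:]scsibus,target,lun
    standard = re.compile(r"^\s*(.*:)?\s*[0-9][0-9]*\s*,\s*[0-9][0-9]*\s*,\s*[0-9][0-9]*\s*$")
    # Mac OS X (Darwin) form: IO.*Services[/N]
    darwin = re.compile(r"^\s*IO.*Services(\/[0-9][0-9]*)?\s*$")
    return bool(standard.search(scsiId) or darwin.search(scsiId))

print(isValidScsiId("ATA:1,0,0"))                # → True  (method prefix form)
print(isValidScsiId("1,0,0"))                    # → True  (bare bus,target,lun)
print(isValidScsiId("IOCompactDiscServices/2"))  # → True  (Darwin form)
print(isValidScsiId("/dev/cdrw"))                # → False (a device path, not a SCSI id)
```

Note that a device path like C{/dev/cdrw} is rejected here; in IsoImage, absolute paths are accepted separately before falling back to this SCSI id validation.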
################################
# validateDriveSpeed() function
################################

def validateDriveSpeed(driveSpeed):
   """
   Validates a drive speed value.
   Drive speed must be an integer which is >= 1.
   @note: For consistency, if C{None} is passed in, C{None} will be returned.
   @param driveSpeed: Speed at which the drive writes.
   @return: Drive speed as an integer
   @raise ValueError: If the drive speed value is invalid.
   """
   if driveSpeed is None:
      return None
   try:
      intSpeed = int(driveSpeed)
   except (TypeError, ValueError):  # int() raises ValueError for non-numeric strings
      raise ValueError("Drive speed must be an integer >= 1.")
   if intSpeed < 1:
      raise ValueError("Drive speed must be an integer >= 1.")
   return intSpeed
########################################################################
# General writer-related utility functions
########################################################################

############################
# readMediaLabel() function
############################

def readMediaLabel(devicePath):
   """
   Reads the media label (volume name) from the indicated device.
   The volume name is read using the C{volname} command.
   @param devicePath: Device path to read from
   @return: Media label as a string, or None if there is no name or it could not be read.
   """
   args = [ devicePath, ]
   command = resolveCommand(VOLNAME_COMMAND)
   (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
   if result != 0:
      return None
   if output is None or len(output) < 1:
      return None
   return output[0].rstrip()
########################################################################
# IsoImage class definition
########################################################################

class IsoImage(object):
   ######################
   # Class documentation
   ######################

   """
   Represents an ISO filesystem image.

   Summary
   =======

   This object represents an ISO 9660 filesystem image.  It is implemented
   in terms of the C{mkisofs} program, which has been ported to many
   operating systems and platforms.  A "sensible subset" of the C{mkisofs}
   functionality is made available through the public interface, allowing
   callers to set a variety of basic options such as publisher id,
   application id, etc. as well as specify exactly which files and
   directories they want included in their image.

   By default, the image is created using the Rock Ridge protocol (using the
   C{-r} option to C{mkisofs}) because Rock Ridge discs are generally more
   useful on UN*X filesystems than standard ISO 9660 images.  However,
   callers can fall back to the default C{mkisofs} functionality by setting
   the C{useRockRidge} instance variable to C{False}.  Note, however, that
   this option is not well-tested.

   Where Files and Directories are Placed in the Image
   ===================================================

   Although this class is implemented in terms of the C{mkisofs} program,
   its standard "image contents" semantics are slightly different than the
   original C{mkisofs} semantics.  The difference is that files and
   directories are added to the image with some additional information about
   their source directory kept intact.

   As an example, suppose you add the file C{/etc/profile} to your image and
   you do not configure a graft point.  The file C{/profile} will be created
   in the image.  The behavior for directories is similar.  For instance,
   suppose that you add C{/etc/X11} to the image and do not configure a
   graft point.  In this case, the directory C{/X11} will be created in the
   image, even if the original C{/etc/X11} directory is empty.  I{This
   behavior differs from the standard C{mkisofs} behavior!}

   If a graft point is configured, it will be used to modify the point at
   which a file or directory is added into an image.  Using the examples
   from above, let's assume you set a graft point of C{base} when adding
   C{/etc/profile} and C{/etc/X11} to your image.  In this case, the file
   C{/base/profile} and the directory C{/base/X11} would be added to the
   image.

   I feel that this behavior is more consistent than the original C{mkisofs}
   behavior.  However, to be fair, it is not quite as flexible, and some
   users might not like it.  For this reason, the C{contentsOnly} parameter
   to the L{addEntry} method can be used to revert to the original behavior
   if desired.

   @sort: __init__, addEntry, getEstimatedSize, _getEstimatedSize, writeImage,
          _buildDirEntries _buildGeneralArgs, _buildSizeArgs, _buildWriteArgs,
          device, boundaries, graftPoint, useRockRidge, applicationId,
          biblioFile, publisherId, preparerId, volumeId
   """

   ##############
   # Constructor
   ##############
    242 - def __init__(self, device=None, boundaries=None, graftPoint=None):
    243 """ 244 Initializes an empty ISO image object. 245 246 Only the most commonly-used configuration items can be set using this 247 constructor. If you have a need to change the others, do so immediately 248 after creating your object. 249 250 The device and boundaries values are both required in order to write 251 multisession discs. If either is missing or C{None}, a multisession disc 252 will not be written. The boundaries tuple is in terms of ISO sectors, as 253 built by an image writer class and returned in a L{writer.MediaCapacity} 254 object. 255 256 @param device: Name of the device that the image will be written to 257 @type device: Either be a filesystem path or a SCSI address 258 259 @param boundaries: Session boundaries as required by C{mkisofs} 260 @type boundaries: Tuple C{(last_sess_start,next_sess_start)} as returned from C{cdrecord -msinfo}, or C{None} 261 262 @param graftPoint: Default graft point for this page. 263 @type graftPoint: String representing a graft point path (see L{addEntry}). 264 """ 265 self._device = None 266 self._boundaries = None 267 self._graftPoint = None 268 self._useRockRidge = True 269 self._applicationId = None 270 self._biblioFile = None 271 self._publisherId = None 272 self._preparerId = None 273 self._volumeId = None 274 self.entries = { } 275 self.device = device 276 self.boundaries = boundaries 277 self.graftPoint = graftPoint 278 self.useRockRidge = True 279 self.applicationId = None 280 self.biblioFile = None 281 self.publisherId = None 282 self.preparerId = None 283 self.volumeId = None 284 logger.debug("Created new ISO image object.")
    285 286 287 ############# 288 # Properties 289 ############# 290
    291 - def _setDevice(self, value):
    292 """ 293 Property target used to set the device value. 294 If not C{None}, the value can be either an absolute path or a SCSI id. 295 @raise ValueError: If the value is not valid 296 """ 297 try: 298 if value is None: 299 self._device = None 300 else: 301 if os.path.isabs(value): 302 self._device = value 303 else: 304 self._device = validateScsiId(value) 305 except ValueError: 306 raise ValueError("Device must either be an absolute path or a valid SCSI id.")
    307
    308 - def _getDevice(self):
    309 """ 310 Property target used to get the device value. 311 """ 312 return self._device
    313
    314 - def _setBoundaries(self, value):
    315 """ 316 Property target used to set the boundaries tuple. 317 If not C{None}, the value must be a tuple of two integers. 318 @raise ValueError: If the tuple values are not integers. 319 @raise IndexError: If the tuple does not contain enough elements. 320 """ 321 if value is None: 322 self._boundaries = None 323 else: 324 self._boundaries = (int(value[0]), int(value[1]))
    325
    326 - def _getBoundaries(self):
    327 """ 328 Property target used to get the boundaries value. 329 """ 330 return self._boundaries
    331
    332 - def _setGraftPoint(self, value):
    333 """ 334 Property target used to set the graft point. 335 The value must be a non-empty string if it is not C{None}. 336 @raise ValueError: If the value is an empty string. 337 """ 338 if value is not None: 339 if len(value) < 1: 340 raise ValueError("The graft point must be a non-empty string.") 341 self._graftPoint = value
    342
    343 - def _getGraftPoint(self):
    344 """ 345 Property target used to get the graft point. 346 """ 347 return self._graftPoint
    348
    349 - def _setUseRockRidge(self, value):
    350 """ 351 Property target used to set the use RockRidge flag. 352 No validations, but we normalize the value to C{True} or C{False}. 353 """ 354 if value: 355 self._useRockRidge = True 356 else: 357 self._useRockRidge = False
    358
    359 - def _getUseRockRidge(self):
    360 """ 361 Property target used to get the use RockRidge flag. 362 """ 363 return self._useRockRidge
    364
    365 - def _setApplicationId(self, value):
    366 """ 367 Property target used to set the application id. 368 The value must be a non-empty string if it is not C{None}. 369 @raise ValueError: If the value is an empty string. 370 """ 371 if value is not None: 372 if len(value) < 1: 373 raise ValueError("The application id must be a non-empty string.") 374 self._applicationId = value
    375
    376 - def _getApplicationId(self):
    377 """ 378 Property target used to get the application id. 379 """ 380 return self._applicationId
    381
    382 - def _setBiblioFile(self, value):
    383 """ 384 Property target used to set the biblio file. 385 The value must be a non-empty string if it is not C{None}. 386 @raise ValueError: If the value is an empty string. 387 """ 388 if value is not None: 389 if len(value) < 1: 390 raise ValueError("The biblio file must be a non-empty string.") 391 self._biblioFile = value
    392
    393 - def _getBiblioFile(self):
    394 """ 395 Property target used to get the biblio file. 396 """ 397 return self._biblioFile
    398
    399 - def _setPublisherId(self, value):
    400 """ 401 Property target used to set the publisher id. 402 The value must be a non-empty string if it is not C{None}. 403 @raise ValueError: If the value is an empty string. 404 """ 405 if value is not None: 406 if len(value) < 1: 407 raise ValueError("The publisher id must be a non-empty string.") 408 self._publisherId = value
    409
    410 - def _getPublisherId(self):
    411 """ 412 Property target used to get the publisher id. 413 """ 414 return self._publisherId
    415
    416 - def _setPreparerId(self, value):
    417 """ 418 Property target used to set the preparer id. 419 The value must be a non-empty string if it is not C{None}. 420 @raise ValueError: If the value is an empty string. 421 """ 422 if value is not None: 423 if len(value) < 1: 424 raise ValueError("The preparer id must be a non-empty string.") 425 self._preparerId = value
    426
    427 - def _getPreparerId(self):
    428 """ 429 Property target used to get the preparer id. 430 """ 431 return self._preparerId
    432
    433 - def _setVolumeId(self, value):
    434 """ 435 Property target used to set the volume id. 436 The value must be a non-empty string if it is not C{None}. 437 @raise ValueError: If the value is an empty string. 438 """ 439 if value is not None: 440 if len(value) < 1: 441 raise ValueError("The volume id must be a non-empty string.") 442 self._volumeId = value
    443
    444 - def _getVolumeId(self):
    445 """ 446 Property target used to get the volume id. 447 """ 448 return self._volumeId
    449 450 device = property(_getDevice, _setDevice, None, "Device that image will be written to (device path or SCSI id).") 451 boundaries = property(_getBoundaries, _setBoundaries, None, "Session boundaries as required by C{mkisofs}.") 452 graftPoint = property(_getGraftPoint, _setGraftPoint, None, "Default image-wide graft point (see L{addEntry} for details).") 453 useRockRidge = property(_getUseRockRidge, _setUseRockRidge, None, "Indicates whether to use RockRidge (default is C{True}).") 454 applicationId = property(_getApplicationId, _setApplicationId, None, "Optionally specifies the ISO header application id value.") 455 biblioFile = property(_getBiblioFile, _setBiblioFile, None, "Optionally specifies the ISO bibliographic file name.") 456 publisherId = property(_getPublisherId, _setPublisherId, None, "Optionally specifies the ISO header publisher id value.") 457 preparerId = property(_getPreparerId, _setPreparerId, None, "Optionally specifies the ISO header preparer id value.") 458 volumeId = property(_getVolumeId, _setVolumeId, None, "Optionally specifies the ISO header volume id value.") 459 460 461 ######################### 462 # General public methods 463 ######################### 464
   def addEntry(self, path, graftPoint=None, override=False, contentsOnly=False):
      """
      Adds an individual file or directory into the ISO image.

      The path must exist and must be a file or a directory.  By default, the
      entry will be placed into the image at the root directory, but this
      behavior can be overridden using the C{graftPoint} parameter or instance
      variable.

      You can use the C{contentsOnly} behavior to revert to the "original"
      C{mkisofs} behavior for adding directories, which is to add only the
      items within the directory, and not the directory itself.

      @note: Things get I{odd} if you try to add a directory to an image that
      will be written to a multisession disc, and the same directory already
      exists in an earlier session on that disc.  Not all of the data gets
      written.  You really wouldn't want to do this anyway, I guess.

      @note: An exception will be thrown if the path has already been added to
      the image, unless the C{override} parameter is set to C{True}.

      @note: The method C{graftPoint} parameter overrides the object-wide
      instance variable.  If neither the method parameter nor the object-wide
      value is set, the path will be written at the image root.  The graft
      point behavior is determined by the value which is in effect I{at the
      time this method is called}, so you I{must} set the object-wide value
      before calling this method for the first time, or your image may not be
      consistent.

      @note: You I{cannot} use the local C{graftPoint} parameter to "turn off"
      an object-wide instance variable by setting it to C{None}.  Python's
      default argument functionality buys us a lot, but it can't make this
      method psychic. :)

      @param path: File or directory to be added to the image
      @type path: String representing a path on disk

      @param graftPoint: Graft point to be used when adding this entry
      @type graftPoint: String representing a graft point path, as described above

      @param override: Override an existing entry with the same path.
      @type override: Boolean true/false

      @param contentsOnly: Add directory contents only (standard C{mkisofs} behavior).
      @type contentsOnly: Boolean true/false

      @raise ValueError: If path is not a file or directory, or does not exist.
      @raise ValueError: If the path has already been added, and override is not set.
      @raise ValueError: If a path cannot be encoded properly.
      """
      path = encodePath(path)
      if not override:
         if path in list(self.entries.keys()):
            raise ValueError("Path has already been added to the image.")
      if os.path.islink(path):
         raise ValueError("Path must not be a link.")
      if os.path.isdir(path):
         if graftPoint is not None:
            if contentsOnly:
               self.entries[path] = graftPoint
            else:
               self.entries[path] = os.path.join(graftPoint, os.path.basename(path))
         elif self.graftPoint is not None:
            if contentsOnly:
               self.entries[path] = self.graftPoint
            else:
               self.entries[path] = os.path.join(self.graftPoint, os.path.basename(path))
         else:
            if contentsOnly:
               self.entries[path] = None
            else:
               self.entries[path] = os.path.basename(path)
      elif os.path.isfile(path):
         if graftPoint is not None:
            self.entries[path] = graftPoint
         elif self.graftPoint is not None:
            self.entries[path] = self.graftPoint
         else:
            self.entries[path] = None
      else:
         raise ValueError("Path must be a file or a directory.")
   def getEstimatedSize(self):
      """
      Returns the estimated size (in bytes) of the ISO image.

      This is implemented via the C{-print-size} option to C{mkisofs}, so it
      might take a bit of time to execute.  However, the result is as accurate
      as we can get, since it takes into account all of the ISO overhead, the
      true cost of directories in the structure, etc, etc.

      @return: Estimated size of the image, in bytes.

      @raise IOError: If there is a problem calling C{mkisofs}.
      @raise ValueError: If there are no filesystem entries in the image
      """
      if len(list(self.entries.keys())) == 0:
         raise ValueError("Image does not contain any entries.")
      return self._getEstimatedSize(self.entries)
   def _getEstimatedSize(self, entries):
      """
      Returns the estimated size (in bytes) for the passed-in entries dictionary.
      @return: Estimated size of the image, in bytes.
      @raise IOError: If there is a problem calling C{mkisofs}.
      """
      args = self._buildSizeArgs(entries)
      command = resolveCommand(MKISOFS_COMMAND)
      (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
      if result != 0:
         raise IOError("Error (%d) executing mkisofs command to estimate size." % result)
      if len(output) != 1:
         raise IOError("Unable to parse mkisofs output.")
      try:
         sectors = float(output[0])
         size = convertSize(sectors, UNIT_SECTORS, UNIT_BYTES)
         return size
      except ValueError:  # narrowed from a bare except; float() and convertSize() raise ValueError
         raise IOError("Unable to parse mkisofs output.")
   def writeImage(self, imagePath):
      """
      Writes this image to disk using the image path.

      @param imagePath: Path to write image out as
      @type imagePath: String representing a path on disk

      @raise IOError: If there is an error writing the image to disk.
      @raise ValueError: If there are no filesystem entries in the image
      @raise ValueError: If a path cannot be encoded properly.
      """
      imagePath = encodePath(imagePath)
      if len(list(self.entries.keys())) == 0:
         raise ValueError("Image does not contain any entries.")
      args = self._buildWriteArgs(self.entries, imagePath)
      command = resolveCommand(MKISOFS_COMMAND)
      (result, output) = executeCommand(command, args, returnOutput=False)
      if result != 0:
         raise IOError("Error (%d) executing mkisofs command to build image." % result)

   #########################################
   # Methods used to build mkisofs commands
   #########################################

   @staticmethod
    611 - def _buildDirEntries(entries):
    612 """ 613 Uses an entries dictionary to build a list of directory locations for use 614 by C{mkisofs}. 615 616 We build a list of entries that can be passed to C{mkisofs}. Each entry is 617 either raw (if no graft point was configured) or in graft-point form as 618 described above (if a graft point was configured). The dictionary keys 619 are the path names, and the values are the graft points, if any. 620 621 @param entries: Dictionary of image entries (i.e. self.entries) 622 623 @return: List of directory locations for use by C{mkisofs} 624 """ 625 dirEntries = [] 626 for key in list(entries.keys()): 627 if entries[key] is None: 628 dirEntries.append(key) 629 else: 630 dirEntries.append("%s/=%s" % (entries[key].strip("/"), key)) 631 return dirEntries
    632
    633 - def _buildGeneralArgs(self):
    634 """ 635 Builds a list of general arguments to be passed to a C{mkisofs} command. 636 637 The various instance variables (C{applicationId}, etc.) are filled into 638 the list of arguments if they are set. 639 By default, we will build a RockRidge disc. If you decide to change 640 this, think hard about whether you know what you're doing. This option 641 is not well-tested. 642 643 @return: List suitable for passing to L{util.executeCommand} as C{args}. 644 """ 645 args = [] 646 if self.applicationId is not None: 647 args.append("-A") 648 args.append(self.applicationId) 649 if self.biblioFile is not None: 650 args.append("-biblio") 651 args.append(self.biblioFile) 652 if self.publisherId is not None: 653 args.append("-publisher") 654 args.append(self.publisherId) 655 if self.preparerId is not None: 656 args.append("-p") 657 args.append(self.preparerId) 658 if self.volumeId is not None: 659 args.append("-V") 660 args.append(self.volumeId) 661 return args
    662
    663 - def _buildSizeArgs(self, entries):
    664 """ 665 Builds a list of arguments to be passed to a C{mkisofs} command. 666 667 The various instance variables (C{applicationId}, etc.) are filled into 668 the list of arguments if they are set. The command will be built to just 669 return size output (a simple count of sectors via the C{-print-size} option), 670 rather than an image file on disk. 671 672 By default, we will build a RockRidge disc. If you decide to change 673 this, think hard about whether you know what you're doing. This option 674 is not well-tested. 675 676 @param entries: Dictionary of image entries (i.e. self.entries) 677 678 @return: List suitable for passing to L{util.executeCommand} as C{args}. 679 """ 680 args = self._buildGeneralArgs() 681 args.append("-print-size") 682 args.append("-graft-points") 683 if self.useRockRidge: 684 args.append("-r") 685 if self.device is not None and self.boundaries is not None: 686 args.append("-C") 687 args.append("%d,%d" % (self.boundaries[0], self.boundaries[1])) 688 args.append("-M") 689 args.append(self.device) 690 args.extend(self._buildDirEntries(entries)) 691 return args
    692
    693 - def _buildWriteArgs(self, entries, imagePath):
    694 """ 695 Builds a list of arguments to be passed to a C{mkisofs} command. 696 697 The various instance variables (C{applicationId}, etc.) are filled into 698 the list of arguments if they are set. The command will be built to write 699 an image to disk. 700 701 By default, we will build a RockRidge disc. If you decide to change 702 this, think hard about whether you know what you're doing. This option 703 is not well-tested. 704 705 @param entries: Dictionary of image entries (i.e. self.entries) 706 707 @param imagePath: Path to write image out as 708 @type imagePath: String representing a path on disk 709 710 @return: List suitable for passing to L{util.executeCommand} as C{args}. 711 """ 712 args = self._buildGeneralArgs() 713 args.append("-graft-points") 714 if self.useRockRidge: 715 args.append("-r") 716 args.append("-o") 717 args.append(imagePath) 718 if self.device is not None and self.boundaries is not None: 719 args.append("-C") 720 args.append("%d,%d" % (self.boundaries[0], self.boundaries[1])) 721 args.append("-M") 722 args.append(self.device) 723 args.extend(self._buildDirEntries(entries)) 724 return args
    725

    CedarBackup3-3.1.6/doc/interface/toc-CedarBackup3.extend.mysql-module.html0000664000175000017500000000413412657665544030042 0ustar pronovicpronovic00000000000000 mysql

    Module mysql


    Classes

    LocalConfig
    MysqlConfig

    Functions

    backupDatabase
    executeAction

    Variables

    MYSQLDUMP_COMMAND
    __package__
    logger

    [hide private] CedarBackup3-3.1.6/doc/interface/CedarBackup3.extend.postgresql-pysrc.html0000664000175000017500000074327112657665546030206 0ustar pronovicpronovic00000000000000 CedarBackup3.extend.postgresql

    Source Code for Module CedarBackup3.extend.postgresql

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2006,2010,2015 Kenneth J. Pronovici. 
     12  # Copyright (c) 2006 Antoine Beaupre. 
     13  # All rights reserved. 
     14  # 
     15  # This program is free software; you can redistribute it and/or 
     16  # modify it under the terms of the GNU General Public License, 
     17  # Version 2, as published by the Free Software Foundation. 
     18  # 
     19  # This program is distributed in the hope that it will be useful, 
     20  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     21  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     22  # 
     23  # Copies of the GNU General Public License are available from 
     24  # the Free Software Foundation website, http://www.gnu.org/. 
     25  # 
     26  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     27  # 
     28  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     29  #            Antoine Beaupre <anarcat@koumbit.org> 
     30  # Language : Python 3 (>= 3.4) 
     31  # Project  : Official Cedar Backup Extensions 
     32  # Purpose  : Provides an extension to back up PostgreSQL databases. 
     33  # 
     34  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     35  # This file was created with a width of 132 characters, and NO tabs. 
     36   
     37  ######################################################################## 
     38  # Module documentation 
     39  ######################################################################## 
     40   
     41  """ 
     42  Provides an extension to back up PostgreSQL databases. 
     43   
     44  This is a Cedar Backup extension used to back up PostgreSQL databases via the 
  45  Cedar Backup command line.  It requires a new configuration section 
     46  <postgresql> and is intended to be run either immediately before or immediately 
     47  after the standard collect action.  Aside from its own configuration, it 
     48  requires the options and collect configuration sections in the standard Cedar 
     49  Backup configuration file. 
     50   
     51  The backup is done via the C{pg_dump} or C{pg_dumpall} commands included with 
     52  the PostgreSQL product.  Output can be compressed using C{gzip} or C{bzip2}. 
     53  Administrators can configure the extension either to back up all databases or 
     54  to back up only specific databases.  The extension assumes that the current 
     55  user has passwordless access to the database since there is no easy way to pass 
     56  a password to the C{pg_dump} client. This can be accomplished using appropriate 
  57  voodoo in the C{pg_hba.conf} file. 
     58   
     59  Note that this code always produces a full backup.  There is currently no 
     60  facility for making incremental backups. 
     61   
  62  You should always make C{/etc/cback3.conf} unreadable to non-root users once you 
     63  place postgresql configuration into it, since postgresql configuration will 
     64  contain information about available PostgreSQL databases and usernames. 
     65   
     66  Use of this extension I{may} expose usernames in the process listing (via 
     67  C{ps}) when the backup is running if the username is specified in the 
     68  configuration. 
     69   
     70  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     71  @author: Antoine Beaupre <anarcat@koumbit.org> 
     72  """ 
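As a concrete illustration of the passwordless-access note above (an assumption about one typical setup, not something taken from this source; check the documentation for your PostgreSQL version and think about the security implications), a C{pg_hba.conf} entry allowing a hypothetical C{backup} user to connect over the local socket without a password might look like:

```
# TYPE  DATABASE  USER    METHOD
local   all       backup  trust
```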
     73   
     74  ######################################################################## 
     75  # Imported modules 
     76  ######################################################################## 
     77   
     78  # System modules 
     79  import os 
     80  import logging 
     81  from gzip import GzipFile 
     82  from bz2 import BZ2File 
     83  from functools import total_ordering 
     84   
     85  # Cedar Backup modules 
     86  from CedarBackup3.xmlutil import createInputDom, addContainerNode, addStringNode, addBooleanNode 
     87  from CedarBackup3.xmlutil import readFirstChild, readString, readStringList, readBoolean 
     88  from CedarBackup3.config import VALID_COMPRESS_MODES 
     89  from CedarBackup3.util import resolveCommand, executeCommand 
     90  from CedarBackup3.util import ObjectTypeList, changeOwnership 
     91   
     92   
     93  ######################################################################## 
     94  # Module-wide constants and variables 
     95  ######################################################################## 
     96   
     97  logger = logging.getLogger("CedarBackup3.log.extend.postgresql") 
     98  POSTGRESQLDUMP_COMMAND = [ "pg_dump", ] 
     99  POSTGRESQLDUMPALL_COMMAND = [ "pg_dumpall", ] 
    
    100 101 102 ######################################################################## 103 # PostgresqlConfig class definition 104 ######################################################################## 105 106 @total_ordering 107 -class PostgresqlConfig(object):
    108 109 """ 110 Class representing PostgreSQL configuration. 111 112 The PostgreSQL configuration information is used for backing up PostgreSQL databases. 113 114 The following restrictions exist on data in this class: 115 116 - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. 117 - The 'all' flag must be 'Y' if no databases are defined. 118 - The 'all' flag must be 'N' if any databases are defined. 119 - Any values in the databases list must be strings. 120 121 @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, user, 122 all, databases 123 """ 124
    125 - def __init__(self, user=None, compressMode=None, all=None, databases=None): # pylint: disable=W0622
    126 """ 127 Constructor for the C{PostgresqlConfig} class. 128 129 @param user: User to execute backup as. 130 @param compressMode: Compress mode for backed-up files. 131 @param all: Indicates whether to back up all databases. 132 @param databases: List of databases to back up. 133 """ 134 self._user = None 135 self._compressMode = None 136 self._all = None 137 self._databases = None 138 self.user = user 139 self.compressMode = compressMode 140 self.all = all 141 self.databases = databases
    142
    143 - def __repr__(self):
144 """ 145 Official string representation for class instance. 146 """ 147 return "PostgresqlConfig(%s, %s, %s, %s)" % (self.user, self.compressMode, self.all, self.databases)
    148
    149 - def __str__(self):
    150 """ 151 Informal string representation for class instance. 152 """ 153 return self.__repr__()
    154
    155 - def __eq__(self, other):
156 """Equals operator, implemented in terms of the original Python 2 compare operator.""" 157 return self.__cmp__(other) == 0
    158
    159 - def __lt__(self, other):
160 """Less-than operator, implemented in terms of the original Python 2 compare operator.""" 161 return self.__cmp__(other) < 0
    162
    163 - def __gt__(self, other):
164 """Greater-than operator, implemented in terms of the original Python 2 compare operator.""" 165 return self.__cmp__(other) > 0
    166
    167 - def __cmp__(self, other):
    168 """ 169 Original Python 2 comparison operator. 170 @param other: Other object to compare to. 171 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 172 """ 173 if other is None: 174 return 1 175 if self.user != other.user: 176 if str(self.user or "") < str(other.user or ""): 177 return -1 178 else: 179 return 1 180 if self.compressMode != other.compressMode: 181 if str(self.compressMode or "") < str(other.compressMode or ""): 182 return -1 183 else: 184 return 1 185 if self.all != other.all: 186 if self.all < other.all: 187 return -1 188 else: 189 return 1 190 if self.databases != other.databases: 191 if self.databases < other.databases: 192 return -1 193 else: 194 return 1 195 return 0
    196
    197 - def _setUser(self, value):
    198 """ 199 Property target used to set the user value. 200 """ 201 if value is not None: 202 if len(value) < 1: 203 raise ValueError("User must be non-empty string.") 204 self._user = value
    205
    206 - def _getUser(self):
    207 """ 208 Property target used to get the user value. 209 """ 210 return self._user
    211
    212 - def _setCompressMode(self, value):
    213 """ 214 Property target used to set the compress mode. 215 If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. 216 @raise ValueError: If the value is not valid. 217 """ 218 if value is not None: 219 if value not in VALID_COMPRESS_MODES: 220 raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) 221 self._compressMode = value
    222
    223 - def _getCompressMode(self):
    224 """ 225 Property target used to get the compress mode. 226 """ 227 return self._compressMode
    228
    229 - def _setAll(self, value):
    230 """ 231 Property target used to set the 'all' flag. 232 No validations, but we normalize the value to C{True} or C{False}. 233 """ 234 if value: 235 self._all = True 236 else: 237 self._all = False
    238
    239 - def _getAll(self):
    240 """ 241 Property target used to get the 'all' flag. 242 """ 243 return self._all
    244
    245 - def _setDatabases(self, value):
    246 """ 247 Property target used to set the databases list. 248 Either the value must be C{None} or each element must be a string. 249 @raise ValueError: If the value is not a string. 250 """ 251 if value is None: 252 self._databases = None 253 else: 254 for database in value: 255 if len(database) < 1: 256 raise ValueError("Each database must be a non-empty string.") 257 try: 258 saved = self._databases 259 self._databases = ObjectTypeList(str, "string") 260 self._databases.extend(value) 261 except Exception as e: 262 self._databases = saved 263 raise e
    264
    265 - def _getDatabases(self):
    266 """ 267 Property target used to get the databases list. 268 """ 269 return self._databases
    270 271 user = property(_getUser, _setUser, None, "User to execute backup as.") 272 compressMode = property(_getCompressMode, _setCompressMode, None, "Compress mode to be used for backed-up files.") 273 all = property(_getAll, _setAll, None, "Indicates whether to back up all databases.") 274 databases = property(_getDatabases, _setDatabases, None, "List of databases to back up.") 275
    276 277 ######################################################################## 278 # LocalConfig class definition 279 ######################################################################## 280 281 @total_ordering 282 -class LocalConfig(object):
    283 284 """ 285 Class representing this extension's configuration document. 286 287 This is not a general-purpose configuration object like the main Cedar 288 Backup configuration object. Instead, it just knows how to parse and emit 289 PostgreSQL-specific configuration values. Third parties who need to read and 290 write configuration related to this extension should access it through the 291 constructor, C{validate} and C{addConfig} methods. 292 293 @note: Lists within this class are "unordered" for equality comparisons. 294 295 @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, 296 postgresql, validate, addConfig 297 """ 298
    299 - def __init__(self, xmlData=None, xmlPath=None, validate=True):
    300 """ 301 Initializes a configuration object. 302 303 If you initialize the object without passing either C{xmlData} or 304 C{xmlPath} then configuration will be empty and will be invalid until it 305 is filled in properly. 306 307 No reference to the original XML data or original path is saved off by 308 this class. Once the data has been parsed (successfully or not) this 309 original information is discarded. 310 311 Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} 312 method will be called (with its default arguments) against configuration 313 after successfully parsing any passed-in XML. Keep in mind that even if 314 C{validate} is C{False}, it might not be possible to parse the passed-in 315 XML document if lower-level validations fail. 316 317 @note: It is strongly suggested that the C{validate} option always be set 318 to C{True} (the default) unless there is a specific need to read in 319 invalid configuration from disk. 320 321 @param xmlData: XML data representing configuration. 322 @type xmlData: String data. 323 324 @param xmlPath: Path to an XML file on disk. 325 @type xmlPath: Absolute path to a file on disk. 326 327 @param validate: Validate the document after parsing it. 328 @type validate: Boolean true/false. 329 330 @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. 331 @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. 332 @raise ValueError: If the parsed configuration document is not valid. 333 """ 334 self._postgresql = None 335 self.postgresql = None 336 if xmlData is not None and xmlPath is not None: 337 raise ValueError("Use either xmlData or xmlPath, but not both.") 338 if xmlData is not None: 339 self._parseXmlData(xmlData) 340 if validate: 341 self.validate() 342 elif xmlPath is not None: 343 with open(xmlPath) as f: 344 xmlData = f.read() 345 self._parseXmlData(xmlData) 346 if validate: 347 self.validate()
    348
    349 - def __repr__(self):
    350 """ 351 Official string representation for class instance. 352 """ 353 return "LocalConfig(%s)" % (self.postgresql)
    354
    355 - def __str__(self):
    356 """ 357 Informal string representation for class instance. 358 """ 359 return self.__repr__()
    360
    361 - def __eq__(self, other):
362 """Equals operator, implemented in terms of the original Python 2 compare operator.""" 363 return self.__cmp__(other) == 0
    364
    365 - def __lt__(self, other):
366 """Less-than operator, implemented in terms of the original Python 2 compare operator.""" 367 return self.__cmp__(other) < 0
    368
    369 - def __gt__(self, other):
370 """Greater-than operator, implemented in terms of the original Python 2 compare operator.""" 371 return self.__cmp__(other) > 0
    372
    373 - def __cmp__(self, other):
    374 """ 375 Original Python 2 comparison operator. 376 Lists within this class are "unordered" for equality comparisons. 377 @param other: Other object to compare to. 378 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 379 """ 380 if other is None: 381 return 1 382 if self.postgresql != other.postgresql: 383 if self.postgresql < other.postgresql: 384 return -1 385 else: 386 return 1 387 return 0
    388
    389 - def _setPostgresql(self, value):
    390 """ 391 Property target used to set the postgresql configuration value. 392 If not C{None}, the value must be a C{PostgresqlConfig} object. 393 @raise ValueError: If the value is not a C{PostgresqlConfig} 394 """ 395 if value is None: 396 self._postgresql = None 397 else: 398 if not isinstance(value, PostgresqlConfig): 399 raise ValueError("Value must be a C{PostgresqlConfig} object.") 400 self._postgresql = value
    401
    402 - def _getPostgresql(self):
    403 """ 404 Property target used to get the postgresql configuration value. 405 """ 406 return self._postgresql
    407 408 postgresql = property(_getPostgresql, _setPostgresql, None, "Postgresql configuration in terms of a C{PostgresqlConfig} object.") 409
    410 - def validate(self):
    411 """ 412 Validates configuration represented by the object. 413 414 The compress mode must be filled in. Then, if the 'all' flag 415 I{is} set, no databases are allowed, and if the 'all' flag is 416 I{not} set, at least one database is required. 417 418 @raise ValueError: If one of the validations fails. 419 """ 420 if self.postgresql is None: 421 raise ValueError("PostgreSQL section is required.") 422 if self.postgresql.compressMode is None: 423 raise ValueError("Compress mode value is required.") 424 if self.postgresql.all: 425 if self.postgresql.databases is not None and self.postgresql.databases != []: 426 raise ValueError("Databases cannot be specified if 'all' flag is set.") 427 else: 428 if self.postgresql.databases is None or len(self.postgresql.databases) < 1: 429 raise ValueError("At least one PostgreSQL database must be indicated if 'all' flag is not set.")
    430
    431 - def addConfig(self, xmlDom, parentNode):
    432 """ 433 Adds a <postgresql> configuration section as the next child of a parent. 434 435 Third parties should use this function to write configuration related to 436 this extension. 437 438 We add the following fields to the document:: 439 440 user //cb_config/postgresql/user 441 compressMode //cb_config/postgresql/compress_mode 442 all //cb_config/postgresql/all 443 444 We also add groups of the following items, one list element per 445 item:: 446 447 database //cb_config/postgresql/database 448 449 @param xmlDom: DOM tree as from C{impl.createDocument()}. 450 @param parentNode: Parent that the section should be appended to. 451 """ 452 if self.postgresql is not None: 453 sectionNode = addContainerNode(xmlDom, parentNode, "postgresql") 454 addStringNode(xmlDom, sectionNode, "user", self.postgresql.user) 455 addStringNode(xmlDom, sectionNode, "compress_mode", self.postgresql.compressMode) 456 addBooleanNode(xmlDom, sectionNode, "all", self.postgresql.all) 457 if self.postgresql.databases is not None: 458 for database in self.postgresql.databases: 459 addStringNode(xmlDom, sectionNode, "database", database)
    460
    461 - def _parseXmlData(self, xmlData):
    462 """ 463 Internal method to parse an XML string into the object. 464 465 This method parses the XML document into a DOM tree (C{xmlDom}) and then 466 calls a static method to parse the postgresql configuration section. 467 468 @param xmlData: XML data to be parsed 469 @type xmlData: String data 470 471 @raise ValueError: If the XML cannot be successfully parsed. 472 """ 473 (xmlDom, parentNode) = createInputDom(xmlData) 474 self._postgresql = LocalConfig._parsePostgresql(parentNode)
    475 476 @staticmethod
    477 - def _parsePostgresql(parent):
478 """ 479 Parses a postgresql configuration section. 480 481 We read the following fields:: 482 483 user //cb_config/postgresql/user 484 compressMode //cb_config/postgresql/compress_mode 485 all //cb_config/postgresql/all 486 487 We also read groups of the following items, one list element per 488 item:: 489 490 databases //cb_config/postgresql/database 491 492 @param parent: Parent node to search beneath. 493 494 @return: C{PostgresqlConfig} object or C{None} if the section does not exist. 495 @raise ValueError: If some filled-in value is invalid. 496 """ 497 postgresql = None 498 section = readFirstChild(parent, "postgresql") 499 if section is not None: 500 postgresql = PostgresqlConfig() 501 postgresql.user = readString(section, "user") 502 postgresql.compressMode = readString(section, "compress_mode") 503 postgresql.all = readBoolean(section, "all") 504 postgresql.databases = readStringList(section, "database") 505 return postgresql
    506
    507 508 ######################################################################## 509 # Public functions 510 ######################################################################## 511 512 ########################### 513 # executeAction() function 514 ########################### 515 516 -def executeAction(configPath, options, config):
    517 """ 518 Executes the PostgreSQL backup action. 519 520 @param configPath: Path to configuration file on disk. 521 @type configPath: String representing a path on disk. 522 523 @param options: Program command-line options. 524 @type options: Options object. 525 526 @param config: Program configuration. 527 @type config: Config object. 528 529 @raise ValueError: Under many generic error conditions 530 @raise IOError: If a backup could not be written for some reason. 531 """ 532 logger.debug("Executing PostgreSQL extended action.") 533 if config.options is None or config.collect is None: 534 raise ValueError("Cedar Backup configuration is not properly filled in.") 535 local = LocalConfig(xmlPath=configPath) 536 if local.postgresql.all: 537 logger.info("Backing up all databases.") 538 _backupDatabase(config.collect.targetDir, local.postgresql.compressMode, local.postgresql.user, 539 config.options.backupUser, config.options.backupGroup, None) 540 if local.postgresql.databases is not None and local.postgresql.databases != []: 541 logger.debug("Backing up %d individual databases.", len(local.postgresql.databases)) 542 for database in local.postgresql.databases: 543 logger.info("Backing up database [%s].", database) 544 _backupDatabase(config.collect.targetDir, local.postgresql.compressMode, local.postgresql.user, 545 config.options.backupUser, config.options.backupGroup, database) 546 logger.info("Executed the PostgreSQL extended action successfully.")
    547
    548 -def _backupDatabase(targetDir, compressMode, user, backupUser, backupGroup, database=None):
    549 """ 550 Backs up an individual PostgreSQL database, or all databases. 551 552 This internal method wraps the public method and adds some functionality, 553 like figuring out a filename, etc. 554 555 @param targetDir: Directory into which backups should be written. 556 @param compressMode: Compress mode to be used for backed-up files. 557 @param user: User to use for connecting to the database. 558 @param backupUser: User to own resulting file. 559 @param backupGroup: Group to own resulting file. 560 @param database: Name of database, or C{None} for all databases. 561 562 @return: Name of the generated backup file. 563 564 @raise ValueError: If some value is missing or invalid. 565 @raise IOError: If there is a problem executing the PostgreSQL dump. 566 """ 567 (outputFile, filename) = _getOutputFile(targetDir, database, compressMode) 568 with outputFile: 569 backupDatabase(user, outputFile, database) 570 if not os.path.exists(filename): 571 raise IOError("Dump file [%s] does not seem to exist after backup completed." % filename) 572 changeOwnership(filename, backupUser, backupGroup)
    573
    574 #pylint: disable=R0204 575 -def _getOutputFile(targetDir, database, compressMode):
576 """ 577 Opens the output file used for saving the PostgreSQL dump. 578 579 The filename is either C{"postgresqldump.txt"} or 580 C{"postgresqldump-<database>.txt"}. The C{".gz"} or C{".bz2"} extension is 581 added if C{compressMode} is C{"gzip"} or C{"bzip2"}, respectively. 582 583 @param targetDir: Target directory to write file in. 584 @param database: Name of the database (if any) 585 @param compressMode: Compress mode to be used for backed-up files. 586 587 @return: Tuple of (Output file object, filename), file opened in binary mode for use with executeCommand() 588 """ 589 if database is None: 590 filename = os.path.join(targetDir, "postgresqldump.txt") 591 else: 592 filename = os.path.join(targetDir, "postgresqldump-%s.txt" % database) 593 if compressMode == "gzip": 594 filename = "%s.gz" % filename 595 outputFile = GzipFile(filename, "wb") 596 elif compressMode == "bzip2": 597 filename = "%s.bz2" % filename 598 outputFile = BZ2File(filename, "wb") 599 else: 600 outputFile = open(filename, "wb") 601 logger.debug("PostgreSQL dump file will be [%s].", filename) 602 return (outputFile, filename)
    603
    604 605 ############################ 606 # backupDatabase() function 607 ############################ 608 609 -def backupDatabase(user, backupFile, database=None):
610 """ 611 Backs up an individual PostgreSQL database, or all databases. 612 613 This function backs up either a named local PostgreSQL database or all local 614 PostgreSQL databases, using the passed-in user for connectivity. 615 This is I{always} a full backup. There is no facility for incremental 616 backups. 617 618 The backup data will be written into the passed-in backup file. Normally, 619 this would be an object as returned from C{open()}, but it is possible to 620 use something like a C{GzipFile} to write compressed output. The caller is 621 responsible for closing the passed-in backup file. 622 623 @note: Typically, you would use the C{root} user to back up all databases. 624 625 @param user: User to use for connecting to the database. 626 @type user: String representing PostgreSQL username. 627 628 @param backupFile: File used for writing backup. 629 @type backupFile: Python file object as from C{open()}. 630 631 @param database: Name of the database to be backed up. 632 @type database: String representing database name, or C{None} for all databases. 633 634 @raise ValueError: If some value is missing or invalid. 635 @raise IOError: If there is a problem executing the PostgreSQL dump. 636 """ 637 args = [] 638 if user is not None: 639 args.append('-U') 640 args.append(user) 641 642 if database is None: 643 command = resolveCommand(POSTGRESQLDUMPALL_COMMAND) 644 else: 645 command = resolveCommand(POSTGRESQLDUMP_COMMAND) 646 args.append(database) 647 648 result = executeCommand(command, args, returnOutput=False, ignoreStderr=True, doNotLog=True, outputFile=backupFile)[0] 649 if result != 0: 650 if database is None: 651 raise IOError("Error [%d] executing PostgreSQL database dump for all databases." % result) 652 else: 653 raise IOError("Error [%d] executing PostgreSQL database dump for database [%s]." % (result, database))
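The command construction in C{backupDatabase} can be distilled into a small standalone sketch (hypothetical C{buildDumpCommand} helper; the real code resolves the executable through C{resolveCommand} and runs it via C{executeCommand} with stdout redirected into the backup file):

```python
def buildDumpCommand(user, database=None):
    # Mirrors the argument handling above: an optional -U <user>,
    # then pg_dumpall when no database is named, or pg_dump <database>
    # for a single database.
    args = []
    if user is not None:
        args.extend(["-U", user])
    if database is None:
        return ["pg_dumpall"] + args
    return ["pg_dump"] + args + [database]
```

With output redirected into the open backup file, this is roughly equivalent to running `pg_dump -U backup mydb > dumpfile` at the shell.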
    654

    CedarBackup3-3.1.6/doc/interface/toc-everything.html0000664000175000017500000021621012657665544024062 0ustar pronovicpronovic00000000000000 Everything

    Everything


    All Classes

    CedarBackup3.cli.Options
    CedarBackup3.config.ActionDependencies
    CedarBackup3.config.ActionHook
    CedarBackup3.config.BlankBehavior
    CedarBackup3.config.ByteQuantity
    CedarBackup3.config.CollectConfig
    CedarBackup3.config.CollectDir
    CedarBackup3.config.CollectFile
    CedarBackup3.config.CommandOverride
    CedarBackup3.config.Config
    CedarBackup3.config.ExtendedAction
    CedarBackup3.config.ExtensionsConfig
    CedarBackup3.config.LocalPeer
    CedarBackup3.config.OptionsConfig
    CedarBackup3.config.PeersConfig
    CedarBackup3.config.PostActionHook
    CedarBackup3.config.PreActionHook
    CedarBackup3.config.PurgeConfig
    CedarBackup3.config.PurgeDir
    CedarBackup3.config.ReferenceConfig
    CedarBackup3.config.RemotePeer
    CedarBackup3.config.StageConfig
    CedarBackup3.config.StoreConfig
    CedarBackup3.extend.amazons3.AmazonS3Config
    CedarBackup3.extend.amazons3.LocalConfig
    CedarBackup3.extend.capacity.CapacityConfig
    CedarBackup3.extend.capacity.LocalConfig
    CedarBackup3.extend.capacity.PercentageQuantity
    CedarBackup3.extend.encrypt.EncryptConfig
    CedarBackup3.extend.encrypt.LocalConfig
    CedarBackup3.extend.mbox.LocalConfig
    CedarBackup3.extend.mbox.MboxConfig
    CedarBackup3.extend.mbox.MboxDir
    CedarBackup3.extend.mbox.MboxFile
    CedarBackup3.extend.mysql.LocalConfig
    CedarBackup3.extend.mysql.MysqlConfig
    CedarBackup3.extend.postgresql.LocalConfig
    CedarBackup3.extend.postgresql.PostgresqlConfig
    CedarBackup3.extend.split.LocalConfig
    CedarBackup3.extend.split.SplitConfig
    CedarBackup3.extend.subversion.BDBRepository
    CedarBackup3.extend.subversion.FSFSRepository
    CedarBackup3.extend.subversion.LocalConfig
    CedarBackup3.extend.subversion.Repository
    CedarBackup3.extend.subversion.RepositoryDir
    CedarBackup3.extend.subversion.SubversionConfig
    CedarBackup3.filesystem.BackupFileList
    CedarBackup3.filesystem.FilesystemList
    CedarBackup3.filesystem.PurgeItemList
    CedarBackup3.filesystem.SpanItem
    CedarBackup3.peer.LocalPeer
    CedarBackup3.peer.RemotePeer
    CedarBackup3.tools.amazons3.Options
    CedarBackup3.tools.span.SpanOptions
    CedarBackup3.util.AbsolutePathList
    CedarBackup3.util.Diagnostics
    CedarBackup3.util.DirectedGraph
    CedarBackup3.util.ObjectTypeList
    CedarBackup3.util.PathResolverSingleton
    CedarBackup3.util.Pipe
    CedarBackup3.util.RegexList
    CedarBackup3.util.RegexMatchList
    CedarBackup3.util.RestrictedContentList
    CedarBackup3.util.UnorderedList
    CedarBackup3.writers.cdwriter.CdWriter
    CedarBackup3.writers.cdwriter.MediaCapacity
    CedarBackup3.writers.cdwriter.MediaDefinition
    CedarBackup3.writers.dvdwriter.DvdWriter
    CedarBackup3.writers.dvdwriter.MediaCapacity
    CedarBackup3.writers.dvdwriter.MediaDefinition
    CedarBackup3.writers.util.IsoImage
    CedarBackup3.xmlutil.Serializer

    All Functions

    CedarBackup3.actions.collect.executeCollect
    CedarBackup3.actions.initialize.executeInitialize
    CedarBackup3.actions.purge.executePurge
    CedarBackup3.actions.rebuild.executeRebuild
    CedarBackup3.actions.stage.executeStage
    CedarBackup3.actions.store.consistencyCheck
    CedarBackup3.actions.store.executeStore
    CedarBackup3.actions.store.writeImage
    CedarBackup3.actions.store.writeImageBlankSafe
    CedarBackup3.actions.store.writeStoreIndicator
    CedarBackup3.actions.util.buildMediaLabel
    CedarBackup3.actions.util.checkMediaState
    CedarBackup3.actions.util.createWriter
    CedarBackup3.actions.util.findDailyDirs
    CedarBackup3.actions.util.getBackupFiles
    CedarBackup3.actions.util.initializeMediaState
    CedarBackup3.actions.util.writeIndicatorFile
    CedarBackup3.actions.validate.executeValidate
    CedarBackup3.cli.cli
    CedarBackup3.cli.setupLogging
    CedarBackup3.cli.setupPathResolver
    CedarBackup3.config.addByteQuantityNode
    CedarBackup3.config.readByteQuantity
    CedarBackup3.customize.customizeOverrides
    CedarBackup3.extend.amazons3.executeAction
    CedarBackup3.extend.capacity.executeAction
    CedarBackup3.extend.encrypt.executeAction
    CedarBackup3.extend.mbox.executeAction
    CedarBackup3.extend.mysql.backupDatabase
    CedarBackup3.extend.mysql.executeAction
    CedarBackup3.extend.postgresql.backupDatabase
    CedarBackup3.extend.postgresql.executeAction
    CedarBackup3.extend.split.executeAction
    CedarBackup3.extend.subversion.backupBDBRepository
    CedarBackup3.extend.subversion.backupFSFSRepository
    CedarBackup3.extend.subversion.backupRepository
    CedarBackup3.extend.subversion.executeAction
    CedarBackup3.extend.subversion.getYoungestRevision
    CedarBackup3.extend.sysinfo.executeAction
    CedarBackup3.filesystem.compareContents
    CedarBackup3.filesystem.compareDigestMaps
    CedarBackup3.filesystem.normalizeDir
    CedarBackup3.knapsack.alternateFit
    CedarBackup3.knapsack.bestFit
    CedarBackup3.knapsack.firstFit
    CedarBackup3.knapsack.worstFit
    CedarBackup3.testutil.availableLocales
    CedarBackup3.testutil.buildPath
    CedarBackup3.testutil.captureOutput
    CedarBackup3.testutil.changeFileAge
    CedarBackup3.testutil.commandAvailable
    CedarBackup3.testutil.extractTar
    CedarBackup3.testutil.failUnlessAssignRaises
    CedarBackup3.testutil.findResources
    CedarBackup3.testutil.getLogin
    CedarBackup3.testutil.getMaskAsMode
    CedarBackup3.testutil.platformDebian
    CedarBackup3.testutil.platformMacOsX
    CedarBackup3.testutil.randomFilename
    CedarBackup3.testutil.removedir
    CedarBackup3.testutil.runningAsRoot
    CedarBackup3.testutil.setupDebugLogger
    CedarBackup3.testutil.setupOverrides
    CedarBackup3.tools.amazons3.cli
    CedarBackup3.tools.span.cli
    CedarBackup3.util.buildNormalizedPath
    CedarBackup3.util.calculateFileAge
    CedarBackup3.util.changeOwnership
    CedarBackup3.util.checkUnique
    CedarBackup3.util.convertSize
    CedarBackup3.util.dereferenceLink
    CedarBackup3.util.deriveDayOfWeek
    CedarBackup3.util.deviceMounted
    CedarBackup3.util.displayBytes
    CedarBackup3.util.encodePath
    CedarBackup3.util.executeCommand
    CedarBackup3.util.getFunctionReference
    CedarBackup3.util.getUidGid
    CedarBackup3.util.isRunningAsRoot
    CedarBackup3.util.isStartOfWeek
    CedarBackup3.util.mount
    CedarBackup3.util.nullDevice
    CedarBackup3.util.parseCommaSeparatedString
    CedarBackup3.util.removeKeys
    CedarBackup3.util.resolveCommand
    CedarBackup3.util.sanitizeEnvironment
    CedarBackup3.util.sortDict
    CedarBackup3.util.splitCommandLine
    CedarBackup3.util.unmount
    CedarBackup3.writers.util.readMediaLabel
    CedarBackup3.writers.util.validateDevice
    CedarBackup3.writers.util.validateDriveSpeed
    CedarBackup3.writers.util.validateScsiId
    CedarBackup3.xmlutil.addBooleanNode
    CedarBackup3.xmlutil.addContainerNode
    CedarBackup3.xmlutil.addIntegerNode
    CedarBackup3.xmlutil.addLongNode
    CedarBackup3.xmlutil.addStringNode
    CedarBackup3.xmlutil.createInputDom
    CedarBackup3.xmlutil.createOutputDom
    CedarBackup3.xmlutil.isElement
    CedarBackup3.xmlutil.readBoolean
    CedarBackup3.xmlutil.readChildren
    CedarBackup3.xmlutil.readFirstChild
    CedarBackup3.xmlutil.readFloat
    CedarBackup3.xmlutil.readInteger
    CedarBackup3.xmlutil.readLong
    CedarBackup3.xmlutil.readString
    CedarBackup3.xmlutil.readStringList
    CedarBackup3.xmlutil.serializeDom

    All Variables

    CedarBackup3.action.__package__
    CedarBackup3.actions.collect.__package__
    CedarBackup3.actions.collect.logger
    CedarBackup3.actions.constants.COLLECT_INDICATOR
    CedarBackup3.actions.constants.DIGEST_EXTENSION
    CedarBackup3.actions.constants.DIR_TIME_FORMAT
    CedarBackup3.actions.constants.INDICATOR_PATTERN
    CedarBackup3.actions.constants.STAGE_INDICATOR
    CedarBackup3.actions.constants.STORE_INDICATOR
    CedarBackup3.actions.constants.__package__
    CedarBackup3.actions.initialize.__package__
    CedarBackup3.actions.initialize.logger
    CedarBackup3.actions.purge.__package__
    CedarBackup3.actions.purge.logger
    CedarBackup3.actions.rebuild.__package__
    CedarBackup3.actions.rebuild.logger
    CedarBackup3.actions.stage.__package__
    CedarBackup3.actions.stage.logger
    CedarBackup3.actions.store.__package__
    CedarBackup3.actions.store.logger
    CedarBackup3.actions.util.MEDIA_LABEL_PREFIX
    CedarBackup3.actions.util.__package__
    CedarBackup3.actions.util.logger
    CedarBackup3.actions.validate.__package__
    CedarBackup3.actions.validate.logger
    CedarBackup3.cli.COLLECT_INDEX
    CedarBackup3.cli.COMBINE_ACTIONS
    CedarBackup3.cli.DATE_FORMAT
    CedarBackup3.cli.DEFAULT_CONFIG
    CedarBackup3.cli.DEFAULT_LOGFILE
    CedarBackup3.cli.DEFAULT_MODE
    CedarBackup3.cli.DEFAULT_OWNERSHIP
    CedarBackup3.cli.DISK_LOG_FORMAT
    CedarBackup3.cli.DISK_OUTPUT_FORMAT
    CedarBackup3.cli.INITIALIZE_INDEX
    CedarBackup3.cli.LONG_SWITCHES
    CedarBackup3.cli.NONCOMBINE_ACTIONS
    CedarBackup3.cli.PURGE_INDEX
    CedarBackup3.cli.REBUILD_INDEX
    CedarBackup3.cli.SCREEN_LOG_FORMAT
    CedarBackup3.cli.SCREEN_LOG_STREAM
    CedarBackup3.cli.SHORT_SWITCHES
    CedarBackup3.cli.STAGE_INDEX
    CedarBackup3.cli.STORE_INDEX
    CedarBackup3.cli.VALIDATE_INDEX
    CedarBackup3.cli.VALID_ACTIONS
    CedarBackup3.cli.__package__
    CedarBackup3.cli.logger
    CedarBackup3.config.ACTION_NAME_REGEX
    CedarBackup3.config.DEFAULT_DEVICE_TYPE
    CedarBackup3.config.DEFAULT_MEDIA_TYPE
    CedarBackup3.config.REWRITABLE_MEDIA_TYPES
    CedarBackup3.config.VALID_ARCHIVE_MODES
    CedarBackup3.config.VALID_BLANK_MODES
    CedarBackup3.config.VALID_BYTE_UNITS
    CedarBackup3.config.VALID_CD_MEDIA_TYPES
    CedarBackup3.config.VALID_COLLECT_MODES
    CedarBackup3.config.VALID_COMPRESS_MODES
    CedarBackup3.config.VALID_DEVICE_TYPES
    CedarBackup3.config.VALID_DVD_MEDIA_TYPES
    CedarBackup3.config.VALID_FAILURE_MODES
    CedarBackup3.config.VALID_MEDIA_TYPES
    CedarBackup3.config.VALID_ORDER_MODES
    CedarBackup3.config.__package__
    CedarBackup3.config.logger
    CedarBackup3.customize.DEBIAN_CDRECORD
    CedarBackup3.customize.DEBIAN_MKISOFS
    CedarBackup3.customize.PLATFORM
    CedarBackup3.customize.__package__
    CedarBackup3.customize.logger
    CedarBackup3.extend.amazons3.AWS_COMMAND
    CedarBackup3.extend.amazons3.STORE_INDICATOR
    CedarBackup3.extend.amazons3.SU_COMMAND
    CedarBackup3.extend.amazons3.__package__
    CedarBackup3.extend.amazons3.logger
    CedarBackup3.extend.capacity.__package__
    CedarBackup3.extend.capacity.logger
    CedarBackup3.extend.encrypt.ENCRYPT_INDICATOR
    CedarBackup3.extend.encrypt.GPG_COMMAND
    CedarBackup3.extend.encrypt.VALID_ENCRYPT_MODES
    CedarBackup3.extend.encrypt.__package__
    CedarBackup3.extend.encrypt.logger
    CedarBackup3.extend.mbox.GREPMAIL_COMMAND
    CedarBackup3.extend.mbox.REVISION_PATH_EXTENSION
    CedarBackup3.extend.mbox.__package__
    CedarBackup3.extend.mbox.logger
    CedarBackup3.extend.mysql.MYSQLDUMP_COMMAND
    CedarBackup3.extend.mysql.__package__
    CedarBackup3.extend.mysql.logger
    CedarBackup3.extend.postgresql.POSTGRESQLDUMPALL_COMMAND
    CedarBackup3.extend.postgresql.POSTGRESQLDUMP_COMMAND
    CedarBackup3.extend.postgresql.__package__
    CedarBackup3.extend.postgresql.logger
    CedarBackup3.extend.split.SPLIT_COMMAND
    CedarBackup3.extend.split.SPLIT_INDICATOR
    CedarBackup3.extend.split.__package__
    CedarBackup3.extend.split.logger
    CedarBackup3.extend.subversion.REVISION_PATH_EXTENSION
    CedarBackup3.extend.subversion.SVNADMIN_COMMAND
    CedarBackup3.extend.subversion.SVNLOOK_COMMAND
    CedarBackup3.extend.subversion.__package__
    CedarBackup3.extend.subversion.logger
    CedarBackup3.extend.sysinfo.DPKG_COMMAND
    CedarBackup3.extend.sysinfo.DPKG_PATH
    CedarBackup3.extend.sysinfo.FDISK_COMMAND
    CedarBackup3.extend.sysinfo.FDISK_PATH
    CedarBackup3.extend.sysinfo.LS_COMMAND
    CedarBackup3.extend.sysinfo.__package__
    CedarBackup3.extend.sysinfo.logger
    CedarBackup3.filesystem.__package__
    CedarBackup3.filesystem.logger
    CedarBackup3.image.__package__
    CedarBackup3.knapsack.__package__
    CedarBackup3.peer.DEF_CBACK_COMMAND
    CedarBackup3.peer.DEF_COLLECT_INDICATOR
    CedarBackup3.peer.DEF_RCP_COMMAND
    CedarBackup3.peer.DEF_RSH_COMMAND
    CedarBackup3.peer.DEF_STAGE_INDICATOR
    CedarBackup3.peer.SU_COMMAND
    CedarBackup3.peer.__package__
    CedarBackup3.peer.logger
    CedarBackup3.release.AUTHOR
    CedarBackup3.release.COPYRIGHT
    CedarBackup3.release.DATE
    CedarBackup3.release.EMAIL
    CedarBackup3.release.URL
    CedarBackup3.release.VERSION
    CedarBackup3.release.__package__
    CedarBackup3.testutil.__package__
    CedarBackup3.tools.amazons3.AWS_COMMAND
    CedarBackup3.tools.amazons3.LONG_SWITCHES
    CedarBackup3.tools.amazons3.SHORT_SWITCHES
    CedarBackup3.tools.amazons3.logger
    CedarBackup3.tools.span.__package__
    CedarBackup3.tools.span.logger
    CedarBackup3.util.BYTES_PER_GBYTE
    CedarBackup3.util.BYTES_PER_KBYTE
    CedarBackup3.util.BYTES_PER_MBYTE
    CedarBackup3.util.BYTES_PER_SECTOR
    CedarBackup3.util.DEFAULT_LANGUAGE
    CedarBackup3.util.HOURS_PER_DAY
    CedarBackup3.util.ISO_SECTOR_SIZE
    CedarBackup3.util.KBYTES_PER_MBYTE
    CedarBackup3.util.LANG_VAR
    CedarBackup3.util.LOCALE_VARS
    CedarBackup3.util.MBYTES_PER_GBYTE
    CedarBackup3.util.MINUTES_PER_HOUR
    CedarBackup3.util.MOUNT_COMMAND
    CedarBackup3.util.MTAB_FILE
    CedarBackup3.util.SECONDS_PER_DAY
    CedarBackup3.util.SECONDS_PER_MINUTE
    CedarBackup3.util.UMOUNT_COMMAND
    CedarBackup3.util.UNIT_BYTES
    CedarBackup3.util.UNIT_GBYTES
    CedarBackup3.util.UNIT_KBYTES
    CedarBackup3.util.UNIT_MBYTES
    CedarBackup3.util.UNIT_SECTORS
    CedarBackup3.util.__package__
    CedarBackup3.util.logger
    CedarBackup3.util.outputLogger
    CedarBackup3.writer.__package__
    CedarBackup3.writers.cdwriter.CDRECORD_COMMAND
    CedarBackup3.writers.cdwriter.EJECT_COMMAND
    CedarBackup3.writers.cdwriter.MEDIA_CDRW_74
    CedarBackup3.writers.cdwriter.MEDIA_CDRW_80
    CedarBackup3.writers.cdwriter.MEDIA_CDR_74
    CedarBackup3.writers.cdwriter.MEDIA_CDR_80
    CedarBackup3.writers.cdwriter.MKISOFS_COMMAND
    CedarBackup3.writers.cdwriter.__package__
    CedarBackup3.writers.cdwriter.logger
    CedarBackup3.writers.dvdwriter.EJECT_COMMAND
    CedarBackup3.writers.dvdwriter.GROWISOFS_COMMAND
    CedarBackup3.writers.dvdwriter.MEDIA_DVDPLUSR
    CedarBackup3.writers.dvdwriter.MEDIA_DVDPLUSRW
    CedarBackup3.writers.dvdwriter.__package__
    CedarBackup3.writers.dvdwriter.logger
    CedarBackup3.writers.util.MKISOFS_COMMAND
    CedarBackup3.writers.util.VOLNAME_COMMAND
    CedarBackup3.writers.util.__package__
    CedarBackup3.writers.util.logger
    CedarBackup3.xmlutil.FALSE_BOOLEAN_VALUES
    CedarBackup3.xmlutil.TRUE_BOOLEAN_VALUES
    CedarBackup3.xmlutil.VALID_BOOLEAN_VALUES
    CedarBackup3.xmlutil.__package__
    CedarBackup3.xmlutil.logger

    CedarBackup3.config.CommandOverride
    Package CedarBackup3 :: Module config :: Class CommandOverride

    Class CommandOverride

    source code

    object --+
             |
            CommandOverride
    

    Class representing a piece of Cedar Backup command override configuration.

    The following restrictions exist on data in this class:

    • The absolute path must be an absolute path

    Note: Lists within this class are "unordered" for equality comparisons.
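    The restrictions above can be sketched as property setters. This is an illustrative sketch of the documented rules only, not the actual CedarBackup3.config implementation:

    ```python
    import os.path

    class CommandOverride:
        """Sketch of the documented restrictions, not the library's own code."""

        def __init__(self, command=None, absolutePath=None):
            self.command = command            # goes through the property setters below
            self.absolutePath = absolutePath

        @property
        def command(self):
            return self._command

        @command.setter
        def command(self, value):
            # The command must be a non-empty string if it is not None
            if value is not None and value == "":
                raise ValueError("The command must be a non-empty string.")
            self._command = value

        @property
        def absolutePath(self):
            return self._absolutePath

        @absolutePath.setter
        def absolutePath(self, value):
            # The path must be absolute, but need not exist on disk at assignment time
            if value is not None and not os.path.isabs(value):
                raise ValueError("Path is not absolute: %s" % value)
            self._absolutePath = value
    ```

    For example, `CommandOverride("mkisofs", "/opt/local/bin/mkisofs")` is accepted, while a relative path such as `"bin/mkisofs"` raises ValueError.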

    Instance Methods
     
    __init__(self, command=None, absolutePath=None)
    Constructor for the CommandOverride class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    __eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    _setCommand(self, value)
    Property target used to set the command.
    source code
     
    _getCommand(self)
    Property target used to get the command.
    source code
     
    _setAbsolutePath(self, value)
    Property target used to set the absolute path.
    source code
     
    _getAbsolutePath(self)
    Property target used to get the absolute path.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      command
    Name of command to be overridden.
      absolutePath
    Absolute path of the overridden command.

    Inherited from object: __class__

    Method Details

    __init__(self, command=None, absolutePath=None)
    (Constructor)

    source code 

    Constructor for the CommandOverride class.

    Parameters:
    • command - Name of command to be overridden.
    • absolutePath - Absolute path of the overridden command.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.
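    The rich-comparison operators documented here are all built on top of this single `__cmp__`. The pattern can be sketched generically (`Version` below is a hypothetical example class, not part of Cedar Backup):

    ```python
    class ComparableMixin:
        """Rich comparisons delegated to a Python 2-style __cmp__ returning -1/0/1."""

        def __cmp__(self, other):
            raise NotImplementedError  # subclasses supply the real comparison

        def __eq__(self, other):
            return self.__cmp__(other) == 0

        def __lt__(self, other):
            return self.__cmp__(other) < 0

        def __gt__(self, other):
            return self.__cmp__(other) > 0

        def __le__(self, other):
            return self.__cmp__(other) <= 0

        def __ge__(self, other):
            return self.__cmp__(other) >= 0

    class Version(ComparableMixin):
        """Hypothetical example class that compares on a single value."""

        def __init__(self, value):
            self.value = value

        def __cmp__(self, other):
            # Standard -1/0/1 idiom: True/False subtract as 1/0
            return (self.value > other.value) - (self.value < other.value)
    ```

    This keeps the ordering logic in one place, which is why the docs for `__eq__`, `__lt__`, and `__gt__` all describe themselves as "implemented in terms of" the original compare operator.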

    _setCommand(self, value)

    source code 

    Property target used to set the command. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setAbsolutePath(self, value)

    source code 

    Property target used to set the absolute path. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    Property Details

    command

    Name of command to be overridden.

    Get Method:
    _getCommand(self) - Property target used to get the command.
    Set Method:
    _setCommand(self, value) - Property target used to set the command.

    absolutePath

    Absolute path of the overridden command.

    Get Method:
    _getAbsolutePath(self) - Property target used to get the absolute path.
    Set Method:
    _setAbsolutePath(self, value) - Property target used to set the absolute path.


    Module Hierarchy

    CedarBackup3.extend.amazons3.LocalConfig
    Package CedarBackup3 :: Package extend :: Module amazons3 :: Class LocalConfig

    Class LocalConfig

    source code

    object --+
             |
            LocalConfig
    

    Class representing this extension's configuration document.

    This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit amazons3-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, validate and addConfig methods.


    Note: Lists within this class are "unordered" for equality comparisons.

    Instance Methods
     
    __init__(self, xmlData=None, xmlPath=None, validate=True)
    Initializes a configuration object.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    __eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    validate(self)
    Validates configuration represented by the object.
    source code
     
    addConfig(self, xmlDom, parentNode)
    Adds an <amazons3> configuration section as the next child of a parent.
    source code
     
    _setAmazonS3(self, value)
    Property target used to set the amazons3 configuration value.
    source code
     
    _getAmazonS3(self)
    Property target used to get the amazons3 configuration value.
    source code
     
    _parseXmlData(self, xmlData)
    Internal method to parse an XML string into the object.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Static Methods
     
    _parseAmazonS3(parent)
    Parses an amazons3 configuration section.
    source code
    Properties
      amazons3
    AmazonS3 configuration in terms of an AmazonS3Config object.

    Inherited from object: __class__

    Method Details

    __init__(self, xmlData=None, xmlPath=None, validate=True)
    (Constructor)

    source code 

    Initializes a configuration object.

    If you initialize the object without passing either xmlData or xmlPath then configuration will be empty and will be invalid until it is filled in properly.

    No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded.

    Unless the validate argument is False, the LocalConfig.validate method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if validate is False, it might not be possible to parse the passed-in XML document if lower-level validations fail.

    Parameters:
    • xmlData (String data.) - XML data representing configuration.
    • xmlPath (Absolute path to a file on disk.) - Path to an XML file on disk.
    • validate (Boolean true/false.) - Validate the document after parsing it.
    Raises:
    • ValueError - If both xmlData and xmlPath are passed-in.
    • ValueError - If the XML data in xmlData or xmlPath cannot be parsed.
    • ValueError - If the parsed configuration document is not valid.
    Overrides: object.__init__

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to read in invalid configuration from disk.

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    validate(self)

    source code 

    Validates configuration represented by the object.

    AmazonS3 configuration must be filled in. Within that, the s3Bucket target must be filled in.

    Raises:
    • ValueError - If one of the validations fails.

    addConfig(self, xmlDom, parentNode)

    source code 

    Adds an <amazons3> configuration section as the next child of a parent.

    Third parties should use this function to write configuration related to this extension.

    We add the following fields to the document:

      warnMidnite                 //cb_config/amazons3/warn_midnite
      s3Bucket                    //cb_config/amazons3/s3_bucket
      encryptCommand              //cb_config/amazons3/encrypt
      fullBackupSizeLimit         //cb_config/amazons3/full_size_limit
      incrementalBackupSizeLimit  //cb_config/amazons3/incr_size_limit
    
    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent that the section should be appended to.

    _setAmazonS3(self, value)

    source code 

    Property target used to set the amazons3 configuration value. If not None, the value must be an AmazonS3Config object.

    Raises:
    • ValueError - If the value is not an AmazonS3Config object.

    _parseXmlData(self, xmlData)

    source code 

    Internal method to parse an XML string into the object.

    This method parses the XML document into a DOM tree (xmlDom) and then calls a static method to parse the amazons3 configuration section.

    Parameters:
    • xmlData (String data) - XML data to be parsed
    Raises:
    • ValueError - If the XML cannot be successfully parsed.

    _parseAmazonS3(parent)
    Static Method

    source code 

    Parses an amazons3 configuration section.

    We read the following individual fields:

      warnMidnite                 //cb_config/amazons3/warn_midnite
      s3Bucket                    //cb_config/amazons3/s3_bucket
      encryptCommand              //cb_config/amazons3/encrypt
      fullBackupSizeLimit         //cb_config/amazons3/full_size_limit
      incrementalBackupSizeLimit  //cb_config/amazons3/incr_size_limit
    
    Parameters:
    • parent - Parent node to search beneath.
    Returns:
    AmazonS3Config object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.
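    The field-to-element mapping above can be illustrated with a self-contained sketch using `xml.dom.minidom`. The real method works on an already-parsed DOM via Cedar Backup's own `xmlutil` helpers and returns an AmazonS3Config object rather than a dict; this is an assumption-free stand-in for the parsing shape only:

    ```python
    import xml.dom.minidom

    def parseAmazonS3(xmlData):
        """Sketch: read the documented <amazons3> fields from a config document."""
        dom = xml.dom.minidom.parseString(xmlData)
        sections = dom.getElementsByTagName("amazons3")
        if not sections:
            return None  # section does not exist, mirroring the documented return value

        def text(tag):
            # Return the text content of the first matching child element, or None
            nodes = sections[0].getElementsByTagName(tag)
            if nodes and nodes[0].firstChild is not None:
                return nodes[0].firstChild.nodeValue.strip()
            return None

        return {
            "warnMidnite": text("warn_midnite"),
            "s3Bucket": text("s3_bucket"),
            "encryptCommand": text("encrypt"),
            "fullBackupSizeLimit": text("full_size_limit"),
            "incrementalBackupSizeLimit": text("incr_size_limit"),
        }

    config = parseAmazonS3("""<cb_config><amazons3>
       <s3_bucket>example-bucket</s3_bucket>
       <full_size_limit>2.5 GB</full_size_limit>
    </amazons3></cb_config>""")
    print(config["s3Bucket"])  # prints "example-bucket"
    ```

    Fields that are absent come back as None, which is consistent with the optional nature of entries like `encrypt` and the size limits.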

    Property Details

    amazons3

    AmazonS3 configuration in terms of an AmazonS3Config object.

    Get Method:
    _getAmazonS3(self) - Property target used to get the amazons3 configuration value.
    Set Method:
    _setAmazonS3(self, value) - Property target used to set the amazons3 configuration value.


    Module dvdwriter


    Classes

    DvdWriter
    MediaCapacity
    MediaDefinition

    Variables

    EJECT_COMMAND
    GROWISOFS_COMMAND
    MEDIA_DVDPLUSR
    MEDIA_DVDPLUSRW
    __package__
    logger

    CedarBackup3.util._Vertex
    Package CedarBackup3 :: Module util :: Class _Vertex

    Class _Vertex

    source code

    object --+
             |
            _Vertex
    

    Represents a vertex (or node) in a directed graph.

    Instance Methods
     
    __init__(self, name)
    Constructor.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Properties

    Inherited from object: __class__

    Method Details

    __init__(self, name)
    (Constructor)

    source code 

    Constructor.

    Parameters:
    • name (String value.) - Name of this graph vertex.
    Overrides: object.__init__

    CedarBackup3.actions.rebuild
    Package CedarBackup3 :: Package actions :: Module rebuild

    Source Code for Module CedarBackup3.actions.rebuild

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2004-2007,2010,2015 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python 3 (>= 3.4) 
     29  # Project  : Cedar Backup, release 3 
     30  # Purpose  : Implements the standard 'rebuild' action. 
     31  # 
     32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     33   
     34  ######################################################################## 
     35  # Module documentation 
     36  ######################################################################## 
     37   
     38  """ 
     39  Implements the standard 'rebuild' action. 
     40  @sort: executeRebuild 
     41  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     42  """ 
     43   
     44   
     45  ######################################################################## 
     46  # Imported modules 
     47  ######################################################################## 
     48   
     49  # System modules 
     50  import sys 
     51  import os 
     52  import logging 
     53  import datetime 
     54   
     55  # Cedar Backup modules 
     56  from CedarBackup3.util import deriveDayOfWeek 
     57  from CedarBackup3.actions.util import checkMediaState 
     58  from CedarBackup3.actions.constants import DIR_TIME_FORMAT, STAGE_INDICATOR 
     59  from CedarBackup3.actions.store import writeImage, writeStoreIndicator, consistencyCheck 
     60   
     61   
     62  ######################################################################## 
     63  # Module-wide constants and variables 
     64  ######################################################################## 
     65   
     66  logger = logging.getLogger("CedarBackup3.log.actions.rebuild") 
     67   
     68   
     69  ######################################################################## 
     70  # Public functions 
     71  ######################################################################## 
     72   
     73  ############################ 
     74  # executeRebuild() function 
     75  ############################ 
     76   
    
    77 -def executeRebuild(configPath, options, config):
    78 """ 79 Executes the rebuild backup action. 80 81 This function exists mainly to recreate a disc that has been "trashed" due 82 to media or hardware problems. Note that the "stage complete" indicator 83 isn't checked for this action. 84 85 Note that the rebuild action and the store action are very similar. The 86 main difference is that while store only stores a single day's staging 87 directory, the rebuild action operates on multiple staging directories. 88 89 @param configPath: Path to configuration file on disk. 90 @type configPath: String representing a path on disk. 91 92 @param options: Program command-line options. 93 @type options: Options object. 94 95 @param config: Program configuration. 96 @type config: Config object. 97 98 @raise ValueError: Under many generic error conditions 99 @raise IOError: If there are problems reading or writing files. 100 """ 101 logger.debug("Executing the 'rebuild' action.") 102 if sys.platform == "darwin": 103 logger.warning("Warning: the rebuild action is not fully supported on Mac OS X.") 104 logger.warning("See the Cedar Backup software manual for further information.") 105 if config.options is None or config.store is None: 106 raise ValueError("Rebuild configuration is not properly filled in.") 107 if config.store.checkMedia: 108 checkMediaState(config.store) # raises exception if media is not initialized 109 stagingDirs = _findRebuildDirs(config) 110 writeImage(config, True, stagingDirs) 111 if config.store.checkData: 112 if sys.platform == "darwin": 113 logger.warning("Warning: consistency check cannot be run successfully on Mac OS X.") 114 logger.warning("See the Cedar Backup software manual for further information.") 115 else: 116 logger.debug("Running consistency check of media.") 117 consistencyCheck(config, stagingDirs) 118 writeStoreIndicator(config, stagingDirs) 119 logger.info("Executed the 'rebuild' action successfully.")
    120 121 122 ######################################################################## 123 # Private utility functions 124 ######################################################################## 125 126 ############################## 127 # _findRebuildDirs() function 128 ############################## 129
def _findRebuildDirs(config):
   """
   Finds the set of directories to be included in a disc rebuild.

   The rebuild action is supposed to recreate the "last week's" disc.  This
   won't always be possible if some of the staging directories are missing.
   However, the general procedure is to look back into the past no further than
   the previous "starting day of week", and then work forward from there trying
   to find all of the staging directories between then and now that still exist
   and have a stage indicator.

   @param config: Config object.

   @return: Correct staging dirs, as a dict mapping directory to date suffix.
   @raise IOError: If we do not find at least one staging directory.
   """
   stagingDirs = {}
   start = deriveDayOfWeek(config.options.startingDay)
   today = datetime.date.today()
   if today.weekday() >= start:
      days = today.weekday() - start + 1
   else:
      days = 7 - (start - today.weekday()) + 1
   for i in range(0, days):
      currentDay = today - datetime.timedelta(days=i)
      dateSuffix = currentDay.strftime(DIR_TIME_FORMAT)
      stageDir = os.path.join(config.store.sourceDir, dateSuffix)
      indicator = os.path.join(stageDir, STAGE_INDICATOR)
      if os.path.isdir(stageDir) and os.path.exists(indicator):
         logger.info("Rebuild process will include stage directory [%s]", stageDir)
         stagingDirs[stageDir] = dateSuffix
   if len(stagingDirs) == 0:
      raise IOError("Unable to find any staging directories for rebuild process.")
   return stagingDirs
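The day-window arithmetic in _findRebuildDirs() is easy to get wrong, so here is a minimal sketch of just that computation, pulled out into a standalone function (rebuildWindowDays is a hypothetical name, not part of Cedar Backup); weekday numbering follows Python's datetime.date.weekday(), Monday=0:

```python
import datetime

def rebuildWindowDays(start, today=None):
    # Number of days to look back: from the previous "starting day
    # of week" (inclusive) through today (inclusive).
    today = today or datetime.date.today()
    if today.weekday() >= start:
        return today.weekday() - start + 1    # starting day falls within this week
    return 7 - (start - today.weekday()) + 1  # starting day was last week

# Wednesday 2016-02-10 with a Monday starting day covers Mon/Tue/Wed:
print(rebuildWindowDays(0, datetime.date(2016, 2, 10)))  # 3
# The same Wednesday with a Friday starting day reaches back into last week:
print(rebuildWindowDays(4, datetime.date(2016, 2, 10)))  # 6
```

The loop then walks backward over that many days, keeping only directories that exist and carry a stage indicator.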

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.filesystem.SpanItem-class.html0000664000175000017500000002265712657665545030362 0ustar pronovicpronovic00000000000000 CedarBackup3.filesystem.SpanItem
    Package CedarBackup3 :: Module filesystem :: Class SpanItem

    Class SpanItem

    source code

    object --+
             |
            SpanItem
    

    Item returned by BackupFileList.generateSpan.

Instance Methods
     
    __init__(self, fileList, size, capacity, utilization)
    Create object.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

Properties

    Inherited from object: __class__

Method Details

    __init__(self, fileList, size, capacity, utilization)
    (Constructor)

    source code 

    Create object.

    Parameters:
    • fileList - List of files
    • size - Size (in bytes) of files
    • capacity - Capacity (in bytes) of the media the files are assigned to
    • utilization - Utilization, as a percentage (0-100)
    Overrides: object.__init__

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.actions.util-pysrc.html0000664000175000017500000042076712657665545027132 0ustar pronovicpronovic00000000000000 CedarBackup3.actions.util
    Package CedarBackup3 :: Package actions :: Module util

    Source Code for Module CedarBackup3.actions.util

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2007,2010,2015 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python 3 (>= 3.4) 
     29  # Project  : Cedar Backup, release 3 
     30  # Purpose  : Implements action-related utilities 
     31  # 
     32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     33   
     34  ######################################################################## 
     35  # Module documentation 
     36  ######################################################################## 
     37   
     38  """ 
     39  Implements action-related utilities 
     40  @sort: findDailyDirs 
     41  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     42  """ 
     43   
     44   
     45  ######################################################################## 
     46  # Imported modules 
     47  ######################################################################## 
     48   
     49  # System modules 
     50  import os 
     51  import time 
     52  import tempfile 
     53  import logging 
     54   
     55  # Cedar Backup modules 
     56  from CedarBackup3.filesystem import FilesystemList 
     57  from CedarBackup3.util import changeOwnership 
     58  from CedarBackup3.util import deviceMounted 
     59  from CedarBackup3.writers.util import readMediaLabel 
     60  from CedarBackup3.writers.cdwriter import CdWriter 
     61  from CedarBackup3.writers.dvdwriter import DvdWriter 
     62  from CedarBackup3.writers.cdwriter import MEDIA_CDR_74, MEDIA_CDR_80, MEDIA_CDRW_74, MEDIA_CDRW_80 
     63  from CedarBackup3.writers.dvdwriter import MEDIA_DVDPLUSR, MEDIA_DVDPLUSRW 
     64  from CedarBackup3.config import DEFAULT_MEDIA_TYPE, DEFAULT_DEVICE_TYPE, REWRITABLE_MEDIA_TYPES 
     65  from CedarBackup3.actions.constants import INDICATOR_PATTERN 
     66   
     67   
     68  ######################################################################## 
     69  # Module-wide constants and variables 
     70  ######################################################################## 
     71   
     72  logger = logging.getLogger("CedarBackup3.log.actions.util") 
     73  MEDIA_LABEL_PREFIX   = "CEDAR BACKUP" 
     74   
     75   
     76  ######################################################################## 
     77  # Public utility functions 
     78  ######################################################################## 
     79   
     80  ########################### 
     81  # findDailyDirs() function 
     82  ########################### 
     83   
    
def findDailyDirs(stagingDir, indicatorFile):
   """
   Returns a list of all daily staging directories that do not contain
   the indicated indicator file.

   @param stagingDir: Configured staging directory (config.targetDir)
   @param indicatorFile: Name of the indicator file to look for

   @return: List of absolute paths to daily staging directories.
   """
   results = FilesystemList()
   yearDirs = FilesystemList()
   yearDirs.excludeFiles = True
   yearDirs.excludeLinks = True
   yearDirs.addDirContents(path=stagingDir, recursive=False, addSelf=False)
   for yearDir in yearDirs:
      monthDirs = FilesystemList()
      monthDirs.excludeFiles = True
      monthDirs.excludeLinks = True
      monthDirs.addDirContents(path=yearDir, recursive=False, addSelf=False)
      for monthDir in monthDirs:
         dailyDirs = FilesystemList()
         dailyDirs.excludeFiles = True
         dailyDirs.excludeLinks = True
         dailyDirs.addDirContents(path=monthDir, recursive=False, addSelf=False)
         for dailyDir in dailyDirs:
            if os.path.exists(os.path.join(dailyDir, indicatorFile)):
               logger.debug("Skipping directory [%s]; contains %s.", dailyDir, indicatorFile)
            else:
               logger.debug("Adding [%s] to list of daily directories.", dailyDir)
               results.append(dailyDir)  # just put it in the list, no fancy operations
   return results
###########################
# createWriter() function
###########################

def createWriter(config):
   """
   Creates a writer object based on current configuration.

   This function creates and returns a writer based on configuration.  This is
   done to abstract action functionality from knowing what kind of writer is in
   use.  Since all writers implement the same interface, there's no need for
   actions to care which one they're working with.

   Currently, the C{cdwriter} and C{dvdwriter} device types are allowed.  An
   exception will be raised if any other device type is used.

   This function also checks to make sure that the device isn't mounted before
   creating a writer object for it.  Experience shows that sometimes if the
   device is mounted, we have problems with the backup.  We may as well do the
   check here first, before instantiating the writer.

   @param config: Config object.

   @return: Writer that can be used to write a directory to some media.

   @raise ValueError: If there is a problem getting the writer.
   @raise IOError: If there is a problem creating the writer object.
   """
   devicePath = config.store.devicePath
   deviceScsiId = config.store.deviceScsiId
   driveSpeed = config.store.driveSpeed
   noEject = config.store.noEject
   refreshMediaDelay = config.store.refreshMediaDelay
   ejectDelay = config.store.ejectDelay
   deviceType = _getDeviceType(config)
   mediaType = _getMediaType(config)
   if deviceMounted(devicePath):
      raise IOError("Device [%s] is currently mounted." % (devicePath))
   if deviceType == "cdwriter":
      return CdWriter(devicePath, deviceScsiId, driveSpeed, mediaType, noEject, refreshMediaDelay, ejectDelay)
   elif deviceType == "dvdwriter":
      return DvdWriter(devicePath, deviceScsiId, driveSpeed, mediaType, noEject, refreshMediaDelay, ejectDelay)
   else:
      raise ValueError("Device type [%s] is invalid." % deviceType)
################################
# writeIndicatorFile() function
################################

def writeIndicatorFile(targetDir, indicatorFile, backupUser, backupGroup):
   """
   Writes an indicator file into a target directory.
   @param targetDir: Target directory in which to write indicator
   @param indicatorFile: Name of the indicator file
   @param backupUser: User that indicator file should be owned by
   @param backupGroup: Group that indicator file should be owned by
   @raise IOError: If there is a problem writing the indicator file
   """
   filename = os.path.join(targetDir, indicatorFile)
   logger.debug("Writing indicator file [%s].", filename)
   try:
      with open(filename, "w") as f:
         f.write("")
      changeOwnership(filename, backupUser, backupGroup)
   except Exception as e:
      logger.error("Error writing [%s]: %s", filename, e)
      raise e
############################
# getBackupFiles() function
############################

def getBackupFiles(targetDir):
   """
   Gets a list of backup files in a target directory.

   Files that match INDICATOR_PATTERN (i.e. C{"cback.store"}, C{"cback.stage"},
   etc.) are assumed to be indicator files and are ignored.

   @param targetDir: Directory to look in

   @return: List of backup files in the directory

   @raise ValueError: If the target directory does not exist
   """
   if not os.path.isdir(targetDir):
      raise ValueError("Target directory [%s] is not a directory or does not exist." % targetDir)
   fileList = FilesystemList()
   fileList.excludeDirs = True
   fileList.excludeLinks = True
   fileList.excludeBasenamePatterns = INDICATOR_PATTERN
   fileList.addDirContents(targetDir)
   return fileList
####################
# checkMediaState()
####################

def checkMediaState(storeConfig):
   """
   Checks state of the media in the backup device to confirm whether it has
   been initialized for use with Cedar Backup.

   We can tell whether the media has been initialized by looking at its media
   label.  If the media label starts with MEDIA_LABEL_PREFIX, then it has been
   initialized.

   The check varies depending on whether the media is rewritable or not.  For
   non-rewritable media, we also accept a C{None} media label, since this kind
   of media cannot safely be initialized.

   @param storeConfig: Store configuration

   @raise ValueError: If media is not initialized.
   """
   mediaLabel = readMediaLabel(storeConfig.devicePath)
   if storeConfig.mediaType in REWRITABLE_MEDIA_TYPES:
      if mediaLabel is None:
         raise ValueError("Media has not been initialized: no media label available")
      elif not mediaLabel.startswith(MEDIA_LABEL_PREFIX):
         raise ValueError("Media has not been initialized: unrecognized media label [%s]" % mediaLabel)
   else:
      if mediaLabel is None:
         logger.info("Media has no media label; assuming OK since media is not rewritable.")
      elif not mediaLabel.startswith(MEDIA_LABEL_PREFIX):
         raise ValueError("Media has not been initialized: unrecognized media label [%s]" % mediaLabel)
#########################
# initializeMediaState()
#########################

def initializeMediaState(config):
   """
   Initializes state of the media in the backup device so Cedar Backup can
   recognize it.

   This is done by writing a mostly-empty image (it contains a "Cedar Backup"
   directory) to the media with a known media label.

   @note: Only rewritable media (CD-RW, DVD+RW) can be initialized.  It
   doesn't make any sense to initialize media that cannot be rewritten (CD-R,
   DVD+R), since Cedar Backup would then not be able to use that media for a
   backup.

   @param config: Cedar Backup configuration

   @raise ValueError: If media could not be initialized.
   @raise ValueError: If the configured media type is not rewritable.
   """
   if config.store.mediaType not in REWRITABLE_MEDIA_TYPES:
      raise ValueError("Only rewritable media types can be initialized.")
   mediaLabel = buildMediaLabel()
   writer = createWriter(config)
   writer.refreshMedia()
   writer.initializeImage(True, config.options.workingDir, mediaLabel)  # always create a new disc
   tempdir = tempfile.mkdtemp(dir=config.options.workingDir)
   try:
      writer.addImageEntry(tempdir, "CedarBackup")
      writer.writeImage()
   finally:
      if os.path.exists(tempdir):
         try:
            os.rmdir(tempdir)
         except: pass
####################
# buildMediaLabel()
####################

def buildMediaLabel():
   """
   Builds a media label to be used on Cedar Backup media.
   @return: Media label as a string.
   """
   currentDate = time.strftime("%d-%b-%Y").upper()
   return "%s %s" % (MEDIA_LABEL_PREFIX, currentDate)
########################################################################
# Private attribute "getter" functions
########################################################################

############################
# _getDeviceType() function
############################

def _getDeviceType(config):
   """
   Gets the device type that should be used for storing.

   Use the configured device type if not C{None}, otherwise use
   L{config.DEFAULT_DEVICE_TYPE}.

   @param config: Config object.
   @return: Device type to be used.
   """
   if config.store.deviceType is None:
      deviceType = DEFAULT_DEVICE_TYPE
   else:
      deviceType = config.store.deviceType
   logger.debug("Device type is [%s]", deviceType)
   return deviceType
###########################
# _getMediaType() function
###########################

def _getMediaType(config):
   """
   Gets the media type that should be used for storing.

   Use the configured media type if not C{None}, otherwise use
   C{DEFAULT_MEDIA_TYPE}.

   Once we figure out what configuration value to use, we return a media type
   value that is valid in one of the supported writers::

      MEDIA_CDR_74
      MEDIA_CDRW_74
      MEDIA_CDR_80
      MEDIA_CDRW_80
      MEDIA_DVDPLUSR
      MEDIA_DVDPLUSRW

   @param config: Config object.

   @return: Media type to be used as a writer media type value.
   @raise ValueError: If the media type is not valid.
   """
   if config.store.mediaType is None:
      mediaType = DEFAULT_MEDIA_TYPE
   else:
      mediaType = config.store.mediaType
   if mediaType == "cdr-74":
      logger.debug("Media type is MEDIA_CDR_74.")
      return MEDIA_CDR_74
   elif mediaType == "cdrw-74":
      logger.debug("Media type is MEDIA_CDRW_74.")
      return MEDIA_CDRW_74
   elif mediaType == "cdr-80":
      logger.debug("Media type is MEDIA_CDR_80.")
      return MEDIA_CDR_80
   elif mediaType == "cdrw-80":
      logger.debug("Media type is MEDIA_CDRW_80.")
      return MEDIA_CDRW_80
   elif mediaType == "dvd+r":
      logger.debug("Media type is MEDIA_DVDPLUSR.")
      return MEDIA_DVDPLUSR
   elif mediaType == "dvd+rw":
      logger.debug("Media type is MEDIA_DVDPLUSRW.")
      return MEDIA_DVDPLUSRW
   else:
      raise ValueError("Media type [%s] is not valid." % mediaType)
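The if/elif chain in _getMediaType() is a straight lookup, and could be sketched as a table. In this sketch the writer constants are stand-in strings rather than the real imported values, and the "cdrw-74" default is a placeholder, not necessarily the real DEFAULT_MEDIA_TYPE:

```python
MEDIA_TYPE_MAP = {
    "cdr-74":  "MEDIA_CDR_74",
    "cdrw-74": "MEDIA_CDRW_74",
    "cdr-80":  "MEDIA_CDR_80",
    "cdrw-80": "MEDIA_CDRW_80",
    "dvd+r":   "MEDIA_DVDPLUSR",
    "dvd+rw":  "MEDIA_DVDPLUSRW",
}

def lookupMediaType(configured, default="cdrw-74"):
    # Fall back to the default when nothing is configured, then map the
    # config string onto a writer constant, rejecting anything unknown.
    mediaType = configured if configured is not None else default
    try:
        return MEDIA_TYPE_MAP[mediaType]
    except KeyError:
        raise ValueError("Media type [%s] is not valid." % mediaType)

print(lookupMediaType("dvd+rw"))  # MEDIA_DVDPLUSRW
print(lookupMediaType(None))      # MEDIA_CDRW_74
```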

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.config.LocalPeer-class.html0000664000175000017500000007531312657665544027565 0ustar pronovicpronovic00000000000000 CedarBackup3.config.LocalPeer
    Package CedarBackup3 :: Module config :: Class LocalPeer

    Class LocalPeer

    source code

    object --+
             |
            LocalPeer
    

    Class representing a Cedar Backup peer.

    The following restrictions exist on data in this class:

    • The peer name must be a non-empty string.
    • The collect directory must be an absolute path.
    • The ignore failure mode must be one of the values in VALID_FAILURE_MODES.
Instance Methods
     
    __init__(self, name=None, collectDir=None, ignoreFailureMode=None)
    Constructor for the LocalPeer class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    __eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    _setName(self, value)
    Property target used to set the peer name.
    source code
     
    _getName(self)
    Property target used to get the peer name.
    source code
     
    _setCollectDir(self, value)
    Property target used to set the collect directory.
    source code
     
    _getCollectDir(self)
    Property target used to get the collect directory.
    source code
     
    _setIgnoreFailureMode(self, value)
    Property target used to set the ignoreFailure mode.
    source code
     
    _getIgnoreFailureMode(self)
    Property target used to get the ignoreFailure mode.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties
      name
    Name of the peer, typically a valid hostname.
      collectDir
    Collect directory to stage files from on peer.
      ignoreFailureMode
    Ignore failure mode for peer.

    Inherited from object: __class__

Method Details

    __init__(self, name=None, collectDir=None, ignoreFailureMode=None)
    (Constructor)

    source code 

    Constructor for the LocalPeer class.

    Parameters:
    • name - Name of the peer, typically a valid hostname.
    • collectDir - Collect directory to stage files from on peer.
    • ignoreFailureMode - Ignore failure mode for peer.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setName(self, value)

    source code 

    Property target used to set the peer name. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setCollectDir(self, value)

    source code 

    Property target used to set the collect directory. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setIgnoreFailureMode(self, value)

    source code 

    Property target used to set the ignoreFailure mode. If not None, the mode must be one of the values in VALID_FAILURE_MODES.

    Raises:
    • ValueError - If the value is not valid.

Property Details

    name

    Name of the peer, typically a valid hostname.

    Get Method:
    _getName(self) - Property target used to get the peer name.
    Set Method:
    _setName(self, value) - Property target used to set the peer name.

    collectDir

    Collect directory to stage files from on peer.

    Get Method:
    _getCollectDir(self) - Property target used to get the collect directory.
    Set Method:
    _setCollectDir(self, value) - Property target used to set the collect directory.

    ignoreFailureMode

    Ignore failure mode for peer.

    Get Method:
    _getIgnoreFailureMode(self) - Property target used to get the ignoreFailure mode.
    Set Method:
    _setIgnoreFailureMode(self, value) - Property target used to set the ignoreFailure mode.

    CedarBackup3-3.1.6/doc/interface/toc-CedarBackup3.knapsack-module.html0000664000175000017500000000301312657665544027175 0ustar pronovicpronovic00000000000000 knapsack

    Module knapsack


    Functions

    alternateFit
    bestFit
    firstFit
    worstFit

    Variables

    __package__

CedarBackup3-3.1.6/doc/interface/CedarBackup3.writers.dvdwriter-pysrc.html0000664000175000017500000110453312657665547030217 0ustar pronovicpronovic00000000000000 CedarBackup3.writers.dvdwriter
    Package CedarBackup3 :: Package writers :: Module dvdwriter

    Source Code for Module CedarBackup3.writers.dvdwriter

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2007-2008,2010,2015 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python 3 (>= 3.4) 
     29  # Project  : Cedar Backup, release 3 
     30  # Purpose  : Provides functionality related to DVD writer devices. 
     31  # 
     32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     33   
     34  ######################################################################## 
     35  # Module documentation 
     36  ######################################################################## 
     37   
     38  """ 
     39  Provides functionality related to DVD writer devices. 
     40   
     41  @sort: MediaDefinition, DvdWriter, MEDIA_DVDPLUSR, MEDIA_DVDPLUSRW 
     42   
     43  @var MEDIA_DVDPLUSR: Constant representing DVD+R media. 
     44  @var MEDIA_DVDPLUSRW: Constant representing DVD+RW media. 
     45   
     46  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     47  @author: Dmitry Rutsky <rutsky@inbox.ru> 
     48  """ 
     49   
     50  ######################################################################## 
     51  # Imported modules 
     52  ######################################################################## 
     53   
     54  # System modules 
     55  import os 
     56  import re 
     57  import logging 
     58  import tempfile 
     59  import time 
     60   
     61  # Cedar Backup modules 
     62  from CedarBackup3.writers.util import IsoImage 
     63  from CedarBackup3.util import resolveCommand, executeCommand 
     64  from CedarBackup3.util import convertSize, displayBytes, encodePath 
     65  from CedarBackup3.util import UNIT_SECTORS, UNIT_BYTES, UNIT_GBYTES 
     66  from CedarBackup3.writers.util import validateDevice, validateDriveSpeed 
     67   
     68   
     69  ######################################################################## 
     70  # Module-wide constants and variables 
     71  ######################################################################## 
     72   
     73  logger = logging.getLogger("CedarBackup3.log.writers.dvdwriter") 
     74   
     75  MEDIA_DVDPLUSR  = 1 
     76  MEDIA_DVDPLUSRW = 2 
     77   
     78  GROWISOFS_COMMAND = [ "growisofs", ] 
     79  EJECT_COMMAND     = [ "eject", ] 
    
########################################################################
# MediaDefinition class definition
########################################################################

class MediaDefinition(object):

   """
   Class encapsulating information about DVD media definitions.

   The following media types are accepted:

      - C{MEDIA_DVDPLUSR}: DVD+R media (4.4 GB capacity)
      - C{MEDIA_DVDPLUSRW}: DVD+RW media (4.4 GB capacity)

   Note that the capacity attribute returns capacity in terms of ISO sectors
   (C{util.ISO_SECTOR_SIZE}).  This is for compatibility with the CD writer
   functionality.

   The capacities are 4.4 GB because Cedar Backup deals in "true" gigabytes
   of 1024*1024*1024 bytes per gigabyte.

   @sort: __init__, mediaType, rewritable, capacity
   """

   def __init__(self, mediaType):
      """
      Creates a media definition for the indicated media type.
      @param mediaType: Type of the media, as discussed above.
      @raise ValueError: If the media type is unknown or unsupported.
      """
      self._mediaType = None
      self._rewritable = False
      self._capacity = 0.0
      self._setValues(mediaType)

   def _setValues(self, mediaType):
      """
      Sets values based on media type.
      @param mediaType: Type of the media, as discussed above.
      @raise ValueError: If the media type is unknown or unsupported.
      """
      if mediaType not in [MEDIA_DVDPLUSR, MEDIA_DVDPLUSRW, ]:
         raise ValueError("Invalid media type %d." % mediaType)
      self._mediaType = mediaType
      if self._mediaType == MEDIA_DVDPLUSR:
         self._rewritable = False
         self._capacity = convertSize(4.4, UNIT_GBYTES, UNIT_SECTORS)  # 4.4 "true" GB = 4.7 "marketing" GB
      elif self._mediaType == MEDIA_DVDPLUSRW:
         self._rewritable = True
         self._capacity = convertSize(4.4, UNIT_GBYTES, UNIT_SECTORS)  # 4.4 "true" GB = 4.7 "marketing" GB

   def _getMediaType(self):
      """
      Property target used to get the media type value.
      """
      return self._mediaType

   def _getRewritable(self):
      """
      Property target used to get the rewritable flag value.
      """
      return self._rewritable

   def _getCapacity(self):
      """
      Property target used to get the capacity value.
      """
      return self._capacity

   mediaType = property(_getMediaType, None, None, doc="Configured media type.")
   rewritable = property(_getRewritable, None, None, doc="Boolean indicating whether the media is rewritable.")
   capacity = property(_getCapacity, None, None, doc="Total capacity of media in 2048-byte sectors.")
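The sector arithmetic behind the capacity attribute is simple to verify by hand: 4.4 "true" GB divided into 2048-byte ISO sectors. This sketch hard-codes the sector size rather than importing util.ISO_SECTOR_SIZE:

```python
ISO_SECTOR_SIZE = 2048.0  # bytes per ISO-9660 sector (assumed constant)

# 4.4 "true" GB (1024**3 bytes per GB) expressed in sectors:
capacity_sectors = 4.4 * 1024**3 / ISO_SECTOR_SIZE
print(int(capacity_sectors))  # 2306867
```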
########################################################################
# MediaCapacity class definition
########################################################################

class MediaCapacity(object):

   """
   Class encapsulating information about DVD media capacity.

   Space used and space available do not include any information about media
   lead-in or other overhead.

   @sort: __init__, bytesUsed, bytesAvailable, totalCapacity, utilized
   """

   def __init__(self, bytesUsed, bytesAvailable):
      """
      Initializes a capacity object.
      @raise ValueError: If the bytes used and available values are not floats.
      """
      self._bytesUsed = float(bytesUsed)
      self._bytesAvailable = float(bytesAvailable)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return "utilized %s of %s (%.2f%%)" % (displayBytes(self.bytesUsed), displayBytes(self.totalCapacity), self.utilized)

   def _getBytesUsed(self):
      """
      Property target used to get the bytes-used value.
      """
      return self._bytesUsed

   def _getBytesAvailable(self):
      """
      Property target used to get the bytes-available value.
      """
      return self._bytesAvailable

   def _getTotalCapacity(self):
      """
      Property target used to get the total capacity (used + available).
      """
      return self.bytesUsed + self.bytesAvailable

   def _getUtilized(self):
      """
      Property target used to get the percent of capacity which is utilized.
      """
      if self.bytesAvailable <= 0.0:
         return 100.0
      elif self.bytesUsed <= 0.0:
         return 0.0
      return (self.bytesUsed / self.totalCapacity) * 100.0

   bytesUsed = property(_getBytesUsed, None, None, doc="Space used on disc, in bytes.")
   bytesAvailable = property(_getBytesAvailable, None, None, doc="Space available on disc, in bytes.")
   totalCapacity = property(_getTotalCapacity, None, None, doc="Total capacity of the disc, in bytes.")
   utilized = property(_getUtilized, None, None, doc="Percentage of the total capacity which is utilized.")
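The utilization math above can be exercised in isolation; this trimmed-down restatement keeps only the arithmetic (no displayBytes formatting, and SimpleCapacity is a name invented for the sketch):

```python
class SimpleCapacity:
    """Trimmed restatement of MediaCapacity's utilization math."""
    def __init__(self, bytesUsed, bytesAvailable):
        self.bytesUsed = float(bytesUsed)
        self.bytesAvailable = float(bytesAvailable)

    @property
    def totalCapacity(self):
        return self.bytesUsed + self.bytesAvailable

    @property
    def utilized(self):
        if self.bytesAvailable <= 0.0:
            return 100.0  # nothing left: call it full
        if self.bytesUsed <= 0.0:
            return 0.0    # nothing written yet
        return (self.bytesUsed / self.totalCapacity) * 100.0

cap = SimpleCapacity(bytesUsed=1.1 * 1024**3, bytesAvailable=3.3 * 1024**3)
print("%.0f%%" % cap.utilized)  # 25%
```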
########################################################################
# _ImageProperties class definition
########################################################################

class _ImageProperties(object):
   """
   Simple value object to hold image properties for C{DvdWriter}.
   """
   def __init__(self):
      self.newDisc = False
      self.tmpdir = None
      self.mediaLabel = None
      self.entries = None  # dict mapping path to graft point
    233 234 ######################################################################## 235 # DvdWriter class definition 236 ######################################################################## 237 238 -class DvdWriter(object):
    239 240 ###################### 241 # Class documentation 242 ###################### 243 244 """ 245 Class representing a device that knows how to write some kinds of DVD media. 246 247 Summary 248 ======= 249 250 This is a class representing a device that knows how to write some kinds 251 of DVD media. It provides common operations for the device, such as 252 ejecting the media and writing data to the media. 253 254 This class is implemented in terms of the C{eject} and C{growisofs} 255 utilities, all of which should be available on most UN*X platforms. 256 257 Image Writer Interface 258 ====================== 259 260 The following methods make up the "image writer" interface shared 261 with other kinds of writers:: 262 263 __init__ 264 initializeImage() 265 addImageEntry() 266 writeImage() 267 setImageNewDisc() 268 retrieveCapacity() 269 getEstimatedImageSize() 270 271 Only these methods will be used by other Cedar Backup functionality 272 that expects a compatible image writer. 273 274 The media attribute is also assumed to be available. 275 276 Unlike the C{CdWriter}, the C{DvdWriter} can only operate in terms of 277 filesystem devices, not SCSI devices. So, although the constructor 278 interface accepts a SCSI device parameter for the sake of compatibility, 279 it's not used. 280 281 Media Types 282 =========== 283 284 This class knows how to write to DVD+R and DVD+RW media, represented 285 by the following constants: 286 287 - C{MEDIA_DVDPLUSR}: DVD+R media (4.4 GB capacity) 288 - C{MEDIA_DVDPLUSRW}: DVD+RW media (4.4 GB capacity) 289 290 The difference is that DVD+RW media can be rewritten, while DVD+R media 291 cannot be (although at present, C{DvdWriter} does not really 292 differentiate between rewritable and non-rewritable media). 293 294 The capacities are 4.4 GB because Cedar Backup deals in "true" gigabytes 295 of 1024*1024*1024 bytes per gigabyte. 
296 297 The underlying C{growisofs} utility does support other kinds of media 298 (including DVD-R, DVD-RW and Blu-ray) which work somewhat differently 299 than standard DVD+R and DVD+RW media. I don't support these other kinds 300 of media because I haven't had any opportunity to work with them. The 301 same goes for dual-layer media of any type. 302 303 Device Attributes vs. Media Attributes 304 ====================================== 305 306 As with the cdwriter functionality, a given dvdwriter instance has two 307 different kinds of attributes associated with it. I call these device 308 attributes and media attributes. 309 310 Device attributes are things which can be determined without looking at 311 the media. Media attributes are attributes which vary depending on the 312 state of the media. In general, device attributes are available via 313 instance variables and are constant over the life of an object, while 314 media attributes can be retrieved through method calls. 315 316 Compared to cdwriters, dvdwriters have very few attributes. This is due 317 to differences between the way C{growisofs} works relative to 318 C{cdrecord}. 319 320 Media Capacity 321 ============== 322 323 One major difference between the C{cdrecord}/C{mkisofs} utilities used by 324 the cdwriter class and the C{growisofs} utility used here is that the 325 process of estimating remaining capacity and image size is more 326 straightforward with C{cdrecord}/C{mkisofs} than with C{growisofs}. 327 328 In this class, remaining capacity is calculated by doing a dry run 329 of C{growisofs} and grabbing some information from the output of that 330 command. Image size is estimated by asking the C{IsoImage} class for an 331 estimate and then adding on a "fudge factor" determined through 332 experimentation. 333 334 Testing 335 ======= 336 337 It's rather difficult to test this code in an automated fashion, even if 338 you have access to a physical DVD writer drive.
It's even more difficult 339 to test it if you are running on some build daemon (think of a Debian 340 autobuilder) which can't be expected to have any hardware or any media 341 that you could write to. 342 343 Because of this, some of the implementation below is in terms of static 344 methods that are supposed to take defined actions based on their 345 arguments. Public methods are then implemented in terms of a series of 346 calls to simplistic static methods. This way, we can test as much as 347 possible of the "difficult" functionality via testing the static methods, 348 while hoping that if the static methods are called appropriately, things 349 will work properly. It's not perfect, but it's much better than no 350 testing at all. 351 352 @sort: __init__, isRewritable, retrieveCapacity, openTray, closeTray, refreshMedia, 353 initializeImage, addImageEntry, writeImage, setImageNewDisc, getEstimatedImageSize, 354 _writeImage, _getEstimatedImageSize, _searchForOverburn, _buildWriteArgs, 355 device, scsiId, hardwareId, driveSpeed, media, deviceHasTray, deviceCanEject 356 """ 357 358 ############## 359 # Constructor 360 ############## 361
    362 - def __init__(self, device, scsiId=None, driveSpeed=None, 363 mediaType=MEDIA_DVDPLUSRW, noEject=False, 364 refreshMediaDelay=0, ejectDelay=0, unittest=False):
365 """ 366 Initializes a DVD writer object. 367 368 Since C{growisofs} can only address devices using the device path (i.e. 369 C{/dev/dvd}), the hardware id will always be set based on the device. If 370 passed in, it will be saved for reference purposes only. 371 372 We have no way to query the device to ask whether it has a tray or can be 373 safely opened and closed. So, the C{noEject} flag is used to set these 374 values. If C{noEject=False}, then we assume a tray exists and open/close 375 is safe. If C{noEject=True}, then we assume that there is no tray and 376 open/close is not safe. 377 378 @note: The C{unittest} parameter should never be set to C{True} 379 outside of Cedar Backup code. It is intended for use in unit testing 380 Cedar Backup internals and has no other sensible purpose. 381 382 @param device: Filesystem device associated with this writer. 383 @type device: Absolute path to a filesystem device, i.e. C{/dev/dvd} 384 385 @param scsiId: SCSI id for the device (optional, for reference only). 386 @type scsiId: If provided, SCSI id in the form C{[<method>:]scsibus,target,lun} 387 388 @param driveSpeed: Speed at which the drive writes. 389 @type driveSpeed: Use C{2} for 2x device, etc. or C{None} to use device default. 390 391 @param mediaType: Type of the media that is assumed to be in the drive. 392 @type mediaType: One of the valid media types as discussed above. 393 394 @param noEject: Tells Cedar Backup that the device cannot safely be ejected. 395 @type noEject: Boolean true/false 396 397 @param refreshMediaDelay: Refresh media delay to use, if any 398 @type refreshMediaDelay: Number of seconds, an integer >= 0 399 400 @param ejectDelay: Eject delay to use, if any 401 @type ejectDelay: Number of seconds, an integer >= 0 402 403 @param unittest: Turns off certain validations, for use in unit testing. 404 @type unittest: Boolean true/false 405 406 @raise ValueError: If the device is not valid for some reason.
407 @raise ValueError: If the SCSI id is not in a valid form. 408 @raise ValueError: If the drive speed is not an integer >= 1. 409 """ 410 if scsiId is not None: 411 logger.warning("SCSI id [%s] will be ignored by DvdWriter.", scsiId) 412 self._image = None # optionally filled in by initializeImage() 413 self._device = validateDevice(device, unittest) 414 self._scsiId = scsiId # not validated, because it's just for reference 415 self._driveSpeed = validateDriveSpeed(driveSpeed) 416 self._media = MediaDefinition(mediaType) 417 self._refreshMediaDelay = refreshMediaDelay 418 self._ejectDelay = ejectDelay 419 if noEject: 420 self._deviceHasTray = False 421 self._deviceCanEject = False 422 else: 423 self._deviceHasTray = True # just assume 424 self._deviceCanEject = True # just assume
    425 426 427 ############# 428 # Properties 429 ############# 430
    431 - def _getDevice(self):
    432 """ 433 Property target used to get the device value. 434 """ 435 return self._device
    436
    437 - def _getScsiId(self):
    438 """ 439 Property target used to get the SCSI id value. 440 """ 441 return self._scsiId
    442
    443 - def _getHardwareId(self):
    444 """ 445 Property target used to get the hardware id value. 446 """ 447 return self._device
    448
    449 - def _getDriveSpeed(self):
    450 """ 451 Property target used to get the drive speed. 452 """ 453 return self._driveSpeed
    454
    455 - def _getMedia(self):
    456 """ 457 Property target used to get the media description. 458 """ 459 return self._media
    460
    461 - def _getDeviceHasTray(self):
    462 """ 463 Property target used to get the device-has-tray flag. 464 """ 465 return self._deviceHasTray
    466
    467 - def _getDeviceCanEject(self):
    468 """ 469 Property target used to get the device-can-eject flag. 470 """ 471 return self._deviceCanEject
    472
    473 - def _getRefreshMediaDelay(self):
    474 """ 475 Property target used to get the configured refresh media delay, in seconds. 476 """ 477 return self._refreshMediaDelay
    478
    479 - def _getEjectDelay(self):
    480 """ 481 Property target used to get the configured eject delay, in seconds. 482 """ 483 return self._ejectDelay
    484 485 device = property(_getDevice, None, None, doc="Filesystem device name for this writer.") 486 scsiId = property(_getScsiId, None, None, doc="SCSI id for the device (saved for reference only).") 487 hardwareId = property(_getHardwareId, None, None, doc="Hardware id for this writer (always the device path).") 488 driveSpeed = property(_getDriveSpeed, None, None, doc="Speed at which the drive writes.") 489 media = property(_getMedia, None, None, doc="Definition of media that is expected to be in the device.") 490 deviceHasTray = property(_getDeviceHasTray, None, None, doc="Indicates whether the device has a media tray.") 491 deviceCanEject = property(_getDeviceCanEject, None, None, doc="Indicates whether the device supports ejecting its media.") 492 refreshMediaDelay = property(_getRefreshMediaDelay, None, None, doc="Refresh media delay, in seconds.") 493 ejectDelay = property(_getEjectDelay, None, None, doc="Eject delay, in seconds.") 494 495 496 ################################################# 497 # Methods related to device and media attributes 498 ################################################# 499
    500 - def isRewritable(self):
    501 """Indicates whether the media is rewritable per configuration.""" 502 return self._media.rewritable
    503
    504 - def retrieveCapacity(self, entireDisc=False):
    505 """ 506 Retrieves capacity for the current media in terms of a C{MediaCapacity} 507 object. 508 509 If C{entireDisc} is passed in as C{True}, the capacity will be for the 510 entire disc, as if it were to be rewritten from scratch. The same will 511 happen if the disc can't be read for some reason. Otherwise, the capacity 512 will be calculated by subtracting the sectors currently used on the disc, 513 as reported by C{growisofs} itself. 514 515 @param entireDisc: Indicates whether to return capacity for entire disc. 516 @type entireDisc: Boolean true/false 517 518 @return: C{MediaCapacity} object describing the capacity of the media. 519 520 @raise ValueError: If there is a problem parsing the C{growisofs} output 521 @raise IOError: If the media could not be read for some reason. 522 """ 523 sectorsUsed = 0.0 524 if not entireDisc: 525 sectorsUsed = self._retrieveSectorsUsed() 526 sectorsAvailable = self._media.capacity - sectorsUsed # both are in sectors 527 bytesUsed = convertSize(sectorsUsed, UNIT_SECTORS, UNIT_BYTES) 528 bytesAvailable = convertSize(sectorsAvailable, UNIT_SECTORS, UNIT_BYTES) 529 return MediaCapacity(bytesUsed, bytesAvailable)
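The conversion in retrieveCapacity() can be illustrated outside the class. This is a hedged sketch, not part of the source: it assumes the standard 2,048-byte ISO-9660 sector size (the real code delegates to convertSize()) and the 4.4 GB DVD+R/DVD+RW capacity described in the class documentation, and a plain tuple stands in for the MediaCapacity object.

```python
BYTES_PER_SECTOR = 2048.0
GB = 1024.0 * 1024.0 * 1024.0                 # "true" gigabytes, as above
CAPACITY_SECTORS = (4.4 * GB) / BYTES_PER_SECTOR

def remaining_capacity(sectors_used):
    """Return (bytes_used, bytes_available) for the media."""
    sectors_available = CAPACITY_SECTORS - sectors_used
    return (sectors_used * BYTES_PER_SECTOR,
            sectors_available * BYTES_PER_SECTOR)

# 1401056 is the example "sectors used" value that the seek-based
# parsing in _parseSectorsUsed() would produce (87566 * 16).
used, available = remaining_capacity(1401056.0)
print(used)   # 2869362688.0 bytes already written to the media
```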
    530 531 532 ####################################################### 533 # Methods used for working with the internal ISO image 534 ####################################################### 535
    536 - def initializeImage(self, newDisc, tmpdir, mediaLabel=None):
    537 """ 538 Initializes the writer's associated ISO image. 539 540 This method initializes the C{image} instance variable so that the caller 541 can use the C{addImageEntry} method. Once entries have been added, the 542 C{writeImage} method can be called with no arguments. 543 544 @param newDisc: Indicates whether the disc should be re-initialized 545 @type newDisc: Boolean true/false 546 547 @param tmpdir: Temporary directory to use if needed 548 @type tmpdir: String representing a directory path on disk 549 550 @param mediaLabel: Media label to be applied to the image, if any 551 @type mediaLabel: String, no more than 25 characters long 552 """ 553 self._image = _ImageProperties() 554 self._image.newDisc = newDisc 555 self._image.tmpdir = encodePath(tmpdir) 556 self._image.mediaLabel = mediaLabel 557 self._image.entries = {} # mapping from path to graft point (if any)
    558
    559 - def addImageEntry(self, path, graftPoint):
    560 """ 561 Adds a filepath entry to the writer's associated ISO image. 562 563 The contents of the filepath -- but not the path itself -- will be added 564 to the image at the indicated graft point. If you don't want to use a 565 graft point, just pass C{None}. 566 567 @note: Before calling this method, you must call L{initializeImage}. 568 569 @param path: File or directory to be added to the image 570 @type path: String representing a path on disk 571 572 @param graftPoint: Graft point to be used when adding this entry 573 @type graftPoint: String representing a graft point path, as described above 574 575 @raise ValueError: If initializeImage() was not previously called 576 @raise ValueError: If the path is not a valid file or directory 577 """ 578 if self._image is None: 579 raise ValueError("Must call initializeImage() before using this method.") 580 if not os.path.exists(path): 581 raise ValueError("Path [%s] does not exist." % path) 582 self._image.entries[path] = graftPoint
    583
    584 - def setImageNewDisc(self, newDisc):
    585 """ 586 Resets (overrides) the newDisc flag on the internal image. 587 @param newDisc: New disc flag to set 588 @raise ValueError: If initializeImage() was not previously called 589 """ 590 if self._image is None: 591 raise ValueError("Must call initializeImage() before using this method.") 592 self._image.newDisc = newDisc
    593
    594 - def getEstimatedImageSize(self):
595 """ 596 Gets the estimated size of the image associated with the writer. 597 598 This is an estimate and is conservative. The actual image could be as 599 much as 450 blocks (sectors) smaller under some circumstances. 600 601 @return: Estimated size of the image, in bytes. 602 603 @raise IOError: If there is a problem calling C{mkisofs}. 604 @raise ValueError: If initializeImage() was not previously called 605 """ 606 if self._image is None: 607 raise ValueError("Must call initializeImage() before using this method.") 608 return DvdWriter._getEstimatedImageSize(self._image.entries)
    609 610 611 ###################################### 612 # Methods which expose device actions 613 ###################################### 614
    615 - def openTray(self):
    616 """ 617 Opens the device's tray and leaves it open. 618 619 This only works if the device has a tray and supports ejecting its media. 620 We have no way to know if the tray is currently open or closed, so we 621 just send the appropriate command and hope for the best. If the device 622 does not have a tray or does not support ejecting its media, then we do 623 nothing. 624 625 Starting with Debian wheezy on my backup hardware, I started seeing 626 consistent problems with the eject command. I couldn't tell whether 627 these problems were due to the device management system or to the new 628 kernel (3.2.0). Initially, I saw simple eject failures, possibly because 629 I was opening and closing the tray too quickly. I worked around that 630 behavior with the new ejectDelay flag. 631 632 Later, I sometimes ran into issues after writing an image to a disc: 633 eject would give errors like "unable to eject, last error: Inappropriate 634 ioctl for device". Various sources online (like Ubuntu bug #875543) 635 suggested that the drive was being locked somehow, and that the 636 workaround was to run 'eject -i off' to unlock it. Sure enough, that 637 fixed the problem for me, so now it's a normal error-handling strategy. 638 639 @raise IOError: If there is an error talking to the device. 640 """ 641 if self._deviceHasTray and self._deviceCanEject: 642 command = resolveCommand(EJECT_COMMAND) 643 args = [ self.device, ] 644 result = executeCommand(command, args)[0] 645 if result != 0: 646 logger.debug("Eject failed; attempting kludge of unlocking the tray before retrying.") 647 self.unlockTray() 648 result = executeCommand(command, args)[0] 649 if result != 0: 650 raise IOError("Error (%d) executing eject command to open tray (failed even after unlocking tray)." 
% result) 651 logger.debug("Kludge was apparently successful.") 652 if self.ejectDelay is not None: 653 logger.debug("Per configuration, sleeping %d seconds after opening tray.", self.ejectDelay) 654 time.sleep(self.ejectDelay)
    655
    656 - def unlockTray(self):
    657 """ 658 Unlocks the device's tray via 'eject -i off'. 659 @raise IOError: If there is an error talking to the device. 660 """ 661 command = resolveCommand(EJECT_COMMAND) 662 args = [ "-i", "off", self.device, ] 663 result = executeCommand(command, args)[0] 664 if result != 0: 665 raise IOError("Error (%d) executing eject command to unlock tray." % result)
    666
    667 - def closeTray(self):
    668 """ 669 Closes the device's tray. 670 671 This only works if the device has a tray and supports ejecting its media. 672 We have no way to know if the tray is currently open or closed, so we 673 just send the appropriate command and hope for the best. If the device 674 does not have a tray or does not support ejecting its media, then we do 675 nothing. 676 677 @raise IOError: If there is an error talking to the device. 678 """ 679 if self._deviceHasTray and self._deviceCanEject: 680 command = resolveCommand(EJECT_COMMAND) 681 args = [ "-t", self.device, ] 682 result = executeCommand(command, args)[0] 683 if result != 0: 684 raise IOError("Error (%d) executing eject command to close tray." % result)
    685
    686 - def refreshMedia(self):
    687 """ 688 Opens and then immediately closes the device's tray, to refresh the 689 device's idea of the media. 690 691 Sometimes, a device gets confused about the state of its media. Often, 692 all it takes to solve the problem is to eject the media and then 693 immediately reload it. (There are also configurable eject and refresh 694 media delays which can be applied, for situations where this makes a 695 difference.) 696 697 This only works if the device has a tray and supports ejecting its media. 698 We have no way to know if the tray is currently open or closed, so we 699 just send the appropriate command and hope for the best. If the device 700 does not have a tray or does not support ejecting its media, then we do 701 nothing. The configured delays still apply, though. 702 703 @raise IOError: If there is an error talking to the device. 704 """ 705 self.openTray() 706 self.closeTray() 707 self.unlockTray() # on some systems, writing a disc leaves the tray locked, yikes! 708 if self.refreshMediaDelay is not None: 709 logger.debug("Per configuration, sleeping %d seconds to stabilize media state.", self.refreshMediaDelay) 710 time.sleep(self.refreshMediaDelay) 711 logger.debug("Media refresh complete; hopefully media state is stable now.")
    712
    713 - def writeImage(self, imagePath=None, newDisc=False, writeMulti=True):
    714 """ 715 Writes an ISO image to the media in the device. 716 717 If C{newDisc} is passed in as C{True}, we assume that the entire disc 718 will be re-created from scratch. Note that unlike C{CdWriter}, 719 C{DvdWriter} does not blank rewritable media before reusing it; however, 720 C{growisofs} is called such that the media will be re-initialized as 721 needed. 722 723 If C{imagePath} is passed in as C{None}, then the existing image 724 configured with C{initializeImage()} will be used. Under these 725 circumstances, the passed-in C{newDisc} flag will be ignored and the 726 value passed in to C{initializeImage()} will apply instead. 727 728 The C{writeMulti} argument is ignored. It exists for compatibility with 729 the Cedar Backup image writer interface. 730 731 @note: The image size indicated in the log ("Image size will be...") is 732 an estimate. The estimate is conservative and is probably larger than 733 the actual space that C{dvdwriter} will use. 734 735 @param imagePath: Path to an ISO image on disk, or C{None} to use writer's image 736 @type imagePath: String representing a path on disk 737 738 @param newDisc: Indicates whether the disc should be re-initialized 739 @type newDisc: Boolean true/false. 740 741 @param writeMulti: Unused 742 @type writeMulti: Boolean true/false 743 744 @raise ValueError: If the image path is not absolute. 745 @raise ValueError: If some path cannot be encoded properly. 746 @raise IOError: If the media could not be written to for some reason. 
747 @raise ValueError: If no image is passed in and initializeImage() was not previously called 748 """ 749 if not writeMulti: 750 logger.warning("writeMulti value of [%s] ignored.", writeMulti) 751 if imagePath is None: 752 if self._image is None: 753 raise ValueError("Must call initializeImage() before using this method with no image path.") 754 size = self.getEstimatedImageSize() 755 logger.info("Image size will be %s (estimated).", displayBytes(size)) 756 available = self.retrieveCapacity(entireDisc=self._image.newDisc).bytesAvailable 757 if size > available: 758 logger.error("Image [%s] does not fit in available capacity [%s].", displayBytes(size), displayBytes(available)) 759 raise IOError("Media does not contain enough capacity to store image.") 760 self._writeImage(self._image.newDisc, None, self._image.entries, self._image.mediaLabel) 761 else: 762 if not os.path.isabs(imagePath): 763 raise ValueError("Image path must be absolute.") 764 imagePath = encodePath(imagePath) 765 self._writeImage(newDisc, imagePath, None)
    766 767 768 ################################################################## 769 # Utility methods for dealing with growisofs and dvd+rw-mediainfo 770 ################################################################## 771
    772 - def _writeImage(self, newDisc, imagePath, entries, mediaLabel=None):
    773 """ 774 Writes an image to disc using either an entries list or an ISO image on 775 disk. 776 777 Callers are assumed to have done validation on paths, etc. before calling 778 this method. 779 780 @param newDisc: Indicates whether the disc should be re-initialized 781 @param imagePath: Path to an ISO image on disk, or c{None} to use C{entries} 782 @param entries: Mapping from path to graft point, or C{None} to use C{imagePath} 783 784 @raise IOError: If the media could not be written to for some reason. 785 """ 786 command = resolveCommand(GROWISOFS_COMMAND) 787 args = DvdWriter._buildWriteArgs(newDisc, self.hardwareId, self._driveSpeed, imagePath, entries, mediaLabel, dryRun=False) 788 (result, output) = executeCommand(command, args, returnOutput=True) 789 if result != 0: 790 DvdWriter._searchForOverburn(output) # throws own exception if overburn condition is found 791 raise IOError("Error (%d) executing command to write disc." % result) 792 self.refreshMedia()
    793 794 @staticmethod
    795 - def _getEstimatedImageSize(entries):
    796 """ 797 Gets the estimated size of a set of image entries. 798 799 This is implemented in terms of the C{IsoImage} class. The returned 800 value is calculated by adding a "fudge factor" to the value from 801 C{IsoImage}. This fudge factor was determined by experimentation and is 802 conservative -- the actual image could be as much as 450 blocks smaller 803 under some circumstances. 804 805 @param entries: Dictionary mapping path to graft point. 806 807 @return: Total estimated size of image, in bytes. 808 809 @raise ValueError: If there are no entries in the dictionary 810 @raise ValueError: If any path in the dictionary does not exist 811 @raise IOError: If there is a problem calling C{mkisofs}. 812 """ 813 fudgeFactor = convertSize(2500.0, UNIT_SECTORS, UNIT_BYTES) # determined through experimentation 814 if len(list(entries.keys())) == 0: 815 raise ValueError("Must add at least one entry with addImageEntry().") 816 image = IsoImage() 817 for path in list(entries.keys()): 818 image.addEntry(path, entries[path], override=False, contentsOnly=True) 819 estimatedSize = image.getEstimatedSize() + fudgeFactor 820 return estimatedSize
    821
    822 - def _retrieveSectorsUsed(self):
    823 """ 824 Retrieves the number of sectors used on the current media. 825 826 This is a little ugly. We need to call growisofs in "dry-run" mode and 827 parse some information from its output. However, to do that, we need to 828 create a dummy file that we can pass to the command -- and we have to 829 make sure to remove it later. 830 831 Once growisofs has been run, then we call C{_parseSectorsUsed} to parse 832 the output and calculate the number of sectors used on the media. 833 834 @return: Number of sectors used on the media 835 """ 836 tempdir = tempfile.mkdtemp() 837 try: 838 entries = { tempdir: None } 839 args = DvdWriter._buildWriteArgs(False, self.hardwareId, self.driveSpeed, None, entries, None, dryRun=True) 840 command = resolveCommand(GROWISOFS_COMMAND) 841 (result, output) = executeCommand(command, args, returnOutput=True) 842 if result != 0: 843 logger.debug("Error (%d) calling growisofs to read sectors used.", result) 844 logger.warning("Unable to read disc (might not be initialized); returning zero sectors used.") 845 return 0.0 846 sectorsUsed = DvdWriter._parseSectorsUsed(output) 847 logger.debug("Determined sectors used as %s", sectorsUsed) 848 return sectorsUsed 849 finally: 850 if os.path.exists(tempdir): 851 try: 852 os.rmdir(tempdir) 853 except: pass
    854 855 @staticmethod
    856 - def _parseSectorsUsed(output):
    857 """ 858 Parse sectors used information out of C{growisofs} output. 859 860 The first line of a growisofs run looks something like this:: 861 862 Executing 'mkisofs -C 973744,1401056 -M /dev/fd/3 -r -graft-points music4/=music | builtin_dd of=/dev/cdrom obs=32k seek=87566' 863 864 Dmitry has determined that the seek value in this line gives us 865 information about how much data has previously been written to the media. 866 That value multiplied by 16 yields the number of sectors used. 867 868 If the seek line cannot be found in the output, then sectors used of zero 869 is assumed. 870 871 @return: Sectors used on the media, as a floating point number. 872 873 @raise ValueError: If the output cannot be parsed properly. 874 """ 875 if output is not None: 876 pattern = re.compile(r"(^)(.*)(seek=)(.*)('$)") 877 for line in output: 878 match = pattern.search(line) 879 if match is not None: 880 try: 881 return float(match.group(4).strip()) * 16.0 882 except ValueError: 883 raise ValueError("Unable to parse sectors used out of growisofs output.") 884 logger.warning("Unable to read disc (might not be initialized); returning zero sectors used.") 885 return 0.0
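The seek-based parsing described above can be exercised on its own. This is a hedged sketch: the regular expression and the multiply-by-16 rule are copied from the method, the sample line comes from its docstring, and the logging is dropped.

```python
import re

def parse_sectors_used(output):
    """Return sectors used (a float) parsed from growisofs output lines."""
    pattern = re.compile(r"(^)(.*)(seek=)(.*)('$)")
    for line in output:
        match = pattern.search(line)
        if match is not None:
            # Per the docstring above, the seek value * 16 = sectors used.
            return float(match.group(4).strip()) * 16.0
    return 0.0  # no seek line found; assume an uninitialized disc

sample = ["Executing 'mkisofs -C 973744,1401056 -M /dev/fd/3 -r "
          "-graft-points music4/=music | builtin_dd of=/dev/cdrom "
          "obs=32k seek=87566'"]
print(parse_sectors_used(sample))   # 87566 * 16 = 1401056.0
```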
    886 887 @staticmethod
    888 - def _searchForOverburn(output):
    889 """ 890 Search for an "overburn" error message in C{growisofs} output. 891 892 The C{growisofs} command returns a non-zero exit code and puts a message 893 into the output -- even on a dry run -- if there is not enough space on 894 the media. This is called an "overburn" condition. 895 896 The error message looks like this:: 897 898 :-( /dev/cdrom: 894048 blocks are free, 2033746 to be written! 899 900 This method looks for the overburn error message anywhere in the output. 901 If a matching error message is found, an C{IOError} exception is raised 902 containing relevant information about the problem. Otherwise, the method 903 call returns normally. 904 905 @param output: List of output lines to search, as from C{executeCommand} 906 907 @raise IOError: If an overburn condition is found. 908 """ 909 if output is None: 910 return 911 pattern = re.compile(r"(^)(:-[(])(\s*.*:\s*)(.* )(blocks are free, )(.* )(to be written!)") 912 for line in output: 913 match = pattern.search(line) 914 if match is not None: 915 try: 916 available = convertSize(float(match.group(4).strip()), UNIT_SECTORS, UNIT_BYTES) 917 size = convertSize(float(match.group(6).strip()), UNIT_SECTORS, UNIT_BYTES) 918 logger.error("Image [%s] does not fit in available capacity [%s].", displayBytes(size), displayBytes(available)) 919 except ValueError: 920 logger.error("Image does not fit in available capacity (no useful capacity info available).") 921 raise IOError("Media does not contain enough capacity to store image.")
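The overburn pattern can be demonstrated against the sample error line from the docstring. This is a hedged sketch showing which capture groups carry the numbers (group 4 is the free blocks, group 6 the blocks to be written); the exception-raising and logging behavior of the real method is omitted.

```python
import re

pattern = re.compile(r"(^)(:-[(])(\s*.*:\s*)(.* )(blocks are free, )(.* )(to be written!)")
line = ":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!"
match = pattern.search(line)
free_sectors = float(match.group(4).strip())      # blocks free on the media
needed_sectors = float(match.group(6).strip())    # blocks the image needs
print(free_sectors, needed_sectors)
```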
    922 923 @staticmethod
    924 - def _buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel=None, dryRun=False):
925 """ 926 Builds a list of arguments to be passed to a C{growisofs} command. 927 928 The arguments will either cause C{growisofs} to write the indicated image 929 file to disc, or will pass C{growisofs} a list of directories or files 930 that should be written to disc. 931 932 If a new image is created, it will always be created with Rock Ridge 933 extensions (-r). A volume name will be applied (-V) if C{mediaLabel} is 934 not C{None}. 935 936 @param newDisc: Indicates whether the disc should be re-initialized 937 @param hardwareId: Hardware id for the device 938 @param driveSpeed: Speed at which the drive writes. 939 @param imagePath: Path to an ISO image on disk, or C{None} to use C{entries} 940 @param entries: Mapping from path to graft point, or C{None} to use C{imagePath} 941 @param mediaLabel: Media label to set on the image, if any 942 @param dryRun: Says whether to make this a dry run (for checking capacity) 943 944 @note: If we write an existing image to disc, then the mediaLabel is 945 ignored. The media label is an attribute of the image, and should be set 946 on the image when it is created. 947 948 @note: We always pass the undocumented option C{-use-the-force-luke=tty} 949 to growisofs. Without this option, growisofs will refuse to execute 950 certain actions when running from cron. A good example is -Z, which 951 happily overwrites an existing DVD from the command-line, but fails when 952 run from cron. It took a while to figure that out, since it worked every 953 time I tested it by hand. :( 954 955 @return: List suitable for passing to L{util.executeCommand} as C{args}. 956 957 @raise ValueError: If caller does not pass one or the other of imagePath or entries.
958 """ 959 args = [] 960 if (imagePath is None and entries is None) or (imagePath is not None and entries is not None): 961 raise ValueError("Must use either imagePath or entries.") 962 args.append("-use-the-force-luke=tty") # tell growisofs to let us run from cron 963 if dryRun: 964 args.append("-dry-run") 965 if driveSpeed is not None: 966 args.append("-speed=%d" % driveSpeed) 967 if newDisc: 968 args.append("-Z") 969 else: 970 args.append("-M") 971 if imagePath is not None: 972 args.append("%s=%s" % (hardwareId, imagePath)) 973 else: 974 args.append(hardwareId) 975 if mediaLabel is not None: 976 args.append("-V") 977 args.append(mediaLabel) 978 args.append("-r") # Rock Ridge extensions with sane ownership and permissions 979 args.append("-graft-points") 980 keys = list(entries.keys()) 981 keys.sort() # just so we get consistent results 982 for key in keys: 983 # Same syntax as when calling mkisofs in IsoImage 984 if entries[key] is None: 985 args.append(key) 986 else: 987 args.append("%s/=%s" % (entries[key].strip("/"), key)) 988 return args
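The argument-building logic above can be extracted as a plain function so the resulting growisofs command line can be inspected without a drive. This is a hedged sketch: the snake_case names are stand-ins for the real static method, and the behavior mirrors the code as closely as possible.

```python
def build_write_args(new_disc, hardware_id, drive_speed, image_path,
                     entries, media_label=None, dry_run=False):
    if (image_path is None) == (entries is None):
        raise ValueError("Must use either image_path or entries.")
    args = ["-use-the-force-luke=tty"]   # lets growisofs run from cron
    if dry_run:
        args.append("-dry-run")
    if drive_speed is not None:
        args.append("-speed=%d" % drive_speed)
    args.append("-Z" if new_disc else "-M")
    if image_path is not None:
        args.append("%s=%s" % (hardware_id, image_path))
    else:
        args.append(hardware_id)
        if media_label is not None:
            args.extend(["-V", media_label])
        args.append("-r")                # Rock Ridge extensions
        args.append("-graft-points")
        for key in sorted(entries):      # sorted, for consistent results
            if entries[key] is None:
                args.append(key)
            else:
                args.append("%s/=%s" % (entries[key].strip("/"), key))
    return args

print(build_write_args(True, "/dev/dvd", 2, None, {"/data": "backup/"}))
# ['-use-the-force-luke=tty', '-speed=2', '-Z', '/dev/dvd',
#  '-r', '-graft-points', 'backup/=/data']
```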
    989

CedarBackup3-3.1.6/doc/interface/CedarBackup3.writers-pysrc.html: CedarBackup3.writers

    Source Code for Package CedarBackup3.writers

     1  # -*- coding: iso-8859-1 -*- 
     2  # vim: set ft=python ts=3 sw=3 expandtab: 
     3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     4  # 
     5  #              C E D A R 
     6  #          S O L U T I O N S       "Software done right." 
     7  #           S O F T W A R E 
     8  # 
     9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    10  # 
    11  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
    12  # Language : Python 3 (>= 3.4) 
    13  # Project  : Official Cedar Backup Extensions 
    14  # Purpose  : Provides package initialization 
    15  # 
    16  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    17   
    18  ######################################################################## 
    19  # Module documentation 
    20  ######################################################################## 
    21   
    22  """ 
    23  Cedar Backup writers. 
    24   
25  This package consolidates all of the modules that implement "image writer" 
    26  functionality, including utilities and specific writer implementations. 
    27   
    28  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
    29  """ 
    30   
    31   
    32  ######################################################################## 
    33  # Package initialization 
    34  ######################################################################## 
    35   
    36  # Using 'from CedarBackup3.writers import *' will just import the modules listed 
    37  # in the __all__ variable. 
    38   
    39  __all__ = [ 'util', 'cdwriter', 'dvdwriter', ] 
    40   
    

CedarBackup3-3.1.6/doc/interface/CedarBackup3.cli._ActionItem-class.html: CedarBackup3.cli._ActionItem
    Package CedarBackup3 :: Module cli :: Class _ActionItem

    Class _ActionItem


    object --+
             |
            _ActionItem
    

    Class representing a single action to be executed.

    This class represents a single named action to be executed, and understands how to execute that action.

    The built-in actions will use only the options and config values. We also pass in the config path so that extension modules can re-parse configuration if they want to, to add in extra information.

    This class is also where pre-action and post-action hooks are executed. An action item is instantiated in terms of optional pre- and post-action hook objects (config.ActionHook), which are then executed at the appropriate time (if set).


    Note: The comparison operators for this class have been implemented to only compare based on the index and SORT_ORDER value, and ignore all other values. This is so that the action set list can be easily sorted first by type (_ActionItem before _ManagedActionItem) and then by index within type.
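The pattern described in the note above can be sketched as a small standalone example: Python 3 rich comparisons implemented in terms of a single Python 2 style `__cmp__`, sorting first by `SORT_ORDER` and then by index. The class names below are illustrative, not the actual `_ActionItem`/`_ManagedActionItem` implementations:

```python
# Illustrative sketch: rich comparisons built on a legacy __cmp__, so that
# items sort first by SORT_ORDER (type) and then by index within type.

class _Item:
    SORT_ORDER = 0

    def __init__(self, index):
        self.index = index

    def __cmp__(self, other):
        if self.SORT_ORDER != other.SORT_ORDER:
            return -1 if self.SORT_ORDER < other.SORT_ORDER else 1
        if self.index != other.index:
            return -1 if self.index < other.index else 1
        return 0

    def __eq__(self, other):
        return self.__cmp__(other) == 0

    def __lt__(self, other):
        return self.__cmp__(other) < 0

    def __gt__(self, other):
        return self.__cmp__(other) > 0

class _ManagedItem(_Item):
    SORT_ORDER = 1  # always sorts after _Item, regardless of index

items = sorted([_ManagedItem(1), _Item(2), _Item(1)])
print([(type(i).__name__, i.index) for i in items])
```

Sorting a mixed list puts every `_Item` before every `_ManagedItem`, and orders by index within each type, which is exactly the behavior the note describes.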

    Instance Methods
     
    __init__(self, index, name, preHooks, postHooks, function)
    Default constructor.
    source code
     
    __eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    executeAction(self, configPath, options, config)
    Executes the action associated with an item, including hooks.
    source code
     
    _executeAction(self, configPath, options, config)
    Executes the action, specifically the function associated with the action.
    source code
     
    _executeHook(self, type, hook)
    Executes a hook command via util.executeCommand().
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Class Variables
      SORT_ORDER = 0
    Defines a sort order to order properly between types.
    Properties

    Inherited from object: __class__

    Method Details

    __init__(self, index, name, preHooks, postHooks, function)
    (Constructor)


    Default constructor.

    It's OK to pass None for index, preHooks or postHooks, but not for name.

    Parameters:
    • index - Index of the item (or None).
    • name - Name of the action that is being executed.
    • preHooks - List of pre-action hooks in terms of an ActionHook object, or None.
    • postHooks - List of post-action hooks in terms of an ActionHook object, or None.
    • function - Reference to function associated with item.
    Overrides: object.__init__

    __cmp__(self, other)
    (Comparison operator)


    Original Python 2 comparison operator. The only thing we compare is the item's index.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    executeAction(self, configPath, options, config)


    Executes the action associated with an item, including hooks.

    See class notes for more details on how the action is executed.

    Parameters:
    • configPath - Path to configuration file on disk.
    • options - Command-line options to be passed to action.
    • config - Parsed configuration to be passed to action.
    Raises:
    • Exception - If there is a problem executing the action.

    _executeAction(self, configPath, options, config)


    Executes the action, specifically the function associated with the action.

    Parameters:
    • configPath - Path to configuration file on disk.
    • options - Command-line options to be passed to action.
    • config - Parsed configuration to be passed to action.

    _executeHook(self, type, hook)


    Executes a hook command via util.executeCommand().

    Parameters:
    • type - String describing the type of hook, for logging.
    • hook - Hook, in terms of a ActionHook object.

CedarBackup3-3.1.6/doc/interface/CedarBackup3.actions.purge-pysrc.html: CedarBackup3.actions.purge
    Package CedarBackup3 :: Package actions :: Module purge

    Source Code for Module CedarBackup3.actions.purge

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2004-2007,2010,2015 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python 3 (>= 3.4) 
     29  # Project  : Cedar Backup, release 3 
     30  # Purpose  : Implements the standard 'purge' action. 
     31  # 
     32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     33   
     34  ######################################################################## 
     35  # Module documentation 
     36  ######################################################################## 
     37   
     38  """ 
     39  Implements the standard 'purge' action. 
     40  @sort: executePurge 
     41  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     42  """ 
     43   
     44   
     45  ######################################################################## 
     46  # Imported modules 
     47  ######################################################################## 
     48   
     49  # System modules 
     50  import logging 
     51   
     52  # Cedar Backup modules 
     53  from CedarBackup3.filesystem import PurgeItemList 
     54   
     55   
     56  ######################################################################## 
     57  # Module-wide constants and variables 
     58  ######################################################################## 
     59   
     60  logger = logging.getLogger("CedarBackup3.log.actions.purge") 
     61   
     62   
     63  ######################################################################## 
     64  # Public functions 
     65  ######################################################################## 
     66   
     67  ########################## 
     68  # executePurge() function 
     69  ########################## 
     70   
    
    71 -def executePurge(configPath, options, config):
     72    """ 
     73    Executes the purge backup action. 
     74    
     75    For each configured directory, we create a purge item list, remove from the 
     76    list anything that's younger than the configured retain days value, and then 
     77    purge from the filesystem what's left. 
     78    
     79    @param configPath: Path to configuration file on disk. 
     80    @type configPath: String representing a path on disk. 
     81    
     82    @param options: Program command-line options. 
     83    @type options: Options object. 
     84    
     85    @param config: Program configuration. 
     86    @type config: Config object. 
     87    
     88    @raise ValueError: Under many generic error conditions 
     89    """ 
     90    logger.debug("Executing the 'purge' action.") 
     91    if config.options is None or config.purge is None: 
     92       raise ValueError("Purge configuration is not properly filled in.") 
     93    if config.purge.purgeDirs is not None: 
     94       for purgeDir in config.purge.purgeDirs: 
     95          purgeList = PurgeItemList() 
     96          purgeList.addDirContents(purgeDir.absolutePath)  # add everything within directory 
     97          purgeList.removeYoungFiles(purgeDir.retainDays)  # remove young files *from the list* so they won't be purged 
     98          purgeList.purgeItems()  # remove remaining items from the filesystem 
     99    logger.info("Executed the 'purge' action successfully.")
    100
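The purge pattern above (build a list of paths, drop anything younger than the retain-days cutoff, delete what remains) can be sketched without the `PurgeItemList` class. The `purgeOldFiles` helper below is an illustrative stand-in, not the Cedar Backup implementation:

```python
# Self-contained sketch of the purge pattern: delete files whose mtime is
# older than a retain-days cutoff.  purgeOldFiles is an illustrative helper.

import os
import time
import tempfile

def purgeOldFiles(directory, retainDays):
    """Remove files in directory older than retainDays; return removed paths."""
    cutoff = time.time() - retainDays * 24 * 60 * 60
    removed = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            removed.append(path)
    return removed

with tempfile.TemporaryDirectory() as d:
    old = os.path.join(d, "old.tar.gz")
    new = os.path.join(d, "new.tar.gz")
    for path in (old, new):
        open(path, "w").close()
    tenDaysAgo = time.time() - 10 * 24 * 60 * 60
    os.utime(old, (tenDaysAgo, tenDaysAgo))  # back-date one file by ten days
    removed = purgeOldFiles(d, retainDays=7)
    remaining = os.listdir(d)
    print(removed, remaining)
```

With a seven-day retain value, only the back-dated file is purged; the fresh one survives.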

CedarBackup3-3.1.6/doc/interface/CedarBackup3.writers.cdwriter._ImageProperties-class.html: CedarBackup3.writers.cdwriter._ImageProperties
    Package CedarBackup3 :: Package writers :: Module cdwriter :: Class _ImageProperties

    Class _ImageProperties


    object --+
             |
            _ImageProperties
    

    Simple value object to hold image properties for CdWriter.

    Instance Methods
     
    __init__(self)
    x.__init__(...) initializes x; see help(type(x)) for signature
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Properties

    Inherited from object: __class__

    Method Details

    __init__(self)
    (Constructor)


    x.__init__(...) initializes x; see help(type(x)) for signature

    Overrides: object.__init__
    (inherited documentation)

CedarBackup3-3.1.6/doc/interface/CedarBackup3.image-pysrc.html: CedarBackup3.image
    Package CedarBackup3 :: Module image

    Source Code for Module CedarBackup3.image

     1  # -*- coding: iso-8859-1 -*- 
     2  # vim: set ft=python ts=3 sw=3 expandtab: 
     3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     4  # 
     5  #              C E D A R 
     6  #          S O L U T I O N S       "Software done right." 
     7  #           S O F T W A R E 
     8  # 
     9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    10  # 
    11  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
    12  # Language : Python 3 (>= 3.4) 
    13  # Project  : Cedar Backup, release 3 
    14  # Purpose  : Provides interface backwards compatibility. 
    15  # 
    16  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    17   
    18  ######################################################################## 
    19  # Module documentation 
    20  ######################################################################## 
    21   
    22  """ 
    23  Provides interface backwards compatibility. 
    24   
    25  In Cedar Backup 2.10.0, a refactoring effort took place while adding code to 
    26  support DVD hardware.  All of the writer functionality was moved to the 
    27  writers/ package.  This mostly-empty file remains to preserve the Cedar Backup 
    28  library interface. 
    29   
    30  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
    31  """ 
    32   
    33  ######################################################################## 
    34  # Imported modules 
    35  ######################################################################## 
    36   
    37  from CedarBackup3.writers.util import IsoImage  # pylint: disable=W0611 
    38   
    

CedarBackup3-3.1.6/doc/interface/CedarBackup3.actions.collect-pysrc.html: CedarBackup3.actions.collect
    Package CedarBackup3 :: Package actions :: Module collect

    Source Code for Module CedarBackup3.actions.collect

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2004-2008,2011,2015 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python 3 (>= 3.4) 
     29  # Project  : Cedar Backup, release 3 
     30  # Purpose  : Implements the standard 'collect' action. 
     31  # 
     32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     33   
     34  ######################################################################## 
     35  # Module documentation 
     36  ######################################################################## 
     37   
     38  """ 
     39  Implements the standard 'collect' action. 
     40  @sort: executeCollect 
     41  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     42  """ 
     43   
     44   
     45  ######################################################################## 
     46  # Imported modules 
     47  ######################################################################## 
     48   
     49  # System modules 
     50  import os 
     51  import logging 
     52  import pickle 
     53   
     54  # Cedar Backup modules 
     55  from CedarBackup3.filesystem import BackupFileList, FilesystemList 
     56  from CedarBackup3.util import isStartOfWeek, changeOwnership, displayBytes, buildNormalizedPath 
     57  from CedarBackup3.actions.constants import DIGEST_EXTENSION, COLLECT_INDICATOR 
     58  from CedarBackup3.actions.util import writeIndicatorFile 
     59   
     60   
     61  ######################################################################## 
     62  # Module-wide constants and variables 
     63  ######################################################################## 
     64   
     65  logger = logging.getLogger("CedarBackup3.log.actions.collect") 
     66   
     67   
     68  ######################################################################## 
     69  # Public functions 
     70  ######################################################################## 
     71   
     72  ############################ 
     73  # executeCollect() function 
     74  ############################ 
     75   
    
    76 -def executeCollect(configPath, options, config):
     77    """ 
     78    Executes the collect backup action. 
     79    
     80    @note: When the collect action is complete, we will write a collect 
     81    indicator to the collect directory, so it's obvious that the collect action 
     82    has completed.  The stage process uses this indicator to decide whether a 
     83    peer is ready to be staged. 
     84    
     85    @param configPath: Path to configuration file on disk. 
     86    @type configPath: String representing a path on disk. 
     87    
     88    @param options: Program command-line options. 
     89    @type options: Options object. 
     90    
     91    @param config: Program configuration. 
     92    @type config: Config object. 
     93    
     94    @raise ValueError: Under many generic error conditions 
     95    @raise TarError: If there is a problem creating a tar file 
     96    """ 
     97    logger.debug("Executing the 'collect' action.") 
     98    if config.options is None or config.collect is None: 
     99       raise ValueError("Collect configuration is not properly filled in.") 
    100    if ((config.collect.collectFiles is None or len(config.collect.collectFiles) < 1) and 
    101        (config.collect.collectDirs is None or len(config.collect.collectDirs) < 1)): 
    102       raise ValueError("There must be at least one collect file or collect directory.") 
    103    fullBackup = options.full 
    104    logger.debug("Full backup flag is [%s]", fullBackup) 
    105    todayIsStart = isStartOfWeek(config.options.startingDay) 
    106    resetDigest = fullBackup or todayIsStart 
    107    logger.debug("Reset digest flag is [%s]", resetDigest) 
    108    if config.collect.collectFiles is not None: 
    109       for collectFile in config.collect.collectFiles: 
    110          logger.debug("Working with collect file [%s]", collectFile.absolutePath) 
    111          collectMode = _getCollectMode(config, collectFile) 
    112          archiveMode = _getArchiveMode(config, collectFile) 
    113          digestPath = _getDigestPath(config, collectFile.absolutePath) 
    114          tarfilePath = _getTarfilePath(config, collectFile.absolutePath, archiveMode) 
    115          if fullBackup or (collectMode in ['daily', 'incr', ]) or (collectMode == 'weekly' and todayIsStart): 
    116             logger.debug("File meets criteria to be backed up today.") 
    117             _collectFile(config, collectFile.absolutePath, tarfilePath, 
    118                          collectMode, archiveMode, resetDigest, digestPath) 
    119          else: 
    120             logger.debug("File will not be backed up, per collect mode.") 
    121          logger.info("Completed collecting file [%s]", collectFile.absolutePath) 
    122    if config.collect.collectDirs is not None: 
    123       for collectDir in config.collect.collectDirs: 
    124          logger.debug("Working with collect directory [%s]", collectDir.absolutePath) 
    125          collectMode = _getCollectMode(config, collectDir) 
    126          archiveMode = _getArchiveMode(config, collectDir) 
    127          ignoreFile = _getIgnoreFile(config, collectDir) 
    128          linkDepth = _getLinkDepth(collectDir) 
    129          dereference = _getDereference(collectDir) 
    130          recursionLevel = _getRecursionLevel(collectDir) 
    131          (excludePaths, excludePatterns) = _getExclusions(config, collectDir) 
    132          if fullBackup or (collectMode in ['daily', 'incr', ]) or (collectMode == 'weekly' and todayIsStart): 
    133             logger.debug("Directory meets criteria to be backed up today.") 
    134             _collectDirectory(config, collectDir.absolutePath, 
    135                               collectMode, archiveMode, ignoreFile, linkDepth, dereference, 
    136                               resetDigest, excludePaths, excludePatterns, recursionLevel) 
    137          else: 
    138             logger.debug("Directory will not be backed up, per collect mode.") 
    139          logger.info("Completed collecting directory [%s]", collectDir.absolutePath) 
    140    writeIndicatorFile(config.collect.targetDir, COLLECT_INDICATOR, 
    141                       config.options.backupUser, config.options.backupGroup) 
    142    logger.info("Executed the 'collect' action successfully.")
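The mode check applied to both collect files and collect directories above can be extracted as a small predicate (the name `backedUpToday` is illustrative): daily and incremental items are collected on every run, weekly items only on the starting day of the week, and a full backup overrides the mode entirely.

```python
# Illustrative predicate mirroring the collect-mode check above.

def backedUpToday(collectMode, fullBackup, todayIsStart):
    """Decide whether an item should be collected on this run."""
    if fullBackup:
        return True  # a full backup overrides collect mode
    if collectMode in ['daily', 'incr']:
        return True  # collected on every run
    return collectMode == 'weekly' and todayIsStart

print(backedUpToday('weekly', fullBackup=False, todayIsStart=False))
```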
    143    
    144    
    145  ######################################################################## 
    146  # Private utility functions 
    147  ######################################################################## 
    148    
    149  ########################## 
    150  # _collectFile() function 
    151  ########################## 
    152    
    153 -def _collectFile(config, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath):
    154    """ 
    155    Collects a configured collect file. 
    156    
    157    The indicated collect file is collected into the indicated tarfile. 
    158    For files that are collected incrementally, we'll use the indicated 
    159    digest path and pay attention to the reset digest flag (basically, the reset 
    160    digest flag ignores any existing digest, but a new digest is always 
    161    rewritten). 
    162    
    163    The caller must decide what the collect and archive modes are, since they 
    164    can be on both the collect configuration and the collect file itself. 
    165    
    166    @param config: Config object. 
    167    @param absolutePath: Absolute path of file to collect. 
    168    @param tarfilePath: Path to tarfile that should be created. 
    169    @param collectMode: Collect mode to use. 
    170    @param archiveMode: Archive mode to use. 
    171    @param resetDigest: Reset digest flag. 
    172    @param digestPath: Path to digest file on disk, if needed. 
    173    """ 
    174    backupList = BackupFileList() 
    175    backupList.addFile(absolutePath) 
    176    _executeBackup(config, backupList, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath)
    177    
    178    
    179  ############################### 
    180  # _collectDirectory() function 
    181  ############################### 
    182    
    183 -def _collectDirectory(config, absolutePath, collectMode, archiveMode, 
    184                        ignoreFile, linkDepth, dereference, resetDigest, 
    185                        excludePaths, excludePatterns, recursionLevel):
    186    """ 
    187    Collects a configured collect directory. 
    188    
    189    The indicated collect directory is collected into the indicated tarfile. 
    190    For directories that are collected incrementally, we'll use the indicated 
    191    digest path and pay attention to the reset digest flag (basically, the reset 
    192    digest flag ignores any existing digest, but a new digest is always 
    193    rewritten). 
    194    
    195    The caller must decide what the collect and archive modes are, since they 
    196    can be on both the collect configuration and the collect directory itself. 
    197    
    198    @param config: Config object. 
    199    @param absolutePath: Absolute path of directory to collect. 
    200    @param collectMode: Collect mode to use. 
    201    @param archiveMode: Archive mode to use. 
    202    @param ignoreFile: Ignore file to use. 
    203    @param linkDepth: Link depth value to use. 
    204    @param dereference: Dereference flag to use. 
    205    @param resetDigest: Reset digest flag. 
    206    @param excludePaths: List of absolute paths to exclude. 
    207    @param excludePatterns: List of patterns to exclude. 
    208    @param recursionLevel: Recursion level (zero for no recursion) 
    209    """ 
    210    if recursionLevel == 0: 
    211       # Collect the actual directory because we're at recursion level 0 
    212       logger.info("Collecting directory [%s]", absolutePath) 
    213       tarfilePath = _getTarfilePath(config, absolutePath, archiveMode) 
    214       digestPath = _getDigestPath(config, absolutePath) 
    215    
    216       backupList = BackupFileList() 
    217       backupList.ignoreFile = ignoreFile 
    218       backupList.excludePaths = excludePaths 
    219       backupList.excludePatterns = excludePatterns 
    220       backupList.addDirContents(absolutePath, linkDepth=linkDepth, dereference=dereference) 
    221    
    222       _executeBackup(config, backupList, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath) 
    223    else: 
    224       # Find all of the immediate subdirectories 
    225       subdirs = FilesystemList() 
    226       subdirs.excludeFiles = True 
    227       subdirs.excludeLinks = True 
    228       subdirs.excludePaths = excludePaths 
    229       subdirs.excludePatterns = excludePatterns 
    230       subdirs.addDirContents(path=absolutePath, recursive=False, addSelf=False) 
    231    
    232       # Back up the subdirectories separately 
    233       for subdir in subdirs: 
    234          _collectDirectory(config, subdir, collectMode, archiveMode, 
    235                            ignoreFile, linkDepth, dereference, resetDigest, 
    236                            excludePaths, excludePatterns, recursionLevel-1) 
    237          excludePaths.append(subdir)  # this directory is already backed up, so exclude it 
    238    
    239       # Back up everything that hasn't previously been backed up 
    240       _collectDirectory(config, absolutePath, collectMode, archiveMode, 
    241                         ignoreFile, linkDepth, dereference, resetDigest, 
    242                         excludePaths, excludePatterns, 0)
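The recursion-level behaviour above can be sketched in isolation: at level 0 a directory is archived as a single unit; at level N each immediate subdirectory is handled at level N-1 (eventually becoming its own archive), with the remainder swept up at level 0. The `archiveRoots` function below is an illustrative stand-in for the tarfile creation in `_collectDirectory`:

```python
# Illustrative sketch of recursion-level subdivision: return the directories
# that would each get their own archive, rather than creating tarfiles.

import os
import tempfile

def archiveRoots(path, recursionLevel):
    """Return the list of directories that would each get their own archive."""
    if recursionLevel == 0:
        return [path]
    roots = []
    for name in sorted(os.listdir(path)):
        subdir = os.path.join(path, name)
        if os.path.isdir(subdir):
            roots.extend(archiveRoots(subdir, recursionLevel - 1))
    roots.append(path)  # everything not covered by a subdirectory archive
    return roots

with tempfile.TemporaryDirectory() as d:
    os.makedirs(os.path.join(d, "a", "x"))
    os.makedirs(os.path.join(d, "b"))
    level0 = archiveRoots(d, 0)
    level1 = [os.path.relpath(p, d) for p in archiveRoots(d, 1)]
    print(level0, level1)
```

At level 0 everything lands in one archive; at level 1 the subdirectories `a` and `b` each become an archive, plus a final one (`.`) for whatever remains at the top level.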
    243    
    244    
    245  ############################ 
    246  # _executeBackup() function 
    247  ############################ 
    248    
    249 -def _executeBackup(config, backupList, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath):
    250    """ 
    251    Execute the backup process for the indicated backup list. 
    252    
    253    This function exists mainly to consolidate functionality between the 
    254    L{_collectFile} and L{_collectDirectory} functions.  Those functions build 
    255    the backup list; this function causes the backup to execute properly and 
    256    also manages usage of the digest file on disk as explained in their 
    257    comments. 
    258    
    259    For collect files, the digest file will always just contain the single file 
    260    that is being backed up.  This might be a little wasteful in terms of the number 
    261    of files that we keep around, but it's consistent and easy to understand. 
    262    
    263    @param config: Config object. 
    264    @param backupList: List to execute backup for 
    265    @param absolutePath: Absolute path of directory or file to collect. 
    266    @param tarfilePath: Path to tarfile that should be created. 
    267    @param collectMode: Collect mode to use. 
    268    @param archiveMode: Archive mode to use. 
    269    @param resetDigest: Reset digest flag. 
    270    @param digestPath: Path to digest file on disk, if needed. 
    271    """ 
    272    if collectMode != 'incr': 
    273       logger.debug("Collect mode is [%s]; no digest will be used.", collectMode) 
    274       if len(backupList) == 1 and backupList[0] == absolutePath:  # special case for individual file 
    275          logger.info("Backing up file [%s] (%s).", absolutePath, displayBytes(backupList.totalSize())) 
    276       else: 
    277          logger.info("Backing up %d files in [%s] (%s).", len(backupList), absolutePath, displayBytes(backupList.totalSize())) 
    278       if len(backupList) > 0: 
    279          backupList.generateTarfile(tarfilePath, archiveMode, True) 
    280          changeOwnership(tarfilePath, config.options.backupUser, config.options.backupGroup) 
    281    else: 
    282       if resetDigest: 
    283          logger.debug("Based on resetDigest flag, digest will be cleared.") 
    284          oldDigest = {} 
    285       else: 
    286          logger.debug("Based on resetDigest flag, digest will be loaded from disk.") 
    287          oldDigest = _loadDigest(digestPath) 
    288       (removed, newDigest) = backupList.removeUnchanged(oldDigest, captureDigest=True) 
    289       logger.debug("Removed %d unchanged files based on digest values.", removed) 
    290       if len(backupList) == 1 and backupList[0] == absolutePath:  # special case for individual file 
    291          logger.info("Backing up file [%s] (%s).", absolutePath, displayBytes(backupList.totalSize())) 
    292       else: 
    293          logger.info("Backing up %d files in [%s] (%s).", len(backupList), absolutePath, displayBytes(backupList.totalSize())) 
    294       if len(backupList) > 0: 
    295          backupList.generateTarfile(tarfilePath, archiveMode, True) 
    296          changeOwnership(tarfilePath, config.options.backupUser, config.options.backupGroup) 
    297       _writeDigest(config, newDigest, digestPath)
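The digest comparison at the heart of the incremental path above can be sketched as a pure function: given the previous digest map and the current one, keep only the files whose digest changed (or are new), and return the updated map to be written back to disk. The `removeUnchanged` function below is an illustrative stand-in, not the `BackupFileList` method:

```python
# Illustrative sketch of digest-based incremental filtering: compare a
# current {path: digest} map against the previous run's map.

def removeUnchanged(current, oldDigest):
    """Return (changed paths, new digest map) from {path: digest} mappings."""
    changed = [path for path, digest in sorted(current.items())
               if oldDigest.get(path) != digest]  # new or modified files
    return changed, dict(current)

old = {"/etc/passwd": "aaa", "/etc/hosts": "bbb"}
now = {"/etc/passwd": "aaa", "/etc/hosts": "ccc", "/etc/group": "ddd"}
changed, newDigest = removeUnchanged(now, old)
print(changed)
```

Only the modified `/etc/hosts` and the new `/etc/group` would be backed up; the unchanged `/etc/passwd` is dropped from the list, while the new digest map records all three for the next run.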
    298    
    299    
    300  ######################### 
    301  # _loadDigest() function 
    302  ######################### 
    303    
    304 -def _loadDigest(digestPath):
    305    """ 
    306    Loads the indicated digest path from disk into a dictionary. 
    307    
    308    If we can't load the digest successfully (either because it doesn't exist or 
    309    for some other reason), then an empty dictionary will be returned - but the 
    310    condition will be logged. 
    311    
    312    @param digestPath: Path to the digest file on disk. 
    313    
    314    @return: Dictionary representing contents of digest path. 
    315    """ 
    316    if not os.path.isfile(digestPath): 
    317       digest = {} 
    318       logger.debug("Digest [%s] does not exist on disk.", digestPath) 
    319    else: 
    320       try: 
    321          with open(digestPath, "rb") as f: 
    322             digest = pickle.load(f, fix_imports=True)  # be compatible with Python 2 
    323          logger.debug("Loaded digest [%s] from disk: %d entries.", digestPath, len(digest)) 
    324       except Exception as e: 
    325          digest = {} 
    326          logger.error("Failed loading digest [%s] from disk: %s", digestPath, e) 
    327    return digest
    328    
    329    
    330  ########################## 
    331  # _writeDigest() function 
    332  ########################## 
    333    
    334 -def _writeDigest(config, digest, digestPath):
    335    """ 
    336    Writes the digest dictionary to the indicated digest path on disk. 
    337    
    338    If we can't write the digest successfully for any reason, we'll log the 
    339    condition but won't throw an exception. 
    340    
    341    @param config: Config object. 
    342    @param digest: Digest dictionary to write to disk. 
    343    @param digestPath: Path to the digest file on disk. 
    344    """ 
    345    try: 
    346       with open(digestPath, "wb") as f: 
    347          pickle.dump(digest, f, 0, fix_imports=True)  # be compatible with Python 2 
    348       changeOwnership(digestPath, config.options.backupUser, config.options.backupGroup) 
    349       logger.debug("Wrote new digest [%s] to disk: %d entries.", digestPath, len(digest)) 
    350    except Exception as e: 
    351       logger.error("Failed to write digest [%s] to disk: %s", digestPath, e)
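The digest persistence in `_loadDigest`/`_writeDigest` boils down to a pickle round-trip: protocol 0 with `fix_imports=True` keeps the file readable by Python 2, and binary modes are used in both directions. A minimal round-trip sketch (the digest value is a made-up example):

```python
# Round-trip sketch: write a digest dictionary with pickle protocol 0 and
# fix_imports=True (Python 2 compatible), then read it back.

import os
import pickle
import tempfile

digest = {"/home/user/file.txt": "2d01d5d9c24034d54fe4fba0ede5182d"}  # example entry

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "digest.pickle")
    with open(path, "wb") as f:
        pickle.dump(digest, f, 0, fix_imports=True)
    with open(path, "rb") as f:
        loaded = pickle.load(f, fix_imports=True)
print(loaded == digest)
```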
    352    
    353    
    354  ######################################################################## 
    355  # Private attribute "getter" functions 
    356  ######################################################################## 
    357    
    358  ############################ 
    359  # getCollectMode() function 
    360  ############################ 
    361    
    362 -def _getCollectMode(config, item):
    363    """ 
    364    Gets the collect mode that should be used for a collect directory or file. 
    365    If possible, use the one on the file or directory, otherwise take from collect section. 
    366    @param config: Config object. 
    367    @param item: C{CollectFile} or C{CollectDir} object 
    368    @return: Collect mode to use. 
    369    """ 
    370    if item.collectMode is None: 
    371       collectMode = config.collect.collectMode 
    372    else: 
    373       collectMode = item.collectMode 
    374    logger.debug("Collect mode is [%s]", collectMode) 
    375    return collectMode
    376    
    377    
    378  ############################# 
    379  # _getArchiveMode() function 
    380  ############################# 
    381    
    382 -def _getArchiveMode(config, item):
    383    """ 
    384    Gets the archive mode that should be used for a collect directory or file. 
    385    If possible, use the one on the file or directory, otherwise take from collect section. 
    386    @param config: Config object. 
    387    @param item: C{CollectFile} or C{CollectDir} object 
    388    @return: Archive mode to use. 
    389    """ 
    390    if item.archiveMode is None: 
    391       archiveMode = config.collect.archiveMode 
    392    else: 
    393       archiveMode = item.archiveMode 
    394    logger.debug("Archive mode is [%s]", archiveMode) 
    395    return archiveMode
    396    
    397    
    398  ############################ 
    399  # _getIgnoreFile() function 
    400  ############################ 
    401    
    402 -def _getIgnoreFile(config, item):
    403    """ 
    404    Gets the ignore file that should be used for a collect directory or file. 
    405    If possible, use the one on the file or directory, otherwise take from collect section. 
    406    @param config: Config object. 
    407    @param item: C{CollectFile} or C{CollectDir} object 
    408    @return: Ignore file to use. 
    409    """ 
    410    if item.ignoreFile is None: 
    411       ignoreFile = config.collect.ignoreFile 
    412    else: 
    413       ignoreFile = item.ignoreFile 
    414    logger.debug("Ignore file is [%s]", ignoreFile) 
    415    return ignoreFile
    416    
    417    
    418  ############################ 
    419  # _getLinkDepth() function 
    420  ############################ 
    421    
    422 -def _getLinkDepth(item):
    423    """ 
    424    Gets the link depth that should be used for a collect directory. 
    425    If possible, use the one on the directory, otherwise set a value of 0 (zero). 
    426    @param item: C{CollectDir} object 
    427    @return: Link depth to use. 
    428    """ 
    429    if item.linkDepth is None: 
    430       linkDepth = 0 
    431    else: 
    432       linkDepth = item.linkDepth 
    433    logger.debug("Link depth is [%d]", linkDepth) 
    434    return linkDepth
    435    
    436    
    437  ############################ 
    438  # _getDereference() function 
    439  ############################ 
    440    
    441 -def _getDereference(item):
    442 """ 443 Gets the dereference flag that should be used for a collect directory. 444 If possible, use the one on the directory, otherwise set a value of False. 445 @param item: C{CollectDir} object 446 @return: Dereference flag to use. 447 """ 448 if item.dereference is None: 449 dereference = False 450 else: 451 dereference = item.dereference 452 logger.debug("Dereference flag is [%s]", dereference) 453 return dereference
################################
# _getRecursionLevel() function
################################

def _getRecursionLevel(item):
    """
    Gets the recursion level that should be used for a collect directory.
    If possible, use the one on the directory, otherwise set a value of 0 (zero).
    @param item: C{CollectDir} object
    @return: Recursion level to use.
    """
    if item.recursionLevel is None:
        recursionLevel = 0
    else:
        recursionLevel = item.recursionLevel
    logger.debug("Recursion level is [%d]", recursionLevel)
    return recursionLevel
############################
# _getDigestPath() function
############################

def _getDigestPath(config, absolutePath):
    """
    Gets the digest path associated with a collect directory or file.
    @param config: Config object.
    @param absolutePath: Absolute path to generate digest for
    @return: Absolute path to the digest associated with the collect directory or file.
    """
    normalized = buildNormalizedPath(absolutePath)
    filename = "%s.%s" % (normalized, DIGEST_EXTENSION)
    digestPath = os.path.join(config.options.workingDir, filename)
    logger.debug("Digest path is [%s]", digestPath)
    return digestPath
#############################
# _getTarfilePath() function
#############################

def _getTarfilePath(config, absolutePath, archiveMode):
    """
    Gets the tarfile path (including correct extension) associated with a collect directory.
    @param config: Config object.
    @param absolutePath: Absolute path to generate tarfile for
    @param archiveMode: Archive mode to use for this tarfile.
    @return: Absolute path to the tarfile associated with the collect directory.
    @raise ValueError: If the archive mode is not recognized.
    """
    if archiveMode == 'tar':
        extension = "tar"
    elif archiveMode == 'targz':
        extension = "tar.gz"
    elif archiveMode == 'tarbz2':
        extension = "tar.bz2"
    else:
        raise ValueError("Unknown archive mode [%s]." % archiveMode)
    normalized = buildNormalizedPath(absolutePath)
    filename = "%s.%s" % (normalized, extension)
    tarfilePath = os.path.join(config.collect.targetDir, filename)
    logger.debug("Tarfile path is [%s]", tarfilePath)
    return tarfilePath
############################
# _getExclusions() function
############################

def _getExclusions(config, collectDir):
    """
    Gets exclusions (file and patterns) associated with a collect directory.

    The returned files value is a list of absolute paths to be excluded from the
    backup for a given directory.  It is derived from the collect configuration
    absolute exclude paths and the collect directory's absolute and relative
    exclude paths.

    The returned patterns value is a list of patterns to be excluded from the
    backup for a given directory.  It is derived from the list of patterns from
    the collect configuration and from the collect directory itself.

    @param config: Config object.
    @param collectDir: Collect directory object.

    @return: Tuple (files, patterns) indicating what to exclude.
    """
    paths = []
    if config.collect.absoluteExcludePaths is not None:
        paths.extend(config.collect.absoluteExcludePaths)
    if collectDir.absoluteExcludePaths is not None:
        paths.extend(collectDir.absoluteExcludePaths)
    if collectDir.relativeExcludePaths is not None:
        for relativePath in collectDir.relativeExcludePaths:
            paths.append(os.path.join(collectDir.absolutePath, relativePath))
    patterns = []
    if config.collect.excludePatterns is not None:
        patterns.extend(config.collect.excludePatterns)
    if collectDir.excludePatterns is not None:
        patterns.extend(collectDir.excludePatterns)
    logger.debug("Exclude paths: %s", paths)
    logger.debug("Exclude patterns: %s", patterns)
    return (paths, patterns)

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.writers.cdwriter-module.html0000664000175000017500000002524612657665544030154 0ustar pronovicpronovic00000000000000 CedarBackup3.writers.cdwriter
    Package CedarBackup3 :: Package writers :: Module cdwriter

    Module cdwriter

    source code

    Provides functionality related to CD writer devices.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Classes
      MediaDefinition
    Class encapsulating information about CD media definitions.
      MediaCapacity
    Class encapsulating information about CD media capacity.
      CdWriter
    Class representing a device that knows how to write CD media.
      _ImageProperties
Simple value object to hold image properties for CdWriter.
Variables
      MEDIA_CDRW_74 = 1
    Constant representing 74-minute CD-RW media.
      MEDIA_CDR_74 = 2
    Constant representing 74-minute CD-R media.
      MEDIA_CDRW_80 = 3
    Constant representing 80-minute CD-RW media.
      MEDIA_CDR_80 = 4
    Constant representing 80-minute CD-R media.
      logger = logging.getLogger("CedarBackup3.log.writers.cdwriter")
      CDRECORD_COMMAND = ['cdrecord']
      EJECT_COMMAND = ['eject']
      MKISOFS_COMMAND = ['mkisofs']
      __package__ = 'CedarBackup3.writers'
    CedarBackup3-3.1.6/doc/interface/CedarBackup3.tools.amazons3.Options-class.html0000664000175000017500000024757512657665545031013 0ustar pronovicpronovic00000000000000 CedarBackup3.tools.amazons3.Options
    Package CedarBackup3 :: Package tools :: Module amazons3 :: Class Options

    Class Options

    source code

    object --+
             |
            Options
    

    Class representing command-line options for the cback3-amazons3-sync script.

    The Options class is a Python object representation of the command-line options of the cback3-amazons3-sync script.

The object representation is two-way: a command line string or a list of command line arguments can be used to create an Options object, and then changes to the object can be propagated back to a list of command-line arguments or to a command-line string. An Options object can even be created from scratch programmatically (if you have a need for that).

    There are two main levels of validation in the Options class. The first is field-level validation. Field-level validation comes into play when a given field in an object is assigned to or updated. We use Python's property functionality to enforce specific validations on field values, and in some places we even use customized list classes to enforce validations on list members. You should expect to catch a ValueError exception when making assignments to fields if you are programmatically filling an object.

    The second level of validation is post-completion validation. Certain validations don't make sense until an object representation of options is fully "complete". We don't want these validations to apply all of the time, because it would make building up a valid object from scratch a real pain. For instance, we might have to do things in the right order to keep from throwing exceptions, etc.

All of these post-completion validations are encapsulated in the Options.validate method. This method can be called at any time by a client, and will always be called immediately after creating an Options object from a command line and before exporting an Options object back to a command line. This way, we get acceptable ease-of-use but we also don't accept or emit invalid command lines.


    Note: Lists within this class are "unordered" for equality comparisons.
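The field-level validation described above follows the standard Python property pattern; this is a minimal sketch of the idea, not the actual Options implementation:

```python
class SyncOptions:
    """Minimal sketch of property-based field validation; not the real Options class."""

    def __init__(self):
        self._verbose = False

    def _setVerbose(self, value):
        # As documented for the flag setters: no validation,
        # but the value is normalized to True or False.
        self._verbose = bool(value)

    def _getVerbose(self):
        return self._verbose

    verbose = property(_getVerbose, _setVerbose, None, "Command-line verbose flag.")

opts = SyncOptions()
opts.verbose = 1        # any truthy value is normalized to a strict boolean
print(opts.verbose)     # -> True
```

Because assignment routes through the setter, every field access enforces its rules at the moment of assignment rather than at validate() time.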

Instance Methods
     
    __init__(self, argumentList=None, argumentString=None, validate=True)
    Initializes an options object.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    __eq__(self, other)
Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    _setHelp(self, value)
    Property target used to set the help flag.
    source code
     
    _getHelp(self)
    Property target used to get the help flag.
    source code
     
    _setVersion(self, value)
    Property target used to set the version flag.
    source code
     
    _getVersion(self)
    Property target used to get the version flag.
    source code
     
    _setVerbose(self, value)
    Property target used to set the verbose flag.
    source code
     
    _getVerbose(self)
    Property target used to get the verbose flag.
    source code
     
    _setQuiet(self, value)
    Property target used to set the quiet flag.
    source code
     
    _getQuiet(self)
    Property target used to get the quiet flag.
    source code
     
    _setLogfile(self, value)
    Property target used to set the logfile parameter.
    source code
     
    _getLogfile(self)
    Property target used to get the logfile parameter.
    source code
     
    _setOwner(self, value)
    Property target used to set the owner parameter.
    source code
     
    _getOwner(self)
    Property target used to get the owner parameter.
    source code
     
    _setMode(self, value)
    Property target used to set the mode parameter.
    source code
     
    _getMode(self)
    Property target used to get the mode parameter.
    source code
     
    _setOutput(self, value)
    Property target used to set the output flag.
    source code
     
    _getOutput(self)
    Property target used to get the output flag.
    source code
     
    _setDebug(self, value)
    Property target used to set the debug flag.
    source code
     
    _getDebug(self)
    Property target used to get the debug flag.
    source code
     
    _setStacktrace(self, value)
    Property target used to set the stacktrace flag.
    source code
     
    _getStacktrace(self)
    Property target used to get the stacktrace flag.
    source code
     
    _setDiagnostics(self, value)
    Property target used to set the diagnostics flag.
    source code
     
    _getDiagnostics(self)
    Property target used to get the diagnostics flag.
    source code
     
    _setVerifyOnly(self, value)
    Property target used to set the verifyOnly flag.
    source code
     
    _getVerifyOnly(self)
    Property target used to get the verifyOnly flag.
    source code
     
    _setIgnoreWarnings(self, value)
    Property target used to set the ignoreWarnings flag.
    source code
     
    _getIgnoreWarnings(self)
    Property target used to get the ignoreWarnings flag.
    source code
     
    _setSourceDir(self, value)
    Property target used to set the sourceDir parameter.
    source code
     
    _getSourceDir(self)
    Property target used to get the sourceDir parameter.
    source code
     
    _setS3BucketUrl(self, value)
    Property target used to set the s3BucketUrl parameter.
    source code
     
    _getS3BucketUrl(self)
    Property target used to get the s3BucketUrl parameter.
    source code
     
    validate(self)
    Validates command-line options represented by the object.
    source code
     
    buildArgumentList(self, validate=True)
    Extracts options into a list of command line arguments.
    source code
     
    buildArgumentString(self, validate=True)
    Extracts options into a string of command-line arguments.
    source code
     
    _parseArgumentList(self, argumentList)
    Internal method to parse a list of command-line arguments.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Class Variables
      help = property(_getHelp, _setHelp, None, "Command-line help (...
      version = property(_getVersion, _setVersion, None, "Command-li...
      verbose = property(_getVerbose, _setVerbose, None, "Command-li...
      quiet = property(_getQuiet, _setQuiet, None, "Command-line qui...
      logfile = property(_getLogfile, _setLogfile, None, "Command-li...
      owner = property(_getOwner, _setOwner, None, "Command-line own...
      mode = property(_getMode, _setMode, None, "Command-line mode (...
      output = property(_getOutput, _setOutput, None, "Command-line ...
      debug = property(_getDebug, _setDebug, None, "Command-line deb...
      stacktrace = property(_getStacktrace, _setStacktrace, None, "C...
      diagnostics = property(_getDiagnostics, _setDiagnostics, None,...
      verifyOnly = property(_getVerifyOnly, _setVerifyOnly, None, "C...
      ignoreWarnings = property(_getIgnoreWarnings, _setIgnoreWarnin...
      sourceDir = property(_getSourceDir, _setSourceDir, None, "Comm...
      s3BucketUrl = property(_getS3BucketUrl, _setS3BucketUrl, None,...
Properties

    Inherited from object: __class__

Method Details

    __init__(self, argumentList=None, argumentString=None, validate=True)
    (Constructor)

    source code 

    Initializes an options object.

    If you initialize the object without passing either argumentList or argumentString, the object will be empty and will be invalid until it is filled in properly.

    No reference to the original arguments is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded.

    The argument list is assumed to be a list of arguments, not including the name of the command, something like sys.argv[1:]. If you pass sys.argv instead, things are not going to work.

    The argument string will be parsed into an argument list by the util.splitCommandLine function (see the documentation for that function for some important notes about its limitations). There is an assumption that the resulting list will be equivalent to sys.argv[1:], just like argumentList.

    Unless the validate argument is False, the Options.validate method will be called (with its default arguments) after successfully parsing any passed-in command line. This validation ensures that appropriate actions, etc. have been specified. Keep in mind that even if validate is False, it might not be possible to parse the passed-in command line, so an exception might still be raised.

    Parameters:
• argumentList (List of arguments, i.e. sys.argv[1:]) - Command line for a program.
    • argumentString (String, i.e. "cback3-amazons3-sync --verbose stage store") - Command line for a program.
    • validate (Boolean true/false.) - Validate the command line after parsing it.
    Raises:
    • getopt.GetoptError - If the command-line arguments could not be parsed.
    • ValueError - If the command-line arguments are invalid.
    Overrides: object.__init__
    Notes:
    • The command line format is specified by the _usage function. Call _usage to see a usage statement for the cback3-amazons3-sync script.
    • It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to read in invalid command line arguments.

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setHelp(self, value)

    source code 

    Property target used to set the help flag. No validations, but we normalize the value to True or False.

    _setVersion(self, value)

    source code 

    Property target used to set the version flag. No validations, but we normalize the value to True or False.

    _setVerbose(self, value)

    source code 

    Property target used to set the verbose flag. No validations, but we normalize the value to True or False.

    _setQuiet(self, value)

    source code 

    Property target used to set the quiet flag. No validations, but we normalize the value to True or False.

    _setLogfile(self, value)

    source code 

    Property target used to set the logfile parameter.

    Raises:
    • ValueError - If the value cannot be encoded properly.

    _setOwner(self, value)

    source code 

    Property target used to set the owner parameter. If not None, the owner must be a (user,group) tuple or list. Strings (and inherited children of strings) are explicitly disallowed. The value will be normalized to a tuple.

    Raises:
    • ValueError - If the value is not valid.
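As a sketch, the normalization rules described here might look like the following standalone function (hypothetical, not the actual property target):

```python
def normalize_owner(value):
    # Mirrors the documented rules: None passes through, strings are rejected
    # outright, and a (user, group) pair is normalized to a tuple.
    if value is None:
        return None
    if isinstance(value, str):
        raise ValueError("Owner must be a (user, group) tuple or list, not a string.")
    if len(value) != 2:
        raise ValueError("Owner must contain exactly two elements: (user, group).")
    return tuple(value)

print(normalize_owner(["backup", "backup"]))   # -> ('backup', 'backup')
```

Rejecting strings explicitly matters because a string is itself a two-way sequence and would otherwise slip past a naive length check when it happens to have two characters.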

    _getOwner(self)

    source code 

    Property target used to get the owner parameter. The parameter is a tuple of (user, group).

    _setOutput(self, value)

    source code 

    Property target used to set the output flag. No validations, but we normalize the value to True or False.

    _setDebug(self, value)

    source code 

    Property target used to set the debug flag. No validations, but we normalize the value to True or False.

    _setStacktrace(self, value)

    source code 

    Property target used to set the stacktrace flag. No validations, but we normalize the value to True or False.

    _setDiagnostics(self, value)

    source code 

    Property target used to set the diagnostics flag. No validations, but we normalize the value to True or False.

    _setVerifyOnly(self, value)

    source code 

    Property target used to set the verifyOnly flag. No validations, but we normalize the value to True or False.

    _setIgnoreWarnings(self, value)

    source code 

    Property target used to set the ignoreWarnings flag. No validations, but we normalize the value to True or False.

    validate(self)

    source code 

    Validates command-line options represented by the object.

    Unless --help or --version are supplied, at least one action must be specified. Other validations (as for allowed values for particular options) will be taken care of at assignment time by the properties functionality.

    Raises:
    • ValueError - If one of the validations fails.

    Note: The command line format is specified by the _usage function. Call _usage to see a usage statement for the cback3-amazons3-sync script.

    buildArgumentList(self, validate=True)

    source code 

    Extracts options into a list of command line arguments.

    The original order of the various arguments (if, indeed, the object was initialized with a command-line) is not preserved in this generated argument list. Besides that, the argument list is normalized to use the long option names (i.e. --version rather than -V). The resulting list will be suitable for passing back to the constructor in the argumentList parameter. Unlike buildArgumentString, string arguments are not quoted here, because there is no need for it.

    Unless the validate parameter is False, the Options.validate method will be called (with its default arguments) against the options before extracting the command line. If the options are not valid, then an argument list will not be extracted.

    Parameters:
    • validate (Boolean true/false.) - Validate the options before extracting the command line.
    Returns:
    List representation of command-line arguments.
    Raises:
    • ValueError - If options within the object are invalid.

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to extract an invalid command line.

    buildArgumentString(self, validate=True)

    source code 

    Extracts options into a string of command-line arguments.

    The original order of the various arguments (if, indeed, the object was initialized with a command-line) is not preserved in this generated argument string. Besides that, the argument string is normalized to use the long option names (i.e. --version rather than -V) and to quote all string arguments with double quotes ("). The resulting string will be suitable for passing back to the constructor in the argumentString parameter.

    Unless the validate parameter is False, the Options.validate method will be called (with its default arguments) against the options before extracting the command line. If the options are not valid, then an argument string will not be extracted.

    Parameters:
    • validate (Boolean true/false.) - Validate the options before extracting the command line.
    Returns:
    String representation of command-line arguments.
    Raises:
    • ValueError - If options within the object are invalid.

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to extract an invalid command line.

    _parseArgumentList(self, argumentList)

    source code 

    Internal method to parse a list of command-line arguments.

    Most of the validation we do here has to do with whether the arguments can be parsed and whether any values which exist are valid. We don't do any validation as to whether required elements exist or whether elements exist in the proper combination (instead, that's the job of the validate method).

    For any of the options which supply parameters, if the option is duplicated with long and short switches (i.e. -l and a --logfile) then the long switch is used. If the same option is duplicated with the same switch (long or short), then the last entry on the command line is used.

    Parameters:
    • argumentList (List of arguments to a command, i.e. sys.argv[1:]) - List of arguments to a command.
    Raises:
    • ValueError - If the argument list cannot be successfully parsed.
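The last-entry-wins rule can be demonstrated with the standard getopt module, which this parsing presumably builds on (the option table below is assumed for illustration):

```python
import getopt

# Duplicated option: short switch first, long switch second.
args = ["-l", "first.log", "--logfile", "second.log"]
opts, remaining = getopt.getopt(args, "l:", ["logfile="])
# opts -> [('-l', 'first.log'), ('--logfile', 'second.log')]

logfile = None
for switch, value in opts:
    if switch in ("-l", "--logfile"):
        logfile = value   # later occurrences overwrite earlier ones

print(logfile)  # -> second.log
```

getopt preserves command-line order in its result list, so a simple left-to-right loop naturally keeps the last entry for a duplicated switch.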

Class Variable Details

    help

    Value:
    property(_getHelp, _setHelp, None, "Command-line help (C{-h,--help}) f\
    lag.")
    

    version

    Value:
    property(_getVersion, _setVersion, None, "Command-line version (C{-V,-\
    -version}) flag.")
    

    verbose

    Value:
    property(_getVerbose, _setVerbose, None, "Command-line verbose (C{-b,-\
    -verbose}) flag.")
    

    quiet

    Value:
    property(_getQuiet, _setQuiet, None, "Command-line quiet (C{-q,--quiet\
    }) flag.")
    

    logfile

    Value:
    property(_getLogfile, _setLogfile, None, "Command-line logfile (C{-l,-\
    -logfile}) parameter.")
    

    owner

    Value:
    property(_getOwner, _setOwner, None, "Command-line owner (C{-o,--owner\
    }) parameter, as tuple C{(user,group)}.")
    

    mode

    Value:
    property(_getMode, _setMode, None, "Command-line mode (C{-m,--mode}) p\
    arameter.")
    

    output

    Value:
    property(_getOutput, _setOutput, None, "Command-line output (C{-O,--ou\
    tput}) flag.")
    

    debug

    Value:
    property(_getDebug, _setDebug, None, "Command-line debug (C{-d,--debug\
    }) flag.")
    

    stacktrace

    Value:
    property(_getStacktrace, _setStacktrace, None, "Command-line stacktrac\
    e (C{-s,--stack}) flag.")
    

    diagnostics

    Value:
    property(_getDiagnostics, _setDiagnostics, None, "Command-line diagnos\
    tics (C{-D,--diagnostics}) flag.")
    

    verifyOnly

    Value:
    property(_getVerifyOnly, _setVerifyOnly, None, "Command-line verifyOnl\
    y (C{-v,--verifyOnly}) flag.")
    

    ignoreWarnings

    Value:
    property(_getIgnoreWarnings, _setIgnoreWarnings, None, "Command-line i\
    gnoreWarnings (C{-w,--ignoreWarnings}) flag.")
    

    sourceDir

    Value:
    property(_getSourceDir, _setSourceDir, None, "Command-line sourceDir, \
    source of sync.")
    

    s3BucketUrl

    Value:
    property(_getS3BucketUrl, _setS3BucketUrl, None, "Command-line s3Bucke\
    tUrl, target of sync.")
    

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.util-module.html0000664000175000017500000024344112657665544025607 0ustar pronovicpronovic00000000000000 CedarBackup3.util
    Package CedarBackup3 :: Module util

    Module util

    source code

    Provides general-purpose utilities.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Classes
      AbsolutePathList
    Class representing a list of absolute paths.
      ObjectTypeList
    Class representing a list containing only objects with a certain type.
      RestrictedContentList
    Class representing a list containing only object with certain values.
      RegexMatchList
    Class representing a list containing only strings that match a regular expression.
      RegexList
    Class representing a list of valid regular expression strings.
      _Vertex
    Represents a vertex (or node) in a directed graph.
      DirectedGraph
    Represents a directed graph.
      PathResolverSingleton
    Singleton used for resolving executable paths.
      UnorderedList
    Class representing an "unordered list".
      Pipe
    Specialized pipe class for use by executeCommand.
      Diagnostics
    Class holding runtime diagnostic information.
Functions
     
    sortDict(d)
    Returns the keys of the dictionary sorted by value.
    source code
     
    convertSize(size, fromUnit, toUnit)
    Converts a size in one unit to a size in another unit.
    source code
     
    getUidGid(user, group)
    Get the uid/gid associated with a user/group pair
    source code
     
    changeOwnership(path, user, group)
    Changes ownership of path to match the user and group.
    source code
     
    splitCommandLine(commandLine)
    Splits a command line string into a list of arguments.
    source code
     
    resolveCommand(command)
    Resolves the real path to a command through the path resolver mechanism.
    source code
     
    executeCommand(command, args, returnOutput=False, ignoreStderr=False, doNotLog=False, outputFile=None)
    Executes a shell command, hopefully in a safe way.
    source code
     
    calculateFileAge(path)
    Calculates the age (in days) of a file.
    source code
     
    encodePath(path)
Safely encodes a filesystem path as a Unicode string, converting bytes to filesystem encoding if necessary.
    source code
     
    nullDevice()
    Attempts to portably return the null device on this system.
    source code
     
    deriveDayOfWeek(dayName)
    Converts English day name to numeric day of week as from time.localtime.
    source code
     
    isStartOfWeek(startingDay)
    Indicates whether "today" is the backup starting day per configuration.
    source code
     
    buildNormalizedPath(path)
    Returns a "normalized" path based on a path name.
    source code
     
    removeKeys(d, keys)
    Removes all of the keys from the dictionary.
    source code
     
    displayBytes(bytes, digits=2)
    Format a byte quantity so it can be sensibly displayed.
    source code
     
    getFunctionReference(module, function)
    Gets a reference to a named function.
    source code
     
    isRunningAsRoot()
    Indicates whether the program is running as the root user.
    source code
     
    mount(devicePath, mountPoint, fsType)
    Mounts the indicated device at the indicated mount point.
    source code
     
    unmount(mountPoint, removeAfter=False, attempts=1, waitSeconds=0)
    Unmounts whatever device is mounted at the indicated mount point.
    source code
     
    deviceMounted(devicePath)
    Indicates whether a specific filesystem device is currently mounted.
    source code
     
    sanitizeEnvironment()
    Sanitizes the operating system environment.
    source code
     
    dereferenceLink(path, absolute=True)
Dereference a soft link, optionally normalizing it to an absolute path.
    source code
     
    checkUnique(prefix, values)
    Checks that all values are unique.
    source code
     
    parseCommaSeparatedString(commaString)
    Parses a list of values out of a comma-separated string.
    source code
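As an illustration of how a helper like displayBytes might work, here is a hypothetical sketch; the real function's unit thresholds and formatting may differ:

```python
def display_bytes(byte_count, digits=2):
    # Hypothetical sketch: pick the largest unit that keeps the value >= 1.
    for factor, unit in ((1024.0 ** 3, "GB"), (1024.0 ** 2, "MB"), (1024.0, "kB")):
        if byte_count >= factor:
            return "%.*f %s" % (digits, byte_count / factor, unit)
    return "%.*f bytes" % (digits, float(byte_count))

print(display_bytes(2684354560))  # -> 2.50 GB
print(display_bytes(512))         # -> 512.00 bytes
```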
Variables
      ISO_SECTOR_SIZE = 2048.0
    Size of an ISO image sector, in bytes.
      BYTES_PER_SECTOR = 2048.0
    Number of bytes (B) per ISO sector.
      BYTES_PER_KBYTE = 1024.0
    Number of bytes (B) per kilobyte (kB).
      BYTES_PER_MBYTE = 1048576.0
    Number of bytes (B) per megabyte (MB).
      BYTES_PER_GBYTE = 1073741824.0
Number of bytes (B) per gigabyte (GB).
      KBYTES_PER_MBYTE = 1024.0
    Number of kilobytes (kB) per megabyte (MB).
      MBYTES_PER_GBYTE = 1024.0
    Number of megabytes (MB) per gigabyte (GB).
      SECONDS_PER_MINUTE = 60.0
    Number of seconds per minute.
      MINUTES_PER_HOUR = 60.0
    Number of minutes per hour.
      HOURS_PER_DAY = 24.0
    Number of hours per day.
      SECONDS_PER_DAY = 86400.0
    Number of seconds per day.
      UNIT_BYTES = 0
    Constant representing the byte (B) unit for conversion.
      UNIT_KBYTES = 1
    Constant representing the kilobyte (kB) unit for conversion.
      UNIT_MBYTES = 2
    Constant representing the megabyte (MB) unit for conversion.
      UNIT_GBYTES = 4
    Constant representing the gigabyte (GB) unit for conversion.
      UNIT_SECTORS = 3
    Constant representing the ISO sector unit for conversion.
      _UID_GID_AVAILABLE = True
      logger = logging.getLogger("CedarBackup3.log.util")
      outputLogger = logging.getLogger("CedarBackup3.output")
      MTAB_FILE = '/etc/mtab'
      MOUNT_COMMAND = ['mount']
      UMOUNT_COMMAND = ['umount']
      DEFAULT_LANGUAGE = 'C'
      LANG_VAR = 'LANG'
      LOCALE_VARS = ['LC_ADDRESS', 'LC_ALL', 'LC_COLLATE', 'LC_CTYPE...
      __package__ = 'CedarBackup3'
Function Details

    sortDict(d)

    source code 

    Returns the keys of the dictionary sorted by value.

    Parameters:
    • d - Dictionary to operate on
    Returns:
    List of dictionary keys sorted in order by dictionary value.

    convertSize(size, fromUnit, toUnit)

    source code 

    Converts a size in one unit to a size in another unit.

    This is just a convenience function so that the functionality can be implemented in just one place. Internally, we convert values to bytes and then to the final unit.

    The available units are:

    • UNIT_BYTES - Bytes
    • UNIT_KBYTES - Kilobytes, where 1 kB = 1024 B
    • UNIT_MBYTES - Megabytes, where 1 MB = 1024 kB
    • UNIT_GBYTES - Gigabytes, where 1 GB = 1024 MB
    • UNIT_SECTORS - Sectors, where 1 sector = 2048 B
    Parameters:
    • size (Integer or float value in units of fromUnit) - Size to convert
    • fromUnit (One of the units listed above) - Unit to convert from
    • toUnit (One of the units listed above) - Unit to convert to
    Returns:
    Number converted to new unit, as a float.
    Raises:
    • ValueError - If one of the units is invalid.
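Since conversion goes through bytes as an intermediate value, the function reduces to a table lookup; a minimal reimplementation sketch using the documented unit sizes (string keys here stand in for the real UNIT_* constants):

```python
BYTES_PER_UNIT = {
    "bytes": 1.0,
    "kbytes": 1024.0,
    "mbytes": 1024.0 * 1024.0,
    "gbytes": 1024.0 * 1024.0 * 1024.0,
    "sectors": 2048.0,            # 1 ISO sector = 2048 B
}

def convert_size(size, from_unit, to_unit):
    # Convert to bytes, then to the target unit, as documented above.
    if from_unit not in BYTES_PER_UNIT or to_unit not in BYTES_PER_UNIT:
        raise ValueError("Invalid unit.")
    return size * BYTES_PER_UNIT[from_unit] / BYTES_PER_UNIT[to_unit]

print(convert_size(1, "gbytes", "mbytes"))     # -> 1024.0
print(convert_size(4096, "bytes", "sectors"))  # -> 2.0
```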

    getUidGid(user, group)

    source code 

    Get the uid/gid associated with a user/group pair

    This is a no-op if user/group functionality is not available on the platform.

    Parameters:
    • user (User name as a string) - User name
    • group (Group name as a string) - Group name
    Returns:
    Tuple (uid, gid) matching passed-in user and group.
    Raises:
    • ValueError - If the ownership user/group values are invalid
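On POSIX platforms, the lookup the docs describe maps naturally onto the standard pwd and grp modules; this sketch shows that pattern (the real function is a no-op where those modules are unavailable):

```python
import pwd, grp

def getUidGid(user, group):
    # POSIX-only sketch: resolve names via pwd/grp, translating the
    # KeyError from a bad name into the documented ValueError.
    try:
        return (pwd.getpwnam(user).pw_uid, grp.getgrnam(group).gr_gid)
    except KeyError as e:
        raise ValueError("Invalid user or group: %s" % e)

print(getUidGid("root", "root"))  # (0, 0)
```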

    changeOwnership(path, user, group)

    source code 

    Changes ownership of path to match the user and group.

    This is a no-op if user/group functionality is not available on the platform, or if either the passed-in user or group is None. Further, we won't even try to do it unless running as root, since it's unlikely to work.

    Parameters:
    • path - Path whose ownership to change.
    • user - User which owns file.
    • group - Group which owns file.

    splitCommandLine(commandLine)

    source code 

    Splits a command line string into a list of arguments.

    Unfortunately, there is no "standard" way to parse a command line string, and it's actually not an easy problem to solve portably (essentially, we have to emulate the shell argument-processing logic). This code only respects double quotes (") for grouping arguments, not single quotes ('). Make sure you take this into account when building your command line.

    Incidentally, I found this particular parsing method while digging around in Google Groups, and I tweaked it for my own use.

    Parameters:
    • commandLine (String, i.e. "cback3 --verbose stage store") - Command line string
    Returns:
    List of arguments, suitable for passing to popen2.
    Raises:
    • ValueError - If the command line is None.
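The double-quote-only grouping the docs describe can be sketched with a simple regular expression (a simplified illustration; the library's actual parsing method may differ in edge cases):

```python
import re

def splitCommandLine(commandLine):
    # Group arguments on double quotes only, as documented; single
    # quotes are treated as ordinary characters.
    if commandLine is None:
        raise ValueError("Command line is None.")
    fields = re.findall(r'"[^"]*"|\S+', commandLine)
    return [field.strip('"') for field in fields]

print(splitCommandLine('cback3 --verbose stage store'))
# ['cback3', '--verbose', 'stage', 'store']
print(splitCommandLine('mkisofs -V "My Label" image.iso'))
# ['mkisofs', '-V', 'My Label', 'image.iso']
```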

    resolveCommand(command)

    source code 

    Resolves the real path to a command through the path resolver mechanism.

    Both extensions and standard Cedar Backup functionality need a way to resolve the "real" location of various executables. Normally, they assume that these executables are on the system path, but some callers need to specify an alternate location.

    Ideally, we want to handle this configuration in a central location. The Cedar Backup path resolver mechanism (a singleton called PathResolverSingleton) provides the central location to store the mappings. This function wraps access to the singleton, and is what all functions (extensions or standard functionality) should call if they need to find a command.

    The passed-in command must actually be a list, in the standard form used by all existing Cedar Backup code (something like ["svnlook", ]). The lookup will actually be done on the first element in the list, and the returned command will always be in list form as well.

    If the passed-in command can't be resolved or no mapping exists, then the command itself will be returned unchanged. This way, we neatly fall back on default behavior if we have no sensible alternative.

    Parameters:
    • command (List form of command, i.e. ["svnlook", ].) - Command to resolve.
    Returns:
    Path to command or just command itself if no mapping exists.
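The fall-back behavior can be sketched with a plain dict standing in for the real PathResolverSingleton (the mapping shown is hypothetical):

```python
# Hypothetical mapping, standing in for PathResolverSingleton
_PATH_MAPPINGS = {"svnlook": "/usr/local/bin/svnlook"}

def resolveCommand(command):
    # Look up only the first element of the list, as documented, and
    # return the command unchanged when no mapping exists.
    name = command[0]
    if name in _PATH_MAPPINGS:
        return [_PATH_MAPPINGS[name]] + command[1:]
    return command

print(resolveCommand(["svnlook"]))  # ['/usr/local/bin/svnlook']
print(resolveCommand(["mkisofs"]))  # ['mkisofs'] (unchanged)
```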

    executeCommand(command, args, returnOutput=False, ignoreStderr=False, doNotLog=False, outputFile=None)

    source code 

    Executes a shell command, hopefully in a safe way.

    This function exists to replace direct calls to os.popen in the Cedar Backup code. It's not safe to call a function such as os.popen() with untrusted arguments, since that can cause problems if the string contains non-safe variables or other constructs (imagine that the argument is $WHATEVER, but $WHATEVER contains something like "; rm -fR ~/; echo" in the current environment).

    Instead, it's safer to pass a list of arguments in the style supported by popen2 or popen4. This function actually uses a specialized Pipe class implemented using either subprocess.Popen or popen2.Popen4.

    Under the normal case, this function will return a tuple of (status, None) where the status is the wait-encoded return status of the call per the popen2.Popen4 documentation. If returnOutput is passed in as True, the function will return a tuple of (status, output) where output is a list of strings, one entry per line in the output from the command. Output is always logged to the outputLogger.info() target, regardless of whether it's returned.

    By default, stdout and stderr will be intermingled in the output. However, if you pass in ignoreStderr=True, then only stdout will be included in the output.

    The doNotLog parameter exists so that callers can force the function to not log command output to the debug log. Normally, you would want to log. However, if you're using this function to write huge output files (i.e. database backups written to stdout) then you might want to avoid putting all that information into the debug log.

    The outputFile parameter exists to make it easier for a caller to push output into a file, i.e. as a substitute for redirection to a file. If this value is passed in, each time a line of output is generated, it will be written to the file using outputFile.write(). At the end, the file descriptor will be flushed using outputFile.flush(). The caller maintains responsibility for closing the file object appropriately.

    Parameters:
    • command (List of individual arguments that make up the command) - Shell command to execute
    • args (List of additional arguments to the command) - List of arguments to the command
    • returnOutput (Boolean True or False) - Indicates whether to return the output of the command
    • ignoreStderr (Boolean True or False) - Whether stderr should be discarded
    • doNotLog (Boolean True or False) - Indicates that output should not be logged.
    • outputFile (File object as returned from open() or file(), configured for binary write) - File object that all output should be written to.
    Returns:
    Tuple of (result, output) as described above.
    Notes:
    • I know that it's a bit confusing that the command and the arguments are both lists. I could have just required the caller to pass in one big list. However, I think it makes some sense to keep the command (the constant part of what we're executing, i.e. "scp -B") separate from its arguments, even if they both end up looking kind of similar.
    • You cannot redirect output via shell constructs (i.e. >file, 2>/dev/null, etc.) using this function. The redirection string would be passed to the command just like any other argument. However, you can implement the equivalent to redirection using ignoreStderr and outputFile, as discussed above.
    • The operating system environment is partially sanitized before the command is invoked. See sanitizeEnvironment for details.
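The safe invocation pattern that executeCommand wraps can be sketched directly with the standard subprocess module. The `run` helper below is a hypothetical stand-in, not the library function: it passes the command as a list (never a shell string) and shows how ignoreStderr maps onto discarding or merging stderr.

```python
import subprocess

def run(command, args, returnOutput=False, ignoreStderr=False):
    # Discard stderr entirely, or intermingle it with stdout, mirroring
    # the ignoreStderr behavior described above.
    stderr = subprocess.DEVNULL if ignoreStderr else subprocess.STDOUT
    proc = subprocess.Popen(command + args, stdout=subprocess.PIPE, stderr=stderr)
    output = [line.decode() for line in proc.stdout]  # one entry per line
    status = proc.wait()
    return (status, output if returnOutput else None)

status, output = run(["echo"], ["hello"], returnOutput=True)
print(status, output)  # 0 ['hello\n']
```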

    calculateFileAge(path)

    source code 

    Calculates the age (in days) of a file.

    The "age" of a file is the amount of time since the file was last used, per the most recent of the file's st_atime and st_mtime values.

    Technically, we only intend this function to work with files, but it will probably work with anything on the filesystem.

    Parameters:
    • path - Path to a file on disk.
    Returns:
    Age of the file in days (possibly fractional).
    Raises:
    • OSError - If the file doesn't exist.
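The documented calculation is straightforward with os.stat (an illustrative sketch, not the library's code):

```python
import os
import time

def calculateFileAge(path):
    # "Age" is measured from the most recent of st_atime and st_mtime;
    # os.stat raises OSError if the file doesn't exist.
    stats = os.stat(path)
    lastUse = max(stats.st_atime, stats.st_mtime)
    return (time.time() - lastUse) / (60.0 * 60.0 * 24.0)  # fractional days
```

A file created moments ago reports an age very close to zero.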

    encodePath(path)

    source code 

    Safely encodes a filesystem path as a Unicode string, converting bytes to filesystem encoding if necessary.

    Parameters:
    • path - Path to encode
    Returns:
    Path, as a string, encoded appropriately
    Raises:
    • ValueError - If the path cannot be encoded properly.

    See Also: http://lucumr.pocoo.org/2013/7/2/the-updated-guide-to-unicode/
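A sketch of the bytes-to-str conversion using os.fsdecode, which applies the filesystem encoding; the library's actual implementation may differ in detail:

```python
import os

def encodePath(path):
    # Decode bytes using the filesystem encoding; pass str through
    # unchanged. Errors are reported as the documented ValueError.
    try:
        return os.fsdecode(path) if isinstance(path, bytes) else path
    except (UnicodeError, TypeError) as e:
        raise ValueError("Path could not be encoded: %s" % e)

print(encodePath(b"/tmp/backup"))  # '/tmp/backup'
```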

    nullDevice()

    source code 

    Attempts to portably return the null device on this system.

    The null device is something like /dev/null on a UNIX system. The name varies on other platforms.

    deriveDayOfWeek(dayName)

    source code 

    Converts an English day name to the numeric day of week used by time.localtime.

    For instance, the day monday would be converted to the number 0.

    Parameters:
    • dayName (string, i.e. "monday", "tuesday", etc.) - Day of week to convert
    Returns:
    Integer, where Monday is 0 and Sunday is 6; or -1 if no conversion is possible.
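The documented mapping amounts to an index lookup (illustrative sketch; lower-casing the input is an assumption for convenience):

```python
def deriveDayOfWeek(dayName):
    # Monday is 0 and Sunday is 6, matching time.localtime; -1 is
    # returned when no conversion is possible.
    days = ["monday", "tuesday", "wednesday", "thursday",
            "friday", "saturday", "sunday"]
    try:
        return days.index(dayName.lower())
    except ValueError:
        return -1

print(deriveDayOfWeek("monday"))   # 0
print(deriveDayOfWeek("sunday"))   # 6
print(deriveDayOfWeek("someday"))  # -1
```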

    isStartOfWeek(startingDay)

    source code 

    Indicates whether "today" is the backup starting day per configuration.

    If the current day's English name matches the indicated starting day, then today is a starting day.

    Parameters:
    • startingDay (string, i.e. "monday", "tuesday", etc.) - Configured starting day.
    Returns:
    Boolean indicating whether today is the starting day.

    buildNormalizedPath(path)

    source code 

    Returns a "normalized" path based on a path name.

    A normalized path is a representation of a path that is also a valid file name. To make a valid file name out of a complete path, we have to convert or remove some characters that are significant to the filesystem -- in particular, the path separator and any leading '.' character (which would cause the file to be hidden in a file listing).

    Note that this is a one-way transformation -- you can't safely derive the original path from the normalized path.

    To normalize a path, we begin by looking at the first character. If the first character is '/' or '\', it gets removed. If the first character is '.', it gets converted to '_'. Then, we look through the rest of the path and convert all remaining '/' or '\' characters to '-', and all remaining whitespace characters to '_'.

    As a special case, a path consisting only of a single '/' or '\' character will be converted to '-'.

    Parameters:
    • path - Path to normalize
    Returns:
    Normalized path as described above.
    Raises:
    • ValueError - If the path is None
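The rules above translate directly into code; this is an illustrative sketch of the documented transformation, not necessarily the library's implementation:

```python
def buildNormalizedPath(path):
    if path is None:
        raise ValueError("Path is None.")
    if path in ("/", "\\"):
        return "-"                  # special case: a bare separator
    if path.startswith(("/", "\\")):
        path = path[1:]             # drop a leading separator
    elif path.startswith("."):
        path = "_" + path[1:]       # a leading '.' would hide the file
    result = ""
    for char in path:
        if char in ("/", "\\"):
            result += "-"           # remaining separators become '-'
        elif char.isspace():
            result += "_"           # whitespace becomes '_'
        else:
            result += char
    return result

print(buildNormalizedPath("/var/log/cback3.log"))  # 'var-log-cback3.log'
print(buildNormalizedPath(".hidden file"))         # '_hidden_file'
```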

    removeKeys(d, keys)

    source code 

    Removes all of the keys from the dictionary. The dictionary is altered in-place. Each key must exist in the dictionary.

    Parameters:
    • d - Dictionary to operate on
    • keys - List of keys to remove
    Raises:
    • KeyError - If one of the keys does not exist
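The in-place semantics can be sketched in two lines (illustrative only):

```python
def removeKeys(d, keys):
    # In-place removal; a missing key raises KeyError, as documented.
    for key in keys:
        del d[key]

d = {"a": 1, "b": 2, "c": 3}
removeKeys(d, ["a", "c"])
print(d)  # {'b': 2}
```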

    displayBytes(bytes, digits=2)

    source code 

    Format a byte quantity so it can be sensibly displayed.

    It's rather difficult to look at a number like "72372224 bytes" and get any meaningful information out of it. It would be more useful to see something like "69.02 MB". That's what this function does. Any time you want to display a byte value, i.e.:

      print("Size: %s bytes" % bytes)
    

    Call this function instead:

      print("Size: %s" % displayBytes(bytes))
    

    What comes out will be sensibly formatted. The indicated number of digits will be listed after the decimal point, rounded based on whatever rules are used by Python's standard %f string format specifier. (Values less than 1 kB will be listed in bytes and will not have a decimal point, since the concept of a fractional byte is nonsensical.)

    Parameters:
    • bytes (Integer number of bytes.) - Byte quantity.
    • digits (Integer value, typically 2-5.) - Number of digits to display after the decimal point.
    Returns:
    String, formatted for sensible display.
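The documented formatting, including the plain byte count below 1 kB, can be sketched like this (an illustration, not the library's code):

```python
def displayBytes(bytes, digits=2):
    # Values under 1 kB are shown as whole bytes, with no decimal point;
    # larger values pick the largest binary unit that fits.
    if bytes < 1024:
        return "%d bytes" % bytes
    for factor, unit in [(1024.0 ** 3, "GB"), (1024.0 ** 2, "MB"), (1024.0, "kB")]:
        if bytes >= factor:
            return "%.*f %s" % (digits, bytes / factor, unit)

print(displayBytes(72372224))  # '69.02 MB', matching the example above
print(displayBytes(512))       # '512 bytes'
```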

    getFunctionReference(module, function)

    source code 

    Gets a reference to a named function.

    This does some hokey-pokey to get back a reference to a dynamically named function. For instance, say you wanted to get a reference to the os.path.isdir function. You could use:

      myfunc = getFunctionReference("os.path", "isdir")
    

    Although we won't bomb out directly, behavior is pretty much undefined if you pass in None or "" for either module or function.

    The only validation we enforce is that whatever we get back must be callable.

    I derived this code based on the internals of the Python unittest implementation. I don't claim to completely understand how it works.

    Parameters:
    • module (Something like "os.path" or "CedarBackup3.util") - Name of module associated with function.
    • function (Something like "isdir" or "getUidGid") - Name of function
    Returns:
    Reference to function associated with name.
    Raises:
    • ImportError - If the function cannot be found.
    • ValueError - If the resulting reference is not callable.

    Copyright: Some of this code, prior to customization, was originally part of the Python 2.3 codebase. Python code is copyright (c) 2001, 2002 Python Software Foundation; All Rights Reserved.
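With modern Python, the same lookup can be sketched using importlib (an equivalent technique, not the unittest-derived code the docs describe; the translation of AttributeError into ImportError matches the documented raises):

```python
import importlib

def getFunctionReference(module, function):
    try:
        obj = getattr(importlib.import_module(module), function)
    except AttributeError as e:
        raise ImportError(str(e))  # documented: ImportError if not found
    if not callable(obj):
        raise ValueError("Reference is not callable.")
    return obj

myfunc = getFunctionReference("os.path", "isdir")
print(myfunc("/"))  # True
```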

    mount(devicePath, mountPoint, fsType)

    source code 

    Mounts the indicated device at the indicated mount point.

    For instance, to mount a CD, you might use device path /dev/cdrw, mount point /media/cdrw and filesystem type iso9660. You can safely use any filesystem type that is supported by mount on your platform. If the type is None, we'll attempt to let mount auto-detect it. This may or may not work on all systems.

    Parameters:
    • devicePath - Path of device to be mounted.
    • mountPoint - Path that device should be mounted at.
    • fsType - Type of the filesystem assumed to be available via the device.
    Raises:
    • IOError - If the device cannot be mounted.

    Note: This only works on platforms that have a concept of "mounting" a filesystem through a command-line "mount" command, like UNIXes. It won't work on Windows.

    unmount(mountPoint, removeAfter=False, attempts=1, waitSeconds=0)

    source code 

    Unmounts whatever device is mounted at the indicated mount point.

    Sometimes, it might not be possible to unmount the mount point immediately, if there are still files open there. Use the attempts and waitSeconds arguments to indicate how many unmount attempts to make and how many seconds to wait between attempts. If you pass in zero attempts, no attempts will be made.

    If the indicated mount point is not really a mount point per os.path.ismount(), then it will be ignored. This seems to be a safer check than looking through /etc/mtab, since ismount() is already in the Python standard library and is documented as working on all POSIX systems.

    If removeAfter is True, then the mount point will be removed using os.rmdir() after the unmount action succeeds. If for some reason the mount point is not a directory, then it will not be removed.

    Parameters:
    • mountPoint - Mount point to be unmounted.
    • removeAfter - Remove the mount point after unmounting it.
    • attempts - Number of times to attempt the unmount.
    • waitSeconds - Number of seconds to wait between repeated attempts.
    Raises:
    • IOError - If the mount point is still mounted after attempts are exhausted.

    Note: This only works on platforms that have a concept of "mounting" a filesystem through a command-line "mount" command, like UNIXes. It won't work on Windows.

    deviceMounted(devicePath)

    source code 

    Indicates whether a specific filesystem device is currently mounted.

    We determine whether the device is mounted by looking through the system's mtab file. This file shows every currently-mounted filesystem, ordered by device. We only do the check if the mtab file exists and is readable. Otherwise, we assume that the device is not mounted.

    Parameters:
    • devicePath - Path of device to be checked
    Returns:
    True if device is mounted, false otherwise.

    Note: This only works on platforms that have a concept of an mtab file to show mounted volumes, like UNIXes. It won't work on Windows.

    sanitizeEnvironment()

    source code 

    Sanitizes the operating system environment.

    The operating system environment is contained in os.environ. This method sanitizes the contents of that dictionary.

    Currently, all it does is reset the locale (removing $LC_*) and set the default language ($LANG) to DEFAULT_LANGUAGE. This way, we can count on consistent localization regardless of what the end-user has configured. This is important for code that needs to parse program output.

    The os.environ dictionary is modified in-place. If $LANG is already set to the proper value, it is not re-set, so we can avoid the memory leaks that are documented to occur on BSD-based systems.

    Returns:
    Copy of the sanitized environment.
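The same logic can be sketched over an explicit dict (the real function works on os.environ in place; the LOCALE_VARS list here is a subset of the documented one, which is truncated above):

```python
LOCALE_VARS = ["LC_ADDRESS", "LC_ALL", "LC_COLLATE", "LC_CTYPE", "LC_MESSAGES"]
LANG_VAR = "LANG"
DEFAULT_LANGUAGE = "C"

def sanitizeEnvironment(environ):
    # Remove the $LC_* variables and force $LANG to the default,
    # skipping the re-set when $LANG already has the proper value.
    for var in LOCALE_VARS:
        environ.pop(var, None)
    if environ.get(LANG_VAR) != DEFAULT_LANGUAGE:
        environ[LANG_VAR] = DEFAULT_LANGUAGE
    return dict(environ)  # copy of the sanitized environment

print(sanitizeEnvironment({"LC_ALL": "en_US.UTF-8", "LANG": "en_US.UTF-8"}))
# {'LANG': 'C'}
```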

    dereferenceLink(path, absolute=True)

    source code 

    Dereference a soft link, optionally normalizing it to an absolute path.

    Parameters:
    • path - Path of link to dereference
    • absolute - Whether to normalize the result to an absolute path
    Returns:
    Dereferenced path, or original path if original is not a link.

    checkUnique(prefix, values)

    source code 

    Checks that all values are unique.

    The values list is checked for duplicate values. If there are duplicates, an exception is thrown. All duplicate values are listed in the exception.

    Parameters:
    • prefix - Prefix to use in the thrown exception
    • values - List of values to check
    Raises:
    • ValueError - If there are duplicates in the list
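The documented behavior, reporting all duplicates in a single exception, can be sketched with collections.Counter (illustrative only):

```python
import collections

def checkUnique(prefix, values):
    # Gather every value that appears more than once and list them all
    # in one exception, as documented.
    duplicates = [v for v, count in collections.Counter(values).items() if count > 1]
    if duplicates:
        raise ValueError("%s %s" % (prefix, duplicates))

checkUnique("Duplicate values:", ["a", "b", "c"])  # no exception
```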

    parseCommaSeparatedString(commaString)

    source code 

    Parses a list of values out of a comma-separated string.

    The items in the list are split by comma, and then have whitespace stripped. As a special case, if commaString is None, then None will be returned.

    Parameters:
    • commaString - List of values in comma-separated string format.
    Returns:
    Values from commaString split into a list, or None.
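A sketch of the documented behavior, including the None special case (illustrative; whether the real function drops empty entries is not specified here):

```python
def parseCommaSeparatedString(commaString):
    # Split on commas and strip whitespace; None passes through as None.
    if commaString is None:
        return None
    return [value.strip() for value in commaString.split(",")]

print(parseCommaSeparatedString("stage, store ,purge"))
# ['stage', 'store', 'purge']
print(parseCommaSeparatedString(None))  # None
```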

    Variables Details

    LOCALE_VARS

    Value:
    ['LC_ADDRESS',
     'LC_ALL',
     'LC_COLLATE',
     'LC_CTYPE',
     'LC_IDENTIFICATION',
     'LC_MEASUREMENT',
     'LC_MESSAGES',
     'LC_MONETARY',
    ...
    


    Module util


    Classes

    AbsolutePathList
    Diagnostics
    DirectedGraph
    ObjectTypeList
    PathResolverSingleton
    Pipe
    RegexList
    RegexMatchList
    RestrictedContentList
    UnorderedList

    Functions

    buildNormalizedPath
    calculateFileAge
    changeOwnership
    checkUnique
    convertSize
    dereferenceLink
    deriveDayOfWeek
    deviceMounted
    displayBytes
    encodePath
    executeCommand
    getFunctionReference
    getUidGid
    isRunningAsRoot
    isStartOfWeek
    mount
    nullDevice
    parseCommaSeparatedString
    removeKeys
    resolveCommand
    sanitizeEnvironment
    sortDict
    splitCommandLine
    unmount

    Variables

    BYTES_PER_GBYTE
    BYTES_PER_KBYTE
    BYTES_PER_MBYTE
    BYTES_PER_SECTOR
    DEFAULT_LANGUAGE
    HOURS_PER_DAY
    ISO_SECTOR_SIZE
    KBYTES_PER_MBYTE
    LANG_VAR
    LOCALE_VARS
    MBYTES_PER_GBYTE
    MINUTES_PER_HOUR
    MOUNT_COMMAND
    MTAB_FILE
    SECONDS_PER_DAY
    SECONDS_PER_MINUTE
    UMOUNT_COMMAND
    UNIT_BYTES
    UNIT_GBYTES
    UNIT_KBYTES
    UNIT_MBYTES
    UNIT_SECTORS
    __package__
    logger
    outputLogger

    Package CedarBackup3 :: Module util :: Class Pipe

    Class Pipe

    source code

          object --+    
                   |    
    subprocess.Popen --+
                       |
                      Pipe
    

    Specialized pipe class for use by executeCommand.

    The executeCommand function needs a specialized way of interacting with a pipe. First, executeCommand only reads from the pipe, and never writes to it. Second, executeCommand needs a way to discard all output written to stderr, as a means of simulating the shell 2>/dev/null construct.

    Instance Methods
     
    __init__(self, cmd, bufsize=-1, ignoreStderr=False)
    Create new Popen instance.
    source code

    Inherited from subprocess.Popen: __del__, communicate, kill, pipe_cloexec, poll, send_signal, terminate, wait

    Inherited from subprocess.Popen (private): _close_fds, _communicate, _communicate_with_poll, _communicate_with_select, _execute_child, _find_w9xpopen, _get_handles, _handle_exitstatus, _internal_poll, _make_inheritable, _readerthread, _set_cloexec_flag, _translate_newlines

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Class Variables

    Inherited from subprocess.Popen (private): _child_created

    Properties

    Inherited from object: __class__

    Method Details

    __init__(self, cmd, bufsize=-1, ignoreStderr=False)
    (Constructor)

    source code 

    Create new Popen instance.

    Overrides: object.__init__
    (inherited documentation)

    Package CedarBackup3 :: Module config :: Class PreActionHook

    Class PreActionHook

    source code

    object --+    
             |    
    ActionHook --+
                 |
                PreActionHook
    

    Class representing a pre-action hook associated with an action.

    A hook associated with an action is a shell command to be executed either before or after a named action is executed. In this case, a pre-action hook is executed before the named action.

    The following restrictions exist on data in this class:

    • The action name must be a non-empty string consisting of lower-case letters and digits.
    • The shell command must be a non-empty string.

    The internal before instance variable is always set to True in this class.

    Instance Methods
     
    __init__(self, action=None, command=None)
    Constructor for the PreActionHook class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code

    Inherited from ActionHook: __str__, __cmp__, __eq__, __lt__, __gt__, __ge__, __le__

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties

    Inherited from ActionHook: action, command, before, after

    Inherited from object: __class__

    Method Details

    __init__(self, action=None, command=None)
    (Constructor)

    source code 

    Constructor for the PreActionHook class.

    Parameters:
    • action - Action this hook is associated with
    • command - Shell command to execute
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    Package CedarBackup3 :: Package actions :: Module util

    Module util

    source code

    Implements action-related utilities


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Functions
     
    findDailyDirs(stagingDir, indicatorFile)
    Returns a list of all daily staging directories that do not contain the indicated indicator file.
    source code
     
    createWriter(config)
    Creates a writer object based on current configuration.
    source code
     
    writeIndicatorFile(targetDir, indicatorFile, backupUser, backupGroup)
    Writes an indicator file into a target directory.
    source code
     
    getBackupFiles(targetDir)
    Gets a list of backup files in a target directory.
    source code
     
    checkMediaState(storeConfig)
    Checks state of the media in the backup device to confirm whether it has been initialized for use with Cedar Backup.
    source code
     
    initializeMediaState(config)
    Initializes state of the media in the backup device so Cedar Backup can recognize it.
    source code
     
    buildMediaLabel()
    Builds a media label to be used on Cedar Backup media.
    source code
     
    _getDeviceType(config)
    Gets the device type that should be used for storing.
    source code
     
    _getMediaType(config)
    Gets the media type that should be used for storing.
    source code
    Variables
      logger = logging.getLogger("CedarBackup3.log.actions.util")
      MEDIA_LABEL_PREFIX = 'CEDAR BACKUP'
      __package__ = 'CedarBackup3.actions'
    Function Details

    findDailyDirs(stagingDir, indicatorFile)

    source code 

    Returns a list of all daily staging directories that do not contain the indicated indicator file.

    Parameters:
    • stagingDir - Configured staging directory (config.targetDir)
    • indicatorFile - Name of the indicator file to look for in each directory
    Returns:
    List of absolute paths to daily staging directories.

    createWriter(config)

    source code 

    Creates a writer object based on current configuration.

    This function creates and returns a writer based on configuration. This is done to abstract action functionality from knowing what kind of writer is in use. Since all writers implement the same interface, there's no need for actions to care which one they're working with.

    Currently, the cdwriter and dvdwriter device types are allowed. An exception will be raised if any other device type is used.

    This function also checks to make sure that the device isn't mounted before creating a writer object for it. Experience shows that sometimes if the device is mounted, we have problems with the backup. We may as well do the check here first, before instantiating the writer.

    Parameters:
    • config - Config object.
    Returns:
    Writer that can be used to write a directory to some media.
    Raises:
    • ValueError - If there is a problem getting the writer.
    • IOError - If there is a problem creating the writer object.

    writeIndicatorFile(targetDir, indicatorFile, backupUser, backupGroup)

    source code 

    Writes an indicator file into a target directory.

    Parameters:
    • targetDir - Target directory in which to write indicator
    • indicatorFile - Name of the indicator file
    • backupUser - User that indicator file should be owned by
    • backupGroup - Group that indicator file should be owned by
    Raises:
    • IOError - If there is a problem writing the indicator file

    getBackupFiles(targetDir)

    source code 

    Gets a list of backup files in a target directory.

    Files that match INDICATOR_PATTERN (i.e. "cback.store", "cback.stage", etc.) are assumed to be indicator files and are ignored.

    Parameters:
    • targetDir - Directory to look in
    Returns:
    List of backup files in the directory
    Raises:
    • ValueError - If the target directory does not exist

    checkMediaState(storeConfig)

    source code 

    Checks state of the media in the backup device to confirm whether it has been initialized for use with Cedar Backup.

    We can tell whether the media has been initialized by looking at its media label. If the media label starts with MEDIA_LABEL_PREFIX, then it has been initialized.

    The check varies depending on whether the media is rewritable or not. For non-rewritable media, we also accept a None media label, since this kind of media cannot safely be initialized.

    Parameters:
    • storeConfig - Store configuration
    Raises:
    • ValueError - If media is not initialized.

    initializeMediaState(config)

    source code 

    Initializes state of the media in the backup device so Cedar Backup can recognize it.

    This is done by writing a mostly-empty image (it contains a "Cedar Backup" directory) to the media with a known media label.

    Parameters:
    • config - Cedar Backup configuration
    Raises:
    • ValueError - If media could not be initialized.
    • ValueError - If the configured media type is not rewritable

    Note: Only rewritable media (CD-RW, DVD+RW) can be initialized. It doesn't make any sense to initialize media that cannot be rewritten (CD-R, DVD+R), since Cedar Backup would then not be able to use that media for a backup.

    buildMediaLabel()

    source code 

    Builds a media label to be used on Cedar Backup media.

    Returns:
    Media label as a string.

    _getDeviceType(config)

    source code 

    Gets the device type that should be used for storing.

    Use the configured device type if not None, otherwise use config.DEFAULT_DEVICE_TYPE.

    Parameters:
    • config - Config object.
    Returns:
    Device type to be used.

    _getMediaType(config)

    source code 

    Gets the media type that should be used for storing.

    Use the configured media type if not None, otherwise use DEFAULT_MEDIA_TYPE.

    Once we figure out what configuration value to use, we return a media type value that is valid in one of the supported writers:

      MEDIA_CDR_74
      MEDIA_CDRW_74
      MEDIA_CDR_80
      MEDIA_CDRW_80
      MEDIA_DVDPLUSR
      MEDIA_DVDPLUSRW
    
    Parameters:
    • config - Config object.
    Returns:
    Media type to be used as a writer media type value.
    Raises:
    • ValueError - If the media type is not valid.

    Package CedarBackup3 :: Package actions :: Module validate

    Module validate

    source code

    Implements the standard 'validate' action.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Functions
     
    executeValidate(configPath, options, config)
    Executes the validate action.
    source code
     
    _checkDir(path, writable, logfunc, prefix)
    Checks that the indicated directory is OK.
    source code
     
    _validateReference(config, logfunc)
    Execute runtime validations on reference configuration.
    source code
     
    _validateOptions(config, logfunc)
    Execute runtime validations on options configuration.
    source code
     
    _validateCollect(config, logfunc)
    Execute runtime validations on collect configuration.
    source code
     
    _validateStage(config, logfunc)
    Execute runtime validations on stage configuration.
    source code
     
    _validateStore(config, logfunc)
    Execute runtime validations on store configuration.
    source code
     
    _validatePurge(config, logfunc)
    Execute runtime validations on purge configuration.
    source code
     
    _validateExtensions(config, logfunc)
    Execute runtime validations on extensions configuration.
    source code
    Variables
      logger = logging.getLogger("CedarBackup3.log.actions.validate")
      __package__ = 'CedarBackup3.actions'
    Function Details

    executeValidate(configPath, options, config)

    source code 

    Executes the validate action.

    This action validates each of the individual sections in the config file. This is a "runtime" validation. The config file itself is already valid in a structural sense, so what we check here is that we can actually use the configuration without any problems.

    There's a separate validation function for each of the configuration sections. Each validation function returns a true/false indication for whether configuration was valid, and then logs any configuration problems it finds. This way, one pass over configuration indicates most or all of the obvious problems, rather than finding just one problem at a time.

    Any reported problems will be logged at the ERROR level normally, or at the INFO level if the quiet flag is enabled.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - If some configuration value is invalid.
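    The aggregation behavior described above can be sketched like this (the validator functions in the sketch are stand-ins, not the real per-section validators):

```python
def run_validations(validators, config, logfunc):
    """Run every validator, logging problems as they are found."""
    valid = True
    for validator in validators:
        if not validator(config, logfunc):
            valid = False  # keep going so one pass reports every problem
    if not valid:
        raise ValueError("Configuration is not valid.")
```

    Because each validator logs its own problems before returning, a single run surfaces every obvious issue rather than stopping at the first failure.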

    _checkDir(path, writable, logfunc, prefix)

    source code 

    Checks that the indicated directory is OK.

    The path must exist, must be a directory, must be readable and executable, and, if requested, must be writable.

    Parameters:
    • path - Path to check.
    • writable - Check that path is writable.
    • logfunc - Function to use for logging errors.
    • prefix - Prefix to use on logged errors.
    Returns:
    True if the directory is OK, False otherwise.
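    The checks described above can be sketched as follows; this is a minimal illustration, and the real _checkDir's messages and internals may differ:

```python
import os

def check_dir(path, writable, logfunc, prefix):
    """Return True if path is an acceptable directory, logging any problems."""
    if not os.path.exists(path):
        logfunc("%s [%s] does not exist." % (prefix, path))
        return False
    if not os.path.isdir(path):
        logfunc("%s [%s] is not a directory." % (prefix, path))
        return False
    if not os.access(path, os.R_OK | os.X_OK):
        logfunc("%s [%s] is not readable and executable." % (prefix, path))
        return False
    if writable and not os.access(path, os.W_OK):
        logfunc("%s [%s] is not writable." % (prefix, path))
        return False
    return True
```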

    _validateReference(config, logfunc)

    source code 

    Execute runtime validations on reference configuration.

    We only validate that reference configuration exists at all.

    Parameters:
    • config - Program configuration.
    • logfunc - Function to use for logging errors
    Returns:
    True if configuration is valid, False otherwise.

    _validateOptions(config, logfunc)

    source code 

    Execute runtime validations on options configuration.

    The following validations are enforced:

    • The options section must exist
    • The working directory must exist and must be writable
    • The backup user and backup group must exist
    Parameters:
    • config - Program configuration.
    • logfunc - Function to use for logging errors
    Returns:
    True if configuration is valid, False otherwise.

    _validateCollect(config, logfunc)

    source code 

    Execute runtime validations on collect configuration.

    The following validations are enforced:

    • The target directory must exist and must be writable
    • Each of the individual collect directories must exist and must be readable
    Parameters:
    • config - Program configuration.
    • logfunc - Function to use for logging errors
    Returns:
    True if configuration is valid, False otherwise.

    _validateStage(config, logfunc)

    source code 

    Execute runtime validations on stage configuration.

    The following validations are enforced:

    • The target directory must exist and must be writable
    • Each local peer's collect directory must exist and must be readable
    Parameters:
    • config - Program configuration.
    • logfunc - Function to use for logging errors
    Returns:
    True if configuration is valid, False otherwise.

    Note: We currently do not validate anything having to do with remote peers, since we don't have a straightforward way of doing it. It would require adding an rsh command rather than just an rcp command to configuration, and that just doesn't seem worth it right now.

    _validateStore(config, logfunc)

    source code 

    Execute runtime validations on store configuration.

    The following validations are enforced:

    • The source directory must exist and must be readable
    • The backup device (path and SCSI device) must be valid
    Parameters:
    • config - Program configuration.
    • logfunc - Function to use for logging errors
    Returns:
    True if configuration is valid, False otherwise.

    _validatePurge(config, logfunc)

    source code 

    Execute runtime validations on purge configuration.

    The following validations are enforced:

    • Each purge directory must exist and must be writable
    Parameters:
    • config - Program configuration.
    • logfunc - Function to use for logging errors
    Returns:
    True if configuration is valid, False otherwise.

    _validateExtensions(config, logfunc)

    source code 

    Execute runtime validations on extensions configuration.

    The following validations are enforced:

    • Each indicated extension function must exist.
    Parameters:
    • config - Program configuration.
    • logfunc - Function to use for logging errors
    Returns:
    True if configuration is valid, False otherwise.

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.config.OptionsConfig-class.html (CedarBackup3.config.OptionsConfig)
    Package CedarBackup3 :: Module config :: Class OptionsConfig

    Class OptionsConfig

    source code

    object --+
             |
            OptionsConfig
    

    Class representing a Cedar Backup global options configuration.

    The options section is used to store global configuration options and defaults that can be applied to other sections.

    The following restrictions exist on data in this class:

    • The working directory must be an absolute path.
    • The starting day must be a day of the week in English, i.e. "monday", "tuesday", etc.
    • All of the other values must be non-empty strings if they are set to something other than None.
    • The overrides list must be a list of CommandOverride objects.
    • The hooks list must be a list of ActionHook objects.
    • The cback command must be a non-empty string.
    • Any managed action name must be a non-empty string matching ACTION_NAME_REGEX
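    Most of the fields below follow the same pattern: a private _set/_get pair wired up through a Python property, so every assignment is validated. A minimal sketch of that pattern for startingDay (the day list comes from the restriction above; the class name and everything else here is illustrative):

```python
VALID_DAYS = ["monday", "tuesday", "wednesday", "thursday",
              "friday", "saturday", "sunday"]

class OptionsSketch:
    """Illustrates the property get/set pattern used throughout OptionsConfig."""

    def __init__(self, startingDay=None):
        self._startingDay = None
        self.startingDay = startingDay  # routed through the property setter

    def _setStartingDay(self, value):
        if value is not None and value not in VALID_DAYS:
            raise ValueError("Starting day must be a valid English day of the week.")
        self._startingDay = value

    def _getStartingDay(self):
        return self._startingDay

    startingDay = property(_getStartingDay, _setStartingDay, None,
                           "Day that starts the week.")
```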
    Instance Methods
     
    __init__(self, startingDay=None, workingDir=None, backupUser=None, backupGroup=None, rcpCommand=None, overrides=None, hooks=None, rshCommand=None, cbackCommand=None, managedActions=None)
    Constructor for the OptionsConfig class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    __eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    addOverride(self, command, absolutePath)
    If no override currently exists for the command, add one.
    source code
     
    replaceOverride(self, command, absolutePath)
    If override currently exists for the command, replace it; otherwise add it.
    source code
     
    _setStartingDay(self, value)
    Property target used to set the starting day.
    source code
     
    _getStartingDay(self)
    Property target used to get the starting day.
    source code
     
    _setWorkingDir(self, value)
    Property target used to set the working directory.
    source code
     
    _getWorkingDir(self)
    Property target used to get the working directory.
    source code
     
    _setBackupUser(self, value)
    Property target used to set the backup user.
    source code
     
    _getBackupUser(self)
    Property target used to get the backup user.
    source code
     
    _setBackupGroup(self, value)
    Property target used to set the backup group.
    source code
     
    _getBackupGroup(self)
    Property target used to get the backup group.
    source code
     
    _setRcpCommand(self, value)
    Property target used to set the rcp command.
    source code
     
    _getRcpCommand(self)
    Property target used to get the rcp command.
    source code
     
    _setRshCommand(self, value)
    Property target used to set the rsh command.
    source code
     
    _getRshCommand(self)
    Property target used to get the rsh command.
    source code
     
    _setCbackCommand(self, value)
    Property target used to set the cback command.
    source code
     
    _getCbackCommand(self)
    Property target used to get the cback command.
    source code
     
    _setOverrides(self, value)
    Property target used to set the command path overrides list.
    source code
     
    _getOverrides(self)
    Property target used to get the command path overrides list.
    source code
     
    _setHooks(self, value)
    Property target used to set the pre- and post-action hooks list.
    source code
     
    _getHooks(self)
    Property target used to get the pre- and post-action hooks list.
    source code
     
    _setManagedActions(self, value)
    Property target used to set the managed actions list.
    source code
     
    _getManagedActions(self)
    Property target used to get the managed actions list.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      startingDay
    Day that starts the week.
      workingDir
    Working (temporary) directory to use for backups.
      backupUser
    Effective user that backups should run as.
      backupGroup
    Effective group that backups should run as.
      rcpCommand
    Default rcp-compatible copy command for staging.
      rshCommand
    Default rsh-compatible command to use for remote shells.
      overrides
    List of configured command path overrides, if any.
      cbackCommand
    Default cback-compatible command to use on managed remote peers.
      hooks
    List of configured pre- and post-action hooks.
      managedActions
    Default set of actions that are managed on remote peers.

    Inherited from object: __class__

    Method Details

    __init__(self, startingDay=None, workingDir=None, backupUser=None, backupGroup=None, rcpCommand=None, overrides=None, hooks=None, rshCommand=None, cbackCommand=None, managedActions=None)
    (Constructor)

    source code 

    Constructor for the OptionsConfig class.

    Parameters:
    • startingDay - Day that starts the week.
    • workingDir - Working (temporary) directory to use for backups.
    • backupUser - Effective user that backups should run as.
    • backupGroup - Effective group that backups should run as.
    • rcpCommand - Default rcp-compatible copy command for staging.
    • rshCommand - Default rsh-compatible command to use for remote shells.
    • cbackCommand - Default cback-compatible command to use on managed remote peers.
    • overrides - List of configured command path overrides, if any.
    • hooks - List of configured pre- and post-action hooks.
    • managedActions - Default set of actions that are managed on remote peers.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    addOverride(self, command, absolutePath)

    source code 

    If no override currently exists for the command, add one.

    Parameters:
    • command - Name of command to be overridden.
    • absolutePath - Absolute path of the overridden command.

    replaceOverride(self, command, absolutePath)

    source code 

    If override currently exists for the command, replace it; otherwise add it.

    Parameters:
    • command - Name of command to be overridden.
    • absolutePath - Absolute path of the overridden command.
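    The difference between addOverride and replaceOverride can be sketched with a plain dict in place of the real CommandOverride objects (the class and method names here are illustrative):

```python
class Overrides:
    """Track command path overrides as command -> absolute path (illustrative)."""

    def __init__(self):
        self.paths = {}

    def add_override(self, command, absolute_path):
        # Add only if no override currently exists for the command.
        if command not in self.paths:
            self.paths[command] = absolute_path

    def replace_override(self, command, absolute_path):
        # Replace an existing override, or add one if none exists.
        self.paths[command] = absolute_path
```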

    _setStartingDay(self, value)

    source code 

    Property target used to set the starting day. If it is not None, the value must be a valid English day of the week, one of "monday", "tuesday", "wednesday", etc.

    Raises:
    • ValueError - If the value is not a valid day of the week.

    _setWorkingDir(self, value)

    source code 

    Property target used to set the working directory. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setBackupUser(self, value)

    source code 

    Property target used to set the backup user. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setBackupGroup(self, value)

    source code 

    Property target used to set the backup group. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setRcpCommand(self, value)

    source code 

    Property target used to set the rcp command. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setRshCommand(self, value)

    source code 

    Property target used to set the rsh command. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setCbackCommand(self, value)

    source code 

    Property target used to set the cback command. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setOverrides(self, value)

    source code 

    Property target used to set the command path overrides list. Either the value must be None or each element must be a CommandOverride.

    Raises:
    • ValueError - If the value is not a CommandOverride

    _setHooks(self, value)

    source code 

    Property target used to set the pre- and post-action hooks list. Either the value must be None or each element must be an ActionHook.

    Raises:
    • ValueError - If the value is not an ActionHook

    _setManagedActions(self, value)

    source code 

    Property target used to set the managed actions list. Elements do not have to exist on disk at the time of assignment.


    Property Details

    startingDay

    Day that starts the week.

    Get Method:
    _getStartingDay(self) - Property target used to get the starting day.
    Set Method:
    _setStartingDay(self, value) - Property target used to set the starting day.

    workingDir

    Working (temporary) directory to use for backups.

    Get Method:
    _getWorkingDir(self) - Property target used to get the working directory.
    Set Method:
    _setWorkingDir(self, value) - Property target used to set the working directory.

    backupUser

    Effective user that backups should run as.

    Get Method:
    _getBackupUser(self) - Property target used to get the backup user.
    Set Method:
    _setBackupUser(self, value) - Property target used to set the backup user.

    backupGroup

    Effective group that backups should run as.

    Get Method:
    _getBackupGroup(self) - Property target used to get the backup group.
    Set Method:
    _setBackupGroup(self, value) - Property target used to set the backup group.

    rcpCommand

    Default rcp-compatible copy command for staging.

    Get Method:
    _getRcpCommand(self) - Property target used to get the rcp command.
    Set Method:
    _setRcpCommand(self, value) - Property target used to set the rcp command.

    rshCommand

    Default rsh-compatible command to use for remote shells.

    Get Method:
    _getRshCommand(self) - Property target used to get the rsh command.
    Set Method:
    _setRshCommand(self, value) - Property target used to set the rsh command.

    overrides

    List of configured command path overrides, if any.

    Get Method:
    _getOverrides(self) - Property target used to get the command path overrides list.
    Set Method:
    _setOverrides(self, value) - Property target used to set the command path overrides list.

    cbackCommand

    Default cback-compatible command to use on managed remote peers.

    Get Method:
    _getCbackCommand(self) - Property target used to get the cback command.
    Set Method:
    _setCbackCommand(self, value) - Property target used to set the cback command.

    hooks

    List of configured pre- and post-action hooks.

    Get Method:
    _getHooks(self) - Property target used to get the pre- and post-action hooks list.
    Set Method:
    _setHooks(self, value) - Property target used to set the pre- and post-action hooks list.

    managedActions

    Default set of actions that are managed on remote peers.

    Get Method:
    _getManagedActions(self) - Property target used to get the managed actions list.
    Set Method:
    _setManagedActions(self, value) - Property target used to set the managed actions list.

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.actions.rebuild-module.html (CedarBackup3.actions.rebuild)
    Package CedarBackup3 :: Package actions :: Module rebuild

    Module rebuild

    source code

    Implements the standard 'rebuild' action.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Functions
     
    executeRebuild(configPath, options, config)
    Executes the rebuild backup action.
    source code
     
    _findRebuildDirs(config)
    Finds the set of directories to be included in a disc rebuild.
    source code
    Variables
      logger = logging.getLogger("CedarBackup3.log.actions.rebuild")
      __package__ = 'CedarBackup3.actions'
    Function Details

    executeRebuild(configPath, options, config)

    source code 

    Executes the rebuild backup action.

    This function exists mainly to recreate a disc that has been "trashed" due to media or hardware problems. Note that the "stage complete" indicator isn't checked for this action.

    Note that the rebuild action and the store action are very similar. The main difference is that while store only stores a single day's staging directory, the rebuild action operates on multiple staging directories.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If there are problems reading or writing files.

    _findRebuildDirs(config)

    source code 

    Finds the set of directories to be included in a disc rebuild.

    The rebuild action is supposed to recreate "last week's" disc. This won't always be possible if some of the staging directories are missing. However, the general procedure is to look back into the past no further than the previous "starting day of week", and then work forward from there, trying to find all of the staging directories between then and now that still exist and have a stage indicator.

    Parameters:
    • config - Config object.
    Returns:
    The staging directories to rebuild, as a dict mapping directory to date suffix.
    Raises:
    • IOError - If we do not find at least one staging directory.
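    A hedged sketch of the look-back procedure described above; the staging layout, date-suffix format, and indicator filename are assumptions for illustration, not the real implementation:

```python
import datetime
import os

def find_rebuild_dirs(staging_root, starting_day_index, today=None,
                      indicator="cback.stage"):
    """Map staging directory -> date suffix for the current backup week."""
    today = today or datetime.date.today()
    # Walk back to the most recent occurrence of the starting day (possibly today).
    days_back = (today.weekday() - starting_day_index) % 7
    day = today - datetime.timedelta(days=days_back)
    found = {}
    while day <= today:
        suffix = day.strftime("%Y%m%d")  # assumed suffix format
        path = os.path.join(staging_root, suffix)
        # Keep only directories that still exist and carry a stage indicator.
        if os.path.isdir(path) and os.path.exists(os.path.join(path, indicator)):
            found[path] = suffix
        day += datetime.timedelta(days=1)
    if not found:
        raise IOError("Unable to find any staging directories to rebuild.")
    return found
```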

    CedarBackup3-3.1.6/doc/interface/toc-CedarBackup3.extend.capacity-module.html (capacity)

    Module capacity


    Classes

    CapacityConfig
    LocalConfig
    PercentageQuantity

    Functions

    executeAction

    Variables

    __package__
    logger

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.actions.purge-module.html (CedarBackup3.actions.purge)
    Package CedarBackup3 :: Package actions :: Module purge

    Module purge

    source code

    Implements the standard 'purge' action.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Functions
     
    executePurge(configPath, options, config)
    Executes the purge backup action.
    source code
    Variables
      logger = logging.getLogger("CedarBackup3.log.actions.purge")
      __package__ = 'CedarBackup3.actions'
    Function Details

    executePurge(configPath, options, config)

    source code 

    Executes the purge backup action.

    For each configured directory, we create a purge item list, remove from the list anything that's younger than the configured retain days value, and then purge from the filesystem what's left.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
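    The purge filtering described above can be sketched as follows; this stands in for the real PurgeItemList, removes only files (directory handling is omitted), and all names are illustrative:

```python
import os
import time

def purge_dir(directory, retain_days):
    """Remove files under directory older than retain_days; return removed paths."""
    cutoff = time.time() - retain_days * 24 * 60 * 60
    removed = []
    for root, dirs, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            if os.stat(path).st_mtime < cutoff:  # older than the retain window
                os.remove(path)
                removed.append(path)
    return removed
```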

    CedarBackup3-3.1.6/doc/interface/toc-CedarBackup3.customize-module.html (customize)

    Module customize


    Functions

    customizeOverrides

    Variables

    DEBIAN_CDRECORD
    DEBIAN_MKISOFS
    PLATFORM
    __package__
    logger

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.util.RestrictedContentList-class.html (CedarBackup3.util.RestrictedContentList)
    Package CedarBackup3 :: Module util :: Class RestrictedContentList

    Class RestrictedContentList

    source code

    object --+        
             |        
          list --+    
                 |    
     UnorderedList --+
                     |
                    RestrictedContentList
    

    Class representing a list containing only objects with certain values.

    This is an unordered list.

    We override the append, insert and extend methods to ensure that any item added to the list is among the valid values. We use a standard comparison, so pretty much anything can be in the list of valid values.

    The valuesDescr value will be used in exceptions, i.e. "Item must be one of values in VALID_ACTIONS" if valuesDescr is "VALID_ACTIONS".


    Note: This class doesn't make any attempt to trap for nonsensical arguments. All of the values in the values list should be of the same type (i.e. strings). Then, all list operations also need to be of that type (i.e. you should always insert or append just strings). If you mix types -- for instance lists and strings -- you will likely see AttributeError exceptions or other problems.
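    A simplified sketch of the restricted-list idea; the real class also supports an error-message prefix and inherits unordered comparisons from UnorderedList, both omitted here:

```python
class RestrictedList(list):
    """A list that only accepts items from a fixed set of valid values."""

    def __init__(self, valuesList, valuesDescr):
        super().__init__()
        self.valuesList = valuesList
        self.valuesDescr = valuesDescr

    def _check(self, item):
        if item not in self.valuesList:
            raise ValueError("Item must be one of values in %s." % self.valuesDescr)

    def append(self, item):
        self._check(item)
        super().append(item)

    def insert(self, index, item):
        self._check(item)
        super().insert(index, item)

    def extend(self, seq):
        for item in seq:  # validate everything before delegating
            self._check(item)
        super().extend(seq)
```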

    Instance Methods
    new empty list
    __init__(self, valuesList, valuesDescr, prefix=None)
    Initializes a list restricted to containing certain values.
    source code
     
    append(self, item)
    Overrides the standard append method.
    source code
     
    insert(self, index, item)
    Overrides the standard insert method.
    source code
     
    extend(self, seq)
    Overrides the standard extend method.
    source code

    Inherited from UnorderedList: __eq__, __ge__, __gt__, __le__, __lt__, __ne__

    Inherited from list: __add__, __contains__, __delitem__, __delslice__, __getattribute__, __getitem__, __getslice__, __iadd__, __imul__, __iter__, __len__, __mul__, __new__, __repr__, __reversed__, __rmul__, __setitem__, __setslice__, __sizeof__, count, index, pop, remove, reverse, sort

    Inherited from object: __delattr__, __format__, __reduce__, __reduce_ex__, __setattr__, __str__, __subclasshook__

    Static Methods

    Inherited from UnorderedList: mixedkey, mixedsort

    Class Variables

    Inherited from list: __hash__

    Properties

    Inherited from object: __class__

    Method Details

    __init__(self, valuesList, valuesDescr, prefix=None)
    (Constructor)

    source code 

    Initializes a list restricted to containing certain values.

    Parameters:
    • valuesList - List of valid values.
    • valuesDescr - Short string describing list of values.
    • prefix - Prefix to use in error messages (None results in prefix "Item")
    Returns: new empty list
    Overrides: object.__init__

    append(self, item)

    source code 

    Overrides the standard append method.

    Raises:
    • ValueError - If item is not in the values list.
    Overrides: list.append

    insert(self, index, item)

    source code 

    Overrides the standard insert method.

    Raises:
    • ValueError - If item is not in the values list.
    Overrides: list.insert

    extend(self, seq)

    source code 

    Overrides the standard extend method.

    Raises:
    • ValueError - If item is not in the values list.
    Overrides: list.extend

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.extend.subversion.Repository-class.html (CedarBackup3.extend.subversion.Repository)
    Package CedarBackup3 :: Package extend :: Module subversion :: Class Repository

    Class Repository

    source code

    object --+
             |
            Repository
    
    Known Subclasses:

    Class representing generic Subversion repository configuration.

    The following restrictions exist on data in this class:

    • The repository path must be absolute.
    • The collect mode must be one of the values in VALID_COLLECT_MODES.
    • The compress mode must be one of the values in VALID_COMPRESS_MODES.

    The repository type value is kept around just for reference. It doesn't affect the behavior of the backup.
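    The restrictions above might be checked like this; the specific mode values are assumptions for illustration (the real value lists live in VALID_COLLECT_MODES and VALID_COMPRESS_MODES):

```python
import os

# Assumed mode values, for illustration only.
VALID_COLLECT_MODES = ["daily", "weekly", "incr"]
VALID_COMPRESS_MODES = ["none", "gzip", "bzip2"]

def validate_repository(repository_path, collect_mode, compress_mode):
    """Raise ValueError if any documented restriction is violated."""
    if not os.path.isabs(repository_path):
        raise ValueError("Repository path must be absolute.")
    if collect_mode is not None and collect_mode not in VALID_COLLECT_MODES:
        raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES)
    if compress_mode is not None and compress_mode not in VALID_COMPRESS_MODES:
        raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES)
```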

    Instance Methods
     
    __init__(self, repositoryType=None, repositoryPath=None, collectMode=None, compressMode=None)
    Constructor for the Repository class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    __eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    _setRepositoryType(self, value)
    Property target used to set the repository type.
    source code
     
    _getRepositoryType(self)
    Property target used to get the repository type.
    source code
     
    _setRepositoryPath(self, value)
    Property target used to set the repository path.
    source code
     
    _getRepositoryPath(self)
    Property target used to get the repository path.
    source code
     
    _setCollectMode(self, value)
    Property target used to set the collect mode.
    source code
     
    _getCollectMode(self)
    Property target used to get the collect mode.
    source code
     
    _setCompressMode(self, value)
    Property target used to set the compress mode.
    source code
     
    _getCompressMode(self)
    Property target used to get the compress mode.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      repositoryPath
    Path to the repository to collect.
      collectMode
    Overridden collect mode for this repository.
      compressMode
    Overridden compress mode for this repository.
      repositoryType
    Type of this repository, for reference.

    Inherited from object: __class__

    Method Details

    __init__(self, repositoryType=None, repositoryPath=None, collectMode=None, compressMode=None)
    (Constructor)

    source code 

    Constructor for the Repository class.

    Parameters:
    • repositoryType - Type of repository, for reference
    • repositoryPath - Absolute path to a Subversion repository on disk.
    • collectMode - Overridden collect mode for this directory.
    • compressMode - Overridden compression mode for this directory.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setRepositoryType(self, value)

    source code 

    Property target used to set the repository type. There is no validation; this value is kept around just for reference.

    _setRepositoryPath(self, value)

    source code 

    Property target used to set the repository path. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setCollectMode(self, value)

    source code 

    Property target used to set the collect mode. If not None, the mode must be one of the values in VALID_COLLECT_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setCompressMode(self, value)

    source code 

    Property target used to set the compress mode. If not None, the mode must be one of the values in VALID_COMPRESS_MODES.

    Raises:
    • ValueError - If the value is not valid.

    Property Details

    repositoryPath

    Path to the repository to collect.

    Get Method:
    _getRepositoryPath(self) - Property target used to get the repository path.
    Set Method:
    _setRepositoryPath(self, value) - Property target used to set the repository path.

    collectMode

    Overridden collect mode for this repository.

    Get Method:
    _getCollectMode(self) - Property target used to get the collect mode.
    Set Method:
    _setCollectMode(self, value) - Property target used to set the collect mode.

    compressMode

    Overridden compress mode for this repository.

    Get Method:
    _getCompressMode(self) - Property target used to get the compress mode.
    Set Method:
    _setCompressMode(self, value) - Property target used to set the compress mode.

    repositoryType

    Type of this repository, for reference.

    Get Method:
    _getRepositoryType(self) - Property target used to get the repository type.
    Set Method:
    _setRepositoryType(self, value) - Property target used to set the repository type.

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.util.AbsolutePathList-class.html (CedarBackup3.util.AbsolutePathList)
    Package CedarBackup3 :: Module util :: Class AbsolutePathList

    Class AbsolutePathList

    source code

    object --+        
             |        
          list --+    
                 |    
     UnorderedList --+
                     |
                    AbsolutePathList
    

    Class representing a list of absolute paths.

    This is an unordered list.

    We override the append, insert and extend methods to ensure that any item added to the list is an absolute path.

    Each item added to the list is encoded using encodePath. If we don't do this, we have problems trying certain operations between strings and unicode objects, particularly for "odd" filenames that can't be encoded in standard ASCII.
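The documented behavior can be illustrated with a small self-contained sketch. This is a simplified stand-in, not the library's actual code: the real class also applies encodePath and inherits the UnorderedList comparison semantics.

```python
import os

class AbsolutePathListSketch(list):
    """Illustrative list that only accepts absolute paths."""

    def _check(self, item):
        # Reject anything that is not an absolute path
        if not os.path.isabs(item):
            raise ValueError("Not an absolute path: %s" % item)
        return item

    def append(self, item):
        list.append(self, self._check(item))

    def insert(self, index, item):
        list.insert(self, index, self._check(item))

    def extend(self, seq):
        for item in seq:
            self.append(item)

paths = AbsolutePathListSketch()
paths.append("/etc/cback3.conf")      # accepted: absolute
try:
    paths.append("relative/path")     # rejected with ValueError
except ValueError:
    pass
```

The same check guards all three mutation methods, so a caller cannot sneak a relative path in through insert or extend.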

Instance Methods
     
    append(self, item)
    Overrides the standard append method.
    source code
     
    insert(self, index, item)
    Overrides the standard insert method.
    source code
     
    extend(self, seq)
Overrides the standard extend method.
    source code

    Inherited from UnorderedList: __eq__, __ge__, __gt__, __le__, __lt__, __ne__

    Inherited from list: __add__, __contains__, __delitem__, __delslice__, __getattribute__, __getitem__, __getslice__, __iadd__, __imul__, __init__, __iter__, __len__, __mul__, __new__, __repr__, __reversed__, __rmul__, __setitem__, __setslice__, __sizeof__, count, index, pop, remove, reverse, sort

    Inherited from object: __delattr__, __format__, __reduce__, __reduce_ex__, __setattr__, __str__, __subclasshook__

Static Methods

Inherited from UnorderedList: mixedkey, mixedsort

Class Variables

Inherited from list: __hash__

Properties

Inherited from object: __class__

Method Details

    append(self, item)

    source code 

    Overrides the standard append method.

    Raises:
    • ValueError - If item is not an absolute path.
    Overrides: list.append

    insert(self, index, item)

    source code 

    Overrides the standard insert method.

    Raises:
    • ValueError - If item is not an absolute path.
    Overrides: list.insert

    extend(self, seq)

    source code 

Overrides the standard extend method.

    Raises:
    • ValueError - If any item is not an absolute path.
    Overrides: list.extend


    Module subversion


    Classes

    BDBRepository
    FSFSRepository
    LocalConfig
    Repository
    RepositoryDir
    SubversionConfig

    Functions

    backupBDBRepository
    backupFSFSRepository
    backupRepository
    executeAction
    getYoungestRevision

    Variables

    REVISION_PATH_EXTENSION
    SVNADMIN_COMMAND
    SVNLOOK_COMMAND
    __package__
    logger

CedarBackup3.actions.initialize

Package CedarBackup3 :: Package actions :: Module initialize

    Module initialize

    source code

    Implements the standard 'initialize' action.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Functions
     
    executeInitialize(configPath, options, config)
    Executes the initialize action.
    source code
Variables
  logger = logging.getLogger("CedarBackup3.log.actions.initialize")
  __package__ = 'CedarBackup3.actions'
Function Details

    executeInitialize(configPath, options, config)

    source code 

    Executes the initialize action.

    The initialize action initializes the media currently in the writer device so that Cedar Backup can recognize it later. This is an optional step; it's only required if checkMedia is set on the store configuration.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.

Class Hierarchy

CedarBackup3.filesystem.BackupFileList

Package CedarBackup3 :: Module filesystem :: Class BackupFileList

    Class BackupFileList

    source code

    object --+        
             |        
          list --+    
                 |    
    FilesystemList --+
                     |
                    BackupFileList
    

    List of files to be backed up.

    A BackupFileList is a FilesystemList containing a list of files to be backed up. It only contains files, not directories (soft links are treated like files). On top of the generic functionality provided by FilesystemList, this class adds functionality to keep a hash (checksum) for each file in the list, and it also provides a method to calculate the total size of the files in the list and a way to export the list into tar form.

Instance Methods
    new empty list
    __init__(self)
    Initializes a list with no configured exclusions.
    source code
     
    addDir(self, path)
    Adds a directory to the list.
    source code
     
    totalSize(self)
    Returns the total size among all files in the list.
    source code
     
    generateSizeMap(self)
    Generates a mapping from file to file size in bytes.
    source code
     
    generateDigestMap(self, stripPrefix=None)
    Generates a mapping from file to file digest.
    source code
     
    generateFitted(self, capacity, algorithm='worst_fit')
    Generates a list of items that fit in the indicated capacity.
    source code
     
    generateTarfile(self, path, mode='tar', ignore=False, flat=False)
    Creates a tar file containing the files in the list.
    source code
     
    removeUnchanged(self, digestMap, captureDigest=False)
    Removes unchanged entries from the list.
    source code
     
    generateSpan(self, capacity, algorithm='worst_fit')
    Splits the list of items into sub-lists that fit in a given capacity.
    source code
     
    _getKnapsackTable(self, capacity=None)
    Converts the list into the form needed by the knapsack algorithms.
    source code

    Inherited from FilesystemList: addDirContents, addFile, normalize, removeDirs, removeFiles, removeInvalid, removeLinks, removeMatch, verify

    Inherited from list: __add__, __contains__, __delitem__, __delslice__, __eq__, __ge__, __getattribute__, __getitem__, __getslice__, __gt__, __iadd__, __imul__, __iter__, __le__, __len__, __lt__, __mul__, __ne__, __new__, __repr__, __reversed__, __rmul__, __setitem__, __setslice__, __sizeof__, append, count, extend, index, insert, pop, remove, reverse, sort

    Inherited from object: __delattr__, __format__, __reduce__, __reduce_ex__, __setattr__, __str__, __subclasshook__

Static Methods
     
    _generateDigest(path)
    Generates an SHA digest for a given file on disk.
    source code
     
    _getKnapsackFunction(algorithm)
    Returns a reference to the function associated with an algorithm name.
    source code
Class Variables

Inherited from list: __hash__

Properties

Inherited from FilesystemList: excludeBasenamePatterns, excludeDirs, excludeFiles, excludeLinks, excludePaths, excludePatterns, ignoreFile

Inherited from object: __class__

Method Details

    __init__(self)
    (Constructor)

    source code 

    Initializes a list with no configured exclusions.

    Returns: new empty list
    Overrides: object.__init__

    addDir(self, path)

    source code 

    Adds a directory to the list.

    Note that this class does not allow directories to be added by themselves (a backup list contains only files). However, since links to directories are technically files, we allow them to be added.

    This method is implemented in terms of the superclass method, with one additional validation: the superclass method is only called if the passed-in path is both a directory and a link. All of the superclass's existing validations and restrictions apply.

    Parameters:
    • path (String representing a path on disk) - Directory path to be added to the list
    Returns:
    Number of items added to the list.
    Raises:
    • ValueError - If path is not a directory or does not exist.
    • ValueError - If the path could not be encoded properly.
    Overrides: FilesystemList.addDir

    totalSize(self)

    source code 

    Returns the total size among all files in the list. Only files are counted. Soft links that point at files are ignored. Entries which do not exist on disk are ignored.

    Returns:
    Total size, in bytes

    generateSizeMap(self)

    source code 

    Generates a mapping from file to file size in bytes. The mapping does include soft links, which are listed with size zero. Entries which do not exist on disk are ignored.

    Returns:
    Dictionary mapping file to file size

    generateDigestMap(self, stripPrefix=None)

    source code 

    Generates a mapping from file to file digest.

    Currently, the digest is an SHA hash, which should be pretty secure. In the future, this might be a different kind of hash, but we guarantee that the type of the hash will not change unless the library major version number is bumped.

    Entries which do not exist on disk are ignored.

    Soft links are ignored. We would end up generating a digest for the file that the soft link points at, which doesn't make any sense.

    If stripPrefix is passed in, then that prefix will be stripped from each key when the map is generated. This can be useful in generating two "relative" digest maps to be compared to one another.

    Parameters:
    • stripPrefix (String with any contents) - Common prefix to be stripped from paths
    Returns:
    Dictionary mapping file to digest value

    See Also: removeUnchanged
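The stripPrefix idea can be shown with a self-contained sketch. Here digest_map_sketch is a hypothetical helper that hashes in-memory content rather than files on disk; the real method reads files and ignores soft links.

```python
import hashlib

def digest_map_sketch(files, strip_prefix=None):
    """Map each path to a SHA-1 digest, optionally stripping a common prefix.

    files is a dict of path -> bytes content, standing in for files on disk.
    """
    result = {}
    for path, content in files.items():
        key = path
        if strip_prefix is not None and key.startswith(strip_prefix):
            key = key[len(strip_prefix):]
        result[key] = hashlib.sha1(content).hexdigest()
    return result

# Two snapshots rooted at different prefixes compare cleanly once stripped:
old = digest_map_sketch({"/backup/week1/a.txt": b"hello"}, strip_prefix="/backup/week1")
new = digest_map_sketch({"/backup/week2/a.txt": b"hello"}, strip_prefix="/backup/week2")
assert old == new
```

Because both maps are keyed by the relative path "/a.txt", they can be compared directly even though the snapshots live under different roots.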

    generateFitted(self, capacity, algorithm='worst_fit')

    source code 

    Generates a list of items that fit in the indicated capacity.

    Sometimes, callers would like to include every item in a list, but are unable to because not all of the items fit in the space available. This method returns a copy of the list, containing only the items that fit in a given capacity. A copy is returned so that we don't lose any information if for some reason the fitted list is unsatisfactory.

The fitting is done using the functions in the knapsack module. By default, the worst fit algorithm is used, but you can also choose from first fit, best fit and alternate fit.

    Parameters:
    • capacity (Integer, in bytes) - Maximum total size of the files in the new list
    • algorithm (One of "first_fit", "best_fit", "worst_fit", "alternate_fit") - Knapsack (fit) algorithm to use
    Returns:
    Copy of list with total size no larger than indicated capacity
    Raises:
    • ValueError - If the algorithm is invalid.
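As an illustration of one of these strategies, a first-fit pass can be sketched in a few lines. This is not the knapsack module's actual code, just the idea: take items in order and keep each one that still fits.

```python
def first_fit_sketch(items, capacity):
    """Return the (names, total size) chosen by a simple first-fit pass.

    Items are (name, size) pairs considered in the order given; each one
    is kept if it still fits in the remaining capacity.
    """
    chosen, used = [], 0
    for name, size in items:
        if used + size <= capacity:
            chosen.append(name)
            used += size
    return chosen, used

files = [("a", 400), ("b", 300), ("c", 250), ("d", 100)]
chosen, used = first_fit_sketch(files, capacity=700)
# a (400) and b (300) fill the capacity exactly; c and d no longer fit
```

First fit is fast but order-sensitive; the other algorithms trade speed for better packing, which is why the choice is configurable.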

    generateTarfile(self, path, mode='tar', ignore=False, flat=False)

    source code 

    Creates a tar file containing the files in the list.

    By default, this method will create uncompressed tar files. If you pass in mode 'targz', then it will create gzipped tar files, and if you pass in mode 'tarbz2', then it will create bzipped tar files.

The tar file will be created as a GNU tar archive, which enables extended file name lengths, etc. Since GNU tar is so prevalent, I've decided that the extra functionality outweighs the disadvantage of not being "standard".

    If you pass in flat=True, then a "flat" archive will be created, and all of the files will be added to the root of the archive. So, the file /tmp/something/whatever.txt would be added as just whatever.txt.

By default, the whole method call fails if there are problems adding any of the files to the archive, resulting in an exception. Under these circumstances, callers are advised that they might want to call removeInvalid() and then attempt to build the tar file a second time, since the most common cause of failures is a missing file (a file that existed when the list was built, but is gone again by the time the tar file is built).

    If you want to, you can pass in ignore=True, and the method will ignore errors encountered when adding individual files to the archive (but not errors opening and closing the archive itself).

We'll always attempt to remove the tarfile from disk if an exception is thrown.

    Parameters:
    • path (String representing a path on disk) - Path of tar file to create on disk
    • mode (One of either 'tar', 'targz' or 'tarbz2') - Tar creation mode
    • ignore (Boolean) - Indicates whether to ignore certain errors.
    • flat (Boolean) - Creates "flat" archive by putting all items in root
    Raises:
    • ValueError - If mode is not valid
    • ValueError - If list is empty
    • ValueError - If the path could not be encoded properly.
    • TarError - If there is a problem creating the tar file
    Notes:
    • No validation is done as to whether the entries in the list are files, since only files or soft links should be in an object like this. However, to be safe, everything is explicitly added to the tar archive non-recursively so it's safe to include soft links to directories.
    • The Python tarfile module, which is used internally here, is supposed to deal properly with long filenames and links. In my testing, I have found that it appears to be able to add really long filenames to archives, but doesn't do a good job reading them back out, even out of an archive it created. Fortunately, all Cedar Backup does is add files to archives.
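The behavior described above maps naturally onto the standard tarfile module. The following is an illustrative sketch, not the library's actual implementation; in particular, the real method also removes the partial archive from disk when an exception is thrown, which this sketch omits.

```python
import os
import tarfile

def generate_tarfile_sketch(paths, archive_path, mode="tar", ignore=False, flat=False):
    """Create a GNU-format tar archive from a list of file paths."""
    tar_modes = {"tar": "w", "targz": "w:gz", "tarbz2": "w:bz2"}
    if mode not in tar_modes:
        raise ValueError("Unknown mode: %s" % mode)
    with tarfile.open(archive_path, tar_modes[mode], format=tarfile.GNU_FORMAT) as tar:
        for path in paths:
            # flat=True drops directories, so /tmp/something/whatever.txt
            # is stored as just whatever.txt
            arcname = os.path.basename(path) if flat else path.lstrip("/")
            try:
                # recursive=False, so a soft link to a directory adds
                # only the link itself, never the directory contents
                tar.add(path, arcname=arcname, recursive=False)
            except OSError:
                if not ignore:
                    raise
```

With ignore=True, errors adding individual files (typically a file deleted since the list was built) are skipped, while errors opening the archive itself still propagate.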

    removeUnchanged(self, digestMap, captureDigest=False)

    source code 

    Removes unchanged entries from the list.

    This method relies on a digest map as returned from generateDigestMap. For each entry in digestMap, if the entry also exists in the current list and the entry in the current list has the same digest value as in the map, the entry in the current list will be removed.

    This method offers a convenient way for callers to filter unneeded entries from a list. The idea is that a caller will capture a digest map from generateDigestMap at some point in time (perhaps the beginning of the week), and will save off that map using pickle or some other method. Then, the caller could use this method sometime in the future to filter out any unchanged files based on the saved-off map.

    If captureDigest is passed-in as True, then digest information will be captured for the entire list before the removal step occurs using the same rules as in generateDigestMap. The check will involve a lookup into the complete digest map.

    If captureDigest is passed in as False, we will only generate a digest value for files we actually need to check, and we'll ignore any entry in the list which isn't a file that currently exists on disk.

    The return value varies depending on captureDigest, as well. To preserve backwards compatibility, if captureDigest is False, then we'll just return a single value representing the number of entries removed. Otherwise, we'll return a tuple of (entries removed, digest map). The returned digest map will be in exactly the form returned by generateDigestMap.

    Parameters:
    • digestMap (Map as returned from generateDigestMap.) - Dictionary mapping file name to digest value.
    • captureDigest (Boolean) - Indicates that digest information should be captured.
    Returns:
    Results as discussed above (format varies based on arguments)

    Note: For performance reasons, this method actually ends up rebuilding the list from scratch. First, we build a temporary dictionary containing all of the items from the original list. Then, we remove items as needed from the dictionary (which is faster than the equivalent operation on a list). Finally, we replace the contents of the current list based on the keys left in the dictionary. This should be transparent to the caller.
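The filtering idea can be sketched with plain dictionaries. This is a hypothetical illustration using the same rebuild-via-dictionary approach described in the note, not the library's actual code.

```python
def remove_unchanged_sketch(current, digest_map, saved_map):
    """Drop entries whose digest matches a previously saved digest map.

    current is a list of paths; digest_map maps each current path to its
    digest; saved_map is the map captured earlier (e.g. via pickle).
    """
    # Rebuild via a dict, since deletion from a dict is cheaper than
    # repeated deletion from a list
    remaining = {path: None for path in current}
    for path, old_digest in saved_map.items():
        if path in remaining and digest_map.get(path) == old_digest:
            del remaining[path]
    return list(remaining)

files = ["/data/a", "/data/b"]
digests = {"/data/a": "123", "/data/b": "456"}
saved = {"/data/a": "123", "/data/b": "999"}   # b changed since capture
# Only /data/b survives the filter, because its digest no longer matches
```

A caller would capture the saved map at the start of the week, persist it, and then run this filter on later days to back up only changed files.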

    _generateDigest(path)
    Static Method

    source code 

    Generates an SHA digest for a given file on disk.

    The original code for this function used this simplistic implementation, which requires reading the entire file into memory at once in order to generate a digest value:

      sha.new(open(path).read()).hexdigest()
    

Not surprisingly, this isn't an optimal solution. The "Simple file hashing" Python Cookbook recipe describes how to incrementally generate a hash value by reading in chunks of data rather than reading the file all at once. The recipe relies on the update() method of the various Python hashing algorithms.

    In my tests using a 110 MB file on CD, the original implementation requires 111 seconds. This implementation requires only 40-45 seconds, which is a pretty substantial speed-up.

    Experience shows that reading in around 4kB (4096 bytes) at a time yields the best performance. Smaller reads are quite a bit slower, and larger reads don't make much of a difference. The 4kB number makes me a little suspicious, and I think it might be related to the size of a filesystem read at the hardware level. However, I've decided to just hardcode 4096 until I have evidence that shows it's worthwhile making the read size configurable.

    Parameters:
    • path - Path to generate digest for.
    Returns:
    ASCII-safe SHA digest for the file.
    Raises:
    • OSError - If the file cannot be opened.
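A chunked digest computation of the kind described looks roughly like this. It is an illustrative sketch; SHA-1 is assumed here, since the documentation only promises "an SHA hash".

```python
import hashlib

def generate_digest_sketch(path, chunk_size=4096):
    """Incrementally hash a file in 4 kB chunks instead of reading it whole."""
    digest = hashlib.sha1()
    with open(path, "rb") as handle:
        while True:
            chunk = handle.read(chunk_size)
            if not chunk:
                break
            digest.update(chunk)
    return digest.hexdigest()
```

Memory use stays constant regardless of file size, which is what makes this approach so much faster than slurping the whole file for the large files typical of backups.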

    generateSpan(self, capacity, algorithm='worst_fit')

    source code 

    Splits the list of items into sub-lists that fit in a given capacity.

Sometimes, callers need to split a backup file list into a set of smaller lists. For instance, you could use this to "span" the files across a set of discs.

The fitting is done using the functions in the knapsack module. By default, the worst fit algorithm is used, but you can also choose from first fit, best fit and alternate fit.

    Parameters:
    • capacity (Integer, in bytes) - Maximum total size of the files in each sub-list
    • algorithm (One of "first_fit", "best_fit", "worst_fit", "alternate_fit") - Knapsack (fit) algorithm to use
    Returns:
    List of SpanItem objects.
    Raises:
    • ValueError - If the algorithm is invalid.
    • ValueError - If it's not possible to fit some items

    Note: If any of your items are larger than the capacity, then it won't be possible to find a solution. In this case, a value error will be raised.

    _getKnapsackTable(self, capacity=None)

    source code 

    Converts the list into the form needed by the knapsack algorithms.

    Returns:
    Dictionary mapping file name to tuple of (file path, file size).

    _getKnapsackFunction(algorithm)
    Static Method

    source code 

Returns a reference to the function associated with an algorithm name. Algorithm name must be one of "first_fit", "best_fit", "worst_fit", "alternate_fit".

    Parameters:
    • algorithm - Name of the algorithm
    Returns:
    Reference to knapsack function
    Raises:
    • ValueError - If the algorithm name is unknown.

CedarBackup3.writers.util

Package CedarBackup3 :: Package writers :: Module util

    Module util

    source code

    Provides utilities related to image writers.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Classes
  IsoImage
Represents an ISO filesystem image.
Functions
     
    validateDevice(device, unittest=False)
    Validates a configured device.
    source code
     
    validateScsiId(scsiId)
    Validates a SCSI id string.
    source code
     
    validateDriveSpeed(driveSpeed)
    Validates a drive speed value.
    source code
     
    readMediaLabel(devicePath)
    Reads the media label (volume name) from the indicated device.
    source code
Variables
  logger = logging.getLogger("CedarBackup3.log.writers.util")
  MKISOFS_COMMAND = ['mkisofs']
  VOLNAME_COMMAND = ['volname']
  __package__ = 'CedarBackup3.writers'
Function Details

    validateDevice(device, unittest=False)

    source code 

    Validates a configured device. The device must be an absolute path, must exist, and must be writable. The unittest flag turns off validation of the device on disk.

    Parameters:
    • device - Filesystem device path.
    • unittest - Indicates whether we're unit testing.
    Returns:
    Device as a string, for instance "/dev/cdrw"
    Raises:
    • ValueError - If the device value is invalid.
    • ValueError - If some path cannot be encoded properly.

    validateScsiId(scsiId)

    source code 

    Validates a SCSI id string. SCSI id must be a string in the form [<method>:]scsibus,target,lun. For Mac OS X (Darwin), we also accept the form IO.*Services[/N].

    Parameters:
    • scsiId - SCSI id for the device.
    Returns:
    SCSI id as a string, for instance "ATA:1,0,0"
    Raises:
    • ValueError - If the SCSI id string is invalid.

    Note: For consistency, if None is passed in, None will be returned.
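The accepted forms can be sketched with regular expressions. These patterns are illustrative approximations of the documented formats, not the library's actual validation code.

```python
import re

# [<method>:]scsibus,target,lun, e.g. "ATA:1,0,0" or "1,0,0"
_METHOD_FORM = re.compile(r"^(?:[A-Za-z0-9]+:)?\d+,\d+,\d+$")
# Mac OS X (Darwin) form, e.g. "IOCompactDiscServices" or "...Services/2"
_DARWIN_FORM = re.compile(r"^IO.*Services(?:/\d+)?$")

def validate_scsi_id_sketch(scsi_id):
    if scsi_id is None:
        return None  # for consistency, None passes straight through
    if _METHOD_FORM.match(scsi_id) or _DARWIN_FORM.match(scsi_id):
        return scsi_id
    raise ValueError("Invalid SCSI id: %s" % scsi_id)
```

The None pass-through mirrors the documented behavior, so callers can validate optional configuration without special-casing the unset value.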

    validateDriveSpeed(driveSpeed)

    source code 

    Validates a drive speed value. Drive speed must be an integer which is >= 1.

    Parameters:
    • driveSpeed - Speed at which the drive writes.
    Returns:
    Drive speed as an integer
    Raises:
    • ValueError - If the drive speed value is invalid.

    Note: For consistency, if None is passed in, None will be returned.

    readMediaLabel(devicePath)

    source code 

    Reads the media label (volume name) from the indicated device. The volume name is read using the volname command.

    Parameters:
    • devicePath - Device path to read from
    Returns:
    Media label as a string, or None if there is no name or it could not be read.


    Module writer


    Variables

    __package__


    Module rebuild


    Functions

    executeRebuild

    Variables

    __package__
    logger

CedarBackup3.writer

Package CedarBackup3 :: Module writer

    Source Code for Module CedarBackup3.writer

# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici <pronovic@ieee.org>
# Language : Python 3 (>= 3.4)
# Project  : Cedar Backup, release 3
# Purpose  : Provides interface backwards compatibility.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Provides interface backwards compatibility.

In Cedar Backup 2.10.0, a refactoring effort took place while adding code to
support DVD hardware.  All of the writer functionality was moved to the
writers/ package.  This mostly-empty file remains to preserve the Cedar Backup
library interface.

@author: Kenneth J. Pronovici <pronovic@ieee.org>
"""

########################################################################
# Imported modules
########################################################################

# pylint: disable=W0611
from CedarBackup3.writers.util import validateScsiId, validateDriveSpeed
from CedarBackup3.writers.cdwriter import MediaDefinition, MediaCapacity, CdWriter
from CedarBackup3.writers.cdwriter import MEDIA_CDRW_74, MEDIA_CDR_74, MEDIA_CDRW_80, MEDIA_CDR_80

CedarBackup3.extend.mbox.MboxFile

Package CedarBackup3 :: Package extend :: Module mbox :: Class MboxFile

    Class MboxFile

    source code

    object --+
             |
            MboxFile
    

Class representing mbox file configuration.

    The following restrictions exist on data in this class:

    • The absolute path must be absolute.
    • The collect mode must be one of the values in VALID_COLLECT_MODES.
    • The compress mode must be one of the values in VALID_COMPRESS_MODES.
Instance Methods
     
    __init__(self, absolutePath=None, collectMode=None, compressMode=None)
    Constructor for the MboxFile class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    __eq__(self, other)
Equals operator, implemented in terms of original Python 2 compare operator.
source code
 
__lt__(self, other)
Less-than operator, implemented in terms of original Python 2 compare operator.
source code
 
__gt__(self, other)
Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    _setAbsolutePath(self, value)
    Property target used to set the absolute path.
    source code
     
    _getAbsolutePath(self)
    Property target used to get the absolute path.
    source code
     
    _setCollectMode(self, value)
    Property target used to set the collect mode.
    source code
     
    _getCollectMode(self)
    Property target used to get the collect mode.
    source code
     
    _setCompressMode(self, value)
    Property target used to set the compress mode.
    source code
     
    _getCompressMode(self)
    Property target used to get the compress mode.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties
      absolutePath
    Absolute path to the mbox file.
      collectMode
    Overridden collect mode for this mbox file.
      compressMode
    Overridden compress mode for this mbox file.

    Inherited from object: __class__

Method Details

    __init__(self, absolutePath=None, collectMode=None, compressMode=None)
    (Constructor)

    source code 

    Constructor for the MboxFile class.

    You should never directly instantiate this class.

    Parameters:
    • absolutePath - Absolute path to an mbox file on disk.
    • collectMode - Overridden collect mode for this directory.
    • compressMode - Overridden compression mode for this directory.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setAbsolutePath(self, value)

    source code 

    Property target used to set the absolute path. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setCollectMode(self, value)

    source code 

    Property target used to set the collect mode. If not None, the mode must be one of the values in VALID_COLLECT_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setCompressMode(self, value)

    source code 

    Property target used to set the compress mode. If not None, the mode must be one of the values in VALID_COMPRESS_MODES.

    Raises:
    • ValueError - If the value is not valid.

Property Details

    absolutePath

    Absolute path to the mbox file.

    Get Method:
    _getAbsolutePath(self) - Property target used to get the absolute path.
    Set Method:
    _setAbsolutePath(self, value) - Property target used to set the absolute path.

    collectMode

    Overridden collect mode for this mbox file.

    Get Method:
    _getCollectMode(self) - Property target used to get the collect mode.
    Set Method:
    _setCollectMode(self, value) - Property target used to set the collect mode.

    compressMode

    Overridden compress mode for this mbox file.

    Get Method:
    _getCompressMode(self) - Property target used to get the compress mode.
    Set Method:
    _setCompressMode(self, value) - Property target used to set the compress mode.


    Module validate


    Functions

    executeValidate

    Variables

    __package__
    logger

CedarBackup3.util.ObjectTypeList

Package CedarBackup3 :: Module util :: Class ObjectTypeList

    Class ObjectTypeList

    source code

    object --+        
             |        
          list --+    
                 |    
     UnorderedList --+
                     |
                    ObjectTypeList
    

    Class representing a list containing only objects with a certain type.

    This is an unordered list.

We override the append, insert and extend methods to ensure that any item added to the list matches the type that is requested. The comparison uses the built-in isinstance, which should allow subclasses of the requested type to be added to the list as well.

    The objectName value will be used in exceptions, i.e. "Item must be a CollectDir object." if objectName is "CollectDir".
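The documented checks can be illustrated with a compact sketch. This is a simplified stand-in for the real class, which also inherits the UnorderedList comparison semantics.

```python
class ObjectTypeListSketch(list):
    """Illustrative list accepting only instances of one type."""

    def __init__(self, object_type, object_name):
        super().__init__()
        self._type = object_type
        self._name = object_name

    def _check(self, item):
        # isinstance lets subclasses of the requested type through as well
        if not isinstance(item, self._type):
            raise ValueError("Item must be a %s object." % self._name)
        return item

    def append(self, item):
        list.append(self, self._check(item))

    def insert(self, index, item):
        list.insert(self, index, self._check(item))

    def extend(self, seq):
        for item in seq:
            self.append(item)

ints = ObjectTypeListSketch(int, "int")
ints.append(1)
ints.extend([2, 3])
# ints.append("x") would raise ValueError("Item must be a int object.")
```

The objectName string only feeds the error message, so the exception names the expected configuration type rather than a raw Python class.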

Instance Methods
    new empty list
    __init__(self, objectType, objectName)
    Initializes a typed list for a particular type.
    source code
     
    append(self, item)
    Overrides the standard append method.
    source code
     
    insert(self, index, item)
    Overrides the standard insert method.
    source code
     
    extend(self, seq)
    Overrides the standard extend method.
    source code

    Inherited from UnorderedList: __eq__, __ge__, __gt__, __le__, __lt__, __ne__

    Inherited from list: __add__, __contains__, __delitem__, __delslice__, __getattribute__, __getitem__, __getslice__, __iadd__, __imul__, __iter__, __len__, __mul__, __new__, __repr__, __reversed__, __rmul__, __setitem__, __setslice__, __sizeof__, count, index, pop, remove, reverse, sort

    Inherited from object: __delattr__, __format__, __reduce__, __reduce_ex__, __setattr__, __str__, __subclasshook__

    Static Methods

    Inherited from UnorderedList: mixedkey, mixedsort

    Class Variables

    Inherited from list: __hash__

    Properties

    Inherited from object: __class__

    Method Details

    __init__(self, objectType, objectName)
    (Constructor)

    source code 

    Initializes a typed list for a particular type.

    Parameters:
    • objectType - Type that the list elements must match.
    • objectName - Short string containing the "name" of the type.
    Returns: new empty list
    Overrides: object.__init__

    append(self, item)

    source code 

    Overrides the standard append method.

    Raises:
    • ValueError - If item does not match requested type.
    Overrides: list.append

    insert(self, index, item)

    source code 

    Overrides the standard insert method.

    Raises:
    • ValueError - If item does not match requested type.
    Overrides: list.insert

    extend(self, seq)

    source code 

    Overrides the standard extend method.

    Raises:
    • ValueError - If item does not match requested type.
    Overrides: list.extend

    CedarBackup3-3.1.6/doc/interface/epydoc.css

    /* Epydoc CSS Stylesheet
     *
     * This stylesheet can be used to customize the appearance of epydoc's
     * HTML output.
     */

    /* Default Colors & Styles
     * - Set the default foreground & background color with 'body'; and
     *   link colors with 'a:link' and 'a:visited'.
     * - Use bold for definition list terms.
     * - The heading styles defined here are used for headings *within*
     *   docstring descriptions.  All headings used by epydoc itself use
     *   either class='epydoc' or class='toc' (CSS styles for both
     *   defined below). */
    body { background: #ffffff; color: #000000; }
    p { margin-top: 0.5em; margin-bottom: 0.5em; }
    a:link { color: #0000ff; }
    a:visited { color: #204080; }
    dt { font-weight: bold; }
    h1 { font-size: +140%; font-style: italic; font-weight: bold; }
    h2 { font-size: +125%; font-style: italic; font-weight: bold; }
    h3 { font-size: +110%; font-style: italic; font-weight: normal; }
    code { font-size: 100%; }
    /* N.B.: class, not pseudoclass */
    a.link { font-family: monospace; }

    /* Page Header & Footer
     * - The standard page header consists of a navigation bar (with
     *   pointers to standard pages such as 'home' and 'trees'); a
     *   breadcrumbs list, which can be used to navigate to containing
     *   classes or modules; options links, to show/hide private
     *   variables and to show/hide frames; and a page title (using
     *   <h1>).  The page title may be followed by a link to the
     *   corresponding source code (using 'span.codelink').
     * - The footer consists of a navigation bar, a timestamp, and a
     *   pointer to epydoc's homepage. */
    h1.epydoc { margin: 0; font-size: +140%; font-weight: bold; }
    h2.epydoc { font-size: +130%; font-weight: bold; }
    h3.epydoc { font-size: +115%; font-weight: bold; margin-top: 0.2em; }
    td h3.epydoc { font-size: +115%; font-weight: bold; margin-bottom: 0; }
    table.navbar { background: #a0c0ff; color: #000000; border: 2px groove #c0d0d0; }
    table.navbar table { color: #000000; }
    th.navbar-select { background: #70b0ff; color: #000000; }
    table.navbar a { text-decoration: none; }
    table.navbar a:link { color: #0000ff; }
    table.navbar a:visited { color: #204080; }
    span.breadcrumbs { font-size: 85%; font-weight: bold; }
    span.options { font-size: 70%; }
    span.codelink { font-size: 85%; }
    td.footer { font-size: 85%; }

    /* Table Headers
     * - Each summary table and details section begins with a 'header'
     *   row.  This row contains a section title (marked by
     *   'span.table-header') as well as a show/hide private link
     *   (marked by 'span.options', defined above).
     * - Summary tables that contain user-defined groups mark those
     *   groups using 'group header' rows. */
    td.table-header { background: #70b0ff; color: #000000; border: 1px solid #608090; }
    td.table-header table { color: #000000; }
    td.table-header table a:link { color: #0000ff; }
    td.table-header table a:visited { color: #204080; }
    span.table-header { font-size: 120%; font-weight: bold; }
    th.group-header { background: #c0e0f8; color: #000000; text-align: left; font-style: italic; font-size: 115%; border: 1px solid #608090; }

    /* Summary Tables (functions, variables, etc)
     * - Each object is described by a single row of the table with
     *   two cells.  The left cell gives the object's type, and is
     *   marked with 'code.summary-type'.  The right cell gives the
     *   object's name and a summary description.
     * - CSS styles for the table's header and group headers are
     *   defined above, under 'Table Headers' */
    table.summary { border-collapse: collapse; background: #e8f0f8; color: #000000; border: 1px solid #608090; margin-bottom: 0.5em; }
    td.summary { border: 1px solid #608090; }
    code.summary-type { font-size: 85%; }
    table.summary a:link { color: #0000ff; }
    table.summary a:visited { color: #204080; }

    /* Details Tables (functions, variables, etc)
     * - Each object is described in its own div.
     * - A single-row summary table w/ table-header is used as
     *   a header for each details section (CSS style for table-header
     *   is defined above, under 'Table Headers'). */
    table.details { border-collapse: collapse; background: #e8f0f8; color: #000000; border: 1px solid #608090; margin: .2em 0 0 0; }
    table.details table { color: #000000; }
    table.details a:link { color: #0000ff; }
    table.details a:visited { color: #204080; }

    /* Fields */
    dl.fields { margin-left: 2em; margin-top: 1em; margin-bottom: 1em; }
    dl.fields dd ul { margin-left: 0em; padding-left: 0em; }
    dl.fields dd ul li ul { margin-left: 2em; padding-left: 0em; }
    div.fields { margin-left: 2em; }
    div.fields p { margin-bottom: 0.5em; }

    /* Index tables (identifier index, term index, etc)
     * - link-index is used for indices containing lists of links
     *   (namely, the identifier index & term index).
     * - index-where is used in link indices for the text indicating
     *   the container/source for each link.
     * - metadata-index is used for indices containing metadata
     *   extracted from fields (namely, the bug index & todo index). */
    table.link-index { border-collapse: collapse; background: #e8f0f8; color: #000000; border: 1px solid #608090; }
    td.link-index { border-width: 0px; }
    table.link-index a:link { color: #0000ff; }
    table.link-index a:visited { color: #204080; }
    span.index-where { font-size: 70%; }
    table.metadata-index { border-collapse: collapse; background: #e8f0f8; color: #000000; border: 1px solid #608090; margin: .2em 0 0 0; }
    td.metadata-index { border-width: 1px; border-style: solid; }
    table.metadata-index a:link { color: #0000ff; }
    table.metadata-index a:visited { color: #204080; }

    /* Function signatures
     * - sig* is used for the signature in the details section.
     * - .summary-sig* is used for the signature in the summary
     *   table, and when listing property accessor functions. */
    .sig-name { color: #006080; }
    .sig-arg { color: #008060; }
    .sig-default { color: #602000; }
    .summary-sig { font-family: monospace; }
    .summary-sig-name { color: #006080; font-weight: bold; }
    table.summary a.summary-sig-name:link { color: #006080; font-weight: bold; }
    table.summary a.summary-sig-name:visited { color: #006080; font-weight: bold; }
    .summary-sig-arg { color: #006040; }
    .summary-sig-default { color: #501800; }

    /* Subclass list */
    ul.subclass-list { display: inline; }
    ul.subclass-list li { display: inline; }

    /* To render variables, classes etc. like functions */
    table.summary .summary-name { color: #006080; font-weight: bold; font-family: monospace; }
    table.summary a.summary-name:link { color: #006080; font-weight: bold; font-family: monospace; }
    table.summary a.summary-name:visited { color: #006080; font-weight: bold; font-family: monospace; }

    /* Variable values
     * - In the 'variable details' sections, each variable's value is
     *   listed in a 'pre.variable' box.  The width of this box is
     *   restricted to 80 chars; if the value's repr is longer than
     *   this it will be wrapped, using a backslash marked with
     *   class 'variable-linewrap'.  If the value's repr is longer
     *   than 3 lines, the rest will be elided; and an ellipsis
     *   marker ('...' marked with 'variable-ellipsis') will be used.
     * - If the value is a string, its quote marks will be marked
     *   with 'variable-quote'.
     * - If the variable is a regexp, it is syntax-highlighted using
     *   the re* CSS classes. */
    pre.variable { padding: .5em; margin: 0; background: #dce4ec; color: #000000; border: 1px solid #708890; }
    .variable-linewrap { color: #604000; font-weight: bold; }
    .variable-ellipsis { color: #604000; font-weight: bold; }
    .variable-quote { color: #604000; font-weight: bold; }
    .variable-group { color: #008000; font-weight: bold; }
    .variable-op { color: #604000; font-weight: bold; }
    .variable-string { color: #006030; }
    .variable-unknown { color: #a00000; font-weight: bold; }
    .re { color: #000000; }
    .re-char { color: #006030; }
    .re-op { color: #600000; }
    .re-group { color: #003060; }
    .re-ref { color: #404040; }

    /* Base tree
     * - Used by class pages to display the base class hierarchy. */
    pre.base-tree { font-size: 80%; margin: 0; }

    /* Frames-based table of contents headers
     * - Consists of two frames: one for selecting modules; and
     *   the other listing the contents of the selected module.
     * - h1.toc is used for each frame's heading
     * - h2.toc is used for subheadings within each frame. */
    h1.toc { text-align: center; font-size: 105%; margin: 0; font-weight: bold; padding: 0; }
    h2.toc { font-size: 100%; font-weight: bold; margin: 0.5em 0 0 -0.3em; }

    /* Syntax Highlighting for Source Code
     * - doctest examples are displayed in a 'pre.py-doctest' block.
     *   If the example is in a details table entry, then it will use
     *   the colors specified by the 'table pre.py-doctest' line.
     * - Source code listings are displayed in a 'pre.py-src' block.
     *   Each line is marked with 'span.py-line' (used to draw a line
     *   down the left margin, separating the code from the line
     *   numbers).  Line numbers are displayed with 'span.py-lineno'.
     *   The expand/collapse block toggle button is displayed with
     *   'a.py-toggle' (Note: the CSS style for 'a.py-toggle' should not
     *   modify the font size of the text.)
     * - If a source code page is opened with an anchor, then the
     *   corresponding code block will be highlighted.  The code
     *   block's header is highlighted with 'py-highlight-hdr'; and
     *   the code block's body is highlighted with 'py-highlight'.
     * - The remaining py-* classes are used to perform syntax
     *   highlighting (py-string for string literals, py-name for names,
     *   etc.) */
    pre.py-doctest { padding: .5em; margin: 1em; background: #e8f0f8; color: #000000; border: 1px solid #708890; }
    table pre.py-doctest { background: #dce4ec; color: #000000; }
    pre.py-src { border: 2px solid #000000; background: #f0f0f0; color: #000000; }
    .py-line { border-left: 2px solid #000000; margin-left: .2em; padding-left: .4em; }
    .py-lineno { font-style: italic; font-size: 90%; padding-left: .5em; }
    a.py-toggle { text-decoration: none; }
    div.py-highlight-hdr { border-top: 2px solid #000000; border-bottom: 2px solid #000000; background: #d8e8e8; }
    div.py-highlight { border-bottom: 2px solid #000000; background: #d0e0e0; }
    .py-prompt { color: #005050; font-weight: bold; }
    .py-more { color: #005050; font-weight: bold; }
    .py-string { color: #006030; }
    .py-comment { color: #003060; }
    .py-keyword { color: #600000; }
    .py-output { color: #404040; }
    .py-name { color: #000050; }
    .py-name:link { color: #000050 !important; }
    .py-name:visited { color: #000050 !important; }
    .py-number { color: #005000; }
    .py-defname { color: #000060; font-weight: bold; }
    .py-def-name { color: #000060; font-weight: bold; }
    .py-base-class { color: #000060; }
    .py-param { color: #000060; }
    .py-docstring { color: #006030; }
    .py-decorator { color: #804020; }
    /* Use this if you don't want links to names underlined: */
    /*a.py-name { text-decoration: none; }*/

    /* Graphs & Diagrams
     * - These CSS styles are used for graphs & diagrams generated using
     *   Graphviz dot.  'img.graph-without-title' is used for bare
     *   diagrams (to remove the border created by making the image
     *   clickable). */
    img.graph-without-title { border: none; }
    img.graph-with-title { border: 1px solid #000000; }
    span.graph-title { font-weight: bold; }
    span.graph-caption { }

    /* General-purpose classes
     * - 'p.indent-wrapped-lines' defines a paragraph whose first line
     *   is not indented, but whose subsequent lines are.
     * - The 'nomargin-top' class is used to remove the top margin (e.g.
     *   from lists).  The 'nomargin' class is used to remove both the
     *   top and bottom margin (but not the left or right margin --
     *   for lists, that would cause the bullets to disappear.) */
    p.indent-wrapped-lines { padding: 0 0 0 7em; text-indent: -7em; margin: 0; }
    .nomargin-top { margin-top: 0; }
    .nomargin { margin-top: 0; margin-bottom: 0; }

    /* HTML Log */
    div.log-block { padding: 0; margin: .5em 0 .5em 0; background: #e8f0f8; color: #000000; border: 1px solid #000000; }
    div.log-error { padding: .1em .3em .1em .3em; margin: 4px; background: #ffb0b0; color: #000000; border: 1px solid #000000; }
    div.log-warning { padding: .1em .3em .1em .3em; margin: 4px; background: #ffffb0; color: #000000; border: 1px solid #000000; }
    div.log-info { padding: .1em .3em .1em .3em; margin: 4px; background: #b0ffb0; color: #000000; border: 1px solid #000000; }
    h2.log-hdr { background: #70b0ff; color: #000000; margin: 0; padding: 0em 0.5em 0em 0.5em; border-bottom: 1px solid #000000; font-size: 110%; }
    p.log { font-weight: bold; margin: .5em 0 .5em 0; }
    tr.opt-changed { color: #000000; font-weight: bold; }
    tr.opt-default { color: #606060; }
    pre.log { margin: 0; padding: 0; padding-left: 1em; }

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.tools.amazons3-module.html

    CedarBackup3.tools.amazons3
    Package CedarBackup3 :: Package tools :: Module amazons3

    Module amazons3

    source code

    Synchronizes a local directory with an Amazon S3 bucket.

    No configuration is required; all necessary information is taken from the command-line. The only thing configuration would help with is the path resolver interface, and it doesn't seem worth it to require configuration just to get that.
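Under the hood the tool drives the AWS command line (see AWS_COMMAND below). A minimal sketch of the idea, assuming the `aws` CLI is installed and configured; `build_sync_command` and `synchronize` are hypothetical helper names, not this module's private functions:

```python
import subprocess

def build_sync_command(source_dir, s3_bucket_url, dry_run=False):
   """Build an 'aws s3 sync' command line (illustrative only)."""
   # --delete removes remote files that no longer exist locally,
   # which is what makes this a true mirror rather than a copy
   args = ["aws", "s3", "sync", source_dir, s3_bucket_url, "--delete"]
   if dry_run:
      args.append("--dryrun")  # show what would change, without changing it
   return args

def synchronize(source_dir, s3_bucket_url):
   """Run the sync, raising CalledProcessError on failure."""
   subprocess.check_call(build_sync_command(source_dir, s3_bucket_url))
```

The real tool adds logging, bucket verification, and the encoding checks described under _checkSourceFiles below.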


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Classes
      Options
    Class representing command-line options for the cback3-amazons3-sync script.
    Functions
     
    cli()
    Implements the command-line interface for the cback3-amazons3-sync script.
    source code
     
    _usage(fd=sys.stderr)
    Prints usage information for the cback3-amazons3-sync script.
    source code
     
    _version(fd=sys.stdout)
    Prints version information for the cback3-amazons3-sync script.
    source code
     
    _diagnostics(fd=sys.stdout)
    Prints runtime diagnostics information.
    source code
     
    _executeAction(options)
    Implements the guts of the cback3-amazons3-sync tool.
    source code
     
    _buildSourceFiles(sourceDir)
    Build a list of files in a source directory.
    source code
     
    _checkSourceFiles(sourceDir, sourceFiles)
    Check source files, trying to guess which ones will have encoding problems.
    source code
     
    _synchronizeBucket(sourceDir, s3BucketUrl)
    Synchronize a local directory to an Amazon S3 bucket.
    source code
     
    _verifyBucketContents(sourceDir, sourceFiles, s3BucketUrl)
    Verify that a source directory is equivalent to an Amazon S3 bucket.
    source code
    Variables
      logger = logging.getLogger("CedarBackup3.log.tools.amazons3")
      AWS_COMMAND = ["aws"]
      SHORT_SWITCHES = "hVbql:o:m:OdsDvw"
      LONG_SWITCHES = ['help', 'version', 'verbose', 'quiet', 'logfi...
    Function Details

    cli()

    source code 

    Implements the command-line interface for the cback3-amazons3-sync script.

    Essentially, this is the "main routine" for the cback3-amazons3-sync script. It does all of the argument processing for the script, and then also implements the tool functionality.

    This function looks pretty similar to CedarBackup3.cli.cli(). It's not easy to refactor this code to make it reusable and also readable, so I've decided to just live with the duplication.

    A different error code is returned for each type of failure:

    • 1: The Python interpreter version is < 3.4
    • 2: Error processing command-line arguments
    • 3: Error configuring logging
    • 5: Backup was interrupted with a CTRL-C or similar
    • 6: Error executing other parts of the script
    Returns:
    Error code as described above.

    Note: This script uses print rather than logging to the INFO level, because it is interactive. Underlying Cedar Backup functionality uses the logging mechanism exclusively.

    _usage(fd=sys.stderr)

    source code 

    Prints usage information for the cback3-amazons3-sync script.

    Parameters:
    • fd - File descriptor used to print information.

    Note: The fd is used rather than print to facilitate unit testing.

    _version(fd=sys.stdout)

    source code 

    Prints version information for the cback3-amazons3-sync script.

    Parameters:
    • fd - File descriptor used to print information.

    Note: The fd is used rather than print to facilitate unit testing.

    _diagnostics(fd=sys.stdout)

    source code 

    Prints runtime diagnostics information.

    Parameters:
    • fd - File descriptor used to print information.

    Note: The fd is used rather than print to facilitate unit testing.

    _executeAction(options)

    source code 

    Implements the guts of the cback3-amazons3-sync tool.

    Parameters:
    • options (Options object.) - Program command-line options.
    Raises:
    • Exception - Under many generic error conditions

    _buildSourceFiles(sourceDir)

    source code 

    Build a list of files in a source directory.

    Parameters:
    • sourceDir - Local source directory
    Returns:
    FilesystemList with contents of source directory

    _checkSourceFiles(sourceDir, sourceFiles)

    source code 

    Check source files, trying to guess which ones will have encoding problems.

    Parameters:
    • sourceDir - Local source directory
    • sourceFiles - Filesystem list containing contents of source directory
    Raises:

    _synchronizeBucket(sourceDir, s3BucketUrl)

    source code 

    Synchronize a local directory to an Amazon S3 bucket.

    Parameters:
    • sourceDir - Local source directory
    • s3BucketUrl - Target S3 bucket URL

    _verifyBucketContents(sourceDir, sourceFiles, s3BucketUrl)

    source code 

    Verify that a source directory is equivalent to an Amazon S3 bucket.

    Parameters:
    • sourceDir - Local source directory
    • sourceFiles - Filesystem list containing contents of source directory
    • s3BucketUrl - Target S3 bucket URL

    Variables Details

    LONG_SWITCHES

    Value:
    ['help', 'version', 'verbose', 'quiet', 'logfile=', 'owner=', 'mode=',
     'output', 'debug', 'stack', 'diagnostics', 'verifyOnly', 'ignoreWarnings',]
    

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.actions.validate-pysrc.html

    CedarBackup3.actions.validate
    Package CedarBackup3 :: Package actions :: Module validate

    Source Code for Module CedarBackup3.actions.validate

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2004-2007,2010,2015 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python 3 (>= 3.4) 
     29  # Project  : Cedar Backup, release 3 
     30  # Purpose  : Implements the standard 'validate' action. 
     31  # 
     32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     33   
     34  ######################################################################## 
     35  # Module documentation 
     36  ######################################################################## 
     37   
     38  """ 
     39  Implements the standard 'validate' action. 
     40  @sort: executeValidate 
     41  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     42  """ 
     43   
     44   
     45  ######################################################################## 
     46  # Imported modules 
     47  ######################################################################## 
     48   
     49  # System modules 
     50  import os 
     51  import logging 
     52   
     53  # Cedar Backup modules 
     54  from CedarBackup3.util import getUidGid, getFunctionReference 
     55  from CedarBackup3.actions.util import createWriter 
     56   
     57   
     58  ######################################################################## 
     59  # Module-wide constants and variables 
     60  ######################################################################## 
     61   
     62  logger = logging.getLogger("CedarBackup3.log.actions.validate") 
     63   
     64   
     65  ######################################################################## 
     66  # Public functions 
     67  ######################################################################## 
     68   
     69  ############################# 
     70  # executeValidate() function 
     71  ############################# 
     72   
    
    73 -def executeValidate(configPath, options, config):
    74 """ 75 Executes the validate action. 76 77 This action validates each of the individual sections in the config file. 78 This is a "runtime" validation. The config file itself is already valid in 79 a structural sense, so what we check here that is that we can actually use 80 the configuration without any problems. 81 82 There's a separate validation function for each of the configuration 83 sections. Each validation function returns a true/false indication for 84 whether configuration was valid, and then logs any configuration problems it 85 finds. This way, one pass over configuration indicates most or all of the 86 obvious problems, rather than finding just one problem at a time. 87 88 Any reported problems will be logged at the ERROR level normally, or at the 89 INFO level if the quiet flag is enabled. 90 91 @param configPath: Path to configuration file on disk. 92 @type configPath: String representing a path on disk. 93 94 @param options: Program command-line options. 95 @type options: Options object. 96 97 @param config: Program configuration. 98 @type config: Config object. 99 100 @raise ValueError: If some configuration value is invalid. 101 """ 102 logger.debug("Executing the 'validate' action.") 103 if options.quiet: 104 logfunc = logger.info # info so it goes to the log 105 else: 106 logfunc = logger.error # error so it goes to the screen 107 valid = True 108 valid &= _validateReference(config, logfunc) 109 valid &= _validateOptions(config, logfunc) 110 valid &= _validateCollect(config, logfunc) 111 valid &= _validateStage(config, logfunc) 112 valid &= _validateStore(config, logfunc) 113 valid &= _validatePurge(config, logfunc) 114 valid &= _validateExtensions(config, logfunc) 115 if valid: 116 logfunc("Configuration is valid.") 117 else: 118 logfunc("Configuration is not valid.")
    119 120 121 ######################################################################## 122 # Private utility functions 123 ######################################################################## 124 125 ####################### 126 # _checkDir() function 127 ####################### 128
    129 -def _checkDir(path, writable, logfunc, prefix):
    130 """ 131 Checks that the indicated directory is OK. 132 133 The path must exist, must be a directory, must be readable and executable, 134 and must optionally be writable. 135 136 @param path: Path to check. 137 @param writable: Check that path is writable. 138 @param logfunc: Function to use for logging errors. 139 @param prefix: Prefix to use on logged errors. 140 141 @return: True if the directory is OK, False otherwise. 142 """ 143 if not os.path.exists(path): 144 logfunc("%s [%s] does not exist." % (prefix, path)) 145 return False 146 if not os.path.isdir(path): 147 logfunc("%s [%s] is not a directory." % (prefix, path)) 148 return False 149 if not os.access(path, os.R_OK): 150 logfunc("%s [%s] is not readable." % (prefix, path)) 151 return False 152 if not os.access(path, os.X_OK): 153 logfunc("%s [%s] is not executable." % (prefix, path)) 154 return False 155 if writable and not os.access(path, os.W_OK): 156 logfunc("%s [%s] is not writable." % (prefix, path)) 157 return False 158 return True
    159 160 161 ################################ 162 # _validateReference() function 163 ################################ 164
    165 -def _validateReference(config, logfunc):
    166 """ 167 Execute runtime validations on reference configuration. 168 169 We only validate that reference configuration exists at all. 170 171 @param config: Program configuration. 172 @param logfunc: Function to use for logging errors 173 174 @return: True if configuration is valid, false otherwise. 175 """ 176 valid = True 177 if config.reference is None: 178 logfunc("Required reference configuration does not exist.") 179 valid = False 180 return valid
    181 182 183 ############################## 184 # _validateOptions() function 185 ############################## 186
    187 -def _validateOptions(config, logfunc):
    188 """ 189 Execute runtime validations on options configuration. 190 191 The following validations are enforced: 192 193 - The options section must exist 194 - The working directory must exist and must be writable 195 - The backup user and backup group must exist 196 197 @param config: Program configuration. 198 @param logfunc: Function to use for logging errors 199 200 @return: True if configuration is valid, false otherwise. 201 """ 202 valid = True 203 if config.options is None: 204 logfunc("Required options configuration does not exist.") 205 valid = False 206 else: 207 valid &= _checkDir(config.options.workingDir, True, logfunc, "Working directory") 208 try: 209 getUidGid(config.options.backupUser, config.options.backupGroup) 210 except ValueError: 211 logfunc("Backup user:group [%s:%s] invalid." % (config.options.backupUser, config.options.backupGroup)) 212 valid = False 213 return valid
    214 215 216 ############################## 217 # _validateCollect() function 218 ############################## 219
    220 -def _validateCollect(config, logfunc):
    221 """ 222 Execute runtime validations on collect configuration. 223 224 The following validations are enforced: 225 226 - The target directory must exist and must be writable 227 - Each of the individual collect directories must exist and must be readable 228 229 @param config: Program configuration. 230 @param logfunc: Function to use for logging errors 231 232 @return: True if configuration is valid, false otherwise. 233 """ 234 valid = True 235 if config.collect is not None: 236 valid &= _checkDir(config.collect.targetDir, True, logfunc, "Collect target directory") 237 if config.collect.collectDirs is not None: 238 for collectDir in config.collect.collectDirs: 239 valid &= _checkDir(collectDir.absolutePath, False, logfunc, "Collect directory") 240 return valid
    241 242 243 ############################ 244 # _validateStage() function 245 ############################ 246
    247 -def _validateStage(config, logfunc):
    248 """ 249 Execute runtime validations on stage configuration. 250 251 The following validations are enforced: 252 253 - The target directory must exist and must be writable 254 - Each local peer's collect directory must exist and must be readable 255 256 @note: We currently do not validate anything having to do with remote peers, 257 since we don't have a straightforward way of doing it. It would require 258 adding an rsh command rather than just an rcp command to configuration, and 259 that just doesn't seem worth it right now. 260 261 @param config: Program configuration. 262 @param logfunc: Function to use for logging errors 263 264 @return: True if configuration is valid, False otherwise. 265 """ 266 valid = True 267 if config.stage is not None: 268 valid &= _checkDir(config.stage.targetDir, True, logfunc, "Stage target dir ") 269 if config.stage.localPeers is not None: 270 for peer in config.stage.localPeers: 271 valid &= _checkDir(peer.collectDir, False, logfunc, "Local peer collect dir ") 272 return valid
    273 274 275 ############################ 276 # _validateStore() function 277 ############################ 278
    279 -def _validateStore(config, logfunc):
    280 """ 281 Execute runtime validations on store configuration. 282 283 The following validations are enforced: 284 285 - The source directory must exist and must be readable 286 - The backup device (path and SCSI device) must be valid 287 288 @param config: Program configuration. 289 @param logfunc: Function to use for logging errors 290 291 @return: True if configuration is valid, False otherwise. 292 """ 293 valid = True 294 if config.store is not None: 295 valid &= _checkDir(config.store.sourceDir, False, logfunc, "Store source directory") 296 try: 297 createWriter(config) 298 except ValueError: 299 logfunc("Backup device [%s] [%s] is not valid." % (config.store.devicePath, config.store.deviceScsiId)) 300 valid = False 301 return valid
    302 303 304 ############################ 305 # _validatePurge() function 306 ############################ 307
    308 -def _validatePurge(config, logfunc):
    309 """ 310 Execute runtime validations on purge configuration. 311 312 The following validations are enforced: 313 314 - Each purge directory must exist and must be writable 315 316 @param config: Program configuration. 317 @param logfunc: Function to use for logging errors 318 319 @return: True if configuration is valid, False otherwise. 320 """ 321 valid = True 322 if config.purge is not None: 323 if config.purge.purgeDirs is not None: 324 for purgeDir in config.purge.purgeDirs: 325 valid &= _checkDir(purgeDir.absolutePath, True, logfunc, "Purge directory") 326 return valid

#################################
# _validateExtensions() function
#################################

def _validateExtensions(config, logfunc):
   """
   Execute runtime validations on extensions configuration.

   The following validations are enforced:

      - Each indicated extension function must exist.

   @param config: Program configuration.
   @param logfunc: Function to use for logging errors

   @return: True if configuration is valid, False otherwise.
   """
   valid = True
   if config.extensions is not None:
      if config.extensions.actions is not None:
         for action in config.extensions.actions:
            try:
               getFunctionReference(action.module, action.function)
            except ImportError:
               logfunc("Unable to find function [%s.%s]." % (action.module, action.function))
               valid = False
            except ValueError:
               logfunc("Function [%s.%s] is not callable." % (action.module, action.function))
               valid = False
   return valid

CedarBackup3-3.1.6/doc/interface/help.html

Help
     

    API Documentation

    This document contains the API (Application Programming Interface) documentation for CedarBackup3. Documentation for the Python objects defined by the project is divided into separate pages for each package, module, and class. The API documentation also includes two pages containing information about the project as a whole: a trees page, and an index page.

    Object Documentation

    Each Package Documentation page contains:

    • A description of the package.
    • A list of the modules and sub-packages contained by the package.
    • A summary of the classes defined by the package.
    • A summary of the functions defined by the package.
    • A summary of the variables defined by the package.
    • A detailed description of each function defined by the package.
    • A detailed description of each variable defined by the package.

    Each Module Documentation page contains:

    • A description of the module.
    • A summary of the classes defined by the module.
    • A summary of the functions defined by the module.
    • A summary of the variables defined by the module.
    • A detailed description of each function defined by the module.
    • A detailed description of each variable defined by the module.

    Each Class Documentation page contains:

    • A class inheritance diagram.
    • A list of known subclasses.
    • A description of the class.
    • A summary of the methods defined by the class.
    • A summary of the instance variables defined by the class.
    • A summary of the class (static) variables defined by the class.
    • A detailed description of each method defined by the class.
    • A detailed description of each instance variable defined by the class.
    • A detailed description of each class (static) variable defined by the class.

    Project Documentation

    The Trees page contains the module and class hierarchies:

    • The module hierarchy lists every package and module, with modules grouped into packages. At the top level, and within each package, modules and sub-packages are listed alphabetically.
    • The class hierarchy lists every class, grouped by base class. If a class has more than one base class, then it will be listed under each base class. At the top level, and under each base class, classes are listed alphabetically.

    The Index page contains indices of terms and identifiers:

    • The term index lists every term indexed by any object's documentation. For each term, the index provides links to each place where the term is indexed.
    • The identifier index lists the (short) name of every package, module, class, method, function, variable, and parameter. For each identifier, the index provides a short description, and a link to its documentation.

    The Table of Contents

    The table of contents occupies the two frames on the left side of the window. The upper-left frame displays the project contents, and the lower-left frame displays the module contents:

        +----------------------+  +----------------------------------+
        |  Project Contents    |  |                                  |
        |  ...                 |  |                                  |
        +----------------------+  |     API Documentation Frame      |
        |  Module Contents     |  |                                  |
        |  ...                 |  |                                  |
        +----------------------+  +----------------------------------+

    The project contents frame contains a list of all packages and modules that are defined by the project. Clicking on an entry will display its contents in the module contents frame. Clicking on a special entry, labeled "Everything," will display the contents of the entire project.

    The module contents frame contains a list of every submodule, class, type, exception, function, and variable defined by a module or package. Clicking on an entry will display its documentation in the API documentation frame. Clicking on the name of the module, at the top of the frame, will display the documentation for the module itself.

    The "frames" and "no frames" buttons below the top navigation bar can be used to control whether the table of contents is displayed or not.

    The Navigation Bar

    A navigation bar is located at the top and bottom of every page. It indicates what type of page you are currently viewing, and allows you to go to related pages. The following table describes the labels on the navigation bar. Note that some labels (such as [Parent]) are not displayed on all pages.

    Label Highlighted when... Links to...
    [Parent] (never highlighted) the parent of the current package
    [Package] viewing a package the package containing the current object
    [Module] viewing a module the module containing the current object
    [Class] viewing a class the class containing the current object
    [Trees] viewing the trees page the trees page
    [Index] viewing the index page the index page
    [Help] viewing the help page the help page

    The "show private" and "hide private" buttons below the top navigation bar can be used to control whether documentation for private objects is displayed. Private objects are usually defined as objects whose (short) names begin with a single underscore, but do not end with an underscore. For example, "_x", "__pprint", and "epydoc.epytext._tokenize" are private objects; but "re.sub", "__init__", and "type_" are not. However, if a module defines the "__all__" variable, then its contents are used to decide which objects are private.

    A timestamp below the bottom navigation bar indicates when each page was last updated.

CedarBackup3-3.1.6/doc/interface/CedarBackup3.extend.postgresql.PostgresqlConfig-class.html

CedarBackup3.extend.postgresql.PostgresqlConfig
    Package CedarBackup3 :: Package extend :: Module postgresql :: Class PostgresqlConfig

    Class PostgresqlConfig

    source code

    object --+
             |
            PostgresqlConfig
    

    Class representing PostgreSQL configuration.

    The PostgreSQL configuration information is used for backing up PostgreSQL databases.

    The following restrictions exist on data in this class:

    • The compress mode must be one of the values in VALID_COMPRESS_MODES.
    • The 'all' flag must be 'Y' if no databases are defined.
    • The 'all' flag must be 'N' if any databases are defined.
    • Any values in the databases list must be strings.
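A minimal sketch of the restrictions above, written as a standalone checker rather than the real property-based class; the VALID_COMPRESS_MODES values shown are assumptions for illustration:

```python
VALID_COMPRESS_MODES = ["none", "gzip", "bzip2"]  # assumed values, for illustration

def postgresql_config_is_valid(compress_mode, all_flag, databases):
    """Mirrors the documented restrictions: compress mode must be valid,
    'all' must be set iff no databases are listed, and every database
    name must be a string."""
    if compress_mode not in VALID_COMPRESS_MODES:
        return False
    if not databases:                     # no databases defined => 'all' required
        return all_flag is True
    if not all(isinstance(d, str) for d in databases):
        return False
    return all_flag is False              # databases defined => 'all' forbidden
```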
    Instance Methods
     
    __init__(self, user=None, compressMode=None, all=None, databases=None)
    Constructor for the PostgresqlConfig class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    __eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    _setUser(self, value)
    Property target used to set the user value.
    source code
     
    _getUser(self)
    Property target used to get the user value.
    source code
     
    _setCompressMode(self, value)
    Property target used to set the compress mode.
    source code
     
    _getCompressMode(self)
    Property target used to get the compress mode.
    source code
     
    _setAll(self, value)
    Property target used to set the 'all' flag.
    source code
     
    _getAll(self)
    Property target used to get the 'all' flag.
    source code
     
    _setDatabases(self, value)
    Property target used to set the databases list.
    source code
     
    _getDatabases(self)
    Property target used to get the databases list.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      user
    User to execute backup as.
      all
    Indicates whether to back up all databases.
      databases
    List of databases to back up.
      compressMode
    Compress mode to be used for backed-up files.

    Inherited from object: __class__

    Method Details

    __init__(self, user=None, compressMode=None, all=None, databases=None)
    (Constructor)

    source code 

    Constructor for the PostgresqlConfig class.

    Parameters:
    • user - User to execute backup as.
    • compressMode - Compress mode for backed-up files.
    • all - Indicates whether to back up all databases.
    • databases - List of databases to back up.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setCompressMode(self, value)

    source code 

    Property target used to set the compress mode. If not None, the mode must be one of the values in VALID_COMPRESS_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setAll(self, value)

    source code 

    Property target used to set the 'all' flag. No validations, but we normalize the value to True or False.

    _setDatabases(self, value)

    source code 

    Property target used to set the databases list. Either the value must be None or each element must be a string.

    Raises:
    • ValueError - If the value is not a string.

    Property Details

    user

    User to execute backup as.

    Get Method:
    _getUser(self) - Property target used to get the user value.
    Set Method:
    _setUser(self, value) - Property target used to set the user value.

    all

    Indicates whether to back up all databases.

    Get Method:
    _getAll(self) - Property target used to get the 'all' flag.
    Set Method:
    _setAll(self, value) - Property target used to set the 'all' flag.

    databases

    List of databases to back up.

    Get Method:
    _getDatabases(self) - Property target used to get the databases list.
    Set Method:
    _setDatabases(self, value) - Property target used to set the databases list.

    compressMode

    Compress mode to be used for backed-up files.

    Get Method:
    _getCompressMode(self) - Property target used to get the compress mode.
    Set Method:
    _setCompressMode(self, value) - Property target used to set the compress mode.

CedarBackup3-3.1.6/doc/interface/CedarBackup3.writers.dvdwriter.DvdWriter-class.html

CedarBackup3.writers.dvdwriter.DvdWriter
    Package CedarBackup3 :: Package writers :: Module dvdwriter :: Class DvdWriter

    Class DvdWriter

    source code

    object --+
             |
            DvdWriter
    

    Class representing a device that knows how to write some kinds of DVD media.

    Summary

    This is a class representing a device that knows how to write some kinds of DVD media. It provides common operations for the device, such as ejecting the media and writing data to the media.

    This class is implemented in terms of the eject and growisofs utilities, both of which should be available on most UN*X platforms.

    Image Writer Interface

    The following methods make up the "image writer" interface shared with other kinds of writers:

      __init__
      initializeImage()
      addImageEntry()
      writeImage()
      setImageNewDisc()
      retrieveCapacity()
      getEstimatedImageSize()
    

    Only these methods will be used by other Cedar Backup functionality that expects a compatible image writer.

    The media attribute is also assumed to be available.

    Unlike the CdWriter, the DvdWriter can only operate in terms of filesystem devices, not SCSI devices. So, although the constructor interface accepts a SCSI device parameter for the sake of compatibility, it's not used.

    Media Types

    This class knows how to write to DVD+R and DVD+RW media, represented by the following constants:

    • MEDIA_DVDPLUSR: DVD+R media (4.4 GB capacity)
    • MEDIA_DVDPLUSRW: DVD+RW media (4.4 GB capacity)

    The difference is that DVD+RW media can be rewritten, while DVD+R media cannot be (although at present, DvdWriter does not really differentiate between rewritable and non-rewritable media).

    The capacities are 4.4 GB because Cedar Backup deals in "true" gigabytes of 1024*1024*1024 bytes per gigabyte.
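As a quick arithmetic check of the 4.4 GB figure using 2**30-byte gigabytes:

```python
# Cedar Backup's "true" gigabyte is 2**30 bytes, so a capacity quoted
# as 4.4 GB corresponds to roughly 4.72 billion bytes -- i.e. the same
# media that marketing material (using 10**9-byte gigabytes) calls 4.7 GB.
BYTES_PER_GB = 1024 * 1024 * 1024
capacity_bytes = 4.4 * BYTES_PER_GB
marketing_gb = capacity_bytes / 1_000_000_000
```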

    The underlying growisofs utility does support other kinds of media (including DVD-R, DVD-RW and Blu-ray) which work somewhat differently than standard DVD+R and DVD+RW media. I don't support these other kinds of media because I haven't had any opportunity to work with them. The same goes for dual-layer media of any type.

    Device Attributes vs. Media Attributes

    As with the cdwriter functionality, a given dvdwriter instance has two different kinds of attributes associated with it. I call these device attributes and media attributes.

    Device attributes are things which can be determined without looking at the media. Media attributes are attributes which vary depending on the state of the media. In general, device attributes are available via instance variables and are constant over the life of an object, while media attributes can be retrieved through method calls.

    Compared to cdwriters, dvdwriters have very few attributes. This is due to differences between the way growisofs works relative to cdrecord.

    Media Capacity

    One major difference between the cdrecord/mkisofs utilities used by the cdwriter class and the growisofs utility used here is that the process of estimating remaining capacity and image size is more straightforward with cdrecord/mkisofs than with growisofs.

    In this class, remaining capacity is calculated by doing a dry run of growisofs and grabbing some information from the output of that command. Image size is estimated by asking the IsoImage class for an estimate and then adding on a "fudge factor" determined through experimentation.

    Testing

    It's rather difficult to test this code in an automated fashion, even if you have access to a physical DVD writer drive. It's even more difficult to test it if you are running on some build daemon (think of a Debian autobuilder) which can't be expected to have any hardware or any media that you could write to.

    Because of this, some of the implementation below is in terms of static methods that are supposed to take defined actions based on their arguments. Public methods are then implemented in terms of a series of calls to simplistic static methods. This way, we can test as much as possible of the "difficult" functionality via testing the static methods, while hoping that if the static methods are called appropriately, things will work properly. It's not perfect, but it's much better than no testing at all.

    Instance Methods
     
    __init__(self, device, scsiId=None, driveSpeed=None, mediaType=2, noEject=False, refreshMediaDelay=0, ejectDelay=0, unittest=False)
    Initializes a DVD writer object.
    source code
     
    isRewritable(self)
    Indicates whether the media is rewritable per configuration.
    source code
     
    retrieveCapacity(self, entireDisc=False)
    Retrieves capacity for the current media in terms of a MediaCapacity object.
    source code
     
    openTray(self)
    Opens the device's tray and leaves it open.
    source code
     
    closeTray(self)
    Closes the device's tray.
    source code
     
    refreshMedia(self)
    Opens and then immediately closes the device's tray, to refresh the device's idea of the media.
    source code
     
    initializeImage(self, newDisc, tmpdir, mediaLabel=None)
    Initializes the writer's associated ISO image.
    source code
     
    addImageEntry(self, path, graftPoint)
    Adds a filepath entry to the writer's associated ISO image.
    source code
     
    writeImage(self, imagePath=None, newDisc=False, writeMulti=True)
    Writes an ISO image to the media in the device.
    source code
     
    setImageNewDisc(self, newDisc)
    Resets (overrides) the newDisc flag on the internal image.
    source code
     
    getEstimatedImageSize(self)
    Gets the estimated size of the image associated with the writer.
    source code
     
    _writeImage(self, newDisc, imagePath, entries, mediaLabel=None)
    Writes an image to disc using either an entries list or an ISO image on disk.
    source code
     
    _getDevice(self)
    Property target used to get the device value.
    source code
     
    _getScsiId(self)
    Property target used to get the SCSI id value.
    source code
     
    _getHardwareId(self)
    Property target used to get the hardware id value.
    source code
     
    _getDriveSpeed(self)
    Property target used to get the drive speed.
    source code
     
    _getMedia(self)
    Property target used to get the media description.
    source code
     
    _getDeviceHasTray(self)
    Property target used to get the device-has-tray flag.
    source code
     
    _getDeviceCanEject(self)
    Property target used to get the device-can-eject flag.
    source code
     
    _getRefreshMediaDelay(self)
    Property target used to get the configured refresh media delay, in seconds.
    source code
     
    _getEjectDelay(self)
    Property target used to get the configured eject delay, in seconds.
    source code
     
    unlockTray(self)
    Unlocks the device's tray via 'eject -i off'.
    source code
     
    _retrieveSectorsUsed(self)
    Retrieves the number of sectors used on the current media.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Static Methods
     
    _getEstimatedImageSize(entries)
    Gets the estimated size of a set of image entries.
    source code
     
    _searchForOverburn(output)
    Search for an "overburn" error message in growisofs output.
    source code
     
    _buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel=None, dryRun=False)
    Builds a list of arguments to be passed to a growisofs command.
    source code
     
    _parseSectorsUsed(output)
    Parse sectors used information out of growisofs output.
    source code
    Properties
      device
    Filesystem device name for this writer.
      scsiId
    SCSI id for the device (saved for reference only).
      hardwareId
    Hardware id for this writer (always the device path).
      driveSpeed
    Speed at which the drive writes.
      media
    Definition of media that is expected to be in the device.
      deviceHasTray
    Indicates whether the device has a media tray.
      deviceCanEject
    Indicates whether the device supports ejecting its media.
      refreshMediaDelay
    Refresh media delay, in seconds.
      ejectDelay
    Eject delay, in seconds.

    Inherited from object: __class__

    Method Details

    __init__(self, device, scsiId=None, driveSpeed=None, mediaType=2, noEject=False, refreshMediaDelay=0, ejectDelay=0, unittest=False)
    (Constructor)

    source code 

    Initializes a DVD writer object.

    Since growisofs can only address devices using the device path (i.e. /dev/dvd), the hardware id will always be set based on the device. If passed in, it will be saved for reference purposes only.

    We have no way to query the device to ask whether it has a tray or can be safely opened and closed. So, the noEject flag is used to set these values. If noEject=False, then we assume a tray exists and open/close is safe. If noEject=True, then we assume that there is no tray and open/close is not safe.

    Parameters:
    • device (Absolute path to a filesystem device, i.e. /dev/dvd) - Filesystem device associated with this writer.
    • scsiId (If provided, SCSI id in the form [<method>:]scsibus,target,lun) - SCSI id for the device (optional, for reference only).
    • driveSpeed (Use 2 for 2x device, etc. or None to use device default.) - Speed at which the drive writes.
    • mediaType (One of the valid media type as discussed above.) - Type of the media that is assumed to be in the drive.
    • noEject (Boolean true/false) - Tells Cedar Backup that the device cannot safely be ejected
    • refreshMediaDelay (Number of seconds, an integer >= 0) - Refresh media delay to use, if any
    • ejectDelay (Number of seconds, an integer >= 0) - Eject delay to use, if any
    • unittest (Boolean true/false) - Turns off certain validations, for use in unit testing.
    Raises:
    • ValueError - If the device is not valid for some reason.
    • ValueError - If the SCSI id is not in a valid form.
    • ValueError - If the drive speed is not an integer >= 1.
    Overrides: object.__init__

    Note: The unittest parameter should never be set to True outside of Cedar Backup code. It is intended for use in unit testing Cedar Backup internals and has no other sensible purpose.

    retrieveCapacity(self, entireDisc=False)

    source code 

    Retrieves capacity for the current media in terms of a MediaCapacity object.

    If entireDisc is passed in as True, the capacity will be for the entire disc, as if it were to be rewritten from scratch. The same will happen if the disc can't be read for some reason. Otherwise, the capacity will be calculated by subtracting the sectors currently used on the disc, as reported by growisofs itself.

    Parameters:
    • entireDisc (Boolean true/false) - Indicates whether to return capacity for entire disc.
    Returns:
    MediaCapacity object describing the capacity of the media.
    Raises:
    • ValueError - If there is a problem parsing the growisofs output
    • IOError - If the media could not be read for some reason.

    openTray(self)

    source code 

    Opens the device's tray and leaves it open.

    This only works if the device has a tray and supports ejecting its media. We have no way to know if the tray is currently open or closed, so we just send the appropriate command and hope for the best. If the device does not have a tray or does not support ejecting its media, then we do nothing.

    Starting with Debian wheezy on my backup hardware, I started seeing consistent problems with the eject command. I couldn't tell whether these problems were due to the device management system or to the new kernel (3.2.0). Initially, I saw simple eject failures, possibly because I was opening and closing the tray too quickly. I worked around that behavior with the new ejectDelay flag.

    Later, I sometimes ran into issues after writing an image to a disc: eject would give errors like "unable to eject, last error: Inappropriate ioctl for device". Various sources online (like Ubuntu bug #875543) suggested that the drive was being locked somehow, and that the workaround was to run 'eject -i off' to unlock it. Sure enough, that fixed the problem for me, so now it's a normal error-handling strategy.

    Raises:
    • IOError - If there is an error talking to the device.

    closeTray(self)

    source code 

    Closes the device's tray.

    This only works if the device has a tray and supports ejecting its media. We have no way to know if the tray is currently open or closed, so we just send the appropriate command and hope for the best. If the device does not have a tray or does not support ejecting its media, then we do nothing.

    Raises:
    • IOError - If there is an error talking to the device.

    refreshMedia(self)

    source code 

    Opens and then immediately closes the device's tray, to refresh the device's idea of the media.

    Sometimes, a device gets confused about the state of its media. Often, all it takes to solve the problem is to eject the media and then immediately reload it. (There are also configurable eject and refresh media delays which can be applied, for situations where this makes a difference.)

    This only works if the device has a tray and supports ejecting its media. We have no way to know if the tray is currently open or closed, so we just send the appropriate command and hope for the best. If the device does not have a tray or does not support ejecting its media, then we do nothing. The configured delays still apply, though.

    Raises:
    • IOError - If there is an error talking to the device.

    initializeImage(self, newDisc, tmpdir, mediaLabel=None)

    source code 

    Initializes the writer's associated ISO image.

    This method initializes the image instance variable so that the caller can use the addImageEntry method. Once entries have been added, the writeImage method can be called with no arguments.

    Parameters:
    • newDisc (Boolean true/false) - Indicates whether the disc should be re-initialized
    • tmpdir (String representing a directory path on disk) - Temporary directory to use if needed
    • mediaLabel (String, no more than 25 characters long) - Media label to be applied to the image, if any

    addImageEntry(self, path, graftPoint)

    source code 

    Adds a filepath entry to the writer's associated ISO image.

    The contents of the filepath -- but not the path itself -- will be added to the image at the indicated graft point. If you don't want to use a graft point, just pass None.

    Parameters:
    • path (String representing a path on disk) - File or directory to be added to the image
    • graftPoint (String representing a graft point path, as described above) - Graft point to be used when adding this entry
    Raises:
    • ValueError - If initializeImage() was not previously called
    • ValueError - If the path is not a valid file or directory

    Note: Before calling this method, you must call initializeImage.

    writeImage(self, imagePath=None, newDisc=False, writeMulti=True)

    source code 

    Writes an ISO image to the media in the device.

    If newDisc is passed in as True, we assume that the entire disc will be re-created from scratch. Note that unlike CdWriter, DvdWriter does not blank rewritable media before reusing it; however, growisofs is called such that the media will be re-initialized as needed.

    If imagePath is passed in as None, then the existing image configured with initializeImage() will be used. Under these circumstances, the passed-in newDisc flag will be ignored and the value passed in to initializeImage() will apply instead.

    The writeMulti argument is ignored. It exists for compatibility with the Cedar Backup image writer interface.

    Parameters:
    • imagePath (String representing a path on disk) - Path to an ISO image on disk, or None to use writer's image
    • newDisc (Boolean true/false.) - Indicates whether the disc should be re-initialized
    • writeMulti (Boolean true/false) - Unused
    Raises:
    • ValueError - If the image path is not absolute.
    • ValueError - If some path cannot be encoded properly.
    • IOError - If the media could not be written to for some reason.
    • ValueError - If no image is passed in and initializeImage() was not previously called

    Note: The image size indicated in the log ("Image size will be...") is an estimate. The estimate is conservative and is probably larger than the actual space that dvdwriter will use.

    setImageNewDisc(self, newDisc)

    source code 

    Resets (overrides) the newDisc flag on the internal image.

    Parameters:
    • newDisc - New disc flag to set
    Raises:
    • ValueError - If initializeImage() was not previously called

    getEstimatedImageSize(self)

    source code 

    Gets the estimated size of the image associated with the writer.

    This is an estimate and is conservative. The actual image could be as much as 450 blocks (sectors) smaller under some circumstances.

    Returns:
    Estimated size of the image, in bytes.
    Raises:
    • IOError - If there is a problem calling mkisofs.
    • ValueError - If initializeImage() was not previously called

    _writeImage(self, newDisc, imagePath, entries, mediaLabel=None)

    source code 

    Writes an image to disc using either an entries list or an ISO image on disk.

    Callers are assumed to have done validation on paths, etc. before calling this method.

    Parameters:
    • newDisc - Indicates whether the disc should be re-initialized
    • imagePath - Path to an ISO image on disk, or None to use entries
    • entries - Mapping from path to graft point, or None to use imagePath
    Raises:
    • IOError - If the media could not be written to for some reason.

    _getEstimatedImageSize(entries)
    Static Method

    source code 

    Gets the estimated size of a set of image entries.

    This is implemented in terms of the IsoImage class. The returned value is calculated by adding a "fudge factor" to the value from IsoImage. This fudge factor was determined by experimentation and is conservative -- the actual image could be as much as 450 blocks smaller under some circumstances.

    Parameters:
    • entries - Dictionary mapping path to graft point.
    Returns:
    Total estimated size of image, in bytes.
    Raises:
    • ValueError - If there are no entries in the dictionary
    • ValueError - If any path in the dictionary does not exist
    • IOError - If there is a problem calling mkisofs.

    _searchForOverburn(output)
    Static Method

    source code 

    Search for an "overburn" error message in growisofs output.

    The growisofs command returns a non-zero exit code and puts a message into the output -- even on a dry run -- if there is not enough space on the media. This is called an "overburn" condition.

    The error message looks like this:

      :-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!
    

    This method looks for the overburn error message anywhere in the output. If a matching error message is found, an IOError exception is raised containing relevant information about the problem. Otherwise, the method call returns normally.

    Parameters:
    • output - List of output lines to search, as from executeCommand
    Raises:
    • IOError - If an overburn condition is found.
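The overburn check described above can be sketched as a simple regular-expression scan over the output lines. This is an illustrative sketch: the exact pattern and exception wording in Cedar Backup's implementation may differ, but the error line always names the device and the free/needed block counts.

```python
import re

# Illustrative pattern, matching lines like:
#   :-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!
OVERBURN_PATTERN = re.compile(r":-\(\s+(.*):\s+(\d+)\s+blocks are free,\s+(\d+)\s+to be written!")

def searchForOverburn(output):
    """Raise IOError if any output line reports an overburn condition."""
    for line in output:
        match = OVERBURN_PATTERN.search(line)
        if match:
            device = match.group(1)
            free, needed = int(match.group(2)), int(match.group(3))
            raise IOError("Media (%s) does not contain enough space: %d blocks free, %d blocks needed."
                          % (device, free, needed))
```

If no line matches, the function simply returns, mirroring the "returns normally" behavior documented above.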

    _buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel=None, dryRun=False)
    Static Method

    source code 

    Builds a list of arguments to be passed to a growisofs command.

    The arguments will either cause growisofs to write the indicated image file to disc, or will pass growisofs a list of directories or files that should be written to disc.

    If a new image is created, it will always be created with Rock Ridge extensions (-r). A volume name will be applied (-V) if mediaLabel is not None.

    Parameters:
    • newDisc - Indicates whether the disc should be re-initialized
    • hardwareId - Hardware id for the device
    • driveSpeed - Speed at which the drive writes.
    • imagePath - Path to an ISO image on disk, or None to use entries
    • entries - Mapping from path to graft point, or None to use imagePath
    • mediaLabel - Media label to set on the image, if any
    • dryRun - Says whether to make this a dry run (for checking capacity)
    Returns:
    List suitable for passing to util.executeCommand as args.
    Raises:
    • ValueError - If caller does not pass one or the other of imagePath or entries.
    Notes:
    • If we write an existing image to disc, then the mediaLabel is ignored. The media label is an attribute of the image, and should be set on the image when it is created.
    • We always pass the undocumented option -use-the-force-luke=tty to growisofs. Without this option, growisofs will refuse to execute certain actions when running from cron. A good example is -Z, which happily overwrites an existing DVD from the command line, but fails when run from cron. It took a while to figure that out, since it worked every time I tested it by hand. :(
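The argument construction described above can be sketched as follows. This is a hedged sketch, not Cedar Backup's exact code: the option spellings (-Z, -M, -speed, -r, -V, -dry-run, -graft-points, -use-the-force-luke=tty) follow the growisofs and mkisofs manpages, but the precise ordering and graft-point formatting used by Cedar Backup may differ.

```python
def buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries,
                   mediaLabel=None, dryRun=False):
    """Sketch of growisofs argument construction (illustrative only)."""
    if imagePath is None and entries is None:
        raise ValueError("Must pass either imagePath or entries.")
    args = ["-use-the-force-luke=tty"]       # required so growisofs works from cron
    if dryRun:
        args.append("-dry-run")
    if driveSpeed is not None:
        args.append("-speed=%d" % driveSpeed)
    args.append("-Z" if newDisc else "-M")   # -Z re-initializes, -M appends a session
    if imagePath is not None:
        args.append("%s=%s" % (hardwareId, imagePath))  # write an existing ISO image
    else:
        args.append(hardwareId)
        args.append("-r")                    # new images always get Rock Ridge
        if mediaLabel is not None:
            args.extend(["-V", mediaLabel])  # volume name applies to new images only
        args.append("-graft-points")
        for path in sorted(entries.keys()):
            graft = entries[path]
            args.append(path if graft is None else "%s/=%s" % (graft.strip("/"), path))
    return args
```

The result is suitable for passing to an executeCommand-style helper as an argument list.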

    unlockTray(self)

    source code 

    Unlocks the device's tray via 'eject -i off'.

    Raises:
    • IOError - If there is an error talking to the device.

    _retrieveSectorsUsed(self)

    source code 

    Retrieves the number of sectors used on the current media.

    This is a little ugly. We need to call growisofs in "dry-run" mode and parse some information from its output. However, to do that, we need to create a dummy file that we can pass to the command -- and we have to make sure to remove it later.

    Once growisofs has been run, then we call _parseSectorsUsed to parse the output and calculate the number of sectors used on the media.

    Returns:
    Number of sectors used on the media

    _parseSectorsUsed(output)
    Static Method

    source code 

    Parse sectors used information out of growisofs output.

    The first line of a growisofs run looks something like this:

      Executing 'mkisofs -C 973744,1401056 -M /dev/fd/3 -r -graft-points music4/=music | builtin_dd of=/dev/cdrom obs=32k seek=87566'
    

    Dmitry has determined that the seek value in this line gives us information about how much data has previously been written to the media. That value multiplied by 16 yields the number of sectors used.

    If the seek line cannot be found in the output, then sectors used of zero is assumed.

    Returns:
    Sectors used on the media, as a floating point number.
    Raises:
    • ValueError - If the output cannot be parsed properly.
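The seek-based calculation above can be sketched as follows. The regular expression is an assumption, but note that the arithmetic is consistent with the sample line: 87566 * 16 = 1401056, which matches the second value of the -C argument.

```python
import re

SEEK_PATTERN = re.compile(r"seek=(\d+)")  # assumed pattern for the seek value

def parseSectorsUsed(output):
    """Return sectors used as a float: seek * 16, or 0.0 if no seek value is found."""
    for line in output:
        match = SEEK_PATTERN.search(line)
        if match:
            return float(match.group(1)) * 16.0
    return 0.0
```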

    Property Details

    device

    Filesystem device name for this writer.

    Get Method:
    _getDevice(self) - Property target used to get the device value.

    scsiId

    SCSI id for the device (saved for reference only).

    Get Method:
    _getScsiId(self) - Property target used to get the SCSI id value.

    hardwareId

    Hardware id for this writer (always the device path).

    Get Method:
    _getHardwareId(self) - Property target used to get the hardware id value.

    driveSpeed

    Speed at which the drive writes.

    Get Method:
    _getDriveSpeed(self) - Property target used to get the drive speed.

    media

    Definition of media that is expected to be in the device.

    Get Method:
    _getMedia(self) - Property target used to get the media description.

    deviceHasTray

    Indicates whether the device has a media tray.

    Get Method:
    _getDeviceHasTray(self) - Property target used to get the device-has-tray flag.

    deviceCanEject

    Indicates whether the device supports ejecting its media.

    Get Method:
    _getDeviceCanEject(self) - Property target used to get the device-can-eject flag.

    refreshMediaDelay

    Refresh media delay, in seconds.

    Get Method:
    _getRefreshMediaDelay(self) - Property target used to get the configured refresh media delay, in seconds.

    ejectDelay

    Eject delay, in seconds.

    Get Method:
    _getEjectDelay(self) - Property target used to get the configured eject delay, in seconds.

    CedarBackup3-3.1.6/doc/interface/toc-CedarBackup3.testutil-module.html

    Module testutil


    Functions

    availableLocales
    buildPath
    captureOutput
    changeFileAge
    commandAvailable
    extractTar
    failUnlessAssignRaises
    findResources
    getLogin
    getMaskAsMode
    platformDebian
    platformMacOsX
    randomFilename
    removedir
    runningAsRoot
    setupDebugLogger
    setupOverrides

    Variables

    __package__

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.util.DirectedGraph-class.html
    Package CedarBackup3 :: Module util :: Class DirectedGraph

    Class DirectedGraph

    source code

    object --+
             |
            DirectedGraph
    

    Represents a directed graph.

    A graph G=(V,E) consists of a set of vertices V together with a set E of vertex pairs or edges. In a directed graph, each edge also has an associated direction (from vertex v1 to vertex v2). A DirectedGraph object provides a way to construct a directed graph and execute a depth-first search.

    This data structure was designed based on the graphing chapter in The Algorithm Design Manual, by Steven S. Skiena.

    This class is intended to be used by Cedar Backup for dependency ordering. Because of this, it's not quite general-purpose. Unlike a "general" graph, every vertex in this graph has at least one edge pointing to it, from a special "start" vertex. This is so no vertices get "lost" either because they have no dependencies or because nothing depends on them.

    Instance Methods
     
    __init__(self, name)
    Directed graph constructor.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    _getName(self)
    Property target used to get the graph name.
    source code
     
    createVertex(self, name)
    Creates a named vertex.
    source code
     
    createEdge(self, start, finish)
    Adds an edge with an associated direction, from start vertex to finish vertex.
    source code
     
    topologicalSort(self)
    Implements a topological sort of the graph.
    source code
     
    _topologicalSort(self, vertex, ordering)
    Recursive depth first search function implementing topological sort.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Class Variables
      _UNDISCOVERED = 0
      _DISCOVERED = 1
      _EXPLORED = 2
    Properties
      name
    Name of the graph.

    Inherited from object: __class__

    Method Details

    __init__(self, name)
    (Constructor)

    source code 

    Directed graph constructor.

    Parameters:
    • name (String value.) - Name of this graph.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    createVertex(self, name)

    source code 

    Creates a named vertex.

    Parameters:
    • name - vertex name
    Raises:
    • ValueError - If the vertex name is None or empty.

    createEdge(self, start, finish)

    source code 

    Adds an edge with an associated direction, from start vertex to finish vertex.

    Parameters:
    • start - Name of start vertex.
    • finish - Name of finish vertex.
    Raises:
    • ValueError - If one of the named vertices is unknown.

    topologicalSort(self)

    source code 

    Implements a topological sort of the graph.

    This method also enforces that the graph is a directed acyclic graph, which is a requirement of a topological sort.

    A directed acyclic graph (or "DAG") is a directed graph with no directed cycles. A topological sort of a DAG is an ordering on the vertices such that all edges go from left to right. Only an acyclic graph can have a topological sort, but any DAG has at least one topological sort.

    Since a topological sort only makes sense for an acyclic graph, this method throws an exception if a cycle is found.

    The depth-first search used here can only produce a consistent ordering if the graph is acyclic; if the graph contains any cycles, it is not possible to determine a consistent ordering for the vertices.

    Returns:
    Ordering on the vertices so that all edges go from left to right.
    Raises:
    • ValueError - If a cycle is found in the graph.

    Note: If a particular vertex has no edges, then its position in the final list depends on the order in which the vertices were created in the graph. If you're using this method to determine a dependency order, this makes sense: a vertex with no dependencies can go anywhere (and will).
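The behavior described above can be illustrated with a minimal DFS-based topological sort. The vertex states mirror the class variables documented above (_UNDISCOVERED/_DISCOVERED/_EXPLORED), but this sketch omits the class's implicit "start" vertex and takes a plain adjacency mapping instead.

```python
# Vertex states, mirroring the class variables documented above.
UNDISCOVERED, DISCOVERED, EXPLORED = 0, 1, 2

def topological_sort(edges):
    """Topological sort of a DAG given as {vertex: [vertices it points to]}.

    Raises ValueError if a cycle is found, since no topological sort exists.
    """
    state = {vertex: UNDISCOVERED for vertex in edges}
    ordering = []
    def visit(vertex):
        state[vertex] = DISCOVERED
        for neighbor in edges.get(vertex, []):
            if state[neighbor] == UNDISCOVERED:
                visit(neighbor)
            elif state[neighbor] == DISCOVERED:   # back edge means a cycle
                raise ValueError("Graph contains a cycle; no topological sort exists.")
        state[vertex] = EXPLORED
        ordering.insert(0, vertex)   # prepend once all descendants are explored
    for vertex in edges:
        if state[vertex] == UNDISCOVERED:
            visit(vertex)
    return ordering
```

For a dependency graph like collect -> stage -> store, the result lists collect first, consistent with the left-to-right edge ordering described above.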

    _topologicalSort(self, vertex, ordering)

    source code 

    Recursive depth first search function implementing topological sort.

    Parameters:
    • vertex - Vertex to search
    • ordering - List of vertices in proper order

    Property Details

    name

    Name of the graph.

    Get Method:
    _getName(self) - Property target used to get the graph name.

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.actions.store-module.html
    Package CedarBackup3 :: Package actions :: Module store

    Module store

    source code

    Implements the standard 'store' action.


    Authors:
    Kenneth J. Pronovici <pronovic@ieee.org>, Dmitry Rutsky <rutsky@inbox.ru>
    Functions
     
    executeStore(configPath, options, config)
    Executes the store backup action.
    source code
     
    writeImage(config, newDisc, stagingDirs)
    Builds and writes an ISO image containing the indicated stage directories.
    source code
     
    writeStoreIndicator(config, stagingDirs)
    Writes a store indicator file into staging directories.
    source code
     
    consistencyCheck(config, stagingDirs)
    Runs a consistency check against media in the backup device.
    source code
     
    writeImageBlankSafe(config, rebuildMedia, todayIsStart, blankBehavior, stagingDirs)
    Builds and writes an ISO image containing the indicated stage directories.
    source code
     
    _getNewDisc(writer, rebuildMedia, todayIsStart, blankBehavior)
    Gets a value for the newDisc flag based on blanking factor rules.
    source code
     
    _findCorrectDailyDir(options, config)
    Finds the correct daily staging directory to be written to disk.
    source code
    Variables
      logger = logging.getLogger("CedarBackup3.log.actions.store")
      __package__ = 'CedarBackup3.actions'
    Function Details

    executeStore(configPath, options, config)

    source code 

    Executes the store backup action.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If there are problems reading or writing files.
    Notes:
    • The rebuild action and the store action are very similar. The main difference is that while store only stores a single day's staging directory, the rebuild action operates on multiple staging directories.
    • When the store action is complete, we will write a store indicator to the daily staging directory we used, so it's obvious that the store action has completed.

    writeImage(config, newDisc, stagingDirs)

    source code 

    Builds and writes an ISO image containing the indicated stage directories.

    The generated image will contain each of the staging directories listed in stagingDirs. The directories will be placed into the image at the root by date, so staging directory /opt/stage/2005/02/10 will be placed into the disc at /2005/02/10.

    Parameters:
    • config - Config object.
    • newDisc - Indicates whether the disc should be re-initialized
    • stagingDirs - Dictionary mapping directory path to date suffix.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If there is a problem writing the image to disc.

    Note: This function is implemented in terms of writeImageBlankSafe. The newDisc flag is passed in for both rebuildMedia and todayIsStart.

    writeStoreIndicator(config, stagingDirs)

    source code 

    Writes a store indicator file into staging directories.

    The store indicator is written into each of the staging directories when either a store or rebuild action has written the staging directory to disc.

    Parameters:
    • config - Config object.
    • stagingDirs - Dictionary mapping directory path to date suffix.

    consistencyCheck(config, stagingDirs)

    source code 

    Runs a consistency check against media in the backup device.

    It seems that sometimes, it's possible to create a corrupted multisession disc (i.e. one that cannot be read) although no errors were encountered while writing the disc. This consistency check makes sure that the data read from disc matches the data that was used to create the disc.

    The function mounts the device at a temporary mount point in the working directory, and then compares the indicated staging directories in the staging directory and on the media. The comparison is done via functionality in filesystem.py.

    If no exceptions are thrown, there were no problems with the consistency check. A positive confirmation of "no problems" is also written to the log with info priority.

    Parameters:
    • config - Config object.
    • stagingDirs - Dictionary mapping directory path to date suffix.
    Raises:
    • ValueError - If the two directories are not equivalent.
    • IOError - If there is a problem working with the media.

    Warning: The implementation of this function is very UNIX-specific.

    writeImageBlankSafe(config, rebuildMedia, todayIsStart, blankBehavior, stagingDirs)

    source code 

    Builds and writes an ISO image containing the indicated stage directories.

    The generated image will contain each of the staging directories listed in stagingDirs. The directories will be placed into the image at the root by date, so staging directory /opt/stage/2005/02/10 will be placed into the disc at /2005/02/10. The media will always be written with a media label specific to Cedar Backup.

    This function is similar to writeImage, but tries to implement a smarter blanking strategy.

    First, the media is always blanked if the rebuildMedia flag is true. Then, if rebuildMedia is false, blanking behavior and todayIsStart come into effect:

      If no blanking behavior is specified, and it is the start of the week,
      the disc will be blanked
    
      If blanking behavior is specified, and either the blank mode is "daily"
      or the blank mode is "weekly" and it is the start of the week, then
      the disc will be blanked if it looks like the weekly backup will not
      fit onto the media.
    
      Otherwise, the disc will not be blanked
    

    How do we decide whether the weekly backup will fit onto the media? That is what the blanking factor is used for. The following formula is used:

      will backup fit? = (bytes available / (1 + bytes required)) <= blankFactor
    

    The blanking factor will vary from setup to setup, and will probably require some experimentation to get it right.
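A worked instance of the formula above, with illustrative numbers chosen for this example only:

```python
# Suppose 1,200,000 bytes remain available on the media and tonight's image
# requires 2,999,999 bytes; the blanking factor comes from configuration.
bytes_available = 1200000.0
bytes_required = 2999999.0
blank_factor = 0.5                    # varies per setup; found by experimentation

ratio = bytes_available / (1.0 + bytes_required)   # 1200000 / 3000000 = 0.4
result = ratio <= blank_factor                     # 0.4 <= 0.5, so the check holds
```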

    Parameters:
    • config - Config object.
    • rebuildMedia - Indicates whether media should be rebuilt
    • todayIsStart - Indicates whether today is the starting day of the week
    • blankBehavior - Blank behavior from configuration, or None to use default behavior
    • stagingDirs - Dictionary mapping directory path to date suffix.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If there is a problem writing the image to disc.

    _getNewDisc(writer, rebuildMedia, todayIsStart, blankBehavior)

    source code 

    Gets a value for the newDisc flag based on blanking factor rules.

    The blanking factor rules are described above by writeImageBlankSafe.

    Parameters:
    • writer - Previously configured image writer containing image entries
    • rebuildMedia - Indicates whether media should be rebuilt
    • todayIsStart - Indicates whether today is the starting day of the week
    • blankBehavior - Blank behavior from configuration, or None to use default behavior
    Returns:
    newDisc flag to be set on writer.

    _findCorrectDailyDir(options, config)

    source code 

    Finds the correct daily staging directory to be written to disk.

    In Cedar Backup v1.0, we assumed that the correct staging directory matched the current date. However, that has problems. In particular, it breaks down if collect is on one side of midnite and stage is on the other, or if certain processes span midnite.

    For v2.0, I'm trying to be smarter. I'll first check the current day. If that directory is found, it's good enough. If it's not found, I'll look for a valid directory from the day before or day after which has not yet been staged, according to the stage indicator file. The first one I find, I'll use. If I use a directory other than for the current day and config.store.warnMidnite is set, a warning will be put in the log.

    There is one exception to this rule. If the options.full flag is set, then the special "span midnite" logic will be disabled and any existing store indicator will be ignored. I did this because I think that most users who run cback3 --full store twice in a row expect the command to generate two identical discs. With the other rule in place, running that command twice in a row could result in an error ("no unstored directory exists") or could even cause a completely unexpected directory to be written to disc (if some previous day's contents had not yet been written).

    Parameters:
    • options - Options object.
    • config - Config object.
    Returns:
    Correct staging dir, as a dict mapping directory to date suffix.
    Raises:
    • IOError - If the staging directory cannot be found.

    Note: This code is probably longer and more verbose than it needs to be, but at least it's straightforward.
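The lookup described above can be sketched as follows. The stagingBase parameter, the STORE_INDICATOR filename, and the function signature are assumptions made for illustration; they are not Cedar Backup's actual API.

```python
import datetime
import os

STORE_INDICATOR = "cback.store"   # hypothetical store-indicator filename

def findCorrectDailyDir(stagingBase, warnMidnite=False, full=False):
    """Check today's staging directory first; unless full is set, fall back
    to an unstored directory from the day before or the day after."""
    today = datetime.date.today()
    oneDay = datetime.timedelta(days=1)
    candidates = [today] if full else [today, today - oneDay, today + oneDay]
    for day in candidates:
        suffix = day.strftime("%Y/%m/%d")
        dailyDir = os.path.join(stagingBase, suffix)
        if os.path.isdir(dailyDir):
            stored = os.path.exists(os.path.join(dailyDir, STORE_INDICATOR))
            if full or not stored:    # full ignores any existing indicator
                if day != today and warnMidnite:
                    print("Warning: using staging directory for %s" % suffix)
                return {dailyDir: suffix}
    raise IOError("No unstored staging directory could be found.")
```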


    CedarBackup3-3.1.6/doc/interface/CedarBackup3.extend.subversion.BDBRepository-class.html
    Package CedarBackup3 :: Package extend :: Module subversion :: Class BDBRepository

    Class BDBRepository

    source code

    object --+    
             |    
    Repository --+
                 |
                BDBRepository
    

    Class representing Subversion BDB (Berkeley Database) repository configuration. This object is deprecated. Use a simple Repository instead.

    Instance Methods
     
    __init__(self, repositoryPath=None, collectMode=None, compressMode=None)
    Constructor for the BDBRepository class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code

    Inherited from Repository: __cmp__, __eq__, __ge__, __gt__, __le__, __lt__, __str__

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties

    Inherited from Repository: collectMode, compressMode, repositoryPath, repositoryType

    Inherited from object: __class__

    Method Details

    __init__(self, repositoryPath=None, collectMode=None, compressMode=None)
    (Constructor)

    source code 

    Constructor for the BDBRepository class.

    Parameters:
    • repositoryType - Type of repository, for reference
    • repositoryPath - Absolute path to a Subversion repository on disk.
    • collectMode - Overridden collect mode for this directory.
    • compressMode - Overridden compression mode for this directory.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.tools.span.SpanOptions-class.html
    Package CedarBackup3 :: Package tools :: Module span :: Class SpanOptions

    Class SpanOptions

    source code

     object --+    
              |    
    cli.Options --+
                  |
                 SpanOptions
    

    Tool-specific command-line options.

    Most of the cback3 command-line options are exactly what we need here -- logfile path, permissions, verbosity, etc. However, we need to make a few tweaks since we don't accept any actions.

    Also, a few extra command line options that we accept are really ignored underneath. I just don't care about that for a tool like this.

    Instance Methods
     
    validate(self)
    Validates command-line options represented by the object.
    source code

    Inherited from cli.Options: __cmp__, __eq__, __ge__, __gt__, __init__, __le__, __lt__, __repr__, __str__, buildArgumentList, buildArgumentString

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties

    Inherited from cli.Options: actions, config, debug, diagnostics, full, help, logfile, managed, managedOnly, mode, output, owner, quiet, stacktrace, verbose, version

    Inherited from object: __class__

    Method Details

    validate(self)

    source code 

    Validates command-line options represented by the object. There are no validations here, because we don't use any actions.

    Raises:
    • ValueError - If one of the validations fails.
    Overrides: cli.Options.validate

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.actions.stage-module.html
    Package CedarBackup3 :: Package actions :: Module stage

    Module stage

    source code

    Implements the standard 'stage' action.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Functions
     
    executeStage(configPath, options, config)
    Executes the stage backup action.
    source code
     
    _createStagingDirs(config, dailyDir, peers)
    Creates staging directories as required.
    source code
     
    _getIgnoreFailuresFlag(options, config, peer)
    Gets the ignore failures flag based on options, configuration, and peer.
    source code
     
    _getDailyDir(config)
    Gets the daily staging directory.
    source code
     
    _getLocalPeers(config)
    Return a list of LocalPeer objects based on configuration.
    source code
     
    _getRemotePeers(config)
    Return a list of RemotePeer objects based on configuration.
    source code
     
    _getRemoteUser(config, remotePeer)
    Gets the remote user associated with a remote peer.
    source code
     
    _getLocalUser(config)
    Gets the local user that should be used, based on configuration.
    source code
     
    _getRcpCommand(config, remotePeer)
    Gets the RCP command associated with a remote peer.
    source code
    Variables
      logger = logging.getLogger("CedarBackup3.log.actions.stage")
      __package__ = 'CedarBackup3.actions'
    Function Details

    executeStage(configPath, options, config)

    source code 

    Executes the stage backup action.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If there are problems reading or writing files.
    Notes:
    • The daily directory is derived once and then we stick with it, just in case a backup happens to span midnite.
    • As portions of the stage action are completed, we will write various indicator files so that it's obvious which actions have been completed. Each peer gets a stage indicator in its collect directory, and then the master gets a stage indicator in its daily staging directory. The store process uses the master's stage indicator to decide whether a directory is ready to be stored. Currently, nothing uses the indicator at each peer, and it exists for reference only.

    _createStagingDirs(config, dailyDir, peers)

    source code 

    Creates staging directories as required.

    The main staging directory is the passed in daily directory, something like staging/2002/05/23. Then, individual peers get their own directories, i.e. staging/2002/05/23/host.

    Parameters:
    • config - Config object.
    • dailyDir - Daily staging directory.
    • peers - List of all configured peers.
    Returns:
    Dictionary mapping peer name to staging directory.
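The layout described above can be sketched as follows. The function signature and the default directory mode are assumptions made for this example.

```python
import os

def createStagingDirs(dailyDir, peerNames, mode=0o750):
    """Create the daily staging directory (e.g. staging/2002/05/23) plus one
    subdirectory per peer, returning a mapping from peer name to directory."""
    mapping = {}
    os.makedirs(dailyDir, mode=mode, exist_ok=True)
    for name in peerNames:
        peerDir = os.path.join(dailyDir, name)
        os.makedirs(peerDir, mode=mode, exist_ok=True)
        mapping[name] = peerDir
    return mapping
```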

    _getIgnoreFailuresFlag(options, config, peer)

    source code 

    Gets the ignore failures flag based on options, configuration, and peer.

    Parameters:
    • options - Options object
    • config - Configuration object
    • peer - Peer to check
    Returns:
    Whether to ignore stage failures for this peer

    _getDailyDir(config)

    source code 

    Gets the daily staging directory.

    This is just a directory in the form staging/YYYY/MM/DD, i.e. staging/2000/10/07, except it will be an absolute path based on config.stage.targetDir.

    Parameters:
    • config - Config object
    Returns:
    Path of daily staging directory.

    _getLocalPeers(config)

    source code 

    Return a list of LocalPeer objects based on configuration.

    Parameters:
    • config - Config object.
    Returns:
    List of LocalPeer objects.

    _getRemotePeers(config)

    source code 

    Return a list of RemotePeer objects based on configuration.

    Parameters:
    • config - Config object.
    Returns:
    List of RemotePeer objects.

    _getRemoteUser(config, remotePeer)

    source code 

    Gets the remote user associated with a remote peer. The peer's own value is used if set; otherwise the value is taken from the options section.

    Parameters:
    • config - Config object.
    • remotePeer - Configuration-style remote peer object.
    Returns:
    Name of remote user associated with remote peer.

    _getLocalUser(config)

    source code 

    Gets the local user that should be used, based on configuration.

    Parameters:
    • config - Config object.
    Returns:
    Name of local user that should be used

    _getRcpCommand(config, remotePeer)

    source code 

    Gets the RCP command associated with a remote peer. The peer's own value is used if set; otherwise the value is taken from the options section.

    Parameters:
    • config - Config object.
    • remotePeer - Configuration-style remote peer object.
    Returns:
    RCP command associated with remote peer.

    CedarBackup3-3.1.6/doc/interface/toc-CedarBackup3.actions-module.html

    Module actions


    Variables


    CedarBackup3-3.1.6/doc/interface/CedarBackup3.extend.subversion.LocalConfig-class.html
    Package CedarBackup3 :: Package extend :: Module subversion :: Class LocalConfig

    Class LocalConfig

    source code

    object --+
             |
            LocalConfig
    

    Class representing this extension's configuration document.

    This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit Subversion-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, validate and addConfig methods.


    Note: Lists within this class are "unordered" for equality comparisons.

    Instance Methods
     
    __init__(self, xmlData=None, xmlPath=None, validate=True)
    Initializes a configuration object.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    __eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    validate(self)
    Validates configuration represented by the object.
    source code
     
    addConfig(self, xmlDom, parentNode)
    Adds a <subversion> configuration section as the next child of a parent.
    source code
     
    _setSubversion(self, value)
    Property target used to set the subversion configuration value.
    source code
     
    _getSubversion(self)
    Property target used to get the subversion configuration value.
    source code
     
    _parseXmlData(self, xmlData)
    Internal method to parse an XML string into the object.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Static Methods
     
    _parseSubversion(parent)
    Parses a subversion configuration section.
    source code
     
    _parseRepositories(parent)
    Reads a list of Repository objects from immediately beneath the parent.
    source code
     
    _addRepository(xmlDom, parentNode, repository)
    Adds a repository container as the next child of a parent.
    source code
     
    _parseRepositoryDirs(parent)
    Reads a list of RepositoryDir objects from immediately beneath the parent.
    source code
     
    _parseExclusions(parentNode)
    Reads exclusions data from immediately beneath the parent.
    source code
     
    _addRepositoryDir(xmlDom, parentNode, repositoryDir)
    Adds a repository dir container as the next child of a parent.
    source code
    Properties
      subversion
    Subversion configuration in terms of a SubversionConfig object.

    Inherited from object: __class__

    Method Details

    __init__(self, xmlData=None, xmlPath=None, validate=True)
    (Constructor)

    source code 

    Initializes a configuration object.

    If you initialize the object without passing either xmlData or xmlPath then configuration will be empty and will be invalid until it is filled in properly.

    No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded.

    Unless the validate argument is False, the LocalConfig.validate method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if validate is False, it might not be possible to parse the passed-in XML document if lower-level validations fail.

    Parameters:
    • xmlData (String data.) - XML data representing configuration.
    • xmlPath (Absolute path to a file on disk.) - Path to an XML file on disk.
    • validate (Boolean true/false.) - Validate the document after parsing it.
    Raises:
    • ValueError - If both xmlData and xmlPath are passed-in.
    • ValueError - If the XML data in xmlData or xmlPath cannot be parsed.
    • ValueError - If the parsed configuration document is not valid.
    Overrides: object.__init__

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to read in invalid configuration from disk.
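
    The mutually-exclusive xmlData/xmlPath contract described above can be sketched as follows. This is a minimal illustration of the documented argument handling only, not the real parser; `init_config` is a hypothetical stand-in, and the parse and validate steps are stubbed out.

    ```python
    # Hypothetical sketch of the documented constructor contract: xmlData
    # and xmlPath are mutually exclusive, and validation runs by default
    # after a successful parse.  Parsing and validation are stubbed out.
    def init_config(xmlData=None, xmlPath=None, validate=True):
        if xmlData is not None and xmlPath is not None:
            raise ValueError("Use either xmlData or xmlPath, but not both.")
        if xmlPath is not None:
            with open(xmlPath) as f:   # the real class raises ValueError on bad XML
                xmlData = f.read()
        parsed = xmlData               # stand-in for the real XML parse step
        if validate and parsed is not None:
            pass                       # the real class calls validate() here
        return parsed
    ```

    With neither argument, configuration starts out empty (and would be invalid until filled in), matching the behavior documented above.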

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.
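
    The layering of Python 3 rich comparisons over a Python 2 style compare operator, as described above, looks roughly like this. `CmpBased` is an illustrative class, not the Cedar Backup source.

    ```python
    # Illustrative sketch: Python 3 rich comparison operators implemented
    # in terms of a Python 2 style __cmp__ that returns -1/0/1.
    class CmpBased:
        def __init__(self, value):
            self.value = value

        def __cmp__(self, other):
            # -1/0/1 depending on whether self is <, = or > other
            if other is None:
                return 1
            if self.value < other.value:
                return -1
            if self.value > other.value:
                return 1
            return 0

        def __eq__(self, other):
            return self.__cmp__(other) == 0

        def __lt__(self, other):
            return self.__cmp__(other) < 0

        def __gt__(self, other):
            return self.__cmp__(other) > 0
    ```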

    validate(self)

    source code 

    Validates configuration represented by the object.

    Subversion configuration must be filled in. Within that, the collect mode and compress mode are both optional, but the list of repositories must contain at least one entry.

    Each repository must contain a repository path, and then must be either able to take collect mode and compress mode configuration from the parent SubversionConfig object, or must set each value on its own.

    Raises:
    • ValueError - If one of the validations fails.

    addConfig(self, xmlDom, parentNode)

    source code 

    Adds a <subversion> configuration section as the next child of a parent.

    Third parties should use this function to write configuration related to this extension.

    We add the following fields to the document:

      collectMode    //cb_config/subversion/collect_mode
      compressMode   //cb_config/subversion/compress_mode
    

    We also add groups of the following items, one list element per item:

      repository     //cb_config/subversion/repository
      repository_dir //cb_config/subversion/repository_dir
    
    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent that the section should be appended to.
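
    The general shape of this pattern can be sketched with `xml.dom.minidom`. The element names come from the field table above; the helper function and its signature are assumptions for illustration, not the real Cedar Backup code.

    ```python
    from xml.dom import minidom

    # Hypothetical sketch: append a <subversion> section with
    # collect_mode/compress_mode children to a parent node.
    def add_subversion_section(xmlDom, parentNode, collectMode, compressMode):
        section = xmlDom.createElement("subversion")
        parentNode.appendChild(section)
        for tag, value in (("collect_mode", collectMode),
                           ("compress_mode", compressMode)):
            child = xmlDom.createElement(tag)
            child.appendChild(xmlDom.createTextNode(value))
            section.appendChild(child)
        return section

    # DOM tree as from impl.createDocument(), per the parameter docs above
    impl = minidom.getDOMImplementation()
    doc = impl.createDocument(None, "cb_config", None)
    add_subversion_section(doc, doc.documentElement, "incr", "gzip")
    ```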

    _setSubversion(self, value)

    source code 

    Property target used to set the subversion configuration value. If not None, the value must be a SubversionConfig object.

    Raises:
    • ValueError - If the value is not a SubversionConfig

    _parseXmlData(self, xmlData)

    source code 

    Internal method to parse an XML string into the object.

    This method parses the XML document into a DOM tree (xmlDom) and then calls a static method to parse the subversion configuration section.

    Parameters:
    • xmlData (String data) - XML data to be parsed
    Raises:
    • ValueError - If the XML cannot be successfully parsed.

    _parseSubversion(parent)
    Static Method

    source code 

    Parses a subversion configuration section.

    We read the following individual fields:

      collectMode    //cb_config/subversion/collect_mode
      compressMode   //cb_config/subversion/compress_mode
    

    We also read groups of the following item, one list element per item:

      repositories    //cb_config/subversion/repository
      repository_dirs //cb_config/subversion/repository_dir
    

    The repositories are parsed by _parseRepositories, and the repository dirs are parsed by _parseRepositoryDirs.

    Parameters:
    • parent - Parent node to search beneath.
    Returns:
    SubversionConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseRepositories(parent)
    Static Method

    source code 

    Reads a list of Repository objects from immediately beneath the parent.

    We read the following individual fields:

      repositoryType          type
      repositoryPath          abs_path
      collectMode             collect_mode
      compressMode            compress_mode
    

    The type field is optional, and its value is kept around only for reference.

    Parameters:
    • parent - Parent node to search beneath.
    Returns:
    List of Repository objects or None if none are found.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _addRepository(xmlDom, parentNode, repository)
    Static Method

    source code 

    Adds a repository container as the next child of a parent.

    We add the following fields to the document:

      repositoryType          repository/type
      repositoryPath          repository/abs_path
      collectMode             repository/collect_mode
      compressMode            repository/compress_mode
    

    The <repository> node itself is created as the next child of the parent node. This method only adds one repository node. The parent must loop for each repository in the SubversionConfig object.

    If repository is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent that the section should be appended to.
    • repository - Repository to be added to the document.

    _parseRepositoryDirs(parent)
    Static Method

    source code 

    Reads a list of RepositoryDir objects from immediately beneath the parent.

    We read the following individual fields:

      repositoryType          type
      directoryPath           abs_path
      collectMode             collect_mode
      compressMode            compress_mode
    

    We also read groups of the following items, one list element per item:

      relativeExcludePaths    exclude/rel_path
      excludePatterns         exclude/pattern
    

    The exclusions are parsed by _parseExclusions.

    The type field is optional, and its value is kept around only for reference.

    Parameters:
    • parent - Parent node to search beneath.
    Returns:
    List of RepositoryDir objects or None if none are found.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseExclusions(parentNode)
    Static Method

    source code 

    Reads exclusions data from immediately beneath the parent.

    We read groups of the following items, one list element per item:

      relative    exclude/rel_path
      patterns    exclude/pattern
    

    If there are no items of a given kind (e.g. no relative path items), then None is returned for that element of the tuple.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    Tuple of (relative, patterns) exclusions.

    _addRepositoryDir(xmlDom, parentNode, repositoryDir)
    Static Method

    source code 

    Adds a repository dir container as the next child of a parent.

    We add the following fields to the document:

      repositoryType          repository_dir/type
      directoryPath           repository_dir/abs_path
      collectMode             repository_dir/collect_mode
      compressMode            repository_dir/compress_mode
    

    We also add groups of the following items, one list element per item:

      relativeExcludePaths    dir/exclude/rel_path
      excludePatterns         dir/exclude/pattern
    

    The <repository_dir> node itself is created as the next child of the parent node. This method only adds one repository dir node. The parent must loop for each repository dir in the SubversionConfig object.

    If repositoryDir is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent that the section should be appended to.
    • repositoryDir - Repository dir to be added to the document.

    Property Details

    subversion

    Subversion configuration in terms of a SubversionConfig object.

    Get Method:
    _getSubversion(self) - Property target used to get the subversion configuration value.
    Set Method:
    _setSubversion(self, value) - Property target used to set the subversion configuration value.

    CedarBackup3.actions

    Source Code for Package CedarBackup3.actions

     1  # -*- coding: iso-8859-1 -*- 
     2  # vim: set ft=python ts=3 sw=3 expandtab: 
     3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     4  # 
     5  #              C E D A R 
     6  #          S O L U T I O N S       "Software done right." 
     7  #           S O F T W A R E 
     8  # 
     9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    10  # 
    11  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
    12  # Language : Python 3 (>= 3.4) 
    13  # Project  : Official Cedar Backup Extensions 
    14  # Purpose  : Provides package initialization 
    15  # 
    16  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    17   
    18  ######################################################################## 
    19  # Module documentation 
    20  ######################################################################## 
    21   
    22  """ 
    23  Cedar Backup actions. 
    24   
     25  This package contains code related to the official Cedar Backup actions (collect, 
    26  stage, store, purge, rebuild, and validate). 
    27   
    28  The action modules consist of mostly "glue" code that uses other lower-level 
    29  functionality to actually implement a backup.  There is one module for each 
    30  high-level backup action, plus a module that provides shared constants. 
    31   
     32  All of the public action functions implement the Cedar Backup Extension 
    33  Architecture Interface, i.e. the same interface that extensions implement. 
    34   
    35  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
    36  """ 
    37   
    38   
    39  ######################################################################## 
    40  # Package initialization 
    41  ######################################################################## 
    42   
    43  # Using 'from CedarBackup3.actions import *' will just import the modules listed 
    44  # in the __all__ variable. 
    45   
    46  __all__ = [ 'constants', 'collect', 'initialize', 'stage', 'store', 'purge', 'util', 'rebuild', 'validate', ] 
    47   
    

    CedarBackup3.util.RegexMatchList

    Class RegexMatchList

    source code

    object --+        
             |        
          list --+    
                 |    
     UnorderedList --+
                     |
                    RegexMatchList
    

    Class representing a list containing only strings that match a regular expression.

    If emptyAllowed is passed in as False, then empty strings are explicitly disallowed, even if they happen to match the regular expression. (None values are always disallowed, since string operations are not permitted on None.)

    This is an unordered list.

    We override the append, insert and extend methods to ensure that any item added to the list matches the indicated regular expression.


    Note: If you try to put values that are not strings into the list, you will likely get either TypeError or AttributeError exceptions as a result.

    Instance Methods
    new empty list
    __init__(self, valuesRegex, emptyAllowed=True, prefix=None)
    Initializes a list restricted to containing certain values.
    source code
     
    append(self, item)
    Overrides the standard append method.
    source code
     
    insert(self, index, item)
    Overrides the standard insert method.
    source code
     
    extend(self, seq)
    Overrides the standard extend method.
    source code

    Inherited from UnorderedList: __eq__, __ge__, __gt__, __le__, __lt__, __ne__

    Inherited from list: __add__, __contains__, __delitem__, __delslice__, __getattribute__, __getitem__, __getslice__, __iadd__, __imul__, __iter__, __len__, __mul__, __new__, __repr__, __reversed__, __rmul__, __setitem__, __setslice__, __sizeof__, count, index, pop, remove, reverse, sort

    Inherited from object: __delattr__, __format__, __reduce__, __reduce_ex__, __setattr__, __str__, __subclasshook__

    Static Methods

    Inherited from UnorderedList: mixedkey, mixedsort

    Class Variables

    Inherited from list: __hash__

    Properties

    Inherited from object: __class__

    Method Details

    __init__(self, valuesRegex, emptyAllowed=True, prefix=None)
    (Constructor)

    source code 

    Initializes a list restricted to containing certain values.

    Parameters:
    • valuesRegex - Regular expression that must be matched, as a string
    • emptyAllowed - Indicates whether empty or None values are allowed.
    • prefix - Prefix to use in error messages (None results in prefix "Item")
    Returns: new empty list
    Overrides: object.__init__

    append(self, item)

    source code 

    Overrides the standard append method.

    Raises:
    • ValueError - If item is None
    • ValueError - If item is empty and empty values are not allowed
    • ValueError - If item does not match the configured regular expression
    Overrides: list.append

    insert(self, index, item)

    source code 

    Overrides the standard insert method.

    Raises:
    • ValueError - If item is None
    • ValueError - If item is empty and empty values are not allowed
    • ValueError - If item does not match the configured regular expression
    Overrides: list.insert

    extend(self, seq)

    source code 

    Overrides the standard extend method.

    Raises:
    • ValueError - If any item is None
    • ValueError - If any item is empty and empty values are not allowed
    • ValueError - If any item does not match the configured regular expression
    Overrides: list.extend
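
    The documented behavior can be sketched with a small list subclass. `RegexList` is an illustrative re-implementation of the contract above, not the Cedar Backup source (which also supports an error-message prefix and unordered comparisons via UnorderedList).

    ```python
    import re

    # Minimal sketch of the documented behavior: a list that rejects items
    # not matching a regular expression, optionally rejecting empty strings.
    class RegexList(list):
        def __init__(self, valuesRegex, emptyAllowed=True):
            super().__init__()
            self._pattern = re.compile(valuesRegex)
            self._emptyAllowed = emptyAllowed

        def _check(self, item):
            if item is None:
                raise ValueError("Item may not be None.")
            if not self._emptyAllowed and item == "":
                raise ValueError("Item may not be empty.")
            if not self._pattern.fullmatch(item):
                raise ValueError("Item does not match regular expression.")

        def append(self, item):
            self._check(item)
            super().append(item)

        def insert(self, index, item):
            self._check(item)
            super().insert(index, item)

        def extend(self, seq):
            items = list(seq)          # materialize so we validate before adding
            for item in items:
                self._check(item)
            super().extend(items)
    ```

    As the note above warns, non-string items will typically surface as TypeError or AttributeError from the regular-expression match.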

    CedarBackup3.actions.collect

    Module collect

    source code

    Implements the standard 'collect' action.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Functions
     
    executeCollect(configPath, options, config)
    Executes the collect backup action.
    source code
     
    _collectFile(config, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath)
    Collects a configured collect file.
    source code
     
    _collectDirectory(config, absolutePath, collectMode, archiveMode, ignoreFile, linkDepth, dereference, resetDigest, excludePaths, excludePatterns, recursionLevel)
    Collects a configured collect directory.
    source code
     
    _executeBackup(config, backupList, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath)
    Execute the backup process for the indicated backup list.
    source code
     
    _loadDigest(digestPath)
    Loads the indicated digest path from disk into a dictionary.
    source code
     
    _writeDigest(config, digest, digestPath)
    Writes the digest dictionary to the indicated digest path on disk.
    source code
     
    _getCollectMode(config, item)
    Gets the collect mode that should be used for a collect directory or file.
    source code
     
    _getArchiveMode(config, item)
    Gets the archive mode that should be used for a collect directory or file.
    source code
     
    _getIgnoreFile(config, item)
    Gets the ignore file that should be used for a collect directory or file.
    source code
     
    _getLinkDepth(item)
    Gets the link depth that should be used for a collect directory.
    source code
     
    _getDereference(item)
    Gets the dereference flag that should be used for a collect directory.
    source code
     
    _getRecursionLevel(item)
    Gets the recursion level that should be used for a collect directory.
    source code
     
    _getDigestPath(config, absolutePath)
    Gets the digest path associated with a collect directory or file.
    source code
     
    _getTarfilePath(config, absolutePath, archiveMode)
    Gets the tarfile path (including correct extension) associated with a collect directory.
    source code
     
    _getExclusions(config, collectDir)
    Gets exclusions (file and patterns) associated with a collect directory.
    source code
    Variables
      logger = logging.getLogger("CedarBackup3.log.actions.collect")
      __package__ = 'CedarBackup3.actions'
    Function Details

    executeCollect(configPath, options, config)

    source code 

    Executes the collect backup action.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • TarError - If there is a problem creating a tar file

    Note: When the collect action is complete, we will write a collect indicator to the collect directory, so it's obvious that the collect action has completed. The stage process uses this indicator to decide whether a peer is ready to be staged.
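
    The indicator behavior described in the note amounts to touching an empty file whose existence signals completion. The sketch below illustrates the idea; the helper name and the indicator file name are assumptions for illustration, not the real implementation.

    ```python
    import os

    # Hypothetical sketch: after collect completes, write an empty
    # indicator file so the stage action knows this peer is ready.
    def write_collect_indicator(collectDir, indicatorName="cback.collect"):
        path = os.path.join(collectDir, indicatorName)
        with open(path, "w"):
            pass  # an empty file is enough; its existence is the signal
        return path
    ```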

    _collectFile(config, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath)

    source code 

    Collects a configured collect file.

    The indicated collect file is collected into the indicated tarfile. For files that are collected incrementally, we'll use the indicated digest path and pay attention to the reset digest flag (basically, the reset digest flag ignores any existing digest, but a new digest is always rewritten).

    The caller must decide what the collect and archive modes are, since they can be on both the collect configuration and the collect file itself.

    Parameters:
    • config - Config object.
    • absolutePath - Absolute path of file to collect.
    • tarfilePath - Path to tarfile that should be created.
    • collectMode - Collect mode to use.
    • archiveMode - Archive mode to use.
    • resetDigest - Reset digest flag.
    • digestPath - Path to digest file on disk, if needed.

    _collectDirectory(config, absolutePath, collectMode, archiveMode, ignoreFile, linkDepth, dereference, resetDigest, excludePaths, excludePatterns, recursionLevel)

    source code 

    Collects a configured collect directory.

    The indicated collect directory is collected into the indicated tarfile. For directories that are collected incrementally, we'll use the indicated digest path and pay attention to the reset digest flag (basically, the reset digest flag ignores any existing digest, but a new digest is always rewritten).

    The caller must decide what the collect and archive modes are, since they can be on both the collect configuration and the collect directory itself.

    Parameters:
    • config - Config object.
    • absolutePath - Absolute path of directory to collect.
    • collectMode - Collect mode to use.
    • archiveMode - Archive mode to use.
    • ignoreFile - Ignore file to use.
    • linkDepth - Link depth value to use.
    • dereference - Dereference flag to use.
    • resetDigest - Reset digest flag.
    • excludePaths - List of absolute paths to exclude.
    • excludePatterns - List of patterns to exclude.
    • recursionLevel - Recursion level (zero for no recursion)

    _executeBackup(config, backupList, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath)

    source code 

    Execute the backup process for the indicated backup list.

    This function exists mainly to consolidate functionality between the _collectFile and _collectDirectory functions. Those functions build the backup list; this function causes the backup to execute properly and also manages usage of the digest file on disk as explained in their comments.

    For collect files, the digest file will always just contain the single file that is being backed up. This might be a little wasteful in terms of the number of files that we keep around, but it's consistent and easy to understand.

    Parameters:
    • config - Config object.
    • backupList - List to execute backup for
    • absolutePath - Absolute path of directory or file to collect.
    • tarfilePath - Path to tarfile that should be created.
    • collectMode - Collect mode to use.
    • archiveMode - Archive mode to use.
    • resetDigest - Reset digest flag.
    • digestPath - Path to digest file on disk, if needed.

    _loadDigest(digestPath)

    source code 

    Loads the indicated digest path from disk into a dictionary.

    If we can't load the digest successfully (either because it doesn't exist or for some other reason), then an empty dictionary will be returned - but the condition will be logged.

    Parameters:
    • digestPath - Path to the digest file on disk.
    Returns:
    Dictionary representing contents of digest path.

    _writeDigest(config, digest, digestPath)

    source code 

    Writes the digest dictionary to the indicated digest path on disk.

    If we can't write the digest successfully for any reason, we'll log the condition but won't throw an exception.

    Parameters:
    • config - Config object.
    • digest - Digest dictionary to write to disk.
    • digestPath - Path to the digest file on disk.
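
    The tolerant load/write behavior documented for these two functions can be sketched as follows. This is a simplified illustration assuming a pickled dictionary on disk (the file format and function names are assumptions, not the real code, which also logs each failure):

    ```python
    import pickle

    # Sketch: a load failure yields an empty dictionary rather than an
    # exception; a write failure is swallowed (the real code logs it).
    def load_digest(digestPath):
        try:
            with open(digestPath, "rb") as f:
                return pickle.load(f)
        except (OSError, EOFError, pickle.PickleError):
            return {}   # missing or unreadable digest: start fresh

    def write_digest(digest, digestPath):
        try:
            with open(digestPath, "wb") as f:
                pickle.dump(digest, f, protocol=0)
        except OSError:
            pass        # the real code logs the condition instead of raising
    ```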

    _getCollectMode(config, item)

    source code 

    Gets the collect mode that should be used for a collect directory or file. If possible, use the one on the file or directory, otherwise take from collect section.

    Parameters:
    • config - Config object.
    • item - CollectFile or CollectDir object
    Returns:
    Collect mode to use.

    _getArchiveMode(config, item)

    source code 

    Gets the archive mode that should be used for a collect directory or file. If possible, use the one on the file or directory, otherwise take from collect section.

    Parameters:
    • config - Config object.
    • item - CollectFile or CollectDir object
    Returns:
    Archive mode to use.

    _getIgnoreFile(config, item)

    source code 

    Gets the ignore file that should be used for a collect directory or file. If possible, use the one on the file or directory, otherwise take from collect section.

    Parameters:
    • config - Config object.
    • item - CollectFile or CollectDir object
    Returns:
    Ignore file to use.

    _getLinkDepth(item)

    source code 

    Gets the link depth that should be used for a collect directory. If possible, use the one on the directory, otherwise set a value of 0 (zero).

    Parameters:
    • item - CollectDir object
    Returns:
    Link depth to use.

    _getDereference(item)

    source code 

    Gets the dereference flag that should be used for a collect directory. If possible, use the one on the directory, otherwise set a value of False.

    Parameters:
    • item - CollectDir object
    Returns:
    Dereference flag to use.

    _getRecursionLevel(item)

    source code 

    Gets the recursion level that should be used for a collect directory. If possible, use the one on the directory, otherwise set a value of 0 (zero).

    Parameters:
    • item - CollectDir object
    Returns:
    Recursion level to use.

    _getDigestPath(config, absolutePath)

    source code 

    Gets the digest path associated with a collect directory or file.

    Parameters:
    • config - Config object.
    • absolutePath - Absolute path to generate digest for
    Returns:
    Absolute path to the digest associated with the collect directory or file.

    _getTarfilePath(config, absolutePath, archiveMode)

    source code 

    Gets the tarfile path (including correct extension) associated with a collect directory.

    Parameters:
    • config - Config object.
    • absolutePath - Absolute path to generate tarfile for
    • archiveMode - Archive mode to use for this tarfile.
    Returns:
    Absolute path to the tarfile associated with the collect directory.

    _getExclusions(config, collectDir)

    source code 

    Gets exclusions (file and patterns) associated with a collect directory.

    The returned files value is a list of absolute paths to be excluded from the backup for a given directory. It is derived from the collect configuration absolute exclude paths and the collect directory's absolute and relative exclude paths.

    The returned patterns value is a list of patterns to be excluded from the backup for a given directory. It is derived from the list of patterns from the collect configuration and from the collect directory itself.

    Parameters:
    • config - Config object.
    • collectDir - Collect directory object.
    Returns:
    Tuple (files, patterns) indicating what to exclude.
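
    The derivation described above can be sketched as a simple merge: absolute excludes from both levels plus the directory's relative excludes resolved against its path, and patterns from both levels concatenated. The function signature below is an assumption for illustration, not the real API.

    ```python
    import os

    # Hypothetical sketch of the (files, patterns) derivation described above.
    def get_exclusions(configAbsExcludes, dirAbsExcludes, dirRelExcludes,
                       configPatterns, dirPatterns, dirPath):
        files = []
        files.extend(configAbsExcludes)
        files.extend(dirAbsExcludes)
        files.extend(os.path.join(dirPath, rel) for rel in dirRelExcludes)
        patterns = list(configPatterns) + list(dirPatterns)
        return files, patterns
    ```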

    CedarBackup3.writers.cdwriter

    Source Code for Module CedarBackup3.writers.cdwriter

       1  # -*- coding: iso-8859-1 -*- 
       2  # vim: set ft=python ts=3 sw=3 expandtab: 
       3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
       4  # 
       5  #              C E D A R 
       6  #          S O L U T I O N S       "Software done right." 
       7  #           S O F T W A R E 
       8  # 
       9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      10  # 
      11  # Copyright (c) 2004-2008,2010,2015 Kenneth J. Pronovici. 
      12  # All rights reserved. 
      13  # 
      14  # This program is free software; you can redistribute it and/or 
      15  # modify it under the terms of the GNU General Public License, 
      16  # Version 2, as published by the Free Software Foundation. 
      17  # 
      18  # This program is distributed in the hope that it will be useful, 
      19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
      20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
      21  # 
      22  # Copies of the GNU General Public License are available from 
      23  # the Free Software Foundation website, http://www.gnu.org/. 
      24  # 
      25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      26  # 
      27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
      28  # Language : Python 3 (>= 3.4) 
      29  # Project  : Cedar Backup, release 3 
      30  # Purpose  : Provides functionality related to CD writer devices. 
      31  # 
      32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      33   
      34  ######################################################################## 
      35  # Module documentation 
      36  ######################################################################## 
      37   
      38  """ 
      39  Provides functionality related to CD writer devices. 
      40   
      41  @sort: MediaDefinition, MediaCapacity, CdWriter, 
      42         MEDIA_CDRW_74, MEDIA_CDR_74, MEDIA_CDRW_80, MEDIA_CDR_80 
      43   
      44  @var MEDIA_CDRW_74: Constant representing 74-minute CD-RW media. 
      45  @var MEDIA_CDR_74: Constant representing 74-minute CD-R media. 
      46  @var MEDIA_CDRW_80: Constant representing 80-minute CD-RW media. 
      47  @var MEDIA_CDR_80: Constant representing 80-minute CD-R media. 
      48   
      49  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
      50  """ 

########################################################################
# Imported modules
########################################################################

# System modules
import os
import re
import logging
import tempfile
import time

# Cedar Backup modules
from CedarBackup3.util import resolveCommand, executeCommand
from CedarBackup3.util import convertSize, displayBytes, encodePath
from CedarBackup3.util import UNIT_SECTORS, UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES
from CedarBackup3.writers.util import validateDevice, validateScsiId, validateDriveSpeed
from CedarBackup3.writers.util import IsoImage


########################################################################
# Module-wide constants and variables
########################################################################

logger = logging.getLogger("CedarBackup3.log.writers.cdwriter")

MEDIA_CDRW_74  = 1
MEDIA_CDR_74   = 2
MEDIA_CDRW_80  = 3
MEDIA_CDR_80   = 4

CDRECORD_COMMAND = [ "cdrecord", ]
EJECT_COMMAND    = [ "eject", ]
MKISOFS_COMMAND  = [ "mkisofs", ]


########################################################################
# MediaDefinition class definition
########################################################################

class MediaDefinition(object):

   """
   Class encapsulating information about CD media definitions.

   The following media types are accepted:

      - C{MEDIA_CDR_74}: 74-minute CD-R media (650 MB capacity)
      - C{MEDIA_CDRW_74}: 74-minute CD-RW media (650 MB capacity)
      - C{MEDIA_CDR_80}: 80-minute CD-R media (700 MB capacity)
      - C{MEDIA_CDRW_80}: 80-minute CD-RW media (700 MB capacity)

   Note that all of the capacities associated with a media definition are in
   terms of ISO sectors (C{util.ISO_SECTOR_SIZE}).

   @sort: __init__, mediaType, rewritable, initialLeadIn, leadIn, capacity
   """

   def __init__(self, mediaType):
      """
      Creates a media definition for the indicated media type.
      @param mediaType: Type of the media, as discussed above.
      @raise ValueError: If the media type is unknown or unsupported.
      """
      self._mediaType = None
      self._rewritable = False
      self._initialLeadIn = 0.0
      self._leadIn = 0.0
      self._capacity = 0.0
      self._setValues(mediaType)

   def _setValues(self, mediaType):
      """
      Sets values based on media type.
      @param mediaType: Type of the media, as discussed above.
      @raise ValueError: If the media type is unknown or unsupported.
      """
      if mediaType not in [MEDIA_CDR_74, MEDIA_CDRW_74, MEDIA_CDR_80, MEDIA_CDRW_80]:
         raise ValueError("Invalid media type %d." % mediaType)
      self._mediaType = mediaType
      self._initialLeadIn = 11400.0  # per cdrecord's documentation
      self._leadIn = 6900.0          # per cdrecord's documentation
      if self._mediaType == MEDIA_CDR_74:
         self._rewritable = False
         self._capacity = convertSize(650.0, UNIT_MBYTES, UNIT_SECTORS)
      elif self._mediaType == MEDIA_CDRW_74:
         self._rewritable = True
         self._capacity = convertSize(650.0, UNIT_MBYTES, UNIT_SECTORS)
      elif self._mediaType == MEDIA_CDR_80:
         self._rewritable = False
         self._capacity = convertSize(700.0, UNIT_MBYTES, UNIT_SECTORS)
      elif self._mediaType == MEDIA_CDRW_80:
         self._rewritable = True
         self._capacity = convertSize(700.0, UNIT_MBYTES, UNIT_SECTORS)

   def _getMediaType(self):
      """
      Property target used to get the media type value.
      """
      return self._mediaType

   def _getRewritable(self):
      """
      Property target used to get the rewritable flag value.
      """
      return self._rewritable

   def _getInitialLeadIn(self):
      """
      Property target used to get the initial lead-in value.
      """
      return self._initialLeadIn

   def _getLeadIn(self):
      """
      Property target used to get the lead-in value.
      """
      return self._leadIn

   def _getCapacity(self):
      """
      Property target used to get the capacity value.
      """
      return self._capacity

   mediaType = property(_getMediaType, None, None, doc="Configured media type.")
   rewritable = property(_getRewritable, None, None, doc="Boolean indicating whether the media is rewritable.")
   initialLeadIn = property(_getInitialLeadIn, None, None, doc="Initial lead-in required for first image written to media.")
   leadIn = property(_getLeadIn, None, None, doc="Lead-in required on successive images written to media.")
   capacity = property(_getCapacity, None, None, doc="Total capacity of the media before any required lead-in.")
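# The capacity values above are stored in ISO sectors rather than bytes.  The
# following is a minimal standalone sketch of that conversion, for reference
# only; ISO_SECTOR_SIZE (2048 bytes) is the standard ISO-9660 sector size, and
# mbytesToSectors() is a hypothetical helper standing in for util.convertSize().

```python
ISO_SECTOR_SIZE = 2048.0  # bytes per ISO-9660 sector

def mbytesToSectors(mbytes):
    # Equivalent of convertSize(mbytes, UNIT_MBYTES, UNIT_SECTORS):
    # megabytes -> bytes -> sectors.
    return (mbytes * 1024.0 * 1024.0) / ISO_SECTOR_SIZE

print(mbytesToSectors(650.0))  # 332800.0 sectors for 74-minute media
print(mbytesToSectors(700.0))  # 358400.0 sectors for 80-minute media
```

# This is why MediaDefinition.capacity for 650 MB media comes out to 332800
# sectors: each sector holds 2048 bytes of data.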


########################################################################
# MediaCapacity class definition
########################################################################

class MediaCapacity(object):

   """
   Class encapsulating information about CD media capacity.

   Space used includes the required media lead-in (unless the disk is unused).
   Space available attempts to provide a picture of how many bytes are
   available for data storage, including any required lead-in.

   The boundaries value is either C{None} (if multisession discs are not
   supported or if the disc has no boundaries) or in exactly the form provided
   by C{cdrecord -msinfo}.  It can be passed as-is to the C{IsoImage} class.

   @sort: __init__, bytesUsed, bytesAvailable, boundaries, totalCapacity, utilized
   """

   def __init__(self, bytesUsed, bytesAvailable, boundaries):
      """
      Initializes a capacity object.
      @raise IndexError: If the boundaries tuple does not have enough elements.
      @raise ValueError: If the boundaries values are not integers.
      @raise ValueError: If the bytes used and available values are not floats.
      """
      self._bytesUsed = float(bytesUsed)
      self._bytesAvailable = float(bytesAvailable)
      if boundaries is None:
         self._boundaries = None
      else:
         self._boundaries = (int(boundaries[0]), int(boundaries[1]))

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return "utilized %s of %s (%.2f%%)" % (displayBytes(self.bytesUsed), displayBytes(self.totalCapacity), self.utilized)

   def _getBytesUsed(self):
      """
      Property target to get the bytes-used value.
      """
      return self._bytesUsed

   def _getBytesAvailable(self):
      """
      Property target to get the bytes-available value.
      """
      return self._bytesAvailable

   def _getBoundaries(self):
      """
      Property target to get the boundaries tuple.
      """
      return self._boundaries

   def _getTotalCapacity(self):
      """
      Property target to get the total capacity (used + available).
      """
      return self.bytesUsed + self.bytesAvailable

   def _getUtilized(self):
      """
      Property target to get the percent of capacity which is utilized.
      """
      if self.bytesAvailable <= 0.0:
         return 100.0
      elif self.bytesUsed <= 0.0:
         return 0.0
      return (self.bytesUsed / self.totalCapacity) * 100.0

   bytesUsed = property(_getBytesUsed, None, None, doc="Space used on disc, in bytes.")
   bytesAvailable = property(_getBytesAvailable, None, None, doc="Space available on disc, in bytes.")
   boundaries = property(_getBoundaries, None, None, doc="Session disc boundaries, in terms of ISO sectors.")
   totalCapacity = property(_getTotalCapacity, None, None, doc="Total capacity of the disc, in bytes.")
   utilized = property(_getUtilized, None, None, doc="Percentage of the total capacity which is utilized.")
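# To make the percentage arithmetic above concrete, here is a minimal sketch of
# the totalCapacity/utilized calculation, independent of the class.  The
# utilization() helper is illustrative only, not part of Cedar Backup.

```python
def utilization(bytesUsed, bytesAvailable):
    # Mirrors MediaCapacity._getUtilized(): total capacity is used + available,
    # and utilization is the used fraction expressed as a percentage.
    if bytesAvailable <= 0.0:
        return 100.0
    elif bytesUsed <= 0.0:
        return 0.0
    return (bytesUsed / (bytesUsed + bytesAvailable)) * 100.0

print(utilization(0.0, 650.0))    # 0.0 (empty disc)
print(utilization(100.0, 300.0))  # 25.0 (one quarter used)
print(utilization(100.0, 0.0))    # 100.0 (nothing left)
```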


########################################################################
# _ImageProperties class definition
########################################################################

class _ImageProperties(object):
   """
   Simple value object to hold image properties for C{CdWriter}.
   """
   def __init__(self):
      self.newDisc = False
      self.tmpdir = None
      self.mediaLabel = None
      self.entries = None  # dict mapping path to graft point


########################################################################
# CdWriter class definition
########################################################################

class CdWriter(object):

   ######################
   # Class documentation
   ######################

   """
   Class representing a device that knows how to write CD media.

   Summary
   =======

   This is a class representing a device that knows how to write CD media.  It
   provides common operations for the device, such as ejecting the media,
   writing an ISO image to the media, or checking for the current media
   capacity.  It also provides a place to store device attributes, such as
   whether the device supports writing multisession discs, etc.

   This class is implemented in terms of the C{eject} and C{cdrecord}
   programs, both of which should be available on most UN*X platforms.

   Image Writer Interface
   ======================

   The following methods make up the "image writer" interface shared
   with other kinds of writers (such as DVD writers)::

      __init__
      initializeImage()
      addImageEntry()
      writeImage()
      setImageNewDisc()
      retrieveCapacity()
      getEstimatedImageSize()

   Only these methods will be used by other Cedar Backup functionality
   that expects a compatible image writer.

   The media attribute is also assumed to be available.

   Media Types
   ===========

   This class knows how to write to two different kinds of media, represented
   by the following constants:

      - C{MEDIA_CDR_74}: 74-minute CD-R media (650 MB capacity)
      - C{MEDIA_CDRW_74}: 74-minute CD-RW media (650 MB capacity)
      - C{MEDIA_CDR_80}: 80-minute CD-R media (700 MB capacity)
      - C{MEDIA_CDRW_80}: 80-minute CD-RW media (700 MB capacity)

   Most hardware can read and write both 74-minute and 80-minute CD-R and
   CD-RW media.  Some older drives may only be able to write CD-R media.
   The difference between the two is that CD-RW media can be rewritten
   (erased), while CD-R media cannot be.

   I do not support any other configurations for a couple of reasons.  The
   first is that I've never tested any other kind of media.  The second is
   that anything other than 74 or 80 minute is apparently non-standard.

   Device Attributes vs. Media Attributes
   ======================================

   A given writer instance has two different kinds of attributes associated
   with it, which I call device attributes and media attributes.  Device
   attributes are things which can be determined without looking at the
   media, such as whether the drive supports writing multisession discs or
   has a tray.  Media attributes are attributes which vary depending on the
   state of the media, such as the remaining capacity on a disc.  In
   general, device attributes are available via instance variables and are
   constant over the life of an object, while media attributes can be
   retrieved through method calls.

   Talking to Hardware
   ===================

   This class needs to talk to CD writer hardware in two different ways:
   through cdrecord to actually write to the media, and through the
   filesystem to do things like open and close the tray.

   Historically, CdWriter has interacted with cdrecord using the scsiId
   attribute, and with most other utilities using the device attribute.
   This changed somewhat in Cedar Backup 2.9.0.

   When Cedar Backup was first written, the only way to interact with
   cdrecord was by using a SCSI device id.  IDE devices were mapped to
   pseudo-SCSI devices through the kernel.  Later, extended SCSI "methods"
   arrived, and it became common to see C{ATA:1,0,0} or C{ATAPI:0,0,0} as a
   way to address IDE hardware.  By late 2006, C{ATA} and C{ATAPI} had
   apparently been deprecated in favor of just addressing the IDE device
   directly by name, i.e. C{/dev/cdrw}.

   Because of this latest development, it no longer makes sense to require a
   CdWriter to be created with a SCSI id -- there might not be one.  So, the
   passed-in SCSI id is now optional.  Also, there is now a hardwareId
   attribute.  This attribute is filled in with either the SCSI id (if
   provided) or the device (otherwise).  The hardware id is the value that
   will be passed to cdrecord in the C{dev=} argument.

   Testing
   =======

   It's rather difficult to test this code in an automated fashion, even if
   you have access to a physical CD writer drive.  It's even more difficult
   to test it if you are running on some build daemon (think of a Debian
   autobuilder) which can't be expected to have any hardware or any media
   that you could write to.

   Because of this, much of the implementation below is in terms of static
   methods that are supposed to take defined actions based on their
   arguments.  Public methods are then implemented in terms of a series of
   calls to simplistic static methods.  This way, we can test as much as
   possible of the functionality via testing the static methods, while
   hoping that if the static methods are called appropriately, things will
   work properly.  It's not perfect, but it's much better than no testing at
   all.

   @sort: __init__, isRewritable, _retrieveProperties, retrieveCapacity, _getBoundaries,
          _calculateCapacity, openTray, closeTray, refreshMedia, writeImage,
          _blankMedia, _parsePropertiesOutput, _parseBoundariesOutput,
          _buildOpenTrayArgs, _buildCloseTrayArgs, _buildPropertiesArgs,
          _buildBoundariesArgs, _buildBlankArgs, _buildWriteArgs,
          device, scsiId, hardwareId, driveSpeed, media, deviceType, deviceVendor,
          deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject,
          initializeImage, addImageEntry, writeImage, setImageNewDisc, getEstimatedImageSize
   """

   ##############
   # Constructor
   ##############

   def __init__(self, device, scsiId=None, driveSpeed=None,
                mediaType=MEDIA_CDRW_74, noEject=False,
                refreshMediaDelay=0, ejectDelay=0, unittest=False):
      """
      Initializes a CD writer object.

      The current user must have write access to the device at the time the
      object is instantiated, or an exception will be thrown.  However, no
      media-related validation is done, and in fact there is no need for any
      media to be in the drive until one of the other media attribute-related
      methods is called.

      The various instance variables such as C{deviceType}, C{deviceVendor},
      etc. might be C{None}, if we're unable to parse this specific information
      from the C{cdrecord} output.  This information is just for reference.

      The SCSI id is optional, but the device path is required.  If the SCSI id
      is passed in, then the hardware id attribute will be taken from the SCSI
      id.  Otherwise, the hardware id will be taken from the device.

      If cdrecord improperly detects whether your writer device has a tray and
      can be safely opened and closed, then pass in C{noEject=True}.  This
      will override the properties and the device will never be ejected.

      @note: The C{unittest} parameter should never be set to C{True}
      outside of Cedar Backup code.  It is intended for use in unit testing
      Cedar Backup internals and has no other sensible purpose.

      @param device: Filesystem device associated with this writer.
      @type device: Absolute path to a filesystem device, i.e. C{/dev/cdrw}

      @param scsiId: SCSI id for the device (optional).
      @type scsiId: If provided, SCSI id in the form C{[<method>:]scsibus,target,lun}

      @param driveSpeed: Speed at which the drive writes.
      @type driveSpeed: Use C{2} for 2x device, etc. or C{None} to use device default.

      @param mediaType: Type of the media that is assumed to be in the drive.
      @type mediaType: One of the valid media types, as discussed above.

      @param noEject: Overrides properties to indicate that the device does not support eject.
      @type noEject: Boolean true/false

      @param refreshMediaDelay: Refresh media delay to use, if any
      @type refreshMediaDelay: Number of seconds, an integer >= 0

      @param ejectDelay: Eject delay to use, if any
      @type ejectDelay: Number of seconds, an integer >= 0

      @param unittest: Turns off certain validations, for use in unit testing.
      @type unittest: Boolean true/false

      @raise ValueError: If the device is not valid for some reason.
      @raise ValueError: If the SCSI id is not in a valid form.
      @raise ValueError: If the drive speed is not an integer >= 1.
      @raise IOError: If device properties could not be read for some reason.
      """
      self._image = None  # optionally filled in by initializeImage()
      self._device = validateDevice(device, unittest)
      self._scsiId = validateScsiId(scsiId)
      self._driveSpeed = validateDriveSpeed(driveSpeed)
      self._media = MediaDefinition(mediaType)
      self._noEject = noEject
      self._refreshMediaDelay = refreshMediaDelay
      self._ejectDelay = ejectDelay
      if not unittest:
         (self._deviceType,
          self._deviceVendor,
          self._deviceId,
          self._deviceBufferSize,
          self._deviceSupportsMulti,
          self._deviceHasTray,
          self._deviceCanEject) = self._retrieveProperties()


   #############
   # Properties
   #############

   def _getDevice(self):
      """
      Property target used to get the device value.
      """
      return self._device

   def _getScsiId(self):
      """
      Property target used to get the SCSI id value.
      """
      return self._scsiId

   def _getHardwareId(self):
      """
      Property target used to get the hardware id value.
      """
      if self._scsiId is None:
         return self._device
      return self._scsiId
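# The fallback in _getHardwareId() is the crux of the "Talking to Hardware"
# discussion in the class docstring: a supplied SCSI id wins, otherwise the
# device path is used as cdrecord's dev= value.  A standalone sketch (the
# hardwareId() function below is illustrative, not part of the class):

```python
def hardwareId(scsiId, device):
    # Prefer the SCSI id when one was supplied; otherwise fall back to the
    # device path.  The result is what would be passed to cdrecord as dev=.
    if scsiId is None:
        return device
    return scsiId

print(hardwareId("ATA:1,0,0", "/dev/cdrw"))  # ATA:1,0,0
print(hardwareId(None, "/dev/cdrw"))         # /dev/cdrw
```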

   def _getDriveSpeed(self):
      """
      Property target used to get the drive speed.
      """
      return self._driveSpeed

   def _getMedia(self):
      """
      Property target used to get the media description.
      """
      return self._media

   def _getDeviceType(self):
      """
      Property target used to get the device type.
      """
      return self._deviceType

   def _getDeviceVendor(self):
      """
      Property target used to get the device vendor.
      """
      return self._deviceVendor

   def _getDeviceId(self):
      """
      Property target used to get the device id.
      """
      return self._deviceId

   def _getDeviceBufferSize(self):
      """
      Property target used to get the device buffer size.
      """
      return self._deviceBufferSize

   def _getDeviceSupportsMulti(self):
      """
      Property target used to get the device-supports-multi flag.
      """
      return self._deviceSupportsMulti

   def _getDeviceHasTray(self):
      """
      Property target used to get the device-has-tray flag.
      """
      return self._deviceHasTray

   def _getDeviceCanEject(self):
      """
      Property target used to get the device-can-eject flag.
      """
      return self._deviceCanEject

   def _getRefreshMediaDelay(self):
      """
      Property target used to get the configured refresh media delay, in seconds.
      """
      return self._refreshMediaDelay

   def _getEjectDelay(self):
      """
      Property target used to get the configured eject delay, in seconds.
      """
      return self._ejectDelay

   device = property(_getDevice, None, None, doc="Filesystem device name for this writer.")
   scsiId = property(_getScsiId, None, None, doc="SCSI id for the device, in the form C{[<method>:]scsibus,target,lun}.")
   hardwareId = property(_getHardwareId, None, None, doc="Hardware id for this writer, either SCSI id or device path.")
   driveSpeed = property(_getDriveSpeed, None, None, doc="Speed at which the drive writes.")
   media = property(_getMedia, None, None, doc="Definition of media that is expected to be in the device.")
   deviceType = property(_getDeviceType, None, None, doc="Type of the device, as returned from C{cdrecord -prcap}.")
   deviceVendor = property(_getDeviceVendor, None, None, doc="Vendor of the device, as returned from C{cdrecord -prcap}.")
   deviceId = property(_getDeviceId, None, None, doc="Device identification, as returned from C{cdrecord -prcap}.")
   deviceBufferSize = property(_getDeviceBufferSize, None, None, doc="Size of the device's write buffer, in bytes.")
   deviceSupportsMulti = property(_getDeviceSupportsMulti, None, None, doc="Indicates whether device supports multisession discs.")
   deviceHasTray = property(_getDeviceHasTray, None, None, doc="Indicates whether the device has a media tray.")
   deviceCanEject = property(_getDeviceCanEject, None, None, doc="Indicates whether the device supports ejecting its media.")
   refreshMediaDelay = property(_getRefreshMediaDelay, None, None, doc="Refresh media delay, in seconds.")
   ejectDelay = property(_getEjectDelay, None, None, doc="Eject delay, in seconds.")


   #################################################
   # Methods related to device and media attributes
   #################################################

   def isRewritable(self):
      """Indicates whether the media is rewritable per configuration."""
      return self._media.rewritable

   def _retrieveProperties(self):
      """
      Retrieves properties for a device from C{cdrecord}.

      The results are returned as a tuple of the object device attributes as
      returned from L{_parsePropertiesOutput}: C{(deviceType, deviceVendor,
      deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray,
      deviceCanEject)}.

      @return: Results tuple as described above.
      @raise IOError: If there is a problem talking to the device.
      """
      args = CdWriter._buildPropertiesArgs(self.hardwareId)
      command = resolveCommand(CDRECORD_COMMAND)
      (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
      if result != 0:
         raise IOError("Error (%d) executing cdrecord command to get properties." % result)
      return CdWriter._parsePropertiesOutput(output)

   def retrieveCapacity(self, entireDisc=False, useMulti=True):
      """
      Retrieves capacity for the current media in terms of a C{MediaCapacity}
      object.

      If C{entireDisc} is passed in as C{True} the capacity will be for the
      entire disc, as if it were to be rewritten from scratch.  If the drive
      does not support writing multisession discs or if C{useMulti} is passed
      in as C{False}, the capacity will also be as if the disc were to be
      rewritten from scratch, but the indicated boundaries value will be
      C{None}.  The same will happen if the disc cannot be read for some
      reason.  Otherwise, the capacity (including the boundaries) will
      represent whatever space remains on the disc to be filled by future
      sessions.

      @param entireDisc: Indicates whether to return capacity for entire disc.
      @type entireDisc: Boolean true/false

      @param useMulti: Indicates whether a multisession disc should be assumed, if possible.
      @type useMulti: Boolean true/false

      @return: C{MediaCapacity} object describing the capacity of the media.
      @raise IOError: If the media could not be read for some reason.
      """
      boundaries = self._getBoundaries(entireDisc, useMulti)
      return CdWriter._calculateCapacity(self._media, boundaries)

   def _getBoundaries(self, entireDisc=False, useMulti=True):
      """
      Gets the ISO boundaries for the media.

      If C{entireDisc} is passed in as C{True} the boundaries will be C{None},
      as if the disc were to be rewritten from scratch.  If the drive does not
      support writing multisession discs, the returned value will be C{None}.
      The same will happen if the disc can't be read for some reason.
      Otherwise, the returned value will represent the boundaries of the
      disc's current contents.

      The results are returned as a tuple of (lower, upper) as needed by the
      C{IsoImage} class.  Note that these values are in terms of ISO sectors,
      not bytes.  Clients should generally consider the boundaries value
      opaque, however.

      @param entireDisc: Indicates whether to return capacity for entire disc.
      @type entireDisc: Boolean true/false

      @param useMulti: Indicates whether a multisession disc should be assumed, if possible.
      @type useMulti: Boolean true/false

      @return: Boundaries tuple or C{None}, as described above.
      @raise IOError: If the media could not be read for some reason.
      """
      if not self._deviceSupportsMulti:
         logger.debug("Device does not support multisession discs; returning boundaries None.")
         return None
      elif not useMulti:
         logger.debug("Use multisession flag is False; returning boundaries None.")
         return None
      elif entireDisc:
         logger.debug("Entire disc flag is True; returning boundaries None.")
         return None
      else:
         args = CdWriter._buildBoundariesArgs(self.hardwareId)
         command = resolveCommand(CDRECORD_COMMAND)
         (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
         if result != 0:
            logger.debug("Error (%d) executing cdrecord command to get capacity.", result)
            logger.warning("Unable to read disc (might not be initialized); returning boundaries of None.")
            return None
         boundaries = CdWriter._parseBoundariesOutput(output)
         if boundaries is None:
            logger.debug("Returning disc boundaries: None")
         else:
            logger.debug("Returning disc boundaries: (%d, %d)", boundaries[0], boundaries[1])
         return boundaries

   @staticmethod
   def _calculateCapacity(media, boundaries):
      """
      Calculates capacity for the media in terms of boundaries.

      If C{boundaries} is C{None} or the upper bound is 0 (zero), then the
      capacity will be for the entire disc minus the initial lead-in.
      Otherwise, capacity will be as if the caller wanted to add an additional
      session to the end of the existing data on the disc.

      @param media: C{MediaDefinition} object describing the media capacity.
      @param boundaries: Session boundaries as returned from L{_getBoundaries}.

      @return: C{MediaCapacity} object describing the capacity of the media.
      """
      if boundaries is None or boundaries[1] == 0:
         logger.debug("Capacity calculations are based on a complete disc rewrite.")
         sectorsAvailable = media.capacity - media.initialLeadIn
         if sectorsAvailable < 0:
            sectorsAvailable = 0.0
         bytesUsed = 0.0
         bytesAvailable = convertSize(sectorsAvailable, UNIT_SECTORS, UNIT_BYTES)
      else:
         logger.debug("Capacity calculations are based on a new ISO session.")
         sectorsAvailable = media.capacity - boundaries[1] - media.leadIn
         if sectorsAvailable < 0:
            sectorsAvailable = 0.0
         bytesUsed = convertSize(boundaries[1], UNIT_SECTORS, UNIT_BYTES)
         bytesAvailable = convertSize(sectorsAvailable, UNIT_SECTORS, UNIT_BYTES)
      logger.debug("Used [%s], available [%s].", displayBytes(bytesUsed), displayBytes(bytesAvailable))
      return MediaCapacity(bytesUsed, bytesAvailable, boundaries)
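# A standalone sketch of the multisession branch of _calculateCapacity():
# capacities are in ISO sectors (2048 bytes each), and a follow-on session
# pays the smaller 6900-sector lead-in after the existing data.  The constants
# mirror the 650 MB media definition above; remainingBytes() is hypothetical.

```python
SECTOR = 2048.0
CAPACITY_650MB = 332800.0  # 650 MB media, in sectors
LEAD_IN = 6900.0           # sectors, per cdrecord's documentation

def remainingBytes(upperBoundary):
    # Space left for a new session: total capacity minus what's already
    # written minus the per-session lead-in, clamped at zero.
    sectorsAvailable = CAPACITY_650MB - upperBoundary - LEAD_IN
    if sectorsAvailable < 0:
        sectorsAvailable = 0.0
    return sectorsAvailable * SECTOR

print(remainingBytes(100000.0))  # 462643200.0 bytes left after 100000 sectors
print(remainingBytes(400000.0))  # 0.0 -- disc is already over capacity
```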


   #######################################################
   # Methods used for working with the internal ISO image
   #######################################################

   def initializeImage(self, newDisc, tmpdir, mediaLabel=None):
      """
      Initializes the writer's associated ISO image.

      This method initializes the C{image} instance variable so that the caller
      can use the C{addImageEntry} method.  Once entries have been added, the
      C{writeImage} method can be called with no arguments.

      @param newDisc: Indicates whether the disc should be re-initialized
      @type newDisc: Boolean true/false.

      @param tmpdir: Temporary directory to use if needed
      @type tmpdir: String representing a directory path on disk

      @param mediaLabel: Media label to be applied to the image, if any
      @type mediaLabel: String, no more than 25 characters long
      """
      self._image = _ImageProperties()
      self._image.newDisc = newDisc
      self._image.tmpdir = encodePath(tmpdir)
      self._image.mediaLabel = mediaLabel
      self._image.entries = {}  # mapping from path to graft point (if any)

   def addImageEntry(self, path, graftPoint):
      """
      Adds a filepath entry to the writer's associated ISO image.

      The contents of the filepath -- but not the path itself -- will be added
      to the image at the indicated graft point.  If you don't want to use a
      graft point, just pass C{None}.

      @note: Before calling this method, you must call L{initializeImage}.

      @param path: File or directory to be added to the image
      @type path: String representing a path on disk

      @param graftPoint: Graft point to be used when adding this entry
      @type graftPoint: String representing a graft point path, as described above

      @raise ValueError: If initializeImage() was not previously called
      """
      if self._image is None:
         raise ValueError("Must call initializeImage() before using this method.")
      if not os.path.exists(path):
         raise ValueError("Path [%s] does not exist." % path)
      self._image.entries[path] = graftPoint
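# A minimal sketch of the bookkeeping contract behind initializeImage() and
# addImageEntry(): entries are a dict mapping each source path to its graft
# point (None means the image root), and adding an entry before initialization
# is an error.  ImageSketch and the paths below are hypothetical.

```python
class ImageSketch(object):
    def __init__(self):
        self.entries = None  # populated by initialize()

    def initialize(self):
        self.entries = {}

    def addEntry(self, path, graftPoint):
        # Same guard as CdWriter.addImageEntry(): refuse to add entries
        # until the image has been initialized.
        if self.entries is None:
            raise ValueError("Must call initialize() before using this method.")
        self.entries[path] = graftPoint

image = ImageSketch()
image.initialize()
image.addEntry("/home/user/docs", "backup/docs")  # grafted under backup/docs
image.addEntry("/etc", None)                      # contents land at image root
print(sorted(image.entries.items()))
```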

   def setImageNewDisc(self, newDisc):
      """
      Resets (overrides) the newDisc flag on the internal image.
      @param newDisc: New disc flag to set
      @raise ValueError: If initializeImage() was not previously called
      """
      if self._image is None:
         raise ValueError("Must call initializeImage() before using this method.")
      self._image.newDisc = newDisc

   def getEstimatedImageSize(self):
      """
      Gets the estimated size of the image associated with the writer.
      @return: Estimated size of the image, in bytes.
      @raise IOError: If there is a problem calling C{mkisofs}.
      @raise ValueError: If initializeImage() was not previously called
      """
      if self._image is None:
         raise ValueError("Must call initializeImage() before using this method.")
      image = IsoImage()
      for path in list(self._image.entries.keys()):
         image.addEntry(path, self._image.entries[path], override=False, contentsOnly=True)
      return image.getEstimatedSize()


   ######################################
   # Methods which expose device actions
   ######################################

   def openTray(self):
      """
      Opens the device's tray and leaves it open.

      This only works if the device has a tray and supports ejecting its media.
      We have no way to know if the tray is currently open or closed, so we
      just send the appropriate command and hope for the best.  If the device
      does not have a tray or does not support ejecting its media, then we do
      nothing.

      If the writer was constructed with C{noEject=True}, then this is a no-op.

      Starting with Debian wheezy on my backup hardware, I started seeing
      consistent problems with the eject command.  I couldn't tell whether
      these problems were due to the device management system or to the new
      kernel (3.2.0).  Initially, I saw simple eject failures, possibly because
      I was opening and closing the tray too quickly.  I worked around that
      behavior with the new ejectDelay flag.

      Later, I sometimes ran into issues after writing an image to a disc:
      eject would give errors like "unable to eject, last error: Inappropriate
      ioctl for device".  Various sources online (like Ubuntu bug #875543)
      suggested that the drive was being locked somehow, and that the
      workaround was to run 'eject -i off' to unlock it.  Sure enough, that
      fixed the problem for me, so now it's a normal error-handling strategy.

      @raise IOError: If there is an error talking to the device.
      """
      if not self._noEject:
         if self._deviceHasTray and self._deviceCanEject:
            args = CdWriter._buildOpenTrayArgs(self._device)
            command = resolveCommand(EJECT_COMMAND)
            result = executeCommand(command, args)[0]
            if result != 0:
               logger.debug("Eject failed; attempting kludge of unlocking the tray before retrying.")
               self.unlockTray()
               result = executeCommand(command, args)[0]
               if result != 0:
                  raise IOError("Error (%d) executing eject command to open tray (failed even after unlocking tray)." % result)
               logger.debug("Kludge was apparently successful.")
            if self.ejectDelay is not None:
               logger.debug("Per configuration, sleeping %d seconds after opening tray.", self.ejectDelay)
               time.sleep(self.ejectDelay)
    def unlockTray(self):
        """
        Unlocks the device's tray.
        @raise IOError: If there is an error talking to the device.
        """
        args = CdWriter._buildUnlockTrayArgs(self._device)
        command = resolveCommand(EJECT_COMMAND)
        result = executeCommand(command, args)[0]
        if result != 0:
            raise IOError("Error (%d) executing eject command to unlock tray." % result)

    def closeTray(self):
        """
        Closes the device's tray.

        This only works if the device has a tray and supports ejecting its media.
        We have no way to know if the tray is currently open or closed, so we
        just send the appropriate command and hope for the best.  If the device
        does not have a tray or does not support ejecting its media, then we do
        nothing.

        If the writer was constructed with C{noEject=True}, then this is a no-op.

        @raise IOError: If there is an error talking to the device.
        """
        if not self._noEject:
            if self._deviceHasTray and self._deviceCanEject:
                args = CdWriter._buildCloseTrayArgs(self._device)
                command = resolveCommand(EJECT_COMMAND)
                result = executeCommand(command, args)[0]
                if result != 0:
                    raise IOError("Error (%d) executing eject command to close tray." % result)

    def refreshMedia(self):
        """
        Opens and then immediately closes the device's tray, to refresh the
        device's idea of the media.

        Sometimes, a device gets confused about the state of its media.  Often,
        all it takes to solve the problem is to eject the media and then
        immediately reload it.  (There are also configurable eject and refresh
        media delays which can be applied, for situations where this makes a
        difference.)

        This only works if the device has a tray and supports ejecting its media.
        We have no way to know if the tray is currently open or closed, so we
        just send the appropriate command and hope for the best.  If the device
        does not have a tray or does not support ejecting its media, then we do
        nothing.  The configured delays still apply, though.

        @raise IOError: If there is an error talking to the device.
        """
        self.openTray()
        self.closeTray()
        self.unlockTray()  # on some systems, writing a disc leaves the tray locked, yikes!
        if self.refreshMediaDelay is not None:
            logger.debug("Per configuration, sleeping %d seconds to stabilize media state.", self.refreshMediaDelay)
            time.sleep(self.refreshMediaDelay)
        logger.debug("Media refresh complete; hopefully media state is stable now.")

    def writeImage(self, imagePath=None, newDisc=False, writeMulti=True):
        """
        Writes an ISO image to the media in the device.

        If C{newDisc} is passed in as C{True}, we assume that the entire disc
        will be overwritten, and the media will be blanked before writing it if
        possible (i.e. if the media is rewritable).

        If C{writeMulti} is passed in as C{True}, then a multisession disc will
        be written if possible (i.e. if the drive supports writing multisession
        discs).

        If C{imagePath} is passed in as C{None}, then the existing image
        configured with C{initializeImage} will be used.  Under these
        circumstances, the passed-in C{newDisc} flag will be ignored.

        By default, we assume that the disc can be written multisession and that
        we should append to the current contents of the disc.  In any case, the
        ISO image must be generated appropriately (i.e. it must take into account
        any existing session boundaries, etc.)

        @param imagePath: Path to an ISO image on disk, or C{None} to use the writer's image
        @type imagePath: String representing a path on disk

        @param newDisc: Indicates whether the entire disc will be overwritten.
        @type newDisc: Boolean true/false

        @param writeMulti: Indicates whether a multisession disc should be written, if possible.
        @type writeMulti: Boolean true/false

        @raise ValueError: If the image path is not absolute.
        @raise ValueError: If some path cannot be encoded properly.
        @raise IOError: If the media could not be written to for some reason.
        @raise ValueError: If no image is passed in and initializeImage() was not previously called.
        """
        if imagePath is None:
            if self._image is None:
                raise ValueError("Must call initializeImage() before using this method with no image path.")
            try:
                imagePath = self._createImage()
                self._writeImage(imagePath, writeMulti, self._image.newDisc)
            finally:
                if imagePath is not None and os.path.exists(imagePath):
                    try:
                        os.unlink(imagePath)
                    except:
                        pass
        else:
            imagePath = encodePath(imagePath)
            if not os.path.isabs(imagePath):
                raise ValueError("Image path must be absolute.")
            self._writeImage(imagePath, writeMulti, newDisc)

    def _createImage(self):
        """
        Creates an ISO image based on configuration in self._image.
        @return: Path to the newly-created ISO image on disk.
        @raise IOError: If there is an error writing the image to disk.
        @raise ValueError: If there are no filesystem entries in the image.
        @raise ValueError: If a path cannot be encoded properly.
        """
        path = None
        capacity = self.retrieveCapacity(entireDisc=self._image.newDisc)
        image = IsoImage(self.device, capacity.boundaries)
        image.volumeId = self._image.mediaLabel  # may be None, which is also valid
        for key in list(self._image.entries.keys()):
            image.addEntry(key, self._image.entries[key], override=False, contentsOnly=True)
        size = image.getEstimatedSize()
        logger.info("Image size will be %s.", displayBytes(size))
        available = capacity.bytesAvailable
        logger.debug("Media capacity: %s", displayBytes(available))
        if size > available:
            logger.error("Image [%s] does not fit in available capacity [%s].", displayBytes(size), displayBytes(available))
            raise IOError("Media does not contain enough capacity to store image.")
        try:
            (handle, path) = tempfile.mkstemp(dir=self._image.tmpdir)
            try:
                os.close(handle)
            except:
                pass
            image.writeImage(path)
            logger.debug("Completed creating image [%s].", path)
            return path
        except Exception as e:
            if path is not None and os.path.exists(path):
                try:
                    os.unlink(path)
                except:
                    pass
            raise e

    def _writeImage(self, imagePath, writeMulti, newDisc):
        """
        Write an ISO image to disc using cdrecord.
        The disc is blanked first if C{newDisc} is C{True}.
        @param imagePath: Path to an ISO image on disk.
        @param writeMulti: Indicates whether a multisession disc should be written, if possible.
        @param newDisc: Indicates whether the entire disc will be overwritten.
        """
        if newDisc:
            self._blankMedia()
        args = CdWriter._buildWriteArgs(self.hardwareId, imagePath, self._driveSpeed, writeMulti and self._deviceSupportsMulti)
        command = resolveCommand(CDRECORD_COMMAND)
        result = executeCommand(command, args)[0]
        if result != 0:
            raise IOError("Error (%d) executing command to write disc." % result)
        self.refreshMedia()

    def _blankMedia(self):
        """
        Blanks the media in the device, if the media is rewritable.
        @raise IOError: If the media could not be written to for some reason.
        """
        if self.isRewritable():
            args = CdWriter._buildBlankArgs(self.hardwareId)
            command = resolveCommand(CDRECORD_COMMAND)
            result = executeCommand(command, args)[0]
            if result != 0:
                raise IOError("Error (%d) executing command to blank disc." % result)
            self.refreshMedia()


    #######################################
    # Methods used to parse command output
    #######################################

    @staticmethod
    def _parsePropertiesOutput(output):
        """
        Parses the output from a C{cdrecord} properties command.

        The C{output} parameter should be a list of strings as returned from
        C{executeCommand} for a C{cdrecord} command with arguments as from
        C{_buildPropertiesArgs}.  The list of strings will be parsed to yield
        information about the properties of the device.

        The output is expected to be a huge long list of strings.  Unfortunately,
        the strings aren't in a completely regular format.  However, the format
        of individual lines seems to be regular enough that we can look for
        specific values.  Two kinds of parsing take place: one kind of parsing
        picks out specific values like the device id, device vendor, etc.
        The other kind of parsing just sets a boolean flag C{True} if a matching
        line is found.  All of the parsing is done with regular expressions.

        Right now, pretty much nothing in the output is required and we should
        parse an empty document successfully (albeit resulting in a device that
        can't eject, doesn't have a tray and doesn't support multisession
        discs).  I had briefly considered erroring out if certain lines weren't
        found or couldn't be parsed, but that seems like a bad idea given that
        most of the information is just for reference.

        The results are returned as a tuple of the object device attributes:
        C{(deviceType, deviceVendor, deviceId, deviceBufferSize,
        deviceSupportsMulti, deviceHasTray, deviceCanEject)}.

        @param output: Output from a C{cdrecord -prcap} command.

        @return: Results tuple as described above.
        @raise IOError: If there is a problem parsing the output.
        """
        deviceType = None
        deviceVendor = None
        deviceId = None
        deviceBufferSize = None
        deviceSupportsMulti = False
        deviceHasTray = False
        deviceCanEject = False
        typePattern = re.compile(r"(^Device type\s*:\s*)(.*)(\s*)(.*$)")
        vendorPattern = re.compile(r"(^Vendor_info\s*:\s*'\s*)(.*?)(\s*')(.*$)")
        idPattern = re.compile(r"(^Identifikation\s*:\s*'\s*)(.*?)(\s*')(.*$)")
        bufferPattern = re.compile(r"(^\s*Buffer size in KB:\s*)(.*?)(\s*$)")
        multiPattern = re.compile(r"^\s*Does read multi-session.*$")
        trayPattern = re.compile(r"^\s*Loading mechanism type: tray.*$")
        ejectPattern = re.compile(r"^\s*Does support ejection.*$")
        for line in output:
            if typePattern.search(line):
                deviceType = typePattern.search(line).group(2)
                logger.info("Device type is [%s].", deviceType)
            elif vendorPattern.search(line):
                deviceVendor = vendorPattern.search(line).group(2)
                logger.info("Device vendor is [%s].", deviceVendor)
            elif idPattern.search(line):
                deviceId = idPattern.search(line).group(2)
                logger.info("Device id is [%s].", deviceId)
            elif bufferPattern.search(line):
                try:
                    sectors = int(bufferPattern.search(line).group(2))
                    deviceBufferSize = convertSize(sectors, UNIT_KBYTES, UNIT_BYTES)
                    logger.info("Device buffer size is [%d] bytes.", deviceBufferSize)
                except TypeError:
                    pass
            elif multiPattern.search(line):
                deviceSupportsMulti = True
                logger.info("Device does support multisession discs.")
            elif trayPattern.search(line):
                deviceHasTray = True
                logger.info("Device has a tray.")
            elif ejectPattern.search(line):
                deviceCanEject = True
                logger.info("Device can eject its media.")
        return (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject)

    @staticmethod
    def _parseBoundariesOutput(output):
        """
        Parses the output from a C{cdrecord} capacity command.

        The C{output} parameter should be a list of strings as returned from
        C{executeCommand} for a C{cdrecord} command with arguments as from
        C{_buildBoundariesArgs}.  The list of strings will be parsed to yield
        information about the capacity of the media in the device.

        Basically, we expect the list of strings to include just one line, a pair
        of values.  There isn't supposed to be whitespace, but we allow it anyway
        in the regular expression.  Any lines below the one line we parse are
        completely ignored.  It would be a good idea to ignore C{stderr} when
        executing the C{cdrecord} command that generates output for this method,
        because sometimes C{cdrecord} spits out kernel warnings about the actual
        output.

        The results are returned as a tuple of (lower, upper) as needed by the
        C{IsoImage} class.  Note that these values are in terms of ISO sectors,
        not bytes.  Clients should generally consider the boundaries value
        opaque, however.

        @note: If there is no output to parse, we return C{None}.

        @param output: Output from a C{cdrecord -msinfo} command.

        @return: Boundaries tuple as described above.
        @raise IOError: If there is a problem parsing the output.
        """
        if len(output) < 1:
            logger.warning("Unable to read disc (might not be initialized); returning full capacity.")
            return None
        boundaryPattern = re.compile(r"(^\s*)([0-9]*)(\s*,\s*)([0-9]*)(\s*$)")
        parsed = boundaryPattern.search(output[0])
        if not parsed:
            raise IOError("Unable to parse output of boundaries command.")
        try:
            boundaries = (int(parsed.group(2)), int(parsed.group(4)))
        except TypeError:
            raise IOError("Unable to parse output of boundaries command.")
        return boundaries


    #################################
    # Methods used to build commands
    #################################

    @staticmethod
    def _buildOpenTrayArgs(device):
        """
        Builds a list of arguments to be passed to an C{eject} command.

        The arguments will cause the C{eject} command to open the tray and
        eject the media.  No validation is done by this method as to whether
        this action actually makes sense.

        @param device: Filesystem device name for this writer, i.e. C{/dev/cdrw}.

        @return: List suitable for passing to L{util.executeCommand} as C{args}.
        """
        args = []
        args.append(device)
        return args

    @staticmethod
    def _buildUnlockTrayArgs(device):
        """
        Builds a list of arguments to be passed to an C{eject} command.

        The arguments will cause the C{eject} command to unlock the tray.

        @param device: Filesystem device name for this writer, i.e. C{/dev/cdrw}.

        @return: List suitable for passing to L{util.executeCommand} as C{args}.
        """
        args = []
        args.append("-i")
        args.append("off")
        args.append(device)
        return args

    @staticmethod
    def _buildCloseTrayArgs(device):
        """
        Builds a list of arguments to be passed to an C{eject} command.

        The arguments will cause the C{eject} command to close the tray and reload
        the media.  No validation is done by this method as to whether this
        action actually makes sense.

        @param device: Filesystem device name for this writer, i.e. C{/dev/cdrw}.

        @return: List suitable for passing to L{util.executeCommand} as C{args}.
        """
        args = []
        args.append("-t")
        args.append(device)
        return args

    @staticmethod
    def _buildPropertiesArgs(hardwareId):
        """
        Builds a list of arguments to be passed to a C{cdrecord} command.

        The arguments will cause the C{cdrecord} command to ask the device
        for a list of its capabilities via the C{-prcap} switch.

        @param hardwareId: Hardware id for the device (either SCSI id or device path)

        @return: List suitable for passing to L{util.executeCommand} as C{args}.
        """
        args = []
        args.append("-prcap")
        args.append("dev=%s" % hardwareId)
        return args

    @staticmethod
    def _buildBoundariesArgs(hardwareId):
        """
        Builds a list of arguments to be passed to a C{cdrecord} command.

        The arguments will cause the C{cdrecord} command to ask the device for
        the current multisession boundaries of the media using the C{-msinfo}
        switch.

        @param hardwareId: Hardware id for the device (either SCSI id or device path)

        @return: List suitable for passing to L{util.executeCommand} as C{args}.
        """
        args = []
        args.append("-msinfo")
        args.append("dev=%s" % hardwareId)
        return args

    @staticmethod
    def _buildBlankArgs(hardwareId, driveSpeed=None):
        """
        Builds a list of arguments to be passed to a C{cdrecord} command.

        The arguments will cause the C{cdrecord} command to blank the media in
        the device identified by C{hardwareId}.  No validation is done by this
        method as to whether the action makes sense (i.e. as to whether the
        media even can be blanked).

        @param hardwareId: Hardware id for the device (either SCSI id or device path)
        @param driveSpeed: Speed at which the drive writes.

        @return: List suitable for passing to L{util.executeCommand} as C{args}.
        """
        args = []
        args.append("-v")
        args.append("blank=fast")
        if driveSpeed is not None:
            args.append("speed=%d" % driveSpeed)
        args.append("dev=%s" % hardwareId)
        return args

    @staticmethod
    def _buildWriteArgs(hardwareId, imagePath, driveSpeed=None, writeMulti=True):
        """
        Builds a list of arguments to be passed to a C{cdrecord} command.

        The arguments will cause the C{cdrecord} command to write the indicated
        ISO image (C{imagePath}) to the media in the device identified by
        C{hardwareId}.  The C{writeMulti} argument controls whether to write a
        multisession disc.  No validation is done by this method as to whether
        the action makes sense (i.e. as to whether the device even can write
        multisession discs, for instance).

        @param hardwareId: Hardware id for the device (either SCSI id or device path)
        @param imagePath: Path to an ISO image on disk.
        @param driveSpeed: Speed at which the drive writes.
        @param writeMulti: Indicates whether to write a multisession disc.

        @return: List suitable for passing to L{util.executeCommand} as C{args}.
        """
        args = []
        args.append("-v")
        if driveSpeed is not None:
            args.append("speed=%d" % driveSpeed)
        args.append("dev=%s" % hardwareId)
        if writeMulti:
            args.append("-multi")
        args.append("-data")
        args.append(imagePath)
        return args
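To illustrate how the argument builders above combine, here is a hedged standalone sketch (a module-level function mirroring C{_buildWriteArgs}, not part of the class itself):

```python
def buildWriteArgs(hardwareId, imagePath, driveSpeed=None, writeMulti=True):
    # Mirrors the argument-building logic above: verbose flag, optional
    # drive speed, device selector, then multisession and data-track options.
    args = ["-v"]
    if driveSpeed is not None:
        args.append("speed=%d" % driveSpeed)
    args.append("dev=%s" % hardwareId)
    if writeMulti:
        args.append("-multi")
    args.append("-data")
    args.append(imagePath)
    return args
```

For example, `buildWriteArgs("1,0,0", "/tmp/backup.iso", driveSpeed=4)` yields `['-v', 'speed=4', 'dev=1,0,0', '-multi', '-data', '/tmp/backup.iso']`.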

CedarBackup3-3.1.6/doc/interface/CedarBackup3.peer-module.html

CedarBackup3.peer
    Package CedarBackup3 :: Module peer

    Module peer

    source code

    Provides backup peer-related objects and utility functions.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Classes

• LocalPeer: Backup peer representing a local peer in a backup pool.
• RemotePeer: Backup peer representing a remote peer in a backup pool.

Variables

• logger = logging.getLogger("CedarBackup3.log.peer")
• DEF_RCP_COMMAND = ['/usr/bin/scp', '-B', '-q', '-C']
• DEF_RSH_COMMAND = ['/usr/bin/ssh']
• DEF_CBACK_COMMAND = '/usr/bin/cback3'
• DEF_COLLECT_INDICATOR = 'cback.collect' (name of the default collect indicator file)
• DEF_STAGE_INDICATOR = 'cback.stage' (name of the default stage indicator file)
• SU_COMMAND = ['su']
• __package__ = 'CedarBackup3'
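As a hedged illustration of how the defaults above are typically used, the sketch below combines C{DEF_RCP_COMMAND} with a source path and an scp-style remote target; C{buildCopyCommand} is a hypothetical helper, not part of this module.

```python
DEF_RCP_COMMAND = ['/usr/bin/scp', '-B', '-q', '-C']  # default from the table above

def buildCopyCommand(sourcePath, remoteUser, remoteHost, targetDir):
    # scp remote target takes the form user@host:directory
    target = "%s@%s:%s" % (remoteUser, remoteHost, targetDir)
    return DEF_RCP_COMMAND + [sourcePath, target]
```

For example, copying a staged file to a remote peer's collect directory would produce a command list like `['/usr/bin/scp', '-B', '-q', '-C', '/tmp/file.tar.gz', 'backup@media.example.com:/data/staging']`.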
CedarBackup3-3.1.6/doc/interface/toc-CedarBackup3.extend.encrypt-module.html

encrypt

    Module encrypt


    Classes

    EncryptConfig
    LocalConfig

    Functions

    executeAction

    Variables

    ENCRYPT_INDICATOR
    GPG_COMMAND
    VALID_ENCRYPT_MODES
    __package__
    logger

CedarBackup3-3.1.6/doc/interface/CedarBackup3.extend.split.SplitConfig-class.html

CedarBackup3.extend.split.SplitConfig
    Package CedarBackup3 :: Package extend :: Module split :: Class SplitConfig

    Class SplitConfig

    source code

    object --+
             |
            SplitConfig
    

    Class representing split configuration.

    Split configuration is used for splitting staging directories.

    The following restrictions exist on data in this class:

    • The size limit must be a ByteQuantity
    • The split size must be a ByteQuantity
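These restrictions are enforced by the property setters. The following minimal sketch (with a stub C{ByteQuantity} standing in for the real class, purely illustrative) shows the validation pattern:

```python
class ByteQuantity:
    """Stub standing in for CedarBackup3's ByteQuantity (illustrative only)."""
    def __init__(self, quantity, units=None):
        self.quantity = quantity
        self.units = units

class SplitConfigSketch:
    """Minimal sketch of the validation pattern; not the real SplitConfig."""
    def __init__(self, sizeLimit=None):
        self._sizeLimit = None
        self.sizeLimit = sizeLimit  # route through the setter for validation

    @property
    def sizeLimit(self):
        return self._sizeLimit

    @sizeLimit.setter
    def sizeLimit(self, value):
        # Reject anything that is not None or a ByteQuantity, per the rules above.
        if value is not None and not isinstance(value, ByteQuantity):
            raise ValueError("Size limit must be a ByteQuantity object.")
        self._sizeLimit = value
```

Assigning a plain number raises ValueError in this sketch; note that since v3.1.0 the real ByteQuantity can also be built from simple numeric values.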
Instance Methods

• __init__(self, sizeLimit=None, splitSize=None): Constructor for the SplitConfig class.
• __repr__(self): Official string representation for class instance.
• __str__(self): Informal string representation for class instance.
• __cmp__(self, other): Original Python 2 comparison operator.
• __eq__(self, other): Equals operator, implemented in terms of the original Python 2 compare operator.
• __lt__(self, other): Less-than operator, implemented in terms of the original Python 2 compare operator.
• __gt__(self, other): Greater-than operator, implemented in terms of the original Python 2 compare operator.
• _setSizeLimit(self, value): Property target used to set the size limit.
• _getSizeLimit(self): Property target used to get the size limit.
• _setSplitSize(self, value): Property target used to set the split size.
• _getSplitSize(self): Property target used to get the split size.
• __ge__(x, y): x>=y
• __le__(x, y): x<=y

Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties

• sizeLimit: Size limit, as a ByteQuantity
• splitSize: Split size, as a ByteQuantity

Inherited from object: __class__

Method Details

    __init__(self, sizeLimit=None, splitSize=None)
    (Constructor)

    source code 

Constructor for the SplitConfig class.

    Parameters:
    • sizeLimit - Size limit of the files, in bytes
    • splitSize - Size that files exceeding the limit will be split into, in bytes
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setSizeLimit(self, value)

    source code 

    Property target used to set the size limit. If not None, the value must be a ByteQuantity object.

    Raises:
    • ValueError - If the value is not a ByteQuantity

    _setSplitSize(self, value)

    source code 

    Property target used to set the split size. If not None, the value must be a ByteQuantity object.

    Raises:
    • ValueError - If the value is not a ByteQuantity

Property Details

    sizeLimit

    Size limit, as a ByteQuantity

    Get Method:
    _getSizeLimit(self) - Property target used to get the size limit.
    Set Method:
    _setSizeLimit(self, value) - Property target used to set the size limit.

    splitSize

    Split size, as a ByteQuantity

    Get Method:
    _getSplitSize(self) - Property target used to get the split size.
    Set Method:
    _setSplitSize(self, value) - Property target used to set the split size.

CedarBackup3-3.1.6/doc/interface/CedarBackup3.extend.encrypt-module.html

CedarBackup3.extend.encrypt
    Package CedarBackup3 :: Package extend :: Module encrypt

    Module encrypt

    source code

    Provides an extension to encrypt staging directories.

    When this extension is executed, all backed-up files in the configured Cedar Backup staging directory will be encrypted using gpg. Any directory which has already been encrypted (as indicated by the cback.encrypt file) will be ignored.

    This extension requires a new configuration section <encrypt> and is intended to be run immediately after the standard stage action or immediately before the standard store action. Aside from its own configuration, it requires the options and staging configuration sections in the standard Cedar Backup configuration file.
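The "already encrypted" check described above boils down to looking for the indicator file in each daily staging directory. A minimal sketch, assuming the hypothetical helper name C{needsEncryption}:

```python
import os

ENCRYPT_INDICATOR = "cback.encrypt"  # indicator file name, per this module

def needsEncryption(dailyDir):
    # A staging directory that already contains the indicator file is skipped.
    return not os.path.exists(os.path.join(dailyDir, ENCRYPT_INDICATOR))
```

After a directory is successfully encrypted, the extension writes the indicator so the directory is ignored on subsequent runs.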


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Classes

• EncryptConfig: Class representing encrypt configuration.
• LocalConfig: Class representing this extension's configuration document.
Functions

• executeAction(configPath, options, config): Executes the encrypt backup action.
• _encryptDailyDir(dailyDir, encryptMode, encryptTarget, backupUser, backupGroup): Encrypts the contents of a daily staging directory.
• _encryptFile(sourcePath, encryptMode, encryptTarget, backupUser, backupGroup, removeSource=False): Encrypts the source file using the indicated mode.
• _encryptFileWithGpg(sourcePath, recipient): Encrypts the indicated source file using GPG.
• _confirmGpgRecipient(recipient): Confirms that a recipient's public key is known to GPG.
Variables

• logger = logging.getLogger("CedarBackup3.log.extend.encrypt")
• GPG_COMMAND = ['gpg']
• VALID_ENCRYPT_MODES = ['gpg']
• ENCRYPT_INDICATOR = 'cback.encrypt'
• __package__ = 'CedarBackup3.extend'
Function Details

    executeAction(configPath, options, config)

    source code 

    Executes the encrypt backup action.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If there are I/O problems reading or writing files

    _encryptDailyDir(dailyDir, encryptMode, encryptTarget, backupUser, backupGroup)

    source code 

    Encrypts the contents of a daily staging directory.

    Indicator files are ignored. All other files are encrypted. The only valid encrypt mode is "gpg".

    Parameters:
    • dailyDir - Daily directory to encrypt
    • encryptMode - Encryption mode (only "gpg" is allowed)
    • encryptTarget - Encryption target (GPG recipient for "gpg" mode)
    • backupUser - User that target files should be owned by
    • backupGroup - Group that target files should be owned by
    Raises:
    • ValueError - If the encrypt mode is not supported.
    • ValueError - If the daily staging directory does not exist.

    _encryptFile(sourcePath, encryptMode, encryptTarget, backupUser, backupGroup, removeSource=False)

    source code 

    Encrypts the source file using the indicated mode.

    The encrypted file will be owned by the indicated backup user and group. If removeSource is True, then the source file will be removed after it is successfully encrypted.

    Currently, only the "gpg" encrypt mode is supported.

    Parameters:
    • sourcePath - Absolute path of the source file to encrypt
    • encryptMode - Encryption mode (only "gpg" is allowed)
    • encryptTarget - Encryption target (GPG recipient)
    • backupUser - User that target files should be owned by
    • backupGroup - Group that target files should be owned by
    • removeSource - Indicates whether to remove the source file
    Returns:
    Path to the newly-created encrypted file.
    Raises:
    • ValueError - If an invalid encrypt mode is passed in.
    • IOError - If there is a problem accessing, encrypting or removing the source file.

    _encryptFileWithGpg(sourcePath, recipient)

    source code 

    Encrypts the indicated source file using GPG.

    The encrypted file will be in GPG's binary output format and will have the same name as the source file plus a ".gpg" extension. The source file will not be modified or removed by this function call.

    Parameters:
    • sourcePath - Absolute path of file to be encrypted.
    • recipient - Recipient name to be passed to GPG's "-r" option
    Returns:
    Path to the newly-created encrypted file.
    Raises:
    • IOError - If there is a problem encrypting the file.
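For reference, a hedged sketch of the kind of argument list such a function might build; C{buildGpgArgs} is a hypothetical name (the extension's exact invocation is not shown here), but C{--batch}, C{--yes}, C{-r}, C{--output}, and C{--encrypt} are standard gpg options.

```python
def buildGpgArgs(sourcePath, recipient):
    # Binary output (no --armor), written next to the source with a .gpg suffix,
    # matching the naming behavior described above.
    encryptedPath = sourcePath + ".gpg"
    return ["--batch", "--yes", "-r", recipient,
            "--output", encryptedPath, "--encrypt", sourcePath]
```

The source file itself is passed last and is left untouched by gpg.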

    _confirmGpgRecipient(recipient)

    source code 

    Confirms that a recipient's public key is known to GPG. Throws an exception if there is a problem, or returns normally otherwise.

    Parameters:
    • recipient - Recipient name
    Raises:
    • IOError - If the recipient's public key is not known to GPG.

CedarBackup3-3.1.6/doc/interface/CedarBackup3.extend.postgresql.LocalConfig-class.html

CedarBackup3.extend.postgresql.LocalConfig
    Package CedarBackup3 :: Package extend :: Module postgresql :: Class LocalConfig

    Class LocalConfig

    source code

    object --+
             |
            LocalConfig
    

    Class representing this extension's configuration document.

    This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit PostgreSQL-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, validate and addConfig methods.


    Note: Lists within this class are "unordered" for equality comparisons.

Instance Methods

• __init__(self, xmlData=None, xmlPath=None, validate=True): Initializes a configuration object.
• __repr__(self): Official string representation for class instance.
• __str__(self): Informal string representation for class instance.
• __cmp__(self, other): Original Python 2 comparison operator.
• __eq__(self, other): Equals operator, implemented in terms of the original Python 2 compare operator.
• __lt__(self, other): Less-than operator, implemented in terms of the original Python 2 compare operator.
• __gt__(self, other): Greater-than operator, implemented in terms of the original Python 2 compare operator.
• validate(self): Validates configuration represented by the object.
• addConfig(self, xmlDom, parentNode): Adds a <postgresql> configuration section as the next child of a parent.
• _setPostgresql(self, value): Property target used to set the postgresql configuration value.
• _getPostgresql(self): Property target used to get the postgresql configuration value.
• _parseXmlData(self, xmlData): Internal method to parse an XML string into the object.
• __ge__(x, y): x>=y
• __le__(x, y): x<=y

Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Static Methods
     
    _parsePostgresql(parent)
    Parses a postgresql configuration section.
    source code
    Properties
      postgresql
    Postgresql configuration in terms of a PostgresqlConfig object.

    Inherited from object: __class__

    Method Details

    __init__(self, xmlData=None, xmlPath=None, validate=True)
    (Constructor)

    source code 

    Initializes a configuration object.

    If you initialize the object without passing either xmlData or xmlPath then configuration will be empty and will be invalid until it is filled in properly.

    No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded.

    Unless the validate argument is False, the LocalConfig.validate method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if validate is False, it might not be possible to parse the passed-in XML document if lower-level validations fail.

    Parameters:
    • xmlData (String data.) - XML data representing configuration.
    • xmlPath (Absolute path to a file on disk.) - Path to an XML file on disk.
    • validate (Boolean true/false.) - Validate the document after parsing it.
    Raises:
    • ValueError - If both xmlData and xmlPath are passed-in.
    • ValueError - If the XML data in xmlData or xmlPath cannot be parsed.
    • ValueError - If the parsed configuration document is not valid.
    Overrides: object.__init__

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to read in invalid configuration from disk.

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    validate(self)

    source code 

    Validates configuration represented by the object.

    The compress mode must be filled in. Then, if the 'all' flag is set, no databases are allowed, and if the 'all' flag is not set, at least one database is required.

    Raises:
    • ValueError - If one of the validations fails.
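    The rule above can be sketched as a standalone predicate. This is an illustrative helper, not the actual implementation, and the parameter names are stand-ins:

    ```python
    def validate_postgresql(compress_mode, all_flag, databases):
        """Sketch of the validation rules described above (not the real code)."""
        if compress_mode is None:
            raise ValueError("Compress mode must be filled in.")
        if all_flag and databases:
            raise ValueError("No databases are allowed when the 'all' flag is set.")
        if not all_flag and not databases:
            raise ValueError("At least one database is required when 'all' is not set.")

    validate_postgresql("gzip", True, [])                # valid: back up everything
    validate_postgresql("gzip", False, ["customers"])    # valid: explicit database list
    ```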

    addConfig(self, xmlDom, parentNode)

    source code 

    Adds a <postgresql> configuration section as the next child of a parent.

    Third parties should use this function to write configuration related to this extension.

    We add the following fields to the document:

      user           //cb_config/postgresql/user
      compressMode   //cb_config/postgresql/compress_mode
      all            //cb_config/postgresql/all
    

    We also add groups of the following items, one list element per item:

      database       //cb_config/postgresql/database
    
    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent that the section should be appended to.
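    For reference, the section that addConfig emits has roughly this shape. This is a sketch inferred from the field paths listed above; the element values shown are purely illustrative:

    ```xml
    <cb_config>
       <postgresql>
          <user>backup</user>
          <compress_mode>gzip</compress_mode>
          <all>N</all>
          <database>customers</database>
          <database>orders</database>
       </postgresql>
    </cb_config>
    ```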

    _setPostgresql(self, value)

    source code 

    Property target used to set the postgresql configuration value. If not None, the value must be a PostgresqlConfig object.

    Raises:
    • ValueError - If the value is not a PostgresqlConfig

    _parseXmlData(self, xmlData)

    source code 

    Internal method to parse an XML string into the object.

    This method parses the XML document into a DOM tree (xmlDom) and then calls a static method to parse the postgresql configuration section.

    Parameters:
    • xmlData (String data) - XML data to be parsed
    Raises:
    • ValueError - If the XML cannot be successfully parsed.

    _parsePostgresql(parent)
    Static Method

    source code 

    Parses a postgresql configuration section.

    We read the following fields:

      user           //cb_config/postgresql/user
      compressMode   //cb_config/postgresql/compress_mode
      all            //cb_config/postgresql/all
    

    We also read groups of the following items, one list element per item:

      databases      //cb_config/postgresql/database
    
    Parameters:
    • parent - Parent node to search beneath.
    Returns:
    PostgresqlConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.
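    A rough standard-library sketch of what such a parse does, approximated with xml.etree rather than the XML helpers the real code uses, and returning a plain dict instead of a PostgresqlConfig object:

    ```python
    import xml.etree.ElementTree as ET

    def parse_postgresql(xml_text):
        """Return a dict for the <postgresql> section, or None if absent (sketch)."""
        root = ET.fromstring(xml_text)
        section = root.find("postgresql")
        if section is None:
            return None  # section does not exist beneath the parent
        return {
            "user": section.findtext("user"),
            "compressMode": section.findtext("compress_mode"),
            "all": section.findtext("all") == "Y",
            "databases": [d.text for d in section.findall("database")],
        }

    config = parse_postgresql(
        "<cb_config><postgresql><user>backup</user>"
        "<compress_mode>gzip</compress_mode><all>N</all>"
        "<database>customers</database></postgresql></cb_config>"
    )
    ```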

    Property Details

    postgresql

    Postgresql configuration in terms of a PostgresqlConfig object.

    Get Method:
    _getPostgresql(self) - Property target used to get the postgresql configuration value.
    Set Method:
    _setPostgresql(self, value) - Property target used to set the postgresql configuration value.

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.actions.store-pysrc.html
    Package CedarBackup3 :: Package actions :: Module store

    Source Code for Module CedarBackup3.actions.store

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2004-2007,2010,2015 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python 3 (>= 3.4) 
     29  # Project  : Cedar Backup, release 3 
     30  # Purpose  : Implements the standard 'store' action. 
     31  # 
     32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     33   
     34  ######################################################################## 
     35  # Module documentation 
     36  ######################################################################## 
     37   
     38  """ 
     39  Implements the standard 'store' action. 
     40  @sort: executeStore, writeImage, writeStoreIndicator, consistencyCheck 
     41  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     42  @author: Dmitry Rutsky <rutsky@inbox.ru> 
     43  """ 
     44   
     45   
     46  ######################################################################## 
     47  # Imported modules 
     48  ######################################################################## 
     49   
     50  # System modules 
     51  import sys 
     52  import os 
     53  import logging 
     54  import datetime 
     55  import tempfile 
     56   
     57  # Cedar Backup modules 
     58  from CedarBackup3.filesystem import compareContents 
     59  from CedarBackup3.util import isStartOfWeek 
     60  from CedarBackup3.util import mount, unmount, displayBytes 
     61  from CedarBackup3.actions.util import createWriter, checkMediaState, buildMediaLabel, writeIndicatorFile 
     62  from CedarBackup3.actions.constants import DIR_TIME_FORMAT, STAGE_INDICATOR, STORE_INDICATOR 
     63   
     64   
     65  ######################################################################## 
     66  # Module-wide constants and variables 
     67  ######################################################################## 
     68   
     69  logger = logging.getLogger("CedarBackup3.log.actions.store") 
     70   
     71   
     72  ######################################################################## 
     73  # Public functions 
     74  ######################################################################## 
     75   
     76  ########################## 
     77  # executeStore() function 
     78  ########################## 
     79   
    
    80 -def executeStore(configPath, options, config):
     81     """
     82     Executes the store backup action.
     83
     84     @note: The rebuild action and the store action are very similar. The
     85     main difference is that while store only stores a single day's staging
     86     directory, the rebuild action operates on multiple staging directories.
     87
     88     @note: When the store action is complete, we will write a store indicator to
     89     the daily staging directory we used, so it's obvious that the store action
     90     has completed.
     91
     92     @param configPath: Path to configuration file on disk.
     93     @type configPath: String representing a path on disk.
     94
     95     @param options: Program command-line options.
     96     @type options: Options object.
     97
     98     @param config: Program configuration.
     99     @type config: Config object.
    100
    101     @raise ValueError: Under many generic error conditions
    102     @raise IOError: If there are problems reading or writing files.
    103     """
    104     logger.debug("Executing the 'store' action.")
    105     if sys.platform == "darwin":
    106        logger.warning("Warning: the store action is not fully supported on Mac OS X.")
    107        logger.warning("See the Cedar Backup software manual for further information.")
    108     if config.options is None or config.store is None:
    109        raise ValueError("Store configuration is not properly filled in.")
    110     if config.store.checkMedia:
    111        checkMediaState(config.store)  # raises exception if media is not initialized
    112     rebuildMedia = options.full
    113     logger.debug("Rebuild media flag [%s]", rebuildMedia)
    114     todayIsStart = isStartOfWeek(config.options.startingDay)
    115     stagingDirs = _findCorrectDailyDir(options, config)
    116     writeImageBlankSafe(config, rebuildMedia, todayIsStart, config.store.blankBehavior, stagingDirs)
    117     if config.store.checkData:
    118        if sys.platform == "darwin":
    119           logger.warning("Warning: consistency check cannot be run successfully on Mac OS X.")
    120           logger.warning("See the Cedar Backup software manual for further information.")
    121        else:
    122           logger.debug("Running consistency check of media.")
    123           consistencyCheck(config, stagingDirs)
    124     writeStoreIndicator(config, stagingDirs)
    125     logger.info("Executed the 'store' action successfully.")
    126
    127
    128  ########################
    129  # writeImage() function
    130  ########################
    131
    132 -def writeImage(config, newDisc, stagingDirs):
    133     """
    134     Builds and writes an ISO image containing the indicated stage directories.
    135
    136     The generated image will contain each of the staging directories listed in
    137     C{stagingDirs}. The directories will be placed into the image at the root by
    138     date, so staging directory C{/opt/stage/2005/02/10} will be placed into the
    139     disc at C{/2005/02/10}.
    140
    141     @note: This function is implemented in terms of L{writeImageBlankSafe}. The
    142     C{newDisc} flag is passed in for both C{rebuildMedia} and C{todayIsStart}.
    143
    144     @param config: Config object.
    145     @param newDisc: Indicates whether the disc should be re-initialized
    146     @param stagingDirs: Dictionary mapping directory path to date suffix.
    147
    148     @raise ValueError: Under many generic error conditions
    149     @raise IOError: If there is a problem writing the image to disc.
    150     """
    151     writeImageBlankSafe(config, newDisc, newDisc, None, stagingDirs)
    152
    153
    154  #################################
    155  # writeImageBlankSafe() function
    156  #################################
    157
    158 -def writeImageBlankSafe(config, rebuildMedia, todayIsStart, blankBehavior, stagingDirs):
    159     """
    160     Builds and writes an ISO image containing the indicated stage directories.
    161
    162     The generated image will contain each of the staging directories listed in
    163     C{stagingDirs}. The directories will be placed into the image at the root by
    164     date, so staging directory C{/opt/stage/2005/02/10} will be placed into the
    165     disc at C{/2005/02/10}. The media will always be written with a media
    166     label specific to Cedar Backup.
    167
    168     This function is similar to L{writeImage}, but tries to implement a smarter
    169     blanking strategy.
    170
    171     First, the media is always blanked if the C{rebuildMedia} flag is true.
    172     Then, if C{rebuildMedia} is false, blanking behavior and C{todayIsStart}
    173     come into effect::
    174
    175        If no blanking behavior is specified, and it is the start of the week,
    176        the disc will be blanked
    177
    178        If blanking behavior is specified, and either the blank mode is "daily"
    179        or the blank mode is "weekly" and it is the start of the week, then
    180        the disc will be blanked if it looks like the weekly backup will not
    181        fit onto the media.
    182
    183        Otherwise, the disc will not be blanked
    184
    185     How do we decide whether the weekly backup will fit onto the media? That is
    186     what the blanking factor is used for. The following formula is used::
    187
    188        will backup fit? = (bytes available / (1 + bytes required)) <= blankFactor
    189
    190     The blanking factor will vary from setup to setup, and will probably
    191     require some experimentation to get it right.
    192
    193     @param config: Config object.
    194     @param rebuildMedia: Indicates whether media should be rebuilt
    195     @param todayIsStart: Indicates whether today is the starting day of the week
    196     @param blankBehavior: Blank behavior from configuration, or C{None} to use default behavior
    197     @param stagingDirs: Dictionary mapping directory path to date suffix.
    198
    199     @raise ValueError: Under many generic error conditions
    200     @raise IOError: If there is a problem writing the image to disc.
    201     """
    202     mediaLabel = buildMediaLabel()
    203     writer = createWriter(config)
    204     writer.initializeImage(True, config.options.workingDir, mediaLabel)  # default value for newDisc
    205     for stageDir in list(stagingDirs.keys()):
    206        logger.debug("Adding stage directory [%s].", stageDir)
    207        dateSuffix = stagingDirs[stageDir]
    208        writer.addImageEntry(stageDir, dateSuffix)
    209     newDisc = _getNewDisc(writer, rebuildMedia, todayIsStart, blankBehavior)
    210     writer.setImageNewDisc(newDisc)
    211     writer.writeImage()
    212
    213 -def _getNewDisc(writer, rebuildMedia, todayIsStart, blankBehavior):
    214     """
    215     Gets a value for the newDisc flag based on blanking factor rules.
    216
    217     The blanking factor rules are described above by L{writeImageBlankSafe}.
    218
    219     @param writer: Previously configured image writer containing image entries
    220     @param rebuildMedia: Indicates whether media should be rebuilt
    221     @param todayIsStart: Indicates whether today is the starting day of the week
    222     @param blankBehavior: Blank behavior from configuration, or C{None} to use default behavior
    223
    224     @return: newDisc flag to be set on writer.
    225     """
    226     newDisc = False
    227     if rebuildMedia:
    228        newDisc = True
    229        logger.debug("Setting new disc flag based on rebuildMedia flag.")
    230     else:
    231        if blankBehavior is None:
    232           logger.debug("Default media blanking behavior is in effect.")
    233           if todayIsStart:
    234              newDisc = True
    235              logger.debug("Setting new disc flag based on todayIsStart.")
    236        else:
    237           # note: validation says we can assume that behavior is fully filled in if it exists at all
    238           logger.debug("Optimized media blanking behavior is in effect based on configuration.")
    239           if blankBehavior.blankMode == "daily" or (blankBehavior.blankMode == "weekly" and todayIsStart):
    240              logger.debug("New disc flag will be set based on blank factor calculation.")
    241              blankFactor = float(blankBehavior.blankFactor)
    242              logger.debug("Configured blanking factor: %.2f", blankFactor)
    243              available = writer.retrieveCapacity().bytesAvailable
    244              logger.debug("Bytes available: %s", displayBytes(available))
    245              required = writer.getEstimatedImageSize()
    246              logger.debug("Bytes required: %s", displayBytes(required))
    247              ratio = available / (1.0 + required)
    248              logger.debug("Calculated ratio: %.2f", ratio)
    249              newDisc = (ratio <= blankFactor)
    250              logger.debug("%.2f <= %.2f ? %s", ratio, blankFactor, newDisc)
    251           else:
    252              logger.debug("No blank factor calculation is required based on configuration.")
    253     logger.debug("New disc flag [%s].", newDisc)
    254     return newDisc
    255 256 257 ################################# 258 # writeStoreIndicator() function 259 ################################# 260
    261 -def writeStoreIndicator(config, stagingDirs):
    262     """
    263     Writes a store indicator file into staging directories.
    264
    265     The store indicator is written into each of the staging directories when
    266     either a store or rebuild action has written the staging directory to disc.
    267
    268     @param config: Config object.
    269     @param stagingDirs: Dictionary mapping directory path to date suffix.
    270     """
    271     for stagingDir in list(stagingDirs.keys()):
    272        writeIndicatorFile(stagingDir, STORE_INDICATOR,
    273                           config.options.backupUser,
    274                           config.options.backupGroup)
    275
    276
    277  ##############################
    278  # consistencyCheck() function
    279  ##############################
    280
    281 -def consistencyCheck(config, stagingDirs):
    282     """
    283     Runs a consistency check against media in the backup device.
    284
    285     It seems that sometimes, it's possible to create a corrupted multisession
    286     disc (i.e. one that cannot be read) although no errors were encountered
    287     while writing the disc. This consistency check makes sure that the data
    288     read from disc matches the data that was used to create the disc.
    289
    290     The function mounts the device at a temporary mount point in the working
    291     directory, and then compares the indicated staging directories in the
    292     staging directory and on the media. The comparison is done via
    293     functionality in C{filesystem.py}.
    294
    295     If no exceptions are thrown, there were no problems with the consistency
    296     check. A positive confirmation of "no problems" is also written to the log
    297     with C{info} priority.
    298
    299     @warning: The implementation of this function is very UNIX-specific.
    300
    301     @param config: Config object.
    302     @param stagingDirs: Dictionary mapping directory path to date suffix.
    303
    304     @raise ValueError: If the two directories are not equivalent.
    305     @raise IOError: If there is a problem working with the media.
    306     """
    307     logger.debug("Running consistency check.")
    308     mountPoint = tempfile.mkdtemp(dir=config.options.workingDir)
    309     try:
    310        mount(config.store.devicePath, mountPoint, "iso9660")
    311        for stagingDir in list(stagingDirs.keys()):
    312           discDir = os.path.join(mountPoint, stagingDirs[stagingDir])
    313           logger.debug("Checking [%s] vs. [%s].", stagingDir, discDir)
    314           compareContents(stagingDir, discDir, verbose=True)
    315           logger.info("Consistency check completed for [%s]. No problems found.", stagingDir)
    316     finally:
    317        unmount(mountPoint, True, 5, 1)  # try 5 times, and remove mount point when done
    318
    319
    320  ########################################################################
    321  # Private utility functions
    322  ########################################################################
    323
    324  #########################
    325  # _findCorrectDailyDir()
    326  #########################
    327
    328 -def _findCorrectDailyDir(options, config):
    329     """
    330     Finds the correct daily staging directory to be written to disk.
    331
    332     In Cedar Backup v1.0, we assumed that the correct staging directory matched
    333     the current date. However, that has problems. In particular, it breaks
    334     down if collect is on one side of midnite and stage is on the other, or if
    335     certain processes span midnite.
    336
    337     For v2.0, I'm trying to be smarter. I'll first check the current day. If
    338     that directory is found, it's good enough. If it's not found, I'll look for
    339     a valid directory from the day before or day after I{which has not yet been
    340     staged, according to the stage indicator file}. The first one I find, I'll
    341     use. If I use a directory other than for the current day I{and}
    342     C{config.store.warnMidnite} is set, a warning will be put in the log.
    343
    344     There is one exception to this rule. If the C{options.full} flag is set,
    345     then the special "span midnite" logic will be disabled and any existing
    346     store indicator will be ignored. I did this because I think that most users
    347     who run C{cback3 --full store} twice in a row expect the command to generate
    348     two identical discs. With the other rule in place, running that command
    349     twice in a row could result in an error ("no unstored directory exists") or
    350     could even cause a completely unexpected directory to be written to disc (if
    351     some previous day's contents had not yet been written).
    352
    353     @note: This code is probably longer and more verbose than it needs to be,
    354     but at least it's straightforward.
    355
    356     @param options: Options object.
    357     @param config: Config object.
    358
    359     @return: Correct staging dir, as a dict mapping directory to date suffix.
    360     @raise IOError: If the staging directory cannot be found.
    361     """
    362     oneDay = datetime.timedelta(days=1)
    363     today = datetime.date.today()
    364     yesterday = today - oneDay
    365     tomorrow = today + oneDay
    366     todayDate = today.strftime(DIR_TIME_FORMAT)
    367     yesterdayDate = yesterday.strftime(DIR_TIME_FORMAT)
    368     tomorrowDate = tomorrow.strftime(DIR_TIME_FORMAT)
    369     todayPath = os.path.join(config.stage.targetDir, todayDate)
    370     yesterdayPath = os.path.join(config.stage.targetDir, yesterdayDate)
    371     tomorrowPath = os.path.join(config.stage.targetDir, tomorrowDate)
    372     todayStageInd = os.path.join(todayPath, STAGE_INDICATOR)
    373     yesterdayStageInd = os.path.join(yesterdayPath, STAGE_INDICATOR)
    374     tomorrowStageInd = os.path.join(tomorrowPath, STAGE_INDICATOR)
    375     todayStoreInd = os.path.join(todayPath, STORE_INDICATOR)
    376     yesterdayStoreInd = os.path.join(yesterdayPath, STORE_INDICATOR)
    377     tomorrowStoreInd = os.path.join(tomorrowPath, STORE_INDICATOR)
    378     if options.full:
    379        if os.path.isdir(todayPath) and os.path.exists(todayStageInd):
    380           logger.info("Store process will use current day's stage directory [%s]", todayPath)
    381           return { todayPath:todayDate }
    382        raise IOError("Unable to find staging directory to store (only tried today due to full option).")
    383     else:
    384        if os.path.isdir(todayPath) and os.path.exists(todayStageInd) and not os.path.exists(todayStoreInd):
    385           logger.info("Store process will use current day's stage directory [%s]", todayPath)
    386           return { todayPath:todayDate }
    387        elif os.path.isdir(yesterdayPath) and os.path.exists(yesterdayStageInd) and not os.path.exists(yesterdayStoreInd):
    388           logger.info("Store process will use previous day's stage directory [%s]", yesterdayPath)
    389           if config.store.warnMidnite:
    390              logger.warning("Warning: store process crossed midnite boundary to find data.")
    391           return { yesterdayPath:yesterdayDate }
    392        elif os.path.isdir(tomorrowPath) and os.path.exists(tomorrowStageInd) and not os.path.exists(tomorrowStoreInd):
    393           logger.info("Store process will use next day's stage directory [%s]", tomorrowPath)
    394           if config.store.warnMidnite:
    395              logger.warning("Warning: store process crossed midnite boundary to find data.")
    396           return { tomorrowPath:tomorrowDate }
    397        raise IOError("Unable to find unused staging directory to store (tried today, yesterday, tomorrow).")
    398

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.extend-pysrc.html
    Package CedarBackup3 :: Package extend

    Source Code for Package CedarBackup3.extend

     1  # -*- coding: iso-8859-1 -*- 
     2  # vim: set ft=python ts=3 sw=3 expandtab: 
     3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     4  # 
     5  #              C E D A R 
     6  #          S O L U T I O N S       "Software done right." 
     7  #           S O F T W A R E 
     8  # 
     9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    10  # 
    11  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
    12  # Language : Python 3 (>= 3.4) 
    13  # Project  : Official Cedar Backup Extensions 
    14  # Purpose  : Provides package initialization 
    15  # 
    16  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    17   
    18  ######################################################################## 
    19  # Module documentation 
    20  ######################################################################## 
    21   
    22  """ 
    23  Official Cedar Backup Extensions 
    24   
    25  This package provides official Cedar Backup extensions.  These are Cedar Backup 
    26  actions that are not part of the "standard" set of Cedar Backup actions, but 
    27  are officially supported along with Cedar Backup. 
    28   
    29  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
    30  """ 
    31   
    32   
    33  ######################################################################## 
    34  # Package initialization 
    35  ######################################################################## 
    36   
    37  # Using 'from CedarBackup3.extend import *' will just import the modules listed 
    38  # in the __all__ variable. 
    39   
    40  __all__ = [ 'amazons3', 'encrypt', 'mbox', 'mysql', 'postgresql', 'split', 'subversion', 'sysinfo', ] 
    41   
    

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.config.ExtendedAction-class.html
    Package CedarBackup3 :: Module config :: Class ExtendedAction

    Class ExtendedAction

    source code

    object --+
             |
            ExtendedAction
    

    Class representing an extended action.

    Essentially, an extended action needs to allow the following to happen:

      exec("from %s import %s" % (module, function))
      exec("%s(action, configPath)" % function)
    

    The following restrictions exist on data in this class:

    • The action name must be a non-empty string consisting of lower-case letters and digits.
    • The module must be a non-empty string and a valid Python identifier.
    • The function must be a non-empty string and a valid Python identifier.
    • If set, the index must be a positive integer.
    • If set, the dependencies attribute must be an ActionDependencies object.
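    The two exec() calls shown above amount to a dynamic import followed by a call. The same lookup can be sketched with importlib; the module and function names in the demonstration are stand-ins (os.path.join plays the role of an extension function), not real extensions:

    ```python
    import importlib

    def run_extended_action(module, function, action, config_path):
        """Dynamically resolve and invoke an extended action function (sketch)."""
        mod = importlib.import_module(module)  # from <module> import <function>
        func = getattr(mod, function)          # look up the callable by name
        return func(action, config_path)       # <function>(action, configPath)

    # Demonstration with a stand-in callable from the standard library:
    result = run_extended_action("os.path", "join", "staging", "2005/02/10")
    ```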
    Instance Methods
     
    __init__(self, name=None, module=None, function=None, index=None, dependencies=None)
    Constructor for the ExtendedAction class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    __eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    _setName(self, value)
    Property target used to set the action name.
    source code
     
    _getName(self)
    Property target used to get the action name.
    source code
     
    _setModule(self, value)
    Property target used to set the module name.
    source code
     
    _getModule(self)
    Property target used to get the module name.
    source code
     
    _setFunction(self, value)
    Property target used to set the function name.
    source code
     
    _getFunction(self)
    Property target used to get the function name.
    source code
     
    _setIndex(self, value)
    Property target used to set the action index.
    source code
     
    _getIndex(self)
    Property target used to get the action index.
    source code
     
    _setDependencies(self, value)
    Property target used to set the action dependencies information.
    source code
     
    _getDependencies(self)
    Property target used to get action dependencies information.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      name
    Name of the extended action.
      module
    Name of the module containing the extended action function.
      function
    Name of the extended action function.
      index
    Index of action, used for execution ordering.
      dependencies
    Dependencies for action, used for execution ordering.

    Inherited from object: __class__

    Method Details

    __init__(self, name=None, module=None, function=None, index=None, dependencies=None)
    (Constructor)

    source code 

    Constructor for the ExtendedAction class.

    Parameters:
    • name - Name of the extended action
    • module - Name of the module containing the extended action function
    • function - Name of the extended action function
    • index - Index of action, used for execution ordering
    • dependencies - Dependencies for action, used for execution ordering
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setName(self, value)

    source code 

    Property target used to set the action name. The value must be a non-empty string if it is not None. It must also consist only of lower-case letters and digits.

    Raises:
    • ValueError - If the value is an empty string.

    _setModule(self, value)

    source code 

    Property target used to set the module name. The value must be a non-empty string if it is not None. It must also be a valid Python identifier.

    Raises:
    • ValueError - If the value is an empty string.

    _setFunction(self, value)

    source code 

    Property target used to set the function name. The value must be a non-empty string if it is not None. It must also be a valid Python identifier.

    Raises:
    • ValueError - If the value is an empty string.

    _setIndex(self, value)

    source code 

    Property target used to set the action index. The value must be an integer >= 0.

    Raises:
    • ValueError - If the value is not valid.

    _setDependencies(self, value)

    source code 

    Property target used to set the action dependencies information. If not None, the value must be an ActionDependencies object.

    Raises:
    • ValueError - If the value is not an ActionDependencies object.

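Each `_setName`/`_getName` pair above is wired into a Python property, so plain attribute assignment routes through the documented validation. A minimal self-contained sketch of that idiom, re-implementing only the documented name rule (hypothetical code, not the library's actual implementation):

```python
import re

class ExtendedActionSketch:
    """Sketch of the property-target idiom documented for ExtendedAction.name."""

    # Documented rule: non-empty, lower-case letters and digits only (or None)
    _NAME_PATTERN = re.compile(r"^[a-z0-9]+$")

    def __init__(self, name=None):
        self.name = name  # assignment routes through _setName via the property

    def _setName(self, value):
        """Property target used to set the action name."""
        if value is not None and not self._NAME_PATTERN.match(value):
            raise ValueError("Name must be a non-empty string of lower-case letters and digits.")
        self._name = value

    def _getName(self):
        """Property target used to get the action name."""
        return self._name

    name = property(_getName, _setName, None, "Name of the extended action.")
```

Assigning an invalid value such as `"Bad Name"` raises ValueError, while None is accepted, mirroring the validation described above.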
Property Details

    name

    Name of the extended action.

    Get Method:
    _getName(self) - Property target used to get the action name.
    Set Method:
    _setName(self, value) - Property target used to set the action name.

    module

    Name of the module containing the extended action function.

    Get Method:
    _getModule(self) - Property target used to get the module name.
    Set Method:
    _setModule(self, value) - Property target used to set the module name.

    function

    Name of the extended action function.

    Get Method:
    _getFunction(self) - Property target used to get the function name.
    Set Method:
    _setFunction(self, value) - Property target used to set the function name.

    index

    Index of action, used for execution ordering.

    Get Method:
    _getIndex(self) - Property target used to get the action index.
    Set Method:
    _setIndex(self, value) - Property target used to set the action index.

    dependencies

    Dependencies for action, used for execution ordering.

    Get Method:
    _getDependencies(self) - Property target used to get action dependencies information.
    Set Method:
    _setDependencies(self, value) - Property target used to set the action dependencies information.

CedarBackup3-3.1.6/doc/interface/toc-CedarBackup3.actions.purge-module.html

    Module purge


    Functions

    executePurge

    Variables

    __package__
    logger

CedarBackup3-3.1.6/doc/interface/CedarBackup3.config.RemotePeer-class.html
    Package CedarBackup3 :: Module config :: Class RemotePeer

    Class RemotePeer

    source code

    object --+
             |
            RemotePeer
    

    Class representing a Cedar Backup peer.

    The following restrictions exist on data in this class:

    • The peer name must be a non-empty string.
    • The collect directory must be an absolute path.
    • The remote user must be a non-empty string.
    • The rcp command must be a non-empty string.
    • The rsh command must be a non-empty string.
    • The cback command must be a non-empty string.
    • Any managed action name must be a non-empty string matching ACTION_NAME_REGEX.
    • The ignore failure mode must be one of the values in VALID_FAILURE_MODES.
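The restrictions above can be expressed as a small validation helper. This is a hypothetical sketch: the stand-in values for `ACTION_NAME_REGEX` and `VALID_FAILURE_MODES` below are assumptions for illustration, and the real constants live in `CedarBackup3.config`:

```python
import os
import re

# Assumed stand-ins for the module's constants (illustration only).
ACTION_NAME_REGEX = r"^[a-z0-9]+$"
VALID_FAILURE_MODES = ["none", "all", "daily", "weekly"]

def check_remote_peer(name, collectDir, remoteUser, managedActions=None, ignoreFailureMode=None):
    """Apply the documented RemotePeer restrictions, raising ValueError on the first violation."""
    for label, value in [("peer name", name), ("remote user", remoteUser)]:
        if not value:
            raise ValueError("The %s must be a non-empty string." % label)
    if not os.path.isabs(collectDir):
        raise ValueError("The collect directory must be an absolute path.")
    for action in managedActions or []:
        if not re.match(ACTION_NAME_REGEX, action):
            raise ValueError("Managed action %r does not match ACTION_NAME_REGEX." % action)
    if ignoreFailureMode is not None and ignoreFailureMode not in VALID_FAILURE_MODES:
        raise ValueError("Ignore failure mode must be one of %s." % VALID_FAILURE_MODES)
```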
Instance Methods
     
    __init__(self, name=None, collectDir=None, remoteUser=None, rcpCommand=None, rshCommand=None, cbackCommand=None, managed=False, managedActions=None, ignoreFailureMode=None)
    Constructor for the RemotePeer class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    __eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    _setName(self, value)
    Property target used to set the peer name.
    source code
     
    _getName(self)
    Property target used to get the peer name.
    source code
     
    _setCollectDir(self, value)
    Property target used to set the collect directory.
    source code
     
    _getCollectDir(self)
    Property target used to get the collect directory.
    source code
     
    _setRemoteUser(self, value)
    Property target used to set the remote user.
    source code
     
    _getRemoteUser(self)
    Property target used to get the remote user.
    source code
     
    _setRcpCommand(self, value)
    Property target used to set the rcp command.
    source code
     
    _getRcpCommand(self)
    Property target used to get the rcp command.
    source code
     
    _setRshCommand(self, value)
    Property target used to set the rsh command.
    source code
     
    _getRshCommand(self)
    Property target used to get the rsh command.
    source code
     
    _setCbackCommand(self, value)
    Property target used to set the cback command.
    source code
     
    _getCbackCommand(self)
    Property target used to get the cback command.
    source code
     
    _setManaged(self, value)
    Property target used to set the managed flag.
    source code
     
    _getManaged(self)
    Property target used to get the managed flag.
    source code
     
    _setManagedActions(self, value)
    Property target used to set the managed actions list.
    source code
     
    _getManagedActions(self)
    Property target used to get the managed actions list.
    source code
     
    _setIgnoreFailureMode(self, value)
    Property target used to set the ignoreFailure mode.
    source code
     
    _getIgnoreFailureMode(self)
    Property target used to get the ignoreFailure mode.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties
      name
    Name of the peer, must be a valid hostname.
      collectDir
    Collect directory to stage files from on peer.
      remoteUser
    Name of backup user on remote peer.
      rcpCommand
    Overridden rcp-compatible copy command for peer.
      rshCommand
    Overridden rsh-compatible remote shell command for peer.
      cbackCommand
    Overridden cback-compatible command to use on remote peer.
      managed
    Indicates whether this is a managed peer.
      managedActions
    Overridden set of actions that are managed on the peer.
      ignoreFailureMode
    Ignore failure mode for peer.

    Inherited from object: __class__

Method Details

    __init__(self, name=None, collectDir=None, remoteUser=None, rcpCommand=None, rshCommand=None, cbackCommand=None, managed=False, managedActions=None, ignoreFailureMode=None)
    (Constructor)

    source code 

    Constructor for the RemotePeer class.

    Parameters:
    • name - Name of the peer, must be a valid hostname.
    • collectDir - Collect directory to stage files from on peer.
    • remoteUser - Name of backup user on remote peer.
    • rcpCommand - Overridden rcp-compatible copy command for peer.
    • rshCommand - Overridden rsh-compatible remote shell command for peer.
    • cbackCommand - Overridden cback-compatible command to use on remote peer.
    • managed - Indicates whether this is a managed peer.
    • managedActions - Overridden set of actions that are managed on the peer.
    • ignoreFailureMode - Ignore failure mode for peer.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setName(self, value)

    source code 

    Property target used to set the peer name. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setCollectDir(self, value)

    source code 

    Property target used to set the collect directory. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setRemoteUser(self, value)

    source code 

    Property target used to set the remote user. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setRcpCommand(self, value)

    source code 

    Property target used to set the rcp command. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setRshCommand(self, value)

    source code 

    Property target used to set the rsh command. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setCbackCommand(self, value)

    source code 

    Property target used to set the cback command. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setManaged(self, value)

    source code 

    Property target used to set the managed flag. No validations, but we normalize the value to True or False.

    _setManagedActions(self, value)

    source code 

    Property target used to set the managed actions list. If not None, each element must be a non-empty action name matching ACTION_NAME_REGEX.

    _setIgnoreFailureMode(self, value)

    source code 

    Property target used to set the ignoreFailure mode. If not None, the mode must be one of the values in VALID_FAILURE_MODES.

    Raises:
    • ValueError - If the value is not valid.

Property Details

    name

    Name of the peer, must be a valid hostname.

    Get Method:
    _getName(self) - Property target used to get the peer name.
    Set Method:
    _setName(self, value) - Property target used to set the peer name.

    collectDir

    Collect directory to stage files from on peer.

    Get Method:
    _getCollectDir(self) - Property target used to get the collect directory.
    Set Method:
    _setCollectDir(self, value) - Property target used to set the collect directory.

    remoteUser

    Name of backup user on remote peer.

    Get Method:
    _getRemoteUser(self) - Property target used to get the remote user.
    Set Method:
    _setRemoteUser(self, value) - Property target used to set the remote user.

    rcpCommand

    Overridden rcp-compatible copy command for peer.

    Get Method:
    _getRcpCommand(self) - Property target used to get the rcp command.
    Set Method:
    _setRcpCommand(self, value) - Property target used to set the rcp command.

    rshCommand

    Overridden rsh-compatible remote shell command for peer.

    Get Method:
    _getRshCommand(self) - Property target used to get the rsh command.
    Set Method:
    _setRshCommand(self, value) - Property target used to set the rsh command.

    cbackCommand

    Overridden cback-compatible command to use on remote peer.

    Get Method:
    _getCbackCommand(self) - Property target used to get the cback command.
    Set Method:
    _setCbackCommand(self, value) - Property target used to set the cback command.

    managed

    Indicates whether this is a managed peer.

    Get Method:
    _getManaged(self) - Property target used to get the managed flag.
    Set Method:
    _setManaged(self, value) - Property target used to set the managed flag.

    managedActions

    Overridden set of actions that are managed on the peer.

    Get Method:
    _getManagedActions(self) - Property target used to get the managed actions list.
    Set Method:
    _setManagedActions(self, value) - Property target used to set the managed actions list.

    ignoreFailureMode

    Ignore failure mode for peer.

    Get Method:
    _getIgnoreFailureMode(self) - Property target used to get the ignoreFailure mode.
    Set Method:
    _setIgnoreFailureMode(self, value) - Property target used to set the ignoreFailure mode.

CedarBackup3-3.1.6/doc/interface/CedarBackup3.config.CollectConfig-class.html
    Package CedarBackup3 :: Module config :: Class CollectConfig

    Class CollectConfig

    source code

    object --+
             |
            CollectConfig
    

    Class representing a Cedar Backup collect configuration.

    The following restrictions exist on data in this class:

    • The target directory must be an absolute path.
    • The collect mode must be one of the values in VALID_COLLECT_MODES.
    • The archive mode must be one of the values in VALID_ARCHIVE_MODES.
    • The ignore file must be a non-empty string.
    • Each of the paths in absoluteExcludePaths must be an absolute path.
    • The collect file list must be a list of CollectFile objects.
    • The collect directory list must be a list of CollectDir objects.

    For the absoluteExcludePaths list, validation is accomplished through the util.AbsolutePathList list implementation that overrides common list methods and transparently does the absolute path validation for us.

    For the collectFiles and collectDirs list, validation is accomplished through the util.ObjectTypeList list implementation that overrides common list methods and transparently ensures that each element has an appropriate type.


    Note: Lists within this class are "unordered" for equality comparisons.
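The unordered comparison described in the note means two configurations compare equal even when their lists hold the same elements in a different order. One common way to implement that, sketched here as a None-safe helper (not the library's actual code):

```python
def unordered_equal(list1, list2):
    """Compare two lists for equality while ignoring element order (None-safe)."""
    if list1 is None or list2 is None:
        return list1 == list2  # equal only if both are None
    return sorted(list1) == sorted(list2)
```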

Instance Methods
     
    __init__(self, targetDir=None, collectMode=None, archiveMode=None, ignoreFile=None, absoluteExcludePaths=None, excludePatterns=None, collectFiles=None, collectDirs=None)
    Constructor for the CollectConfig class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    __eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    _setTargetDir(self, value)
    Property target used to set the target directory.
    source code
     
    _getTargetDir(self)
    Property target used to get the target directory.
    source code
     
    _setCollectMode(self, value)
    Property target used to set the collect mode.
    source code
     
    _getCollectMode(self)
    Property target used to get the collect mode.
    source code
     
    _setArchiveMode(self, value)
    Property target used to set the archive mode.
    source code
     
    _getArchiveMode(self)
    Property target used to get the archive mode.
    source code
     
    _setIgnoreFile(self, value)
    Property target used to set the ignore file.
    source code
     
    _getIgnoreFile(self)
    Property target used to get the ignore file.
    source code
     
    _setAbsoluteExcludePaths(self, value)
    Property target used to set the absolute exclude paths list.
    source code
     
    _getAbsoluteExcludePaths(self)
    Property target used to get the absolute exclude paths list.
    source code
     
    _setExcludePatterns(self, value)
    Property target used to set the exclude patterns list.
    source code
     
    _getExcludePatterns(self)
    Property target used to get the exclude patterns list.
    source code
     
    _setCollectFiles(self, value)
    Property target used to set the collect files list.
    source code
     
    _getCollectFiles(self)
    Property target used to get the collect files list.
    source code
     
    _setCollectDirs(self, value)
    Property target used to set the collect dirs list.
    source code
     
    _getCollectDirs(self)
    Property target used to get the collect dirs list.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties
      targetDir
    Directory to collect files into.
      collectMode
    Default collect mode.
      archiveMode
    Default archive mode for collect files.
      ignoreFile
    Default ignore file name.
      absoluteExcludePaths
    List of absolute paths to exclude.
      excludePatterns
    List of regular expression patterns to exclude.
      collectFiles
    List of collect files.
      collectDirs
    List of collect directories.

    Inherited from object: __class__

Method Details

    __init__(self, targetDir=None, collectMode=None, archiveMode=None, ignoreFile=None, absoluteExcludePaths=None, excludePatterns=None, collectFiles=None, collectDirs=None)
    (Constructor)

    source code 

    Constructor for the CollectConfig class.

    Parameters:
    • targetDir - Directory to collect files into.
    • collectMode - Default collect mode.
    • archiveMode - Default archive mode for collect files.
    • ignoreFile - Default ignore file name.
    • absoluteExcludePaths - List of absolute paths to exclude.
    • excludePatterns - List of regular expression patterns to exclude.
    • collectFiles - List of collect files.
    • collectDirs - List of collect directories.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setTargetDir(self, value)

    source code 

    Property target used to set the target directory. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setCollectMode(self, value)

    source code 

    Property target used to set the collect mode. If not None, the mode must be one of VALID_COLLECT_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setArchiveMode(self, value)

    source code 

    Property target used to set the archive mode. If not None, the mode must be one of VALID_ARCHIVE_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setIgnoreFile(self, value)

    source code 

    Property target used to set the ignore file. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.
    • ValueError - If the value cannot be encoded properly.

    _setAbsoluteExcludePaths(self, value)

    source code 

    Property target used to set the absolute exclude paths list. Either the value must be None or each element must be an absolute path. Elements do not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.

    _setCollectFiles(self, value)

    source code 

    Property target used to set the collect files list. Either the value must be None or each element must be a CollectFile.

    Raises:
    • ValueError - If the value is not a CollectFile

    _setCollectDirs(self, value)

    source code 

    Property target used to set the collect dirs list. Either the value must be None or each element must be a CollectDir.

    Raises:
    • ValueError - If the value is not a CollectDir

Property Details

    targetDir

    Directory to collect files into.

    Get Method:
    _getTargetDir(self) - Property target used to get the target directory.
    Set Method:
    _setTargetDir(self, value) - Property target used to set the target directory.

    collectMode

    Default collect mode.

    Get Method:
    _getCollectMode(self) - Property target used to get the collect mode.
    Set Method:
    _setCollectMode(self, value) - Property target used to set the collect mode.

    archiveMode

    Default archive mode for collect files.

    Get Method:
    _getArchiveMode(self) - Property target used to get the archive mode.
    Set Method:
    _setArchiveMode(self, value) - Property target used to set the archive mode.

    ignoreFile

    Default ignore file name.

    Get Method:
    _getIgnoreFile(self) - Property target used to get the ignore file.
    Set Method:
    _setIgnoreFile(self, value) - Property target used to set the ignore file.

    absoluteExcludePaths

    List of absolute paths to exclude.

    Get Method:
    _getAbsoluteExcludePaths(self) - Property target used to get the absolute exclude paths list.
    Set Method:
    _setAbsoluteExcludePaths(self, value) - Property target used to set the absolute exclude paths list.

    excludePatterns

    List of regular expression patterns to exclude.

    Get Method:
    _getExcludePatterns(self) - Property target used to get the exclude patterns list.
    Set Method:
    _setExcludePatterns(self, value) - Property target used to set the exclude patterns list.

    collectFiles

    List of collect files.

    Get Method:
    _getCollectFiles(self) - Property target used to get the collect files list.
    Set Method:
    _setCollectFiles(self, value) - Property target used to set the collect files list.

    collectDirs

    List of collect directories.

    Get Method:
    _getCollectDirs(self) - Property target used to get the collect dirs list.
    Set Method:
    _setCollectDirs(self, value) - Property target used to set the collect dirs list.

CedarBackup3-3.1.6/doc/interface/CedarBackup3.peer.LocalPeer-class.html
    Package CedarBackup3 :: Module peer :: Class LocalPeer

    Class LocalPeer

    source code

    object --+
             |
            LocalPeer
    

    Backup peer representing a local peer in a backup pool.

    This is a class representing a local (non-network) peer in a backup pool. Local peers are backed up by simple filesystem copy operations. A local peer has associated with it a name (typically, but not necessarily, a hostname) and a collect directory.

    The public methods other than the constructor are part of a "backup peer" interface shared with the RemotePeer class.

Instance Methods
     
    __init__(self, name, collectDir, ignoreFailureMode=None)
    Initializes a local backup peer.
    source code
     
    stagePeer(self, targetDir, ownership=None, permissions=None)
    Stages data from the peer into the indicated local target directory.
    source code
     
    checkCollectIndicator(self, collectIndicator=None)
    Checks the collect indicator in the peer's staging directory.
    source code
     
    writeStageIndicator(self, stageIndicator=None, ownership=None, permissions=None)
    Writes the stage indicator in the peer's staging directory.
    source code
     
    _setName(self, value)
    Property target used to set the peer name.
    source code
     
    _getName(self)
    Property target used to get the peer name.
    source code
     
    _setCollectDir(self, value)
    Property target used to set the collect directory.
    source code
     
    _getCollectDir(self)
    Property target used to get the collect directory.
    source code
     
    _setIgnoreFailureMode(self, value)
    Property target used to set the ignoreFailure mode.
    source code
     
    _getIgnoreFailureMode(self)
    Property target used to get the ignoreFailure mode.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

Static Methods
     
    _copyLocalDir(sourceDir, targetDir, ownership=None, permissions=None)
    Copies files from the source directory to the target directory.
    source code
     
    _copyLocalFile(sourceFile=None, targetFile=None, ownership=None, permissions=None, overwrite=True)
    Copies a source file to a target file.
    source code
Properties
      name
    Name of the peer.
      collectDir
    Path to the peer's collect directory (an absolute local path).
      ignoreFailureMode
    Ignore failure mode for peer.

    Inherited from object: __class__

Method Details

    __init__(self, name, collectDir, ignoreFailureMode=None)
    (Constructor)

    source code 

    Initializes a local backup peer.

    Note that the collect directory must be an absolute path, but does not have to exist when the object is instantiated. We do a lazy validation on this value since we could (potentially) be creating peer objects before an ongoing backup has completed.

    Parameters:
    • name (String, typically a hostname) - Name of the backup peer
    • collectDir (String representing an absolute local path on disk) - Path to the peer's collect directory
    • ignoreFailureMode (One of VALID_FAILURE_MODES) - Ignore failure mode for this peer
    Raises:
    • ValueError - If the name is empty.
    • ValueError - If collect directory is not an absolute path.
    Overrides: object.__init__

    stagePeer(self, targetDir, ownership=None, permissions=None)

    source code 

    Stages data from the peer into the indicated local target directory.

    The collect and target directories must both already exist before this method is called. If passed in, ownership and permissions will be applied to the files that are copied.

    Parameters:
    • targetDir (String representing a directory on disk) - Target directory to write data into
    • ownership (Tuple of numeric ids (uid, gid)) - Owner and group that the staged files should have
    • permissions (UNIX permissions mode, specified in octal (e.g. 0640).) - Permissions that the staged files should have
    Returns:
    Number of files copied from the source directory to the target directory.
    Raises:
    • ValueError - If collect directory is not a directory or does not exist
    • ValueError - If target directory is not a directory, does not exist or is not absolute.
    • ValueError - If a path cannot be encoded properly.
    • IOError - If there were no files to stage (i.e. the directory was empty)
    • IOError - If there is an IO error copying a file.
    • OSError - If there is an OS error copying or changing permissions on a file
    Notes:
    • The caller is responsible for checking that the indicator exists, if they care. This function only stages the files within the directory.
    • If you have user/group as strings, call the util.getUidGid function to get the associated uid/gid as an ownership tuple.

    checkCollectIndicator(self, collectIndicator=None)

    source code 

    Checks the collect indicator in the peer's staging directory.

    When a peer has completed collecting its backup files, it will write an empty indicator file into its collect directory. This method checks to see whether that indicator has been written. We're "stupid" here - if the collect directory doesn't exist, you'll naturally get back False.

    If you need to, you can override the name of the collect indicator file by passing in a different name.

    Parameters:
    • collectIndicator (String representing name of a file in the collect directory) - Name of the collect indicator file to check
    Returns:
    Boolean true/false depending on whether the indicator exists.
    Raises:
    • ValueError - If a path cannot be encoded properly.
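The documented behavior can be sketched in a few lines. The default indicator file name used here is an assumption for illustration, not the library's actual constant:

```python
import os

def check_collect_indicator(collect_dir, collect_indicator="cback.collect"):
    """Return True only if the indicator file exists inside the collect directory.

    Mirrors the documented behavior: a missing collect directory simply
    yields False rather than raising an error.
    """
    if not os.path.isdir(collect_dir):
        return False
    return os.path.isfile(os.path.join(collect_dir, collect_indicator))
```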

    writeStageIndicator(self, stageIndicator=None, ownership=None, permissions=None)

    source code 

    Writes the stage indicator in the peer's staging directory.

    When the master has completed collecting its backup files, it will write an empty indicator file into the peer's collect directory. The presence of this file implies that the staging process is complete.

    If you need to, you can override the name of the stage indicator file by passing in a different name.

    Parameters:
    • stageIndicator (String representing name of a file in the collect directory) - Name of the indicator file to write
    • ownership (Tuple of numeric ids (uid, gid)) - Owner and group that the indicator file should have
    • permissions (UNIX permissions mode, specified in octal (e.g. 0640).) - Permissions that the indicator file should have
    Raises:
    • ValueError - If collect directory is not a directory or does not exist
    • ValueError - If a path cannot be encoded properly.
    • IOError - If there is an IO error creating the file.
    • OSError - If there is an OS error creating or changing permissions on the file

    Note: If you have user/group as strings, call the util.getUidGid function to get the associated uid/gid as an ownership tuple.
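A sketch of what a helper like `util.getUidGid` provides, using the standard `pwd` and `grp` modules; the library's real implementation may differ in error handling:

```python
import grp
import pwd

def get_uid_gid(user, group):
    """Resolve a (user, group) name pair into a numeric (uid, gid) ownership tuple."""
    try:
        uid = pwd.getpwnam(user).pw_uid    # look up numeric uid by user name
        gid = grp.getgrnam(group).gr_gid   # look up numeric gid by group name
    except KeyError as e:
        raise ValueError("Unknown user or group: %s" % e)
    return (uid, gid)
```

The resulting tuple is suitable for the `ownership` parameter of the staging methods above.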

    _copyLocalDir(sourceDir, targetDir, ownership=None, permissions=None)
    Static Method

    source code 

    Copies files from the source directory to the target directory.

    This function is not recursive. Only the files in the directory will be copied. Ownership and permissions will be left at their default values if new values are not specified. The source and target directories are allowed to be soft links to a directory, but besides that soft links are ignored.

    Parameters:
    • sourceDir (String representing a directory on disk) - Source directory
    • targetDir (String representing a directory on disk) - Target directory
    • ownership (Tuple of numeric ids (uid, gid)) - Owner and group that the copied files should have
    • permissions (UNIX permissions mode, specified in octal (e.g. 0640).) - Permissions that the staged files should have
    Returns:
    Number of files copied from the source directory to the target directory.
    Raises:
    • ValueError - If source or target is not a directory or does not exist.
    • ValueError - If a path cannot be encoded properly.
    • IOError - If there is an IO error copying the files.
    • OSError - If there is an OS error copying or changing permissions on a file

    Note: If you have user/group as strings, call the util.getUidGid function to get the associated uid/gid as an ownership tuple.
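The non-recursive copy semantics described above can be sketched with the standard library. This is a simplified stand-in for `_copyLocalDir`, not the actual implementation:

```python
import os
import shutil

def copy_local_dir(source_dir, target_dir, ownership=None, permissions=None):
    """Non-recursively copy regular files from source_dir to target_dir.

    Subdirectories and soft links are skipped, and ownership/permissions are
    applied only when given.  Returns the number of files copied.
    """
    if not os.path.isdir(source_dir) or not os.path.isdir(target_dir):
        raise ValueError("Source and target must be existing directories.")
    count = 0
    for entry in os.listdir(source_dir):
        source = os.path.join(source_dir, entry)
        if os.path.isfile(source) and not os.path.islink(source):
            target = os.path.join(target_dir, entry)
            shutil.copy(source, target)
            if ownership is not None:
                os.chown(target, ownership[0], ownership[1])  # (uid, gid) tuple
            if permissions is not None:
                os.chmod(target, permissions)  # octal mode, e.g. 0o640
            count += 1
    return count
```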

    _copyLocalFile(sourceFile=None, targetFile=None, ownership=None, permissions=None, overwrite=True)
    Static Method

    source code 

    Copies a source file to a target file.

    If the source file is None then the target file will be created or overwritten as an empty file. If the target file is None, this method is a no-op. Attempting to copy a soft link or a directory will result in an exception.

    Parameters:
    • sourceFile (String representing a file on disk, as an absolute path) - Source file to copy
    • targetFile (String representing a file on disk, as an absolute path) - Target file to create
    • ownership (Tuple of numeric ids (uid, gid)) - Owner and group that the copied file should have
    • permissions (UNIX permissions mode, specified in octal, e.g. 0640) - Permissions that the staged files should have
    • overwrite (Boolean true/false.) - Indicates whether it's OK to overwrite the target file.
    Raises:
    • ValueError - If the passed-in source file is not a regular file.
    • ValueError - If a path cannot be encoded properly.
    • IOError - If the target file already exists.
    • IOError - If there is an IO error copying the file
    • OSError - If there is an OS error copying or changing permissions on a file
    Notes:
    • If you have user/group as strings, call the util.getUidGid function to get the associated uid/gid as an ownership tuple.
    • If overwrite is False, we will not overwrite a target file that exists when this method is invoked; if the target already exists, we'll raise an exception.
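    The copy semantics described above can be sketched as follows. This is an illustration only, with ownership and permission handling omitted; it is not the real _copyLocalFile implementation:

```python
import os
import shutil

def copy_local_file(sourceFile=None, targetFile=None, overwrite=True):
    """Sketch of the _copyLocalFile semantics documented above."""
    if targetFile is None:
        return  # no-op, as documented
    if not overwrite and os.path.exists(targetFile):
        raise IOError("Target file [%s] already exists." % targetFile)
    if sourceFile is None:
        # Create or truncate the target as an empty file.
        open(targetFile, "wb").close()
    else:
        if os.path.islink(sourceFile) or not os.path.isfile(sourceFile):
            raise ValueError("Source file must be a regular file.")
        shutil.copyfile(sourceFile, targetFile)
```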

    _setName(self, value)

    source code 

    Property target used to set the peer name. The value must be a non-empty string and cannot be None.

    Raises:
    • ValueError - If the value is an empty string or None.

    _setCollectDir(self, value)

    source code 

    Property target used to set the collect directory. The value must be an absolute path and cannot be None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is None or is not an absolute path.
    • ValueError - If a path cannot be encoded properly.

    _setIgnoreFailureMode(self, value)

    source code 

    Property target used to set the ignoreFailure mode. If not None, the mode must be one of the values in VALID_FAILURE_MODES.

    Raises:
    • ValueError - If the value is not valid.

    Property Details

    name

    Name of the peer.

    Get Method:
    _getName(self) - Property target used to get the peer name.
    Set Method:
    _setName(self, value) - Property target used to set the peer name.

    collectDir

    Path to the peer's collect directory (an absolute local path).

    Get Method:
    _getCollectDir(self) - Property target used to get the collect directory.
    Set Method:
    _setCollectDir(self, value) - Property target used to set the collect directory.

    ignoreFailureMode

    Ignore failure mode for peer.

    Get Method:
    _getIgnoreFailureMode(self) - Property target used to get the ignoreFailure mode.
    Set Method:
    _setIgnoreFailureMode(self, value) - Property target used to set the ignoreFailure mode.

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.writers.dvdwriter.MediaCapacity-class.html: CedarBackup3.writers.dvdwriter.MediaCapacity
    Package CedarBackup3 :: Package writers :: Module dvdwriter :: Class MediaCapacity

    Class MediaCapacity

    source code

    object --+
             |
            MediaCapacity
    

    Class encapsulating information about DVD media capacity.

    Space used and space available do not include any information about media lead-in or other overhead.

    Instance Methods
     
    __init__(self, bytesUsed, bytesAvailable)
    Initializes a capacity object.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    _getBytesUsed(self)
    Property target used to get the bytes-used value.
    source code
     
    _getBytesAvailable(self)
    Property target used to get the bytes-available value.
    source code
     
    _getTotalCapacity(self)
    Property target to get the total capacity (used + available).
    source code
     
    _getUtilized(self)
    Property target to get the percent of capacity which is utilized.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __subclasshook__

    Properties
      bytesUsed
    Space used on disc, in bytes.
      bytesAvailable
    Space available on disc, in bytes.
      totalCapacity
    Total capacity of the disc, in bytes.
      utilized
    Percentage of the total capacity which is utilized.

    Inherited from object: __class__

    Method Details

    __init__(self, bytesUsed, bytesAvailable)
    (Constructor)

    source code 

    Initializes a capacity object.

    Raises:
    • ValueError - If the bytes used and available values are not floats.
    Overrides: object.__init__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    Property Details

    bytesUsed

    Space used on disc, in bytes.

    Get Method:
    _getBytesUsed(self) - Property target used to get the bytes-used value.

    bytesAvailable

    Space available on disc, in bytes.

    Get Method:
    _getBytesAvailable(self) - Property target used to get the bytes-available value.

    totalCapacity

    Total capacity of the disc, in bytes.

    Get Method:
    _getTotalCapacity(self) - Property target to get the total capacity (used + available).

    utilized

    Percentage of the total capacity which is utilized.

    Get Method:
    _getUtilized(self) - Property target to get the percent of capacity which is utilized.
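    The arithmetic behind totalCapacity and utilized can be illustrated with a minimal stand-in class; validation and the property-target pattern are omitted, so this is a sketch rather than the real implementation:

```python
class MediaCapacity:
    """Minimal sketch of the capacity arithmetic described above."""

    def __init__(self, bytesUsed, bytesAvailable):
        self.bytesUsed = float(bytesUsed)
        self.bytesAvailable = float(bytesAvailable)

    @property
    def totalCapacity(self):
        # Total capacity is simply used + available space.
        return self.bytesUsed + self.bytesAvailable

    @property
    def utilized(self):
        # Percentage of total capacity in use; 0.0 for an empty value.
        if self.totalCapacity == 0:
            return 0.0
        return (self.bytesUsed / self.totalCapacity) * 100.0
```

    Neither figure accounts for media lead-in or other overhead, per the class description above.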

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.extend.postgresql-module.html: CedarBackup3.extend.postgresql
    Package CedarBackup3 :: Package extend :: Module postgresql

    Module postgresql

    source code

    Provides an extension to back up PostgreSQL databases.

    This is a Cedar Backup extension used to back up PostgreSQL databases via the Cedar Backup command line. It requires a new configuration section <postgresql> and is intended to be run either immediately before or immediately after the standard collect action. Aside from its own configuration, it requires the options and collect configuration sections in the standard Cedar Backup configuration file.

    The backup is done via the pg_dump or pg_dumpall commands included with the PostgreSQL product. Output can be compressed using gzip or bzip2. Administrators can configure the extension either to back up all databases or to back up only specific databases. The extension assumes that the current user has passwordless access to the database since there is no easy way to pass a password to the pg_dump client. This can be accomplished using appropriate voodoo in the pg_hba.conf file.

    Note that this code always produces a full backup. There is currently no facility for making incremental backups.

    You should always make /etc/cback3.conf unreadable to non-root users once you place postgresql configuration into it, since postgresql configuration will contain information about available PostgreSQL databases and usernames.

    Use of this extension may expose usernames in the process listing (via ps) when the backup is running if the username is specified in the configuration.
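    The choice between pg_dump and pg_dumpall described above can be sketched like this. It is an illustration only: the real extension builds its command from POSTGRESQLDUMP_COMMAND and POSTGRESQLDUMPALL_COMMAND and may pass different flags:

```python
def build_dump_command(user, database=None):
    """Build a pg_dump/pg_dumpall command line, as a hedged sketch.

    The --username flag follows the standard PostgreSQL clients; the
    extension's actual invocation may differ.
    """
    if database is None:
        # No database named: dump every local database with pg_dumpall.
        return ["pg_dumpall", "--username=%s" % user]
    return ["pg_dump", "--username=%s" % user, database]
```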


    Authors:
    Kenneth J. Pronovici <pronovic@ieee.org>, Antoine Beaupre <anarcat@koumbit.org>
    Classes
      PostgresqlConfig
    Class representing PostgreSQL configuration.
      LocalConfig
    Class representing this extension's configuration document.
    Functions
     
    executeAction(configPath, options, config)
    Executes the PostgreSQL backup action.
    source code
     
    _backupDatabase(targetDir, compressMode, user, backupUser, backupGroup, database=None)
    Backs up an individual PostgreSQL database, or all databases.
    source code
     
    _getOutputFile(targetDir, database, compressMode)
    Opens the output file used for saving the PostgreSQL dump.
    source code
     
    backupDatabase(user, backupFile, database=None)
    Backs up an individual PostgreSQL database, or all databases.
    source code
    Variables
      logger = logging.getLogger("CedarBackup3.log.extend.postgresql")
      POSTGRESQLDUMP_COMMAND = ['pg_dump']
      POSTGRESQLDUMPALL_COMMAND = ['pg_dumpall']
      __package__ = 'CedarBackup3.extend'
    Function Details

    executeAction(configPath, options, config)

    source code 

    Executes the PostgreSQL backup action.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If a backup could not be written for some reason.

    _backupDatabase(targetDir, compressMode, user, backupUser, backupGroup, database=None)

    source code 

    Backs up an individual PostgreSQL database, or all databases.

    This internal method wraps the public method and adds some functionality, like figuring out a filename, etc.

    Parameters:
    • targetDir - Directory into which backups should be written.
    • compressMode - Compress mode to be used for backed-up files.
    • user - User to use for connecting to the database.
    • backupUser - User to own resulting file.
    • backupGroup - Group to own resulting file.
    • database - Name of database, or None for all databases.
    Returns:
    Name of the generated backup file.
    Raises:
    • ValueError - If some value is missing or invalid.
    • IOError - If there is a problem executing the PostgreSQL dump.

    _getOutputFile(targetDir, database, compressMode)

    source code 

    Opens the output file used for saving the PostgreSQL dump.

    The filename is either "postgresqldump.txt" or "postgresqldump-<database>.txt". The ".gz" or ".bz2" extension is added when compressMode is "gzip" or "bzip2", respectively.

    Parameters:
    • targetDir - Target directory to write file in.
    • database - Name of the database (if any)
    • compressMode - Compress mode to be used for backed-up files.
    Returns:
    Tuple of (Output file object, filename), file opened in binary mode for use with executeCommand()
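    A hedged sketch of the naming and opening rules just described; the real _getOutputFile may validate compressMode differently:

```python
import bz2
import gzip
import os

def open_output_file(targetDir, database=None, compressMode="none"):
    """Sketch of _getOutputFile's naming/opening rules, per the docs above.

    Assumes compressMode is one of "none", "gzip", "bzip2".
    """
    if database is None:
        filename = "postgresqldump.txt"
    else:
        filename = "postgresqldump-%s.txt" % database
    if compressMode == "gzip":
        filename += ".gz"
        opener = gzip.open
    elif compressMode == "bzip2":
        filename += ".bz2"
        opener = bz2.open
    else:
        opener = open
    path = os.path.join(targetDir, filename)
    # Binary mode, as required for use with executeCommand().
    return opener(path, "wb"), filename
```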

    backupDatabase(user, backupFile, database=None)

    source code 

    Backs up an individual PostgreSQL database, or all databases.

    This function backs up either a named local PostgreSQL database or all local PostgreSQL databases, using the passed in user for connectivity. This is always a full backup. There is no facility for incremental backups.

    The backup data will be written into the passed-in backup file. Normally, this would be an object as returned from open(), but it is possible to use something like a GzipFile to write compressed output. The caller is responsible for closing the passed-in backup file.

    Parameters:
    • user (String representing PostgreSQL username.) - User to use for connecting to the database.
    • backupFile (Python file object as from open() or file().) - File to use for writing the backup.
    • database (String representing database name, or None for all databases.) - Name of the database to be backed up.
    Raises:
    • ValueError - If some value is missing or invalid.
    • IOError - If there is a problem executing the PostgreSQL dump.

    Note: Typically, you would use the root user to back up all databases.


    CedarBackup3-3.1.6/doc/interface/CedarBackup3.extend.mbox.MboxDir-class.html: CedarBackup3.extend.mbox.MboxDir
    Package CedarBackup3 :: Package extend :: Module mbox :: Class MboxDir

    Class MboxDir

    source code

    object --+
             |
            MboxDir
    

    Class representing mbox directory configuration.

    The following restrictions exist on data in this class:

    • The absolute path must be absolute.
    • The collect mode must be one of the values in VALID_COLLECT_MODES.
    • The compress mode must be one of the values in VALID_COMPRESS_MODES.

    Unlike collect directory configuration, this is the only place exclusions are allowed (no global exclusions at the <mbox> configuration level). Also, we only allow relative exclusions and there is no configured ignore file. This is because mbox directory backups are not recursive.

    Instance Methods
     
    __init__(self, absolutePath=None, collectMode=None, compressMode=None, relativeExcludePaths=None, excludePatterns=None)
    Constructor for the MboxDir class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    __eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    _setAbsolutePath(self, value)
    Property target used to set the absolute path.
    source code
     
    _getAbsolutePath(self)
    Property target used to get the absolute path.
    source code
     
    _setCollectMode(self, value)
    Property target used to set the collect mode.
    source code
     
    _getCollectMode(self)
    Property target used to get the collect mode.
    source code
     
    _setCompressMode(self, value)
    Property target used to set the compress mode.
    source code
     
    _getCompressMode(self)
    Property target used to get the compress mode.
    source code
     
    _setRelativeExcludePaths(self, value)
    Property target used to set the relative exclude paths list.
    source code
     
    _getRelativeExcludePaths(self)
    Property target used to get the relative exclude paths list.
    source code
     
    _setExcludePatterns(self, value)
    Property target used to set the exclude patterns list.
    source code
     
    _getExcludePatterns(self)
    Property target used to get the exclude patterns list.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      absolutePath
    Absolute path to the mbox directory.
      collectMode
    Overridden collect mode for this mbox directory.
      compressMode
    Overridden compress mode for this mbox directory.
      relativeExcludePaths
    List of relative paths to exclude.
      excludePatterns
    List of regular expression patterns to exclude.

    Inherited from object: __class__

    Method Details

    __init__(self, absolutePath=None, collectMode=None, compressMode=None, relativeExcludePaths=None, excludePatterns=None)
    (Constructor)

    source code 

    Constructor for the MboxDir class.

    You should never directly instantiate this class.

    Parameters:
    • absolutePath - Absolute path to an mbox directory on disk.
    • collectMode - Overridden collect mode for this directory.
    • compressMode - Overridden compression mode for this directory.
    • relativeExcludePaths - List of relative paths to exclude.
    • excludePatterns - List of regular expression patterns to exclude
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.
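    The porting pattern used here, keeping the original __cmp__ and defining the Python 3 rich comparisons in terms of it, looks roughly like this (an illustrative class, not MboxDir itself):

```python
class Comparable:
    """Sketch of the __cmp__-based comparison pattern described above."""

    def __init__(self, value):
        self.value = value

    def __cmp__(self, other):
        # Original Python 2 semantics: -1/0/1 for <, ==, >.
        if self.value < other.value:
            return -1
        if self.value > other.value:
            return 1
        return 0

    def __eq__(self, other):
        return self.__cmp__(other) == 0

    def __lt__(self, other):
        return self.__cmp__(other) < 0

    def __gt__(self, other):
        return self.__cmp__(other) > 0
```

    Python 3 never calls __cmp__ itself, so the rich-comparison methods are the only entry points; keeping __cmp__ preserves the single ordering definition from the Python 2 code.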

    _setAbsolutePath(self, value)

    source code 

    Property target used to set the absolute path. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setCollectMode(self, value)

    source code 

    Property target used to set the collect mode. If not None, the mode must be one of the values in VALID_COLLECT_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setCompressMode(self, value)

    source code 

    Property target used to set the compress mode. If not None, the mode must be one of the values in VALID_COMPRESS_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setRelativeExcludePaths(self, value)

    source code 

    Property target used to set the relative exclude paths list. Elements do not have to exist on disk at the time of assignment.


    Property Details

    absolutePath

    Absolute path to the mbox directory.

    Get Method:
    _getAbsolutePath(self) - Property target used to get the absolute path.
    Set Method:
    _setAbsolutePath(self, value) - Property target used to set the absolute path.

    collectMode

    Overridden collect mode for this mbox directory.

    Get Method:
    _getCollectMode(self) - Property target used to get the collect mode.
    Set Method:
    _setCollectMode(self, value) - Property target used to set the collect mode.

    compressMode

    Overridden compress mode for this mbox directory.

    Get Method:
    _getCompressMode(self) - Property target used to get the compress mode.
    Set Method:
    _setCompressMode(self, value) - Property target used to set the compress mode.

    relativeExcludePaths

    List of relative paths to exclude.

    Get Method:
    _getRelativeExcludePaths(self) - Property target used to get the relative exclude paths list.
    Set Method:
    _setRelativeExcludePaths(self, value) - Property target used to set the relative exclude paths list.

    excludePatterns

    List of regular expression patterns to exclude.

    Get Method:
    _getExcludePatterns(self) - Property target used to get the exclude patterns list.
    Set Method:
    _setExcludePatterns(self, value) - Property target used to set the exclude patterns list.

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.writers.cdwriter.CdWriter-class.html: CedarBackup3.writers.cdwriter.CdWriter
    Package CedarBackup3 :: Package writers :: Module cdwriter :: Class CdWriter

    Class CdWriter

    source code

    object --+
             |
            CdWriter
    

    Class representing a device that knows how to write CD media.

    Summary

    This is a class representing a device that knows how to write CD media. It provides common operations for the device, such as ejecting the media, writing an ISO image to the media, or checking for the current media capacity. It also provides a place to store device attributes, such as whether the device supports writing multisession discs, etc.

    This class is implemented in terms of the eject and cdrecord programs, both of which should be available on most UN*X platforms.

    Image Writer Interface

    The following methods make up the "image writer" interface shared with other kinds of writers (such as DVD writers):

      __init__
      initializeImage()
      addImageEntry()
      writeImage()
      setImageNewDisc()
      retrieveCapacity()
      getEstimatedImageSize()
    

    Only these methods will be used by other Cedar Backup functionality that expects a compatible image writer.

    The media attribute is also assumed to be available.

    Media Types

    This class knows how to write to two different kinds of media (CD-R and CD-RW), in two capacities each, represented by the following constants:

    • MEDIA_CDR_74: 74-minute CD-R media (650 MB capacity)
    • MEDIA_CDRW_74: 74-minute CD-RW media (650 MB capacity)
    • MEDIA_CDR_80: 80-minute CD-R media (700 MB capacity)
    • MEDIA_CDRW_80: 80-minute CD-RW media (700 MB capacity)

    Most hardware can read and write both 74-minute and 80-minute CD-R and CD-RW media. Some older drives may only be able to write CD-R media. The difference between the two is that CD-RW media can be rewritten (erased), while CD-R media cannot be.

    I do not support any other configurations for a couple of reasons. The first is that I've never tested any other kind of media. The second is that anything other than 74 or 80 minute is apparently non-standard.

    Device Attributes vs. Media Attributes

    A given writer instance has two different kinds of attributes associated with it, which I call device attributes and media attributes. Device attributes are things which can be determined without looking at the media, such as whether the drive supports writing multisession disks or has a tray. Media attributes are attributes which vary depending on the state of the media, such as the remaining capacity on a disc. In general, device attributes are available via instance variables and are constant over the life of an object, while media attributes can be retrieved through method calls.

    Talking to Hardware

    This class needs to talk to CD writer hardware in two different ways: through cdrecord to actually write to the media, and through the filesystem to do things like open and close the tray.

    Historically, CdWriter has interacted with cdrecord using the scsiId attribute, and with most other utilities using the device attribute. This changed somewhat in Cedar Backup 2.9.0.

    When Cedar Backup was first written, the only way to interact with cdrecord was by using a SCSI device id. IDE devices were mapped to pseudo-SCSI devices through the kernel. Later, extended SCSI "methods" arrived, and it became common to see ATA:1,0,0 or ATAPI:0,0,0 as a way to address IDE hardware. By late 2006, ATA and ATAPI had apparently been deprecated in favor of just addressing the IDE device directly by name, i.e. /dev/cdrw.

    Because of this latest development, it no longer makes sense to require a CdWriter to be created with a SCSI id -- there might not be one. So, the passed-in SCSI id is now optional. Also, there is now a hardwareId attribute. This attribute is filled in with either the SCSI id (if provided) or the device (otherwise). The hardware id is the value that will be passed to cdrecord in the dev= argument.
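    The fallback just described amounts to the following sketch; the real class stores the result in its hardwareId attribute:

```python
def derive_hardware_id(device, scsiId=None):
    """Prefer the SCSI id if one was provided, otherwise address the
    device path directly, per the behavior described above."""
    return scsiId if scsiId is not None else device

# The result becomes cdrecord's dev= argument, e.g. "dev=/dev/cdrw".
```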

    Testing

    It's rather difficult to test this code in an automated fashion, even if you have access to a physical CD writer drive. It's even more difficult to test it if you are running on some build daemon (think of a Debian autobuilder) which can't be expected to have any hardware or any media that you could write to.

    Because of this, much of the implementation below is in terms of static methods that are supposed to take defined actions based on their arguments. Public methods are then implemented in terms of a series of calls to simplistic static methods. This way, we can test as much as possible of the functionality via testing the static methods, while hoping that if the static methods are called appropriately, things will work properly. It's not perfect, but it's much better than no testing at all.
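    For example, a static argument builder in this style can be unit-tested by asserting on the returned list, with no hardware involved. This is an illustrative function, not the real _buildBlankArgs, which may choose different cdrecord options:

```python
def build_blank_args(hardwareId, driveSpeed=None):
    """Build a cdrecord argument list for blanking media, as a sketch."""
    args = ["-v", "blank=fast", "dev=%s" % hardwareId]
    if driveSpeed is not None:
        # Keep the speed option near the front, after the verbosity flag.
        args.insert(1, "speed=%d" % driveSpeed)
    return args
```

    A test simply compares the list against the expected arguments, which is exactly the approach the static methods above enable.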

    Instance Methods
     
    __init__(self, device, scsiId=None, driveSpeed=None, mediaType=1, noEject=False, refreshMediaDelay=0, ejectDelay=0, unittest=False)
    Initializes a CD writer object.
    source code
     
    isRewritable(self)
    Indicates whether the media is rewritable per configuration.
    source code
     
    _retrieveProperties(self)
    Retrieves properties for a device from cdrecord.
    source code
     
    retrieveCapacity(self, entireDisc=False, useMulti=True)
    Retrieves capacity for the current media in terms of a MediaCapacity object.
    source code
     
    _getBoundaries(self, entireDisc=False, useMulti=True)
    Gets the ISO boundaries for the media.
    source code
     
    openTray(self)
    Opens the device's tray and leaves it open.
    source code
     
    closeTray(self)
    Closes the device's tray.
    source code
     
    refreshMedia(self)
    Opens and then immediately closes the device's tray, to refresh the device's idea of the media.
    source code
     
    writeImage(self, imagePath=None, newDisc=False, writeMulti=True)
    Writes an ISO image to the media in the device.
    source code
     
    _blankMedia(self)
    Blanks the media in the device, if the media is rewritable.
    source code
     
    initializeImage(self, newDisc, tmpdir, mediaLabel=None)
    Initializes the writer's associated ISO image.
    source code
     
    addImageEntry(self, path, graftPoint)
    Adds a filepath entry to the writer's associated ISO image.
    source code
     
    setImageNewDisc(self, newDisc)
    Resets (overrides) the newDisc flag on the internal image.
    source code
     
    getEstimatedImageSize(self)
    Gets the estimated size of the image associated with the writer.
    source code
     
    _getDevice(self)
    Property target used to get the device value.
    source code
     
    _getScsiId(self)
    Property target used to get the SCSI id value.
    source code
     
    _getHardwareId(self)
    Property target used to get the hardware id value.
    source code
     
    _getDriveSpeed(self)
    Property target used to get the drive speed.
    source code
     
    _getMedia(self)
    Property target used to get the media description.
    source code
     
    _getDeviceType(self)
    Property target used to get the device type.
    source code
     
    _getDeviceVendor(self)
    Property target used to get the device vendor.
    source code
     
    _getDeviceId(self)
    Property target used to get the device id.
    source code
     
    _getDeviceBufferSize(self)
    Property target used to get the device buffer size.
    source code
     
    _getDeviceSupportsMulti(self)
    Property target used to get the device-support-multi flag.
    source code
     
    _getDeviceHasTray(self)
    Property target used to get the device-has-tray flag.
    source code
     
    _getDeviceCanEject(self)
    Property target used to get the device-can-eject flag.
    source code
     
    _getRefreshMediaDelay(self)
    Property target used to get the configured refresh media delay, in seconds.
    source code
     
    _getEjectDelay(self)
    Property target used to get the configured eject delay, in seconds.
    source code
     
    unlockTray(self)
    Unlocks the device's tray.
    source code
     
    _createImage(self)
    Creates an ISO image based on configuration in self._image.
    source code
     
    _writeImage(self, imagePath, writeMulti, newDisc)
    Write an ISO image to disc using cdrecord.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Static Methods
     
    _calculateCapacity(media, boundaries)
    Calculates capacity for the media in terms of boundaries.
    source code
     
    _parsePropertiesOutput(output)
    Parses the output from a cdrecord properties command.
    source code
     
    _parseBoundariesOutput(output)
    Parses the output from a cdrecord capacity command.
    source code
     
    _buildOpenTrayArgs(device)
    Builds a list of arguments to be passed to an eject command.
    source code
     
    _buildCloseTrayArgs(device)
    Builds a list of arguments to be passed to an eject command.
    source code
     
    _buildPropertiesArgs(hardwareId)
    Builds a list of arguments to be passed to a cdrecord command.
    source code
     
    _buildBoundariesArgs(hardwareId)
    Builds a list of arguments to be passed to a cdrecord command.
    source code
     
    _buildBlankArgs(hardwareId, driveSpeed=None)
    Builds a list of arguments to be passed to a cdrecord command.
    source code
     
    _buildWriteArgs(hardwareId, imagePath, driveSpeed=None, writeMulti=True)
    Builds a list of arguments to be passed to a cdrecord command.
    source code
     
    _buildUnlockTrayArgs(device)
    Builds a list of arguments to be passed to an eject command.
    source code
    Properties
      device
    Filesystem device name for this writer.
      scsiId
    SCSI id for the device, in the form [<method>:]scsibus,target,lun.
      hardwareId
    Hardware id for this writer, either SCSI id or device path.
      driveSpeed
    Speed at which the drive writes.
      media
    Definition of media that is expected to be in the device.
      deviceType
    Type of the device, as returned from cdrecord -prcap.
      deviceVendor
    Vendor of the device, as returned from cdrecord -prcap.
      deviceId
    Device identification, as returned from cdrecord -prcap.
      deviceBufferSize
    Size of the device's write buffer, in bytes.
      deviceSupportsMulti
    Indicates whether device supports multisession discs.
      deviceHasTray
    Indicates whether the device has a media tray.
      deviceCanEject
    Indicates whether the device supports ejecting its media.
      refreshMediaDelay
    Refresh media delay, in seconds.
      ejectDelay
    Eject delay, in seconds.

    Inherited from object: __class__

    Method Details

    __init__(self, device, scsiId=None, driveSpeed=None, mediaType=1, noEject=False, refreshMediaDelay=0, ejectDelay=0, unittest=False)
    (Constructor)

    source code 

    Initializes a CD writer object.

    The current user must have write access to the device at the time the object is instantiated, or an exception will be thrown. However, no media-related validation is done, and in fact there is no need for any media to be in the drive until one of the other media attribute-related methods is called.

    The various instance variables such as deviceType, deviceVendor, etc. might be None, if we're unable to parse this specific information from the cdrecord output. This information is just for reference.

    The SCSI id is optional, but the device path is required. If the SCSI id is passed in, then the hardware id attribute will be taken from the SCSI id. Otherwise, the hardware id will be taken from the device.

    If cdrecord improperly detects whether your writer device has a tray and can be safely opened and closed, then pass in noEject=True. This will override the detected properties and the device will never be ejected.

    Parameters:
    • device (Absolute path to a filesystem device, i.e. /dev/cdrw) - Filesystem device associated with this writer.
    • scsiId (If provided, SCSI id in the form [<method>:]scsibus,target,lun) - SCSI id for the device (optional).
    • driveSpeed (Use 2 for 2x device, etc. or None to use device default.) - Speed at which the drive writes.
    • mediaType (One of the valid media type as discussed above.) - Type of the media that is assumed to be in the drive.
    • noEject (Boolean true/false) - Overrides properties to indicate that the device does not support eject.
    • refreshMediaDelay (Number of seconds, an integer >= 0) - Refresh media delay to use, if any
    • ejectDelay (Number of seconds, an integer >= 0) - Eject delay to use, if any
    • unittest (Boolean true/false) - Turns off certain validations, for use in unit testing.
    Raises:
    • ValueError - If the device is not valid for some reason.
    • ValueError - If the SCSI id is not in a valid form.
    • ValueError - If the drive speed is not an integer >= 1.
    • IOError - If device properties could not be read for some reason.
    Overrides: object.__init__

    Note: The unittest parameter should never be set to True outside of Cedar Backup code. It is intended for use in unit testing Cedar Backup internals and has no other sensible purpose.
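
    The hardware id rule described above (prefer the SCSI id when given, otherwise fall back to the device path) is simple enough to sketch as a standalone helper. This is illustrative only, not part of the class API:

    ```python
    def hardware_id(device, scsi_id=None):
        # Documented rule: if a SCSI id is provided, it becomes the
        # hardware id; otherwise the device path is used.
        return scsi_id if scsi_id is not None else device
    ```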

    _retrieveProperties(self)

    Retrieves properties for a device from cdrecord.

    The results are returned as a tuple of the object device attributes as returned from _parsePropertiesOutput: (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject).

    Returns:
    Results tuple as described above.
    Raises:
    • IOError - If there is a problem talking to the device.

    retrieveCapacity(self, entireDisc=False, useMulti=True)

    Retrieves capacity for the current media in terms of a MediaCapacity object.

    If entireDisc is passed in as True the capacity will be for the entire disc, as if it were to be rewritten from scratch. If the drive does not support writing multisession discs or if useMulti is passed in as False, the capacity will also be as if the disc were to be rewritten from scratch, but the indicated boundaries value will be None. The same will happen if the disc cannot be read for some reason. Otherwise, the capacity (including the boundaries) will represent whatever space remains on the disc to be filled by future sessions.

    Parameters:
    • entireDisc (Boolean true/false) - Indicates whether to return capacity for entire disc.
    • useMulti (Boolean true/false) - Indicates whether a multisession disc should be assumed, if possible.
    Returns:
    MediaCapacity object describing the capacity of the media.
    Raises:
    • IOError - If the media could not be read for some reason.

    _getBoundaries(self, entireDisc=False, useMulti=True)

    Gets the ISO boundaries for the media.

    If entireDisc is passed in as True the boundaries will be None, as if the disc were to be rewritten from scratch. If the drive does not support writing multisession discs, the returned value will be None. The same will happen if the disc can't be read for some reason. Otherwise, the returned value will represent the boundaries of the disc's current contents.

    The results are returned as a tuple of (lower, upper) as needed by the IsoImage class. Note that these values are in terms of ISO sectors, not bytes. Clients should generally consider the boundaries value opaque, however.

    Parameters:
    • entireDisc (Boolean true/false) - Indicates whether to return capacity for entire disc.
    • useMulti (Boolean true/false) - Indicates whether a multisession disc should be assumed, if possible.
    Returns:
    Boundaries tuple or None, as described above.
    Raises:
    • IOError - If the media could not be read for some reason.

    _calculateCapacity(media, boundaries)
    Static Method

    Calculates capacity for the media in terms of boundaries.

    If boundaries is None or the lower bound is 0 (zero), then the capacity will be for the entire disc minus the initial lead in. Otherwise, capacity will be as if the caller wanted to add an additional session to the end of the existing data on the disc.

    Parameters:
    • media - MediaDescription object describing the media capacity.
    • boundaries - Session boundaries as returned from _getBoundaries.
    Returns:
    MediaCapacity object describing the capacity of the media.
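
    The rule above can be sketched as arithmetic over ISO sectors. The sector size is the standard ISO-9660 value, but the lead-in constant here is an illustrative assumption, not the exact figure used by the method:

    ```python
    SECTOR_BYTES = 2048      # ISO-9660 sector size
    LEADIN_SECTORS = 11400   # illustrative lead-in allowance (assumption)

    def calculate_capacity(total_sectors, boundaries):
        """Sketch of the documented rule: with no boundaries (or a lower
        bound of zero), the whole disc minus the initial lead-in is
        available; otherwise only the space above the upper boundary,
        as if appending another session."""
        if boundaries is None or boundaries[0] == 0:
            return (total_sectors - LEADIN_SECTORS) * SECTOR_BYTES
        lower, upper = boundaries
        return (total_sectors - upper) * SECTOR_BYTES
    ```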

    openTray(self)

    Opens the device's tray and leaves it open.

    This only works if the device has a tray and supports ejecting its media. We have no way to know if the tray is currently open or closed, so we just send the appropriate command and hope for the best. If the device does not have a tray or does not support ejecting its media, then we do nothing.

    If the writer was constructed with noEject=True, then this is a no-op.

    Starting with Debian wheezy on my backup hardware, I started seeing consistent problems with the eject command. I couldn't tell whether these problems were due to the device management system or to the new kernel (3.2.0). Initially, I saw simple eject failures, possibly because I was opening and closing the tray too quickly. I worked around that behavior with the new ejectDelay flag.

    Later, I sometimes ran into issues after writing an image to a disc: eject would give errors like "unable to eject, last error: Inappropriate ioctl for device". Various sources online (like Ubuntu bug #875543) suggested that the drive was being locked somehow, and that the workaround was to run 'eject -i off' to unlock it. Sure enough, that fixed the problem for me, so now it's a normal error-handling strategy.

    Raises:
    • IOError - If there is an error talking to the device.
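
    The unlock-and-retry strategy described above can be sketched as follows. The run parameter is injected so the sketch can be exercised without real hardware; the exact commands the real code builds may differ slightly:

    ```python
    import subprocess

    def open_tray(device, run=subprocess.run):
        """Sketch of the documented recovery: if the initial eject fails,
        unlock the tray with 'eject -i off' and retry once."""
        try:
            run(["eject", device], check=True)
        except subprocess.CalledProcessError:
            run(["eject", "-i", "off", device], check=True)
            run(["eject", device], check=True)
    ```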

    closeTray(self)

    Closes the device's tray.

    This only works if the device has a tray and supports ejecting its media. We have no way to know if the tray is currently open or closed, so we just send the appropriate command and hope for the best. If the device does not have a tray or does not support ejecting its media, then we do nothing.

    If the writer was constructed with noEject=True, then this is a no-op.

    Raises:
    • IOError - If there is an error talking to the device.

    refreshMedia(self)

    Opens and then immediately closes the device's tray, to refresh the device's idea of the media.

    Sometimes, a device gets confused about the state of its media. Often, all it takes to solve the problem is to eject the media and then immediately reload it. (There are also configurable eject and refresh media delays which can be applied, for situations where this makes a difference.)

    This only works if the device has a tray and supports ejecting its media. We have no way to know if the tray is currently open or closed, so we just send the appropriate command and hope for the best. If the device does not have a tray or does not support ejecting its media, then we do nothing. The configured delays still apply, though.

    Raises:
    • IOError - If there is an error talking to the device.
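
    The eject-and-reload sequence, with its configured delay, can be sketched like this. The callables are injected purely so the sketch is testable without hardware:

    ```python
    import time

    def refresh_media(open_tray, close_tray, refresh_media_delay=0, sleep=time.sleep):
        """Sketch of the documented sequence: eject the media, immediately
        reload it, then wait out any configured refresh delay so the
        device can settle."""
        open_tray()
        close_tray()
        if refresh_media_delay > 0:
            sleep(refresh_media_delay)
    ```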

    writeImage(self, imagePath=None, newDisc=False, writeMulti=True)

    Writes an ISO image to the media in the device.

    If newDisc is passed in as True, we assume that the entire disc will be overwritten, and the media will be blanked before writing it if possible (i.e. if the media is rewritable).

    If writeMulti is passed in as True, then a multisession disc will be written if possible (i.e. if the drive supports writing multisession discs).

    If imagePath is passed in as None, then the existing image configured with initializeImage will be used. Under these circumstances, the passed-in newDisc flag will be ignored.

    By default, we assume that the disc can be written multisession and that we should append to the current contents of the disc. In any case, the ISO image must be generated appropriately (i.e. must take into account any existing session boundaries, etc.)

    Parameters:
    • imagePath (String representing a path on disk) - Path to an ISO image on disk, or None to use writer's image
    • newDisc (Boolean true/false) - Indicates whether the entire disc will be overwritten.
    • writeMulti (Boolean true/false) - Indicates whether a multisession disc should be written, if possible.
    Raises:
    • ValueError - If the image path is not absolute.
    • ValueError - If some path cannot be encoded properly.
    • IOError - If the media could not be written to for some reason.
    • ValueError - If no image is passed in and initializeImage() was not previously called

    _blankMedia(self)

    Blanks the media in the device, if the media is rewritable.

    Raises:
    • IOError - If the media could not be written to for some reason.

    _parsePropertiesOutput(output)
    Static Method

    Parses the output from a cdrecord properties command.

    The output parameter should be a list of strings as returned from executeCommand for a cdrecord command with arguments as from _buildPropertiesArgs. The list of strings will be parsed to yield information about the properties of the device.

    The output is expected to be a huge long list of strings. Unfortunately, the strings aren't in a completely regular format. However, the format of individual lines seems to be regular enough that we can look for specific values. Two kinds of parsing take place: one kind of parsing picks out specific values like the device id, device vendor, etc. The other kind of parsing just sets a boolean flag True if a matching line is found. All of the parsing is done with regular expressions.

    Right now, pretty much nothing in the output is required and we should parse an empty document successfully (albeit resulting in a device that can't eject, doesn't have a tray and doesn't support multisession discs). I had briefly considered erroring out if certain lines weren't found or couldn't be parsed, but that seems like a bad idea given that most of the information is just for reference.

    The results are returned as a tuple of the object device attributes: (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject).

    Parameters:
    • output - Output from a cdrecord -prcap command.
    Returns:
    Results tuple as described above.
    Raises:
    • IOError - If there is a problem parsing the output.

    _parseBoundariesOutput(output)
    Static Method

    Parses the output from a cdrecord capacity command.

    The output parameter should be a list of strings as returned from executeCommand for a cdrecord command with arguments as from _buildBoundariesArgs. The list of strings will be parsed to yield information about the capacity of the media in the device.

    Basically, we expect the list of strings to include just one line, a pair of values. There isn't supposed to be whitespace, but we allow it anyway in the regular expression. Any lines below the one line we parse are completely ignored. It would be a good idea to ignore stderr when executing the cdrecord command that generates output for this method, because sometimes cdrecord spits out kernel warnings about the actual output.

    The results are returned as a tuple of (lower, upper) as needed by the IsoImage class. Note that these values are in terms of ISO sectors, not bytes. Clients should generally consider the boundaries value opaque, however.

    Parameters:
    • output - Output from a cdrecord -msinfo command.
    Returns:
    Boundaries tuple as described above.
    Raises:
    • IOError - If there is a problem parsing the output.

    Note: If the boundaries output can't be parsed, we return None.
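
    A sketch of such a parse, assuming output like "0,102592" on the first line of cdrecord -msinfo output. The actual regular expression in the source may differ; this one simply tolerates whitespace around the two values, as described above:

    ```python
    import re

    _BOUNDARIES = re.compile(r"^\s*(\d+)\s*,\s*(\d+)\s*$")

    def parse_boundaries_output(output):
        """Parse the first line of output into a (lower, upper) tuple of
        ISO sectors, returning None when the line can't be parsed."""
        if not output:
            return None
        match = _BOUNDARIES.match(output[0])
        if match is None:
            return None
        return (int(match.group(1)), int(match.group(2)))
    ```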

    _buildOpenTrayArgs(device)
    Static Method

    Builds a list of arguments to be passed to an eject command.

    The arguments will cause the eject command to open the tray and eject the media. No validation is done by this method as to whether this action actually makes sense.

    Parameters:
    • device - Filesystem device name for this writer, i.e. /dev/cdrw.
    Returns:
    List suitable for passing to util.executeCommand as args.

    _buildCloseTrayArgs(device)
    Static Method

    Builds a list of arguments to be passed to an eject command.

    The arguments will cause the eject command to close the tray and reload the media. No validation is done by this method as to whether this action actually makes sense.

    Parameters:
    • device - Filesystem device name for this writer, i.e. /dev/cdrw.
    Returns:
    List suitable for passing to util.executeCommand as args.

    _buildPropertiesArgs(hardwareId)
    Static Method

    Builds a list of arguments to be passed to a cdrecord command.

    The arguments will cause the cdrecord command to ask the device for a list of its capabilities via the -prcap switch.

    Parameters:
    • hardwareId - Hardware id for the device (either SCSI id or device path)
    Returns:
    List suitable for passing to util.executeCommand as args.

    _buildBoundariesArgs(hardwareId)
    Static Method

    Builds a list of arguments to be passed to a cdrecord command.

    The arguments will cause the cdrecord command to ask the device for the current multisession boundaries of the media using the -msinfo switch.

    Parameters:
    • hardwareId - Hardware id for the device (either SCSI id or device path)
    Returns:
    List suitable for passing to util.executeCommand as args.

    _buildBlankArgs(hardwareId, driveSpeed=None)
    Static Method

    Builds a list of arguments to be passed to a cdrecord command.

    The arguments will cause the cdrecord command to blank the media in the device identified by hardwareId. No validation is done by this method as to whether the action makes sense (i.e. to whether the media even can be blanked).

    Parameters:
    • hardwareId - Hardware id for the device (either SCSI id or device path)
    • driveSpeed - Speed at which the drive writes.
    Returns:
    List suitable for passing to util.executeCommand as args.
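
    A hypothetical sketch of what such an argument list might look like. The exact flags are defined in the source; blank=fast is cdrecord's quick-blank mode, and the speed= and dev= options select the write speed and target device:

    ```python
    def build_blank_args(hardware_id, drive_speed=None):
        # Hypothetical argument list for a cdrecord blanking command;
        # not necessarily the exact flags the real method emits.
        args = ["-v", "blank=fast"]
        if drive_speed is not None:
            args.append("speed=%d" % drive_speed)
        args.append("dev=%s" % hardware_id)
        return args
    ```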

    _buildWriteArgs(hardwareId, imagePath, driveSpeed=None, writeMulti=True)
    Static Method

    Builds a list of arguments to be passed to a cdrecord command.

    The arguments will cause the cdrecord command to write the indicated ISO image (imagePath) to the media in the device identified by hardwareId. The writeMulti argument controls whether to write a multisession disc. No validation is done by this method as to whether the action makes sense (i.e. to whether the device even can write multisession discs, for instance).

    Parameters:
    • hardwareId - Hardware id for the device (either SCSI id or device path)
    • imagePath - Path to an ISO image on disk.
    • driveSpeed - Speed at which the drive writes.
    • writeMulti - Indicates whether to write a multisession disc.
    Returns:
    List suitable for passing to util.executeCommand as args.

    initializeImage(self, newDisc, tmpdir, mediaLabel=None)

    Initializes the writer's associated ISO image.

    This method initializes the image instance variable so that the caller can use the addImageEntry method. Once entries have been added, the writeImage method can be called with no arguments.

    Parameters:
    • newDisc (Boolean true/false.) - Indicates whether the disc should be re-initialized
    • tmpdir (String representing a directory path on disk) - Temporary directory to use if needed
    • mediaLabel (String, no more than 25 characters long) - Media label to be applied to the image, if any

    addImageEntry(self, path, graftPoint)

    Adds a filepath entry to the writer's associated ISO image.

    The contents of the filepath -- but not the path itself -- will be added to the image at the indicated graft point. If you don't want to use a graft point, just pass None.

    Parameters:
    • path (String representing a path on disk) - File or directory to be added to the image
    • graftPoint (String representing a graft point path, as described above) - Graft point to be used when adding this entry
    Raises:
    • ValueError - If initializeImage() was not previously called

    Note: Before calling this method, you must call initializeImage.

    setImageNewDisc(self, newDisc)

    Resets (overrides) the newDisc flag on the internal image.

    Parameters:
    • newDisc - New disc flag to set
    Raises:
    • ValueError - If initializeImage() was not previously called

    getEstimatedImageSize(self)

    Gets the estimated size of the image associated with the writer.

    Returns:
    Estimated size of the image, in bytes.
    Raises:
    • IOError - If there is a problem calling mkisofs.
    • ValueError - If initializeImage() was not previously called

    unlockTray(self)

    Unlocks the device's tray.

    Raises:
    • IOError - If there is an error talking to the device.

    _createImage(self)

    Creates an ISO image based on configuration in self._image.

    Returns:
    Path to the newly-created ISO image on disk.
    Raises:
    • IOError - If there is an error writing the image to disk.
    • ValueError - If there are no filesystem entries in the image
    • ValueError - If a path cannot be encoded properly.

    _writeImage(self, imagePath, writeMulti, newDisc)

    Writes an ISO image to disc using cdrecord. The disc is blanked first if newDisc is True.

    Parameters:
    • imagePath - Path to an ISO image on disk
    • writeMulti - Indicates whether a multisession disc should be written, if possible.
    • newDisc - Indicates whether the entire disc will be overwritten.

    _buildUnlockTrayArgs(device)
    Static Method

    Builds a list of arguments to be passed to an eject command.

    The arguments will cause the eject command to unlock the tray.

    Parameters:
    • device - Filesystem device name for this writer, i.e. /dev/cdrw.
    Returns:
    List suitable for passing to util.executeCommand as args.

    Property Details

    device

    Filesystem device name for this writer.

    Get Method:
    _getDevice(self) - Property target used to get the device value.

    scsiId

    SCSI id for the device, in the form [<method>:]scsibus,target,lun.

    Get Method:
    _getScsiId(self) - Property target used to get the SCSI id value.

    hardwareId

    Hardware id for this writer, either SCSI id or device path.

    Get Method:
    _getHardwareId(self) - Property target used to get the hardware id value.

    driveSpeed

    Speed at which the drive writes.

    Get Method:
    _getDriveSpeed(self) - Property target used to get the drive speed.

    media

    Definition of media that is expected to be in the device.

    Get Method:
    _getMedia(self) - Property target used to get the media description.

    deviceType

    Type of the device, as returned from cdrecord -prcap.

    Get Method:
    _getDeviceType(self) - Property target used to get the device type.

    deviceVendor

    Vendor of the device, as returned from cdrecord -prcap.

    Get Method:
    _getDeviceVendor(self) - Property target used to get the device vendor.

    deviceId

    Device identification, as returned from cdrecord -prcap.

    Get Method:
    _getDeviceId(self) - Property target used to get the device id.

    deviceBufferSize

    Size of the device's write buffer, in bytes.

    Get Method:
    _getDeviceBufferSize(self) - Property target used to get the device buffer size.

    deviceSupportsMulti

    Indicates whether device supports multisession discs.

    Get Method:
    _getDeviceSupportsMulti(self) - Property target used to get the device-support-multi flag.

    deviceHasTray

    Indicates whether the device has a media tray.

    Get Method:
    _getDeviceHasTray(self) - Property target used to get the device-has-tray flag.

    deviceCanEject

    Indicates whether the device supports ejecting its media.

    Get Method:
    _getDeviceCanEject(self) - Property target used to get the device-can-eject flag.

    refreshMediaDelay

    Refresh media delay, in seconds.

    Get Method:
    _getRefreshMediaDelay(self) - Property target used to get the configured refresh media delay, in seconds.

    ejectDelay

    Eject delay, in seconds.

    Get Method:
    _getEjectDelay(self) - Property target used to get the configured eject delay, in seconds.


    Module image


    Variables

    __package__

    Package CedarBackup3 :: Module xmlutil

    Source Code for Module CedarBackup3.xmlutil

    # -*- coding: iso-8859-1 -*-
    # vim: set ft=python ts=3 sw=3 expandtab:
    # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
    #
    #              C E D A R
    #          S O L U T I O N S       "Software done right."
    #           S O F T W A R E
    #
    # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
    #
    # Copyright (c) 2004-2006,2010,2015 Kenneth J. Pronovici.
    # All rights reserved.
    #
    # Portions Copyright (c) 2000 Fourthought Inc, USA.
    # All Rights Reserved.
    #
    # This program is free software; you can redistribute it and/or
    # modify it under the terms of the GNU General Public License,
    # Version 2, as published by the Free Software Foundation.
    #
    # This program is distributed in the hope that it will be useful,
    # but WITHOUT ANY WARRANTY; without even the implied warranty of
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
    #
    # Copies of the GNU General Public License are available from
    # the Free Software Foundation website, http://www.gnu.org/.
    #
    # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
    #
    # Author   : Kenneth J. Pronovici <pronovic@ieee.org>
    # Language : Python 3 (>= 3.4)
    # Project  : Cedar Backup, release 3
    # Purpose  : Provides general XML-related functionality.
    #
    # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

    ########################################################################
    # Module documentation
    ########################################################################

    """
    Provides general XML-related functionality.

    What I'm trying to do here is abstract much of the functionality that directly
    accesses the DOM tree.  This is not so much to "protect" the other code from
    the DOM, but to standardize the way it's used.  It will also help extension
    authors write code that easily looks more like the rest of Cedar Backup.

    @sort: createInputDom, createOutputDom, serializeDom, isElement, readChildren,
           readFirstChild, readStringList, readString, readInteger, readBoolean,
           addContainerNode, addStringNode, addIntegerNode, addBooleanNode,
           TRUE_BOOLEAN_VALUES, FALSE_BOOLEAN_VALUES, VALID_BOOLEAN_VALUES

    @var TRUE_BOOLEAN_VALUES: List of boolean values in XML representing C{True}.
    @var FALSE_BOOLEAN_VALUES: List of boolean values in XML representing C{False}.
    @var VALID_BOOLEAN_VALUES: List of valid boolean values in XML.

    @author: Kenneth J. Pronovici <pronovic@ieee.org>
    """
    # pylint: disable=C0111,C0103,W0511,W0104,W0106

    ########################################################################
    # Imported modules
    ########################################################################

    # System modules
    import sys
    import re
    import logging
    from io import StringIO

    # XML-related modules
    from xml.parsers.expat import ExpatError
    from xml.dom.minidom import Node
    from xml.dom.minidom import getDOMImplementation
    from xml.dom.minidom import parseString


    ########################################################################
    # Module-wide constants and variables
    ########################################################################

    logger = logging.getLogger("CedarBackup3.log.xml")

    TRUE_BOOLEAN_VALUES   = [ "Y", "y", ]
    FALSE_BOOLEAN_VALUES  = [ "N", "n", ]
    VALID_BOOLEAN_VALUES  = TRUE_BOOLEAN_VALUES + FALSE_BOOLEAN_VALUES


    ########################################################################
    # Functions for creating and parsing DOM trees
    ########################################################################

    
    94 -def createInputDom(xmlData, name="cb_config"):
    95 """ 96 Creates a DOM tree based on reading an XML string. 97 @param name: Assumed base name of the document (root node name). 98 @return: Tuple (xmlDom, parentNode) for the parsed document 99 @raise ValueError: If the document can't be parsed. 100 """ 101 try: 102 xmlDom = parseString(xmlData) 103 parentNode = readFirstChild(xmlDom, name) 104 return (xmlDom, parentNode) 105 except (IOError, ExpatError) as e: 106 raise ValueError("Unable to parse XML document: %s" % e)
    107
    108 -def createOutputDom(name="cb_config"):
    109 """ 110 Creates a DOM tree used for writing an XML document. 111 @param name: Base name of the document (root node name). 112 @return: Tuple (xmlDom, parentNode) for the new document 113 """ 114 impl = getDOMImplementation() 115 xmlDom = impl.createDocument(None, name, None) 116 return (xmlDom, xmlDom.documentElement)
    117 118 119 ######################################################################## 120 # Functions for reading values out of XML documents 121 ######################################################################## 122
    123 -def isElement(node):
    124 """ 125 Returns True or False depending on whether the XML node is an element node. 126 """ 127 return node.nodeType == Node.ELEMENT_NODE
    128
    129 -def readChildren(parent, name):
    130 """ 131 Returns a list of nodes with a given name immediately beneath the 132 parent. 133 134 By "immediately beneath" the parent, we mean from among nodes that are 135 direct children of the passed-in parent node. 136 137 Underneath, we use the Python C{getElementsByTagName} method, which is 138 pretty cool, but which (surprisingly?) returns a list of all children 139 with a given name below the parent, at any level. We just prune that 140 list to include only children whose C{parentNode} matches the passed-in 141 parent. 142 143 @param parent: Parent node to search beneath. 144 @param name: Name of nodes to search for. 145 146 @return: List of child nodes with correct parent, or an empty list if 147 no matching nodes are found. 148 """ 149 lst = [] 150 if parent is not None: 151 result = parent.getElementsByTagName(name) 152 for entry in result: 153 if entry.parentNode is parent: 154 lst.append(entry) 155 return lst
    156
    157 -def readFirstChild(parent, name):
    158 """ 159 Returns the first child with a given name immediately beneath the parent. 160 161 By "immediately beneath" the parent, we mean from among nodes that are 162 direct children of the passed-in parent node. 163 164 @param parent: Parent node to search beneath. 165 @param name: Name of node to search for. 166 167 @return: First properly-named child of parent, or C{None} if no matching nodes are found. 168 """ 169 result = readChildren(parent, name) 170 if result is None or result == []: 171 return None 172 return result[0]
    173
    174 -def readStringList(parent, name):
    175 """ 176 Returns a list of the string contents associated with nodes with a given 177 name immediately beneath the parent. 178 179 By "immediately beneath" the parent, we mean from among nodes that are 180 direct children of the passed-in parent node. 181 182 First, we find all of the nodes using L{readChildren}, and then we 183 retrieve the "string contents" of each of those nodes. The returned list 184 has one entry per matching node. We assume that string contents of a 185 given node belong to the first C{TEXT_NODE} child of that node. Nodes 186 which have no C{TEXT_NODE} children are not represented in the returned 187 list. 188 189 @param parent: Parent node to search beneath. 190 @param name: Name of node to search for. 191 192 @return: List of strings as described above, or C{None} if no matching nodes are found. 193 """ 194 lst = [] 195 result = readChildren(parent, name) 196 for entry in result: 197 if entry.hasChildNodes(): 198 for child in entry.childNodes: 199 if child.nodeType == Node.TEXT_NODE: 200 lst.append(child.nodeValue) 201 break 202 if lst == []: 203 lst = None 204 return lst
    205
    206 -def readString(parent, name):
    207 """ 208 Returns string contents of the first child with a given name immediately 209 beneath the parent. 210 211 By "immediately beneath" the parent, we mean from among nodes that are 212 direct children of the passed-in parent node. We assume that string 213 contents of a given node belong to the first C{TEXT_NODE} child of that 214 node. 215 216 @param parent: Parent node to search beneath. 217 @param name: Name of node to search for. 218 219 @return: String contents of node or C{None} if no matching nodes are found. 220 """ 221 result = readStringList(parent, name) 222 if result is None: 223 return None 224 return result[0]
    225
    226 -def readInteger(parent, name):
    227 """ 228 Returns integer contents of the first child with a given name immediately 229 beneath the parent. 230 231 By "immediately beneath" the parent, we mean from among nodes that are 232 direct children of the passed-in parent node. 233 234 @param parent: Parent node to search beneath. 235 @param name: Name of node to search for. 236 237 @return: Integer contents of node or C{None} if no matching nodes are found. 238 @raise ValueError: If the string at the location can't be converted to an integer. 239 """ 240 result = readString(parent, name) 241 if result is None: 242 return None 243 else: 244 return int(result)
    245
    246 -def readLong(parent, name):
    247 """ 248 Returns long integer contents of the first child with a given name immediately 249 beneath the parent. 250 251 By "immediately beneath" the parent, we mean from among nodes that are 252 direct children of the passed-in parent node. 253 254 @param parent: Parent node to search beneath. 255 @param name: Name of node to search for. 256 257 @return: Long integer contents of node or C{None} if no matching nodes are found. 258 @raise ValueError: If the string at the location can't be converted to an integer. 259 """ 260 result = readString(parent, name) 261 if result is None: 262 return None 263 else: 264 return int(result)
    265
    266 -def readFloat(parent, name):
    267 """ 268 Returns float contents of the first child with a given name immediately 269 beneath the parent. 270 271 By "immediately beneath" the parent, we mean from among nodes that are 272 direct children of the passed-in parent node. 273 274 @param parent: Parent node to search beneath. 275 @param name: Name of node to search for. 276 277 @return: Float contents of node or C{None} if no matching nodes are found. 278 @raise ValueError: If the string at the location can't be converted to a 279 float value. 280 """ 281 result = readString(parent, name) 282 if result is None: 283 return None 284 else: 285 return float(result)
    286
    287 -def readBoolean(parent, name):
    288 """ 289 Returns boolean contents of the first child with a given name immediately 290 beneath the parent. 291 292 By "immediately beneath" the parent, we mean from among nodes that are 293 direct children of the passed-in parent node. 294 295 The string value of the node must be one of the values in L{VALID_BOOLEAN_VALUES}. 296 297 @param parent: Parent node to search beneath. 298 @param name: Name of node to search for. 299 300 @return: Boolean contents of node or C{None} if no matching nodes are found. 301 @raise ValueError: If the string at the location can't be converted to a boolean. 302 """ 303 result = readString(parent, name) 304 if result is None: 305 return None 306 else: 307 if result in TRUE_BOOLEAN_VALUES: 308 return True 309 elif result in FALSE_BOOLEAN_VALUES: 310 return False 311 else: 312 raise ValueError("Boolean values must be one of %s." % VALID_BOOLEAN_VALUES)
########################################################################
# Functions for writing values into XML documents
########################################################################

def addContainerNode(xmlDom, parentNode, nodeName):
   """
   Adds a container node as the next child of a parent node.

   @param xmlDom: DOM tree as from C{impl.createDocument()}.
   @param parentNode: Parent node to create child for.
   @param nodeName: Name of the new container node.

   @return: Reference to the newly-created node.
   """
   containerNode = xmlDom.createElement(nodeName)
   parentNode.appendChild(containerNode)
   return containerNode

def addStringNode(xmlDom, parentNode, nodeName, nodeValue):
   """
   Adds a text node as the next child of a parent, to contain a string.

   If the C{nodeValue} is None, then the node will be created, but will be
   empty (i.e. will contain no text node child).

   @param xmlDom: DOM tree as from C{impl.createDocument()}.
   @param parentNode: Parent node to create child for.
   @param nodeName: Name of the new container node.
   @param nodeValue: The value to put into the node.

   @return: Reference to the newly-created node.
   """
   containerNode = addContainerNode(xmlDom, parentNode, nodeName)
   if nodeValue is not None:
      textNode = xmlDom.createTextNode(nodeValue)
      containerNode.appendChild(textNode)
   return containerNode

def addIntegerNode(xmlDom, parentNode, nodeName, nodeValue):
   """
   Adds a text node as the next child of a parent, to contain an integer.

   If the C{nodeValue} is None, then the node will be created, but will be
   empty (i.e. will contain no text node child).

   The integer will be converted to a string using "%d".  The result will be
   added to the document via L{addStringNode}.

   @param xmlDom: DOM tree as from C{impl.createDocument()}.
   @param parentNode: Parent node to create child for.
   @param nodeName: Name of the new container node.
   @param nodeValue: The value to put into the node.

   @return: Reference to the newly-created node.
   """
   if nodeValue is None:
      return addStringNode(xmlDom, parentNode, nodeName, None)
   else:
      return addStringNode(xmlDom, parentNode, nodeName, "%d" % nodeValue)  # %d works for both int and long

def addLongNode(xmlDom, parentNode, nodeName, nodeValue):
   """
   Adds a text node as the next child of a parent, to contain a long integer.

   If the C{nodeValue} is None, then the node will be created, but will be
   empty (i.e. will contain no text node child).

   The integer will be converted to a string using "%d".  The result will be
   added to the document via L{addStringNode}.

   @param xmlDom: DOM tree as from C{impl.createDocument()}.
   @param parentNode: Parent node to create child for.
   @param nodeName: Name of the new container node.
   @param nodeValue: The value to put into the node.

   @return: Reference to the newly-created node.
   """
   if nodeValue is None:
      return addStringNode(xmlDom, parentNode, nodeName, None)
   else:
      return addStringNode(xmlDom, parentNode, nodeName, "%d" % nodeValue)  # %d works for both int and long

def addBooleanNode(xmlDom, parentNode, nodeName, nodeValue):
   """
   Adds a text node as the next child of a parent, to contain a boolean.

   If the C{nodeValue} is None, then the node will be created, but will be
   empty (i.e. will contain no text node child).

   Boolean C{True}, or anything else interpreted as C{True} by Python, will
   be converted to a string "Y".  Anything else will be converted to a
   string "N".  The result is added to the document via L{addStringNode}.

   @param xmlDom: DOM tree as from C{impl.createDocument()}.
   @param parentNode: Parent node to create child for.
   @param nodeName: Name of the new container node.
   @param nodeValue: The value to put into the node.

   @return: Reference to the newly-created node.
   """
   if nodeValue is None:
      return addStringNode(xmlDom, parentNode, nodeName, None)
   else:
      if nodeValue:
         return addStringNode(xmlDom, parentNode, nodeName, "Y")
      else:
         return addStringNode(xmlDom, parentNode, nodeName, "N")
########################################################################
# Functions for serializing DOM trees
########################################################################

def serializeDom(xmlDom, indent=3):
   """
   Serializes a DOM tree and returns the result in a string.
   @param xmlDom: XML DOM tree to serialize
   @param indent: Number of spaces to indent, as an integer
   @return: String form of DOM tree, pretty-printed.
   """
   xmlBuffer = StringIO()
   serializer = Serializer(xmlBuffer, "UTF-8", indent=indent)
   serializer.serialize(xmlDom)
   xmlData = xmlBuffer.getvalue()
   xmlBuffer.close()
   return xmlData
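For comparison, the standard library's minidom can produce similar pretty-printed output without the custom Serializer class; this sketch mirrors `serializeDom()`'s default three-space indent, though the exact whitespace and prolog details differ from the hand-rolled serializer:

```python
import xml.dom.minidom

impl = xml.dom.minidom.getDOMImplementation()
xmlDom = impl.createDocument(None, "config", None)
child = xmlDom.createElement("retries")
child.appendChild(xmlDom.createTextNode("3"))
xmlDom.documentElement.appendChild(child)

# Three-space indent, mirroring serializeDom()'s default of indent=3.
pretty = xmlDom.toprettyxml(indent=3 * " ")
```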
class Serializer(object):

   """
   XML serializer class.

   This is a customized serializer that I hacked together based on what I found
   in the PyXML distribution.  Basically, around release 2.7.0, the only reason
   I still had a dependency on PyXML was for the PrettyPrint functionality, and
   that seemed pointless.  So, I stripped the PrettyPrint code out of PyXML and
   hacked bits of it off until it did just what I needed and no more.

   This code started out being called PrintVisitor, but I decided it makes more
   sense just calling it a serializer.  I've made nearly all of the methods
   private, and I've added a new high-level serialize() method rather than
   having clients call C{visit()}.

   Anyway, as a consequence of my hacking with it, this can't quite be called a
   complete XML serializer any more.  I ripped out support for HTML and XHTML,
   and there is also no longer any support for namespaces (which I took out
   because this dragged along a lot of extra code, and Cedar Backup doesn't use
   namespaces).  However, everything else should pretty much work as expected.

   @copyright: This code, prior to customization, was part of the PyXML
   codebase, and before that was part of the 4DOM suite developed by
   Fourthought, Inc.  In its original form, it was Copyright (c) 2000
   Fourthought Inc, USA; All Rights Reserved.
   """

   def __init__(self, stream=sys.stdout, encoding="UTF-8", indent=3):
      """
      Initialize a serializer.
      @param stream: Stream to write output to.
      @param encoding: Output encoding.
      @param indent: Number of spaces to indent, as an integer
      """
      self.stream = stream
      self.encoding = encoding
      self._indent = indent * " "
      self._depth = 0
      self._inText = 0
   def serialize(self, xmlDom):
      """
      Serialize the passed-in XML document.
      @param xmlDom: XML DOM tree to serialize
      @raise ValueError: If there's an unknown node type in the document.
      """
      self._visit(xmlDom)
      self.stream.write("\n")

   def _write(self, text):
      obj = _encodeText(text, self.encoding)
      self.stream.write(obj)
      return

   def _tryIndent(self):
      if not self._inText and self._indent:
         self._write('\n' + self._indent*self._depth)
      return

   def _visit(self, node):
      """
      @raise ValueError: If there's an unknown node type in the document.
      """
      if node.nodeType == Node.ELEMENT_NODE:
         return self._visitElement(node)
      elif node.nodeType == Node.ATTRIBUTE_NODE:
         return self._visitAttr(node)
      elif node.nodeType == Node.TEXT_NODE:
         return self._visitText(node)
      elif node.nodeType == Node.CDATA_SECTION_NODE:
         return self._visitCDATASection(node)
      elif node.nodeType == Node.ENTITY_REFERENCE_NODE:
         return self._visitEntityReference(node)
      elif node.nodeType == Node.ENTITY_NODE:
         return self._visitEntity(node)
      elif node.nodeType == Node.PROCESSING_INSTRUCTION_NODE:
         return self._visitProcessingInstruction(node)
      elif node.nodeType == Node.COMMENT_NODE:
         return self._visitComment(node)
      elif node.nodeType == Node.DOCUMENT_NODE:
         return self._visitDocument(node)
      elif node.nodeType == Node.DOCUMENT_TYPE_NODE:
         return self._visitDocumentType(node)
      elif node.nodeType == Node.DOCUMENT_FRAGMENT_NODE:
         return self._visitDocumentFragment(node)
      elif node.nodeType == Node.NOTATION_NODE:
         return self._visitNotation(node)
      # It has a node type, but we don't know how to handle it
      raise ValueError("Unknown node type: %s" % repr(node))

   def _visitNodeList(self, node, exclude=None):
      for curr in node:
         curr is not exclude and self._visit(curr)
      return

   def _visitNamedNodeMap(self, node):
      for item in list(node.values()):
         self._visit(item)
      return

   def _visitAttr(self, node):
      self._write(' ' + node.name)
      value = node.value
      text = _translateCDATA(value, self.encoding)
      text, delimiter = _translateCDATAAttr(text)
      self.stream.write("=%s%s%s" % (delimiter, text, delimiter))
      return

   def _visitProlog(self):
      self._write("<?xml version='1.0' encoding='%s'?>" % (self.encoding or 'utf-8'))
      self._inText = 0
      return

   def _visitDocument(self, node):
      self._visitProlog()
      node.doctype and self._visitDocumentType(node.doctype)
      self._visitNodeList(node.childNodes, exclude=node.doctype)
      return

   def _visitDocumentFragment(self, node):
      self._visitNodeList(node.childNodes)
      return

   def _visitElement(self, node):
      self._tryIndent()
      self._write('<%s' % node.tagName)
      for attr in list(node.attributes.values()):
         self._visitAttr(attr)
      if len(node.childNodes):
         self._write('>')
         self._depth = self._depth + 1
         self._visitNodeList(node.childNodes)
         self._depth = self._depth - 1
         not (self._inText) and self._tryIndent()
         self._write('</%s>' % node.tagName)
      else:
         self._write('/>')
      self._inText = 0
      return
   def _visitText(self, node):
      text = node.data
      if self._indent:
         text = text.strip()  # assign the result; a bare text.strip() is a no-op
      if text:
         text = _translateCDATA(text, self.encoding)
         self.stream.write(text)
         self._inText = 1
      return
   def _visitDocumentType(self, doctype):
      if not doctype.systemId and not doctype.publicId:
         return
      self._tryIndent()
      self._write('<!DOCTYPE %s' % doctype.name)
      if doctype.systemId and '"' in doctype.systemId:
         system = "'%s'" % doctype.systemId
      else:
         system = '"%s"' % doctype.systemId
      if doctype.publicId and '"' in doctype.publicId:
         # We should probably throw an error
         # Valid characters: <space> | <newline> | <linefeed> |
         # [a-zA-Z0-9] | [-'()+,./:=?;!*#@$_%]
         public = "'%s'" % doctype.publicId
      else:
         public = '"%s"' % doctype.publicId
      if doctype.publicId and doctype.systemId:
         self._write(' PUBLIC %s %s' % (public, system))
      elif doctype.systemId:
         self._write(' SYSTEM %s' % system)
      if doctype.entities or doctype.notations:
         self._write(' [')
         self._depth = self._depth + 1
         self._visitNamedNodeMap(doctype.entities)
         self._visitNamedNodeMap(doctype.notations)
         self._depth = self._depth - 1
         self._tryIndent()
         self._write(']>')
      else:
         self._write('>')
      self._inText = 0
      return

   def _visitEntity(self, node):
      """Visited from a NamedNodeMap in DocumentType"""
      self._tryIndent()
      self._write('<!ENTITY %s' % (node.nodeName))
      node.publicId and self._write(' PUBLIC %s' % node.publicId)
      node.systemId and self._write(' SYSTEM %s' % node.systemId)
      node.notationName and self._write(' NDATA %s' % node.notationName)
      self._write('>')
      return

   def _visitNotation(self, node):
      """Visited from a NamedNodeMap in DocumentType"""
      self._tryIndent()
      self._write('<!NOTATION %s' % node.nodeName)
      node.publicId and self._write(' PUBLIC %s' % node.publicId)
      node.systemId and self._write(' SYSTEM %s' % node.systemId)
      self._write('>')
      return

   def _visitCDATASection(self, node):
      self._tryIndent()
      self._write('<![CDATA[%s]]>' % (node.data))
      self._inText = 0
      return

   def _visitComment(self, node):
      self._tryIndent()
      self._write('<!--%s-->' % (node.data))
      self._inText = 0
      return

   def _visitEntityReference(self, node):
      self._write('&%s;' % node.nodeName)
      self._inText = 1
      return

   def _visitProcessingInstruction(self, node):
      self._tryIndent()
      self._write('<?%s %s?>' % (node.target, node.data))
      self._inText = 0
      return
def _encodeText(text, encoding):
   """Safely encodes the passed-in text as a Unicode string, converting bytes to UTF-8 if necessary."""
   if text is None:
      return text
   try:
      if isinstance(text, bytes):
         text = str(text, "utf-8")
      return text
   except UnicodeError:
      raise ValueError("Path could not be safely encoded as utf-8.")
def _translateCDATAAttr(characters):
   """
   Handles normalization and some intelligence about quoting.

   @copyright: This code, prior to customization, was part of the PyXML
   codebase, and before that was part of the 4DOM suite developed by
   Fourthought, Inc.  In its original form, it was Copyright (c) 2000
   Fourthought Inc, USA; All Rights Reserved.
   """
   if not characters:
      return '', "'"
   if "'" in characters:
      delimiter = '"'
      new_chars = re.sub('"', '&quot;', characters)
   else:
      delimiter = "'"
      new_chars = re.sub("'", '&apos;', characters)
   # FIXME: There's more to normalization.
   # Convert attribute newlines to a character entity; note that
   # characters is possibly shorter than new_chars (no entities).
   if "\n" in characters:
      new_chars = re.sub('\n', '&#10;', new_chars)
   return new_chars, delimiter

# Note: Unicode object only for now
def _translateCDATA(characters, encoding='UTF-8', prev_chars='', markupSafe=0):
   """
   @copyright: This code, prior to customization, was part of the PyXML
   codebase, and before that was part of the 4DOM suite developed by
   Fourthought, Inc.  In its original form, it was Copyright (c) 2000
   Fourthought Inc, USA; All Rights Reserved.
   """
   CDATA_CHAR_PATTERN = re.compile('[&<]|]]>')
   CHAR_TO_ENTITY = { '&': '&amp;', '<': '&lt;', ']]>': ']]&gt;', }
   ILLEGAL_LOW_CHARS = '[\x01-\x08\x0B-\x0C\x0E-\x1F]'
   ILLEGAL_HIGH_CHARS = '\xEF\xBF[\xBE\xBF]'
   XML_ILLEGAL_CHAR_PATTERN = re.compile('%s|%s' % (ILLEGAL_LOW_CHARS, ILLEGAL_HIGH_CHARS))
   if not characters:
      return ''
   if not markupSafe:
      if CDATA_CHAR_PATTERN.search(characters):
         new_string = CDATA_CHAR_PATTERN.subn(lambda m, d=CHAR_TO_ENTITY: d[m.group()], characters)[0]
      else:
         new_string = characters
      if prev_chars[-2:] == ']]' and characters[0] == '>':
         new_string = '&gt;' + new_string[1:]
   else:
      new_string = characters
   # Note: use decimal char entity rep because some browsers are broken
   # FIXME: This will bomb for high characters.  Should, for instance, detect
   # the UTF-8 for 0xFFFE and put out &#xFFFE;
   if XML_ILLEGAL_CHAR_PATTERN.search(new_string):
      new_string = XML_ILLEGAL_CHAR_PATTERN.subn(lambda m: '&#%i;' % ord(m.group()), new_string)[0]
   new_string = _encodeText(new_string, encoding)
   return new_string
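The heart of `_translateCDATA()` is the entity substitution for markup-significant sequences. This sketch reproduces just that substitution (the same pattern and entity map as above, without the illegal-character and `]]>`-boundary handling):

```python
import re

CDATA_CHAR_PATTERN = re.compile('[&<]|]]>')
CHAR_TO_ENTITY = {'&': '&amp;', '<': '&lt;', ']]>': ']]&gt;'}

def escapeCharacterData(characters):
   # Replace &, <, and the CDATA-terminating ]]> sequence with entities,
   # matching the substitution used by _translateCDATA().
   if not characters:
      return ''
   return CDATA_CHAR_PATTERN.sub(lambda m: CHAR_TO_ENTITY[m.group()], characters)

escaped = escapeCharacterData('if a < b && data]]> end')
# 'if a &lt; b &amp;&amp; data]]&gt; end'
```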


    Source Code for Module CedarBackup3.util

       1  # -*- coding: iso-8859-1 -*- 
       2  # vim: set ft=python ts=3 sw=3 expandtab: 
       3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
       4  # 
       5  #              C E D A R 
       6  #          S O L U T I O N S       "Software done right." 
       7  #           S O F T W A R E 
       8  # 
       9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      10  # 
      11  # Copyright (c) 2004-2008,2010,2015 Kenneth J. Pronovici. 
      12  # All rights reserved. 
      13  # 
      14  # Portions copyright (c) 2001, 2002 Python Software Foundation. 
      15  # All Rights Reserved. 
      16  # 
      17  # This program is free software; you can redistribute it and/or 
      18  # modify it under the terms of the GNU General Public License, 
      19  # Version 2, as published by the Free Software Foundation. 
      20  # 
      21  # This program is distributed in the hope that it will be useful, 
      22  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
      23  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
      24  # 
      25  # Copies of the GNU General Public License are available from 
      26  # the Free Software Foundation website, http://www.gnu.org/. 
      27  # 
      28  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      29  # 
      30  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
      31  # Language : Python 3 (>= 3.4) 
      32  # Project  : Cedar Backup, release 3 
      33  # Purpose  : Provides general-purpose utilities. 
      34  # 
      35  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      36   
      37  ######################################################################## 
      38  # Module documentation 
      39  ######################################################################## 
      40   
      41  """ 
      42  Provides general-purpose utilities. 
      43   
      44  @sort: AbsolutePathList, ObjectTypeList, RestrictedContentList, RegexMatchList, 
      45         RegexList, _Vertex, DirectedGraph, PathResolverSingleton, 
      46         sortDict, convertSize, getUidGid, changeOwnership, splitCommandLine, 
      47         resolveCommand, executeCommand, calculateFileAge, encodePath, nullDevice, 
      48         deriveDayOfWeek, isStartOfWeek, buildNormalizedPath, 
      49         ISO_SECTOR_SIZE, BYTES_PER_SECTOR, 
      50         BYTES_PER_KBYTE, BYTES_PER_MBYTE, BYTES_PER_GBYTE, KBYTES_PER_MBYTE, MBYTES_PER_GBYTE, 
      51         SECONDS_PER_MINUTE, MINUTES_PER_HOUR, HOURS_PER_DAY, SECONDS_PER_DAY, 
      52         UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES, UNIT_SECTORS 
      53   
      54  @var ISO_SECTOR_SIZE: Size of an ISO image sector, in bytes. 
      55  @var BYTES_PER_SECTOR: Number of bytes (B) per ISO sector. 
      56  @var BYTES_PER_KBYTE: Number of bytes (B) per kilobyte (kB). 
      57  @var BYTES_PER_MBYTE: Number of bytes (B) per megabyte (MB). 
       58  @var BYTES_PER_GBYTE: Number of bytes (B) per gigabyte (GB). 
      59  @var KBYTES_PER_MBYTE: Number of kilobytes (kB) per megabyte (MB). 
      60  @var MBYTES_PER_GBYTE: Number of megabytes (MB) per gigabyte (GB). 
      61  @var SECONDS_PER_MINUTE: Number of seconds per minute. 
      62  @var MINUTES_PER_HOUR: Number of minutes per hour. 
      63  @var HOURS_PER_DAY: Number of hours per day. 
      64  @var SECONDS_PER_DAY: Number of seconds per day. 
      65  @var UNIT_BYTES: Constant representing the byte (B) unit for conversion. 
      66  @var UNIT_KBYTES: Constant representing the kilobyte (kB) unit for conversion. 
      67  @var UNIT_MBYTES: Constant representing the megabyte (MB) unit for conversion. 
      68  @var UNIT_GBYTES: Constant representing the gigabyte (GB) unit for conversion. 
      69  @var UNIT_SECTORS: Constant representing the ISO sector unit for conversion. 
      70   
      71  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
      72  """ 
      73   
      74   
      75  ######################################################################## 
      76  # Imported modules 
      77  ######################################################################## 
      78   
      79  import sys 
      80  import math 
      81  import os 
      82  import re 
      83  import time 
      84  import logging 
      85  from subprocess import Popen, STDOUT, PIPE 
      86  from functools import total_ordering 
      87  from numbers import Real 
      88  from decimal import Decimal 
      89  import collections 
      90   
      91  try: 
      92     import pwd 
      93     import grp 
      94     _UID_GID_AVAILABLE = True 
      95  except ImportError: 
      96     _UID_GID_AVAILABLE = False 
      97   
      98  from CedarBackup3.release import VERSION, DATE 
      99   
     100   
     101  ######################################################################## 
     102  # Module-wide constants and variables 
     103  ######################################################################## 
     104   
     105  logger = logging.getLogger("CedarBackup3.log.util") 
     106  outputLogger = logging.getLogger("CedarBackup3.output") 
     107   
     108  ISO_SECTOR_SIZE    = 2048.0   # in bytes 
     109  BYTES_PER_SECTOR   = ISO_SECTOR_SIZE 
     110   
     111  BYTES_PER_KBYTE    = 1024.0 
     112  KBYTES_PER_MBYTE   = 1024.0 
     113  MBYTES_PER_GBYTE   = 1024.0 
     114  BYTES_PER_MBYTE    = BYTES_PER_KBYTE * KBYTES_PER_MBYTE 
     115  BYTES_PER_GBYTE    = BYTES_PER_MBYTE * MBYTES_PER_GBYTE 
     116   
     117  SECONDS_PER_MINUTE = 60.0 
     118  MINUTES_PER_HOUR   = 60.0 
     119  HOURS_PER_DAY      = 24.0 
     120  SECONDS_PER_DAY    = SECONDS_PER_MINUTE * MINUTES_PER_HOUR * HOURS_PER_DAY 
     121   
     122  UNIT_BYTES         = 0 
     123  UNIT_KBYTES        = 1 
     124  UNIT_MBYTES        = 2 
     125  UNIT_GBYTES        = 4 
     126  UNIT_SECTORS       = 3 
     127   
     128  MTAB_FILE          = "/etc/mtab" 
     129   
     130  MOUNT_COMMAND      = [ "mount", ] 
     131  UMOUNT_COMMAND     = [ "umount", ] 
     132   
     133  DEFAULT_LANGUAGE   = "C" 
     134  LANG_VAR           = "LANG" 
     135  LOCALE_VARS        = [ "LC_ADDRESS", "LC_ALL", "LC_COLLATE", 
     136                         "LC_CTYPE", "LC_IDENTIFICATION", 
     137                         "LC_MEASUREMENT", "LC_MESSAGES", 
     138                         "LC_MONETARY", "LC_NAME", "LC_NUMERIC", 
     139                         "LC_PAPER", "LC_TELEPHONE", "LC_TIME", ] 
    
########################################################################
# UnorderedList class definition
########################################################################

class UnorderedList(list):

   """
   Class representing an "unordered list".

   An "unordered list" is a list in which only the contents matter, not the
   order in which the contents appear in the list.

   For instance, we might be keeping track of a set of paths in a list, because
   it's convenient to have them in that form.  However, for comparison
   purposes, we would only care that the lists contain exactly the same
   contents, regardless of order.

   I have come up with two reasonable ways of doing this, plus a couple more
   that would work but would be a pain to implement.  My first method is to
   copy and sort each list, comparing the sorted versions.  This will only work
   if two lists with exactly the same members are guaranteed to sort in exactly
   the same order.  The second way would be to create two Sets and then compare
   the sets.  However, this would lose information about any duplicates in
   either list.  I've decided to go with option #1 for now.  I'll modify this
   code if I run into problems in the future.

   We override the original C{__eq__}, C{__ne__}, C{__ge__}, C{__gt__},
   C{__le__} and C{__lt__} list methods to change the definition of the various
   comparison operators.  In all cases, the comparison is changed to return the
   result of the original operation I{but instead comparing sorted lists}.
   This is going to be quite a bit slower than a normal list, so you probably
   only want to use it on small lists.
   """
   def __eq__(self, other):
      """
      Definition of C{==} operator for this class.
      @param other: Other object to compare to.
      @return: True/false depending on whether C{self == other}.
      """
      if other is None:
         return False
      selfSorted = UnorderedList.mixedsort(self[:])
      otherSorted = UnorderedList.mixedsort(other[:])
      return selfSorted.__eq__(otherSorted)

   def __ne__(self, other):
      """
      Definition of C{!=} operator for this class.
      @param other: Other object to compare to.
      @return: True/false depending on whether C{self != other}.
      """
      if other is None:
         return True
      selfSorted = UnorderedList.mixedsort(self[:])
      otherSorted = UnorderedList.mixedsort(other[:])
      return selfSorted.__ne__(otherSorted)

   def __ge__(self, other):
      """
      Definition of S{>=} operator for this class.
      @param other: Other object to compare to.
      @return: True/false depending on whether C{self >= other}.
      """
      if other is None:
         return True
      selfSorted = UnorderedList.mixedsort(self[:])
      otherSorted = UnorderedList.mixedsort(other[:])
      return selfSorted.__ge__(otherSorted)

   def __gt__(self, other):
      """
      Definition of C{>} operator for this class.
      @param other: Other object to compare to.
      @return: True/false depending on whether C{self > other}.
      """
      if other is None:
         return True
      selfSorted = UnorderedList.mixedsort(self[:])
      otherSorted = UnorderedList.mixedsort(other[:])
      return selfSorted.__gt__(otherSorted)

   def __le__(self, other):
      """
      Definition of S{<=} operator for this class.
      @param other: Other object to compare to.
      @return: True/false depending on whether C{self <= other}.
      """
      if other is None:
         return False
      selfSorted = UnorderedList.mixedsort(self[:])
      otherSorted = UnorderedList.mixedsort(other[:])
      return selfSorted.__le__(otherSorted)

   def __lt__(self, other):
      """
      Definition of C{<} operator for this class.
      @param other: Other object to compare to.
      @return: True/false depending on whether C{self < other}.
      """
      if other is None:
         return False
      selfSorted = UnorderedList.mixedsort(self[:])
      otherSorted = UnorderedList.mixedsort(other[:])
      return selfSorted.__lt__(otherSorted)
   @staticmethod
   def mixedsort(value):
      """
      Sort a list, making sure we don't blow up if the list happens to include mixed values.
      @see: http://stackoverflow.com/questions/26575183/how-can-i-get-2-x-like-sorting-behaviour-in-python-3-x
      """
      return sorted(value, key=UnorderedList.mixedkey)

   @staticmethod
   #pylint: disable=R0204
   def mixedkey(value):
      """Provide a key for use by mixedsort()"""
      numeric = Real, Decimal
      if isinstance(value, numeric):
         typeinfo = numeric
      else:
         typeinfo = type(value)
      try:
         value < value  # probe for comparability; raises TypeError for unorderable types
      except TypeError:
         value = repr(value)
      return repr(typeinfo), value
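The comparison strategy described above can be exercised with a trimmed-down sketch; only `__eq__`/`__ne__` and the mixed-type key are reproduced here (the full class also overrides the four ordering operators):

```python
from numbers import Real
from decimal import Decimal

class UnorderedList(list):
   # Same strategy as above: compare sorted copies, using a key that
   # tolerates mixed types by grouping on a type marker.
   @staticmethod
   def mixedkey(value):
      numeric = (Real, Decimal)
      typeinfo = numeric if isinstance(value, numeric) else type(value)
      try:
         value < value  # raises TypeError for unorderable values
      except TypeError:
         value = repr(value)
      return repr(typeinfo), value

   def __eq__(self, other):
      if other is None:
         return False
      return sorted(self, key=UnorderedList.mixedkey) == sorted(other, key=UnorderedList.mixedkey)

   def __ne__(self, other):
      return not self.__eq__(other)

a = UnorderedList(["/tmp", "/var", "/tmp"])
b = UnorderedList(["/var", "/tmp", "/tmp"])
c = UnorderedList(["/var", "/tmp"])
# a == b (same members, different order); a != c (duplicates are preserved)
```

Because each comparison sorts both operands, the cost is O(n log n) per comparison, which is why the docstring recommends using this only on small lists.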
########################################################################
# AbsolutePathList class definition
########################################################################

class AbsolutePathList(UnorderedList):

   """
   Class representing a list of absolute paths.

   This is an unordered list.

   We override the C{append}, C{insert} and C{extend} methods to ensure that
   any item added to the list is an absolute path.

   Each item added to the list is encoded using L{encodePath}.  If we don't do
   this, we have problems trying certain operations between strings and unicode
   objects, particularly for "odd" filenames that can't be encoded in standard
   ASCII.
   """
   def append(self, item):
      """
      Overrides the standard C{append} method.
      @raise ValueError: If item is not an absolute path.
      """
      if not os.path.isabs(item):
         raise ValueError("Not an absolute path: [%s]" % item)
      list.append(self, encodePath(item))

   def insert(self, index, item):
      """
      Overrides the standard C{insert} method.
      @raise ValueError: If item is not an absolute path.
      """
      if not os.path.isabs(item):
         raise ValueError("Not an absolute path: [%s]" % item)
      list.insert(self, index, encodePath(item))

   def extend(self, seq):
      """
      Overrides the standard C{extend} method.
      @raise ValueError: If any item is not an absolute path.
      """
      for item in seq:
         if not os.path.isabs(item):
            raise ValueError("Not an absolute path: [%s]" % item)
      for item in seq:
         list.append(self, encodePath(item))
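A minimal sketch of the `append()` guard above; the real class also encodes each item with `encodePath()`, which is omitted here since that helper lives elsewhere in this module:

```python
import os

class AbsolutePathList(list):
   # Same absolute-path guard as above, without the encodePath() step.
   def append(self, item):
      if not os.path.isabs(item):
         raise ValueError("Not an absolute path: [%s]" % item)
      list.append(self, item)

paths = AbsolutePathList()
paths.append("/etc/cback3.conf")   # accepted: absolute path
try:
   paths.append("relative/path")   # rejected: not absolute
except ValueError as e:
   error = str(e)
```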
########################################################################
# ObjectTypeList class definition
########################################################################

class ObjectTypeList(UnorderedList):

   """
   Class representing a list containing only objects with a certain type.

   This is an unordered list.

   We override the C{append}, C{insert} and C{extend} methods to ensure that
   any item added to the list matches the type that is requested.  The
   comparison uses the built-in C{isinstance}, which should allow subclasses
   of the requested type to be added to the list as well.

   The C{objectName} value will be used in exceptions, i.e. C{"Item must be a
   CollectDir object."} if C{objectName} is C{"CollectDir"}.
   """
    342 - def __init__(self, objectType, objectName):
    343 """ 344 Initializes a typed list for a particular type. 345 @param objectType: Type that the list elements must match. 346 @param objectName: Short string containing the "name" of the type. 347 """ 348 super(ObjectTypeList, self).__init__() 349 self.objectType = objectType 350 self.objectName = objectName
    351
    352 - def append(self, item):
    353 """ 354 Overrides the standard C{append} method. 355 @raise ValueError: If item does not match requested type. 356 """ 357 if not isinstance(item, self.objectType): 358 raise ValueError("Item must be a %s object." % self.objectName) 359 list.append(self, item)
    360
    361 - def insert(self, index, item):
    362 """ 363 Overrides the standard C{insert} method. 364 @raise ValueError: If item does not match requested type. 365 """ 366 if not isinstance(item, self.objectType): 367 raise ValueError("Item must be a %s object." % self.objectName) 368 list.insert(self, index, item)
    369
    370 - def extend(self, seq):
    371 """ 372 Overrides the standard C{insert} method. 373 @raise ValueError: If item does not match requested type. 374 """ 375 for item in seq: 376 if not isinstance(item, self.objectType): 377 raise ValueError("All items must be %s objects." % self.objectName) 378 list.extend(self, seq)
    379
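The isinstance-based validation that ObjectTypeList performs can be sketched standalone.  The `TypedList` class below is a hypothetical minimal version (it does not use Cedar Backup's `UnorderedList` base class, and only overrides `append`):

```python
# Minimal standalone sketch of isinstance-validated list behavior.
# TypedList is hypothetical, not the Cedar Backup class itself.

class TypedList(list):
    """List that only accepts items of a given type (or subclasses)."""

    def __init__(self, objectType, objectName):
        super().__init__()
        self.objectType = objectType
        self.objectName = objectName

    def append(self, item):
        if not isinstance(item, self.objectType):
            raise ValueError("Item must be a %s object." % self.objectName)
        super().append(item)

items = TypedList(int, "int")
items.append(3)
items.append(True)   # bool subclasses int, so isinstance allows it
try:
    items.append("oops")
except ValueError as e:
    print(e)         # Item must be a int object.
```

Note that because the check uses `isinstance`, subclass instances (such as `bool` for `int`) pass validation, exactly as the class docstring describes.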

########################################################################
# RestrictedContentList class definition
########################################################################

class RestrictedContentList(UnorderedList):

    """
    Class representing a list containing only objects with certain values.

    This is an unordered list.

    We override the C{append}, C{insert} and C{extend} methods to ensure that
    any item added to the list is among the valid values.  We use a standard
    comparison, so pretty much anything can be in the list of valid values.

    The C{valuesDescr} value will be used in exceptions, i.e. C{"Item must be
    one of the values in VALID_ACTIONS"} if C{valuesDescr} is C{"VALID_ACTIONS"}.

    @note: This class doesn't make any attempt to trap for nonsensical
    arguments.  All of the values in the values list should be of the same
    type (i.e. strings).  Then, all list operations also need to use that type
    (i.e. you should always insert or append just strings).  If you mix types --
    for instance lists and strings -- you will likely see AttributeError
    exceptions or other problems.
    """

    def __init__(self, valuesList, valuesDescr, prefix=None):
        """
        Initializes a list restricted to containing certain values.
        @param valuesList: List of valid values.
        @param valuesDescr: Short string describing the list of values.
        @param prefix: Prefix to use in error messages (None results in prefix "Item")
        """
        super(RestrictedContentList, self).__init__()
        self.prefix = "Item"
        if prefix is not None:
            self.prefix = prefix
        self.valuesList = valuesList
        self.valuesDescr = valuesDescr

    def append(self, item):
        """
        Overrides the standard C{append} method.
        @raise ValueError: If item is not in the values list.
        """
        if item not in self.valuesList:
            raise ValueError("%s must be one of the values in %s." % (self.prefix, self.valuesDescr))
        list.append(self, item)

    def insert(self, index, item):
        """
        Overrides the standard C{insert} method.
        @raise ValueError: If item is not in the values list.
        """
        if item not in self.valuesList:
            raise ValueError("%s must be one of the values in %s." % (self.prefix, self.valuesDescr))
        list.insert(self, index, item)

    def extend(self, seq):
        """
        Overrides the standard C{extend} method.
        @raise ValueError: If any item is not in the values list.
        """
        for item in seq:
            if item not in self.valuesList:
                raise ValueError("%s must be one of the values in %s." % (self.prefix, self.valuesDescr))
        list.extend(self, seq)


########################################################################
# RegexMatchList class definition
########################################################################

class RegexMatchList(UnorderedList):

    """
    Class representing a list containing only strings that match a regular expression.

    If C{emptyAllowed} is passed in as C{False}, then empty strings are
    explicitly disallowed, even if they happen to match the regular expression.
    (C{None} values are always disallowed, since string operations are not
    permitted on C{None}.)

    This is an unordered list.

    We override the C{append}, C{insert} and C{extend} methods to ensure that
    any item added to the list matches the indicated regular expression.

    @note: If you try to put values that are not strings into the list, you
    will likely get either TypeError or AttributeError exceptions as a result.
    """

    def __init__(self, valuesRegex, emptyAllowed=True, prefix=None):
        """
        Initializes a list restricted to containing certain values.
        @param valuesRegex: Regular expression that must be matched, as a string
        @param emptyAllowed: Indicates whether empty or None values are allowed.
        @param prefix: Prefix to use in error messages (None results in prefix "Item")
        """
        super(RegexMatchList, self).__init__()
        self.prefix = "Item"
        if prefix is not None:
            self.prefix = prefix
        self.valuesRegex = valuesRegex
        self.emptyAllowed = emptyAllowed
        self.pattern = re.compile(self.valuesRegex)

    def append(self, item):
        """
        Overrides the standard C{append} method.
        @raise ValueError: If item is None
        @raise ValueError: If item is empty and empty values are not allowed
        @raise ValueError: If item does not match the configured regular expression
        """
        if item is None or (not self.emptyAllowed and item == ""):
            raise ValueError("%s cannot be empty." % self.prefix)
        if not self.pattern.search(item):
            raise ValueError("%s is not valid: [%s]" % (self.prefix, item))
        list.append(self, item)

    def insert(self, index, item):
        """
        Overrides the standard C{insert} method.
        @raise ValueError: If item is None
        @raise ValueError: If item is empty and empty values are not allowed
        @raise ValueError: If item does not match the configured regular expression
        """
        if item is None or (not self.emptyAllowed and item == ""):
            raise ValueError("%s cannot be empty." % self.prefix)
        if not self.pattern.search(item):
            raise ValueError("%s is not valid: [%s]" % (self.prefix, item))
        list.insert(self, index, item)

    def extend(self, seq):
        """
        Overrides the standard C{extend} method.
        @raise ValueError: If any item is None
        @raise ValueError: If any item is empty and empty values are not allowed
        @raise ValueError: If any item does not match the configured regular expression
        """
        for item in seq:
            if item is None or (not self.emptyAllowed and item == ""):
                raise ValueError("%s cannot be empty." % self.prefix)
            if not self.pattern.search(item):
                raise ValueError("%s is not valid: [%s]" % (self.prefix, item))
        list.extend(self, seq)


########################################################################
# RegexList class definition
########################################################################

class RegexList(UnorderedList):

    """
    Class representing a list of valid regular expression strings.

    This is an unordered list.

    We override the C{append}, C{insert} and C{extend} methods to ensure that
    any item added to the list is a valid regular expression.
    """

    def append(self, item):
        """
        Overrides the standard C{append} method.
        @raise ValueError: If item is not a valid regular expression.
        """
        try:
            re.compile(item)
        except re.error:
            raise ValueError("Not a valid regular expression: [%s]" % item)
        list.append(self, item)

    def insert(self, index, item):
        """
        Overrides the standard C{insert} method.
        @raise ValueError: If item is not a valid regular expression.
        """
        try:
            re.compile(item)
        except re.error:
            raise ValueError("Not a valid regular expression: [%s]" % item)
        list.insert(self, index, item)

    def extend(self, seq):
        """
        Overrides the standard C{extend} method.
        @raise ValueError: If any item is not a valid regular expression.
        """
        for item in seq:
            try:
                re.compile(item)
            except re.error:
                raise ValueError("Not a valid regular expression: [%s]" % item)
        list.extend(self, seq)


########################################################################
# Directed graph implementation
########################################################################

class _Vertex(object):

    """
    Represents a vertex (or node) in a directed graph.
    """

    def __init__(self, name):
        """
        Constructor.
        @param name: Name of this graph vertex.
        @type name: String value.
        """
        self.name = name
        self.endpoints = []
        self.state = None

@total_ordering
class DirectedGraph(object):

    """
    Represents a directed graph.

    A graph B{G=(V,E)} consists of a set of vertices B{V} together with a set
    B{E} of vertex pairs or edges.  In a directed graph, each edge also has an
    associated direction (from vertex B{v1} to vertex B{v2}).  A
    C{DirectedGraph} object provides a way to construct a directed graph and
    execute a depth-first search.

    This data structure was designed based on the graphing chapter in
    U{The Algorithm Design Manual<http://www2.toki.or.id/book/AlgDesignManual/>},
    by Steven S. Skiena.

    This class is intended to be used by Cedar Backup for dependency ordering.
    Because of this, it's not quite general-purpose.  Unlike a "general" graph,
    every vertex in this graph has at least one edge pointing to it, from a
    special "start" vertex.  This is so no vertices get "lost" either because
    they have no dependencies or because nothing depends on them.
    """

    _UNDISCOVERED = 0
    _DISCOVERED = 1
    _EXPLORED = 2

    def __init__(self, name):
        """
        Directed graph constructor.

        @param name: Name of this graph.
        @type name: String value.
        """
        if name is None or name == "":
            raise ValueError("Graph name must be non-empty.")
        self._name = name
        self._vertices = {}
        self._startVertex = _Vertex(None)  # start vertex is the only vertex with no name

    def __repr__(self):
        """
        Official string representation for class instance.
        """
        return "DirectedGraph(%s)" % self.name

    def __str__(self):
        """
        Informal string representation for class instance.
        """
        return self.__repr__()

    def __eq__(self, other):
        """Equals operator, implemented in terms of the original Python 2 compare operator."""
        return self.__cmp__(other) == 0

    def __lt__(self, other):
        """Less-than operator, implemented in terms of the original Python 2 compare operator."""
        return self.__cmp__(other) < 0

    def __gt__(self, other):
        """Greater-than operator, implemented in terms of the original Python 2 compare operator."""
        return self.__cmp__(other) > 0

    def __cmp__(self, other):
        """
        Original Python 2 comparison operator.
        @param other: Other object to compare to.
        @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
        """
        # pylint: disable=W0212
        if other is None:
            return 1
        if self.name != other.name:
            if str(self.name or "") < str(other.name or ""):
                return -1
            else:
                return 1
        if self._vertices != other._vertices:
            # Dictionaries do not support ordered comparison in Python 3,
            # so fall back to comparing the sorted vertex names.
            if sorted(self._vertices.keys()) < sorted(other._vertices.keys()):
                return -1
            else:
                return 1
        return 0

    def _getName(self):
        """
        Property target used to get the graph name.
        """
        return self._name

    name = property(_getName, None, None, "Name of the graph.")

    def createVertex(self, name):
        """
        Creates a named vertex.
        @param name: vertex name
        @raise ValueError: If the vertex name is C{None} or empty.
        """
        if name is None or name == "":
            raise ValueError("Vertex name must be non-empty.")
        vertex = _Vertex(name)
        self._startVertex.endpoints.append(vertex)  # so every vertex is connected at least once
        self._vertices[name] = vertex

    def createEdge(self, start, finish):
        """
        Adds an edge with an associated direction, from C{start} vertex to C{finish} vertex.
        @param start: Name of start vertex.
        @param finish: Name of finish vertex.
        @raise ValueError: If one of the named vertices is unknown.
        """
        try:
            startVertex = self._vertices[start]
            finishVertex = self._vertices[finish]
            startVertex.endpoints.append(finishVertex)
        except KeyError as e:
            raise ValueError("Vertex [%s] could not be found." % e)

    def topologicalSort(self):
        """
        Implements a topological sort of the graph.

        This method also enforces that the graph is a directed acyclic graph
        (or "DAG"), which is a requirement of a topological sort.  A DAG is a
        directed graph with no directed cycles.  A topological sort of a DAG
        is an ordering on the vertices such that all edges go from left to
        right.  Only an acyclic graph can have a topological sort, but any DAG
        has at least one.  If the graph contains any cycles, it is not
        possible to determine a consistent ordering for the vertices, so this
        method throws an exception when a cycle is found.

        @note: If a particular vertex has no edges, then its position in the
        final list depends on the order in which the vertices were created in
        the graph.  If you're using this method to determine a dependency
        order, this makes sense: a vertex with no dependencies can go anywhere
        (and will).

        @return: Ordering on the vertices so that all edges go from left to right.

        @raise ValueError: If a cycle is found in the graph.
        """
        ordering = []
        for key in self._vertices:
            vertex = self._vertices[key]
            vertex.state = self._UNDISCOVERED
        for key in self._vertices:
            vertex = self._vertices[key]
            if vertex.state == self._UNDISCOVERED:
                self._topologicalSort(self._startVertex, ordering)
        return ordering

    def _topologicalSort(self, vertex, ordering):
        """
        Recursive depth-first search function implementing topological sort.
        @param vertex: Vertex to search
        @param ordering: List of vertices in proper order
        """
        vertex.state = self._DISCOVERED
        for endpoint in vertex.endpoints:
            if endpoint.state == self._UNDISCOVERED:
                self._topologicalSort(endpoint, ordering)
            elif endpoint.state != self._EXPLORED:
                raise ValueError("Cycle found in graph (found '%s' while searching '%s')." % (endpoint.name, vertex.name))
        if vertex.name is not None:
            ordering.insert(0, vertex.name)
        vertex.state = self._EXPLORED

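The depth-first topological sort above can be sketched as a standalone function.  This is a hypothetical simplification (no start vertex, edges given as plain name pairs), but it uses the same three-state DFS: prepend a vertex to the ordering once all of its children are explored, and report a cycle when an edge reaches a vertex that is discovered but not yet explored:

```python
# Standalone sketch of the DFS-based topological sort used by
# DirectedGraph.  The example graph (a -> b, b -> c, a -> c) is
# hypothetical.

UNDISCOVERED, DISCOVERED, EXPLORED = 0, 1, 2

def topological_sort(edges):
    """Return vertex names ordered so every edge goes left to right."""
    vertices = {v for pair in edges for v in pair}
    adjacency = {v: [] for v in vertices}
    for start, finish in edges:
        adjacency[start].append(finish)
    state = {v: UNDISCOVERED for v in vertices}
    ordering = []

    def visit(vertex):
        state[vertex] = DISCOVERED
        for endpoint in adjacency[vertex]:
            if state[endpoint] == UNDISCOVERED:
                visit(endpoint)
            elif state[endpoint] != EXPLORED:
                raise ValueError("Cycle found at '%s'" % endpoint)
        ordering.insert(0, vertex)   # prepend once all children are done
        state[vertex] = EXPLORED

    for vertex in vertices:
        if state[vertex] == UNDISCOVERED:
            visit(vertex)
    return ordering

print(topological_sort([("a", "b"), ("b", "c"), ("a", "c")]))  # ['a', 'b', 'c']
```

For dependency ordering this means: if action `b` depends on action `a`, add edge `a -> b`, and the resulting list is a valid execution order.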

########################################################################
# PathResolverSingleton class definition
########################################################################

class PathResolverSingleton(object):

    """
    Singleton used for resolving executable paths.

    Various functions throughout Cedar Backup (including extensions) need a way
    to resolve the path of executables that they use.  For instance, the image
    functionality needs to find the C{mkisofs} executable, and the Subversion
    extension needs to find the C{svnlook} executable.  Cedar Backup's original
    behavior was to assume that the simple name (C{"svnlook"} or whatever) was
    available on the caller's C{$PATH}, and to fail otherwise.  However, this
    turns out to be less than ideal, since for instance the root user might not
    always have executables like C{svnlook} in its path.

    One solution is to specify a path (either via an absolute path or some sort
    of path insertion or path appending mechanism) that would apply to the
    C{executeCommand()} function.  This is not difficult to implement, but it
    seems like kind of a "big hammer" solution.  Besides that, it might also
    represent a security flaw (for instance, I prefer not to mess with root's
    C{$PATH} on the application level if I don't have to).

    The alternative is to set up some sort of configuration for the path to
    certain executables, i.e. "find C{svnlook} in C{/usr/local/bin/svnlook}" or
    whatever.  This PathResolverSingleton aims to provide a good solution to
    the mapping problem.  Callers of all sorts (extensions or not) can get an
    instance of the singleton.  Then, they call the C{lookup} method to try to
    resolve the executable they are looking for.  Through the C{lookup} method,
    the caller can also specify a default to use if a mapping is not found.
    This way, with no real effort on the part of the caller, behavior can
    neatly degrade to something equivalent to the original behavior if there
    is no special mapping or if the singleton was never initialized in the
    first place.

    Even better, extensions automagically get access to the same resolver
    functionality, and they don't even need to understand how the mapping
    happens.  All extension authors need to do is document what executables
    their code requires, and the standard resolver configuration section will
    meet their needs.

    The class should be initialized once through the constructor somewhere in
    the main routine.  Then, the main routine should call the L{fill} method to
    fill in the resolver's internal structures.  Everyone else who needs to
    resolve a path will get an instance of the class using L{getInstance} and
    will then just call the L{lookup} method.

    @cvar _instance: Holds a reference to the singleton
    @ivar _mapping: Internal mapping from resource name to path.
    """

    _instance = None  # Holds a reference to the singleton instance

    class _Helper:
        """Helper class to provide a singleton factory method."""
        def __init__(self):
            pass
        def __call__(self, *args, **kw):
            # pylint: disable=W0212,R0201
            if PathResolverSingleton._instance is None:
                obj = PathResolverSingleton()
                PathResolverSingleton._instance = obj
            return PathResolverSingleton._instance

    getInstance = _Helper()  # Method that callers will use to get an instance

    def __init__(self):
        """Singleton constructor, which just creates the singleton instance."""
        PathResolverSingleton._instance = self
        self._mapping = {}

    def lookup(self, name, default=None):
        """
        Looks up name and returns the resolved path associated with the name.
        @param name: Name of the path resource to resolve.
        @param default: Default to return if resource cannot be resolved.
        @return: Resolved path associated with name, or default if name can't be resolved.
        """
        value = default
        if name in self._mapping:
            value = self._mapping[name]
        logger.debug("Resolved command [%s] to [%s].", name, value)
        return value

    def fill(self, mapping):
        """
        Fills in the singleton's internal mapping from name to resource.
        @param mapping: Mapping from resource name to path.
        @type mapping: Dictionary mapping name to path, both as strings.
        """
        self._mapping = {}
        for key in mapping.keys():
            self._mapping[key] = mapping[key]

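The fill-once, look-up-everywhere pattern described above can be sketched with a simpler classmethod-based singleton.  `PathResolver` and the `mkisofs` mapping below are hypothetical illustrations, not the Cedar Backup class:

```python
# Minimal sketch of the fill()/lookup() singleton pattern.  PathResolver
# and the example mapping are hypothetical.

class PathResolver:
    _instance = None

    @classmethod
    def getInstance(cls):
        """Return the shared instance, creating it on first use."""
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def __init__(self):
        self._mapping = {}

    def fill(self, mapping):
        """Replace the internal name-to-path mapping."""
        self._mapping = dict(mapping)

    def lookup(self, name, default=None):
        """Resolve a name to a path, degrading to the default."""
        return self._mapping.get(name, default)

# Main routine fills the mapping once...
PathResolver.getInstance().fill({"mkisofs": "/usr/local/bin/mkisofs"})

# ...and any caller later resolves names through the same instance.
resolver = PathResolver.getInstance()
print(resolver.lookup("mkisofs"))             # /usr/local/bin/mkisofs
print(resolver.lookup("svnlook", "svnlook"))  # falls back to the bare name
```

The second lookup shows the graceful degradation the docstring describes: with no configured mapping, callers just get back the simple executable name they would have used anyway.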

########################################################################
# Pipe class definition
########################################################################

class Pipe(Popen):
    """
    Specialized pipe class for use by C{executeCommand}.

    The L{executeCommand} function needs a specialized way of interacting
    with a pipe.  First, C{executeCommand} only reads from the pipe, and
    never writes to it.  Second, C{executeCommand} needs a way to discard all
    output written to C{stderr}, as a means of simulating the shell
    C{2>/dev/null} construct.
    """

    def __init__(self, cmd, bufsize=-1, ignoreStderr=False):
        stderr = STDOUT
        if ignoreStderr:
            devnull = nullDevice()
            stderr = os.open(devnull, os.O_RDWR)
        Popen.__init__(self, shell=False, args=cmd, bufsize=bufsize, stdin=None, stdout=PIPE, stderr=stderr)

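The `2>/dev/null` simulation above can be sketched with the standard library directly.  This hypothetical `run_quiet` helper is not the `Pipe` class; it uses `subprocess.DEVNULL` (available since Python 3.3) instead of opening the null device by hand:

```python
# Sketch of Pipe's stderr handling using plain subprocess.  run_quiet
# is a hypothetical helper, not part of Cedar Backup.
import subprocess

def run_quiet(cmd, ignore_stderr=False):
    """Run cmd capturing stdout; optionally discard stderr (2>/dev/null)."""
    # Either merge stderr into stdout (STDOUT) or throw it away (DEVNULL).
    stderr = subprocess.DEVNULL if ignore_stderr else subprocess.STDOUT
    proc = subprocess.Popen(cmd, shell=False, stdout=subprocess.PIPE, stderr=stderr)
    out, _ = proc.communicate()
    return proc.returncode, out

code, out = run_quiet(["echo", "hello"])
print(code, out)  # 0 b'hello\n'
```

Like `Pipe`, this never writes to the child's stdin and only reads its stdout; the only difference is how the discarded stderr stream is obtained.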

########################################################################
# Diagnostics class definition
########################################################################

class Diagnostics(object):

    """
    Class holding runtime diagnostic information.

    Diagnostic information is information that is useful to get from users for
    debugging purposes.  I'm consolidating it all here into one object.

    @sort: __init__, __repr__, __str__
    """
    # pylint: disable=R0201

    def __init__(self):
        """
        Constructor for the C{Diagnostics} class.
        """

    def __repr__(self):
        """
        Official string representation for class instance.
        """
        return "Diagnostics()"

    def __str__(self):
        """
        Informal string representation for class instance.
        """
        return self.__repr__()

    def getValues(self):
        """
        Get a map containing all of the diagnostic values.
        @return: Map from diagnostic name to diagnostic value.
        """
        values = {}
        values['version'] = self.version
        values['interpreter'] = self.interpreter
        values['platform'] = self.platform
        values['encoding'] = self.encoding
        values['locale'] = self.locale
        values['timestamp'] = self.timestamp
        return values

    def printDiagnostics(self, fd=sys.stdout, prefix=""):
        """
        Pretty-print diagnostic information to a file descriptor.
        @param fd: File descriptor used to print information.
        @param prefix: Prefix string (if any) to place onto printed lines
        @note: The C{fd} is used rather than C{print} to facilitate unit testing.
        """
        lines = self._buildDiagnosticLines(prefix)
        for line in lines:
            fd.write("%s\n" % line)

    def logDiagnostics(self, method, prefix=""):
        """
        Pretty-print diagnostic information using a logger method.
        @param method: Logger method to use for logging (i.e. logger.info)
        @param prefix: Prefix string (if any) to place onto printed lines
        """
        lines = self._buildDiagnosticLines(prefix)
        for line in lines:
            method("%s" % line)

    def _buildDiagnosticLines(self, prefix=""):
        """
        Build a set of pretty-printed diagnostic lines.
        @param prefix: Prefix string (if any) to place onto printed lines
        @return: List of strings, not terminated by newlines.
        """
        values = self.getValues()
        keys = sorted(values.keys())
        tmax = Diagnostics._getMaxLength(keys) + 3  # three extra dots in output
        lines = []
        for key in keys:
            title = key.title()
            title += (tmax - len(title)) * '.'
            value = values[key]
            line = "%s%s: %s" % (prefix, title, value)
            lines.append(line)
        return lines

    @staticmethod
    def _getMaxLength(values):
        """
        Get the maximum length from among a list of strings.
        """
        tmax = 0
        for value in values:
            if len(value) > tmax:
                tmax = len(value)
        return tmax

    def _getVersion(self):
        """
        Property target to get the Cedar Backup version.
        """
        return "Cedar Backup %s (%s)" % (VERSION, DATE)

    def _getInterpreter(self):
        """
        Property target to get the Python interpreter version.
        """
        version = sys.version_info
        return "Python %d.%d.%d (%s)" % (version[0], version[1], version[2], version[3])

    def _getEncoding(self):
        """
        Property target to get the filesystem encoding.
        """
        return sys.getfilesystemencoding() or sys.getdefaultencoding()

    def _getPlatform(self):
        """
        Property target to get the operating system platform.
        """
        try:
            uname = os.uname()
            sysname = uname[0]  # i.e. Linux
            release = uname[2]  # i.e. 2.16.18-2
            machine = uname[4]  # i.e. i686
            return "%s (%s %s %s)" % (sys.platform, sysname, release, machine)
        except Exception:
            return sys.platform

    def _getLocale(self):
        """
        Property target to get the default locale that is in effect.
        """
        try:
            import locale
            return locale.getdefaultlocale()[0]
        except Exception:
            return "(unknown)"

    def _getTimestamp(self):
        """
        Property target to get a current date/time stamp.
        """
        try:
            import datetime
            return datetime.datetime.utcnow().ctime() + " UTC"
        except Exception:
            return "(unknown)"

    version = property(_getVersion, None, None, "Cedar Backup version.")
    interpreter = property(_getInterpreter, None, None, "Python interpreter version.")
    platform = property(_getPlatform, None, None, "Platform identifying information.")
    encoding = property(_getEncoding, None, None, "Filesystem encoding that is in effect.")
    locale = property(_getLocale, None, None, "Locale that is in effect.")
    timestamp = property(_getTimestamp, None, None, "Current timestamp.")


########################################################################
# General utility functions
########################################################################

######################
# sortDict() function
######################

def sortDict(d):
    """
    Returns the keys of the dictionary sorted by value.
    @param d: Dictionary to operate on
    @return: List of dictionary keys sorted in order by dictionary value.
    """
    items = list(d.items())
    items.sort(key=lambda x: (x[1], x[0]))  # sort by value and then by key
    return [key for key, value in items]

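The `(value, key)` sort key means ties on value fall back to key order.  A quick illustration with a hypothetical dictionary:

```python
# How sortDict's key function orders entries: primarily by value,
# with key order breaking ties.  The dictionary is hypothetical.
d = {"b": 2, "a": 2, "c": 1}
items = sorted(d.items(), key=lambda x: (x[1], x[0]))
print([key for key, value in items])  # ['c', 'a', 'b']
```

Here `"c"` comes first because its value (1) is smallest, and `"a"` precedes `"b"` because they tie on value 2.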

########################
# removeKeys() function
########################

def removeKeys(d, keys):
    """
    Removes all of the listed keys from the dictionary.
    The dictionary is altered in-place.
    Each key must exist in the dictionary.
    @param d: Dictionary to operate on
    @param keys: List of keys to remove
    @raise KeyError: If one of the keys does not exist
    """
    for key in keys:
        del d[key]


#########################
# convertSize() function
#########################

def convertSize(size, fromUnit, toUnit):
    """
    Converts a size in one unit to a size in another unit.

    This is just a convenience function so that the functionality can be
    implemented in just one place.  Internally, we convert values to bytes and
    then to the final unit.

    The available units are:

        - C{UNIT_BYTES} - Bytes
        - C{UNIT_KBYTES} - Kilobytes, where 1 kB = 1024 B
        - C{UNIT_MBYTES} - Megabytes, where 1 MB = 1024 kB
        - C{UNIT_GBYTES} - Gigabytes, where 1 GB = 1024 MB
        - C{UNIT_SECTORS} - Sectors, where 1 sector = 2048 B

    @param size: Size to convert
    @type size: Integer or float value in units of C{fromUnit}

    @param fromUnit: Unit to convert from
    @type fromUnit: One of the units listed above

    @param toUnit: Unit to convert to
    @type toUnit: One of the units listed above

    @return: Number converted to new unit, as a float.
    @raise ValueError: If one of the units is invalid.
    """
    if size is None:
        raise ValueError("Cannot convert size of None.")
    if fromUnit == UNIT_BYTES:
        byteSize = float(size)
    elif fromUnit == UNIT_KBYTES:
        byteSize = float(size) * BYTES_PER_KBYTE
    elif fromUnit == UNIT_MBYTES:
        byteSize = float(size) * BYTES_PER_MBYTE
    elif fromUnit == UNIT_GBYTES:
        byteSize = float(size) * BYTES_PER_GBYTE
    elif fromUnit == UNIT_SECTORS:
        byteSize = float(size) * BYTES_PER_SECTOR
    else:
        raise ValueError("Unknown 'from' unit %s." % fromUnit)
    if toUnit == UNIT_BYTES:
        return byteSize
    elif toUnit == UNIT_KBYTES:
        return byteSize / BYTES_PER_KBYTE
    elif toUnit == UNIT_MBYTES:
        return byteSize / BYTES_PER_MBYTE
    elif toUnit == UNIT_GBYTES:
        return byteSize / BYTES_PER_GBYTE
    elif toUnit == UNIT_SECTORS:
        return byteSize / BYTES_PER_SECTOR
    else:
        raise ValueError("Unknown 'to' unit %s." % toUnit)

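The convert-through-bytes approach is plain arithmetic.  A worked example, using constants that match the 1024-based units the docstring lists (the constant names mirror the module's, but the values here are written out as an illustration):

```python
# Worked example of the convert-through-bytes arithmetic.  Constants
# follow the docstring's definitions (1 kB = 1024 B, 1 sector = 2048 B).
BYTES_PER_KBYTE = 1024.0
BYTES_PER_MBYTE = 1024.0 * 1024.0
BYTES_PER_SECTOR = 2048.0

# Convert 2.5 MB to kB: first to bytes, then divide by the target unit.
byte_size = 2.5 * BYTES_PER_MBYTE       # 2621440.0 bytes
print(byte_size / BYTES_PER_KBYTE)      # 2560.0 kB
print(byte_size / BYTES_PER_SECTOR)     # 1280.0 sectors
```

Every conversion passes through bytes as the common intermediate, which is why only two unit tables (multiply in, divide out) are needed rather than a full pairwise matrix.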

##########################
# displayBytes() function
##########################

def displayBytes(bytes, digits=2):  # pylint: disable=W0622
    """
    Format a byte quantity so it can be sensibly displayed.

    It's rather difficult to look at a number like "72372224 bytes" and get any
    meaningful information out of it.  It would be more useful to see something
    like "69.02 MB".  That's what this function does.  Any time you want to
    display a byte value, i.e.::

        print("Size: %s bytes" % bytes)

    Call this function instead::

        print("Size: %s" % displayBytes(bytes))

    What comes out will be sensibly formatted.  The indicated number of digits
    will be listed after the decimal point, rounded based on whatever rules are
    used by Python's standard C{%f} string format specifier.  (Values less than
    1 kB will be listed in bytes and will not have a decimal point, since the
    concept of a fractional byte is nonsensical.)

    @param bytes: Byte quantity.
    @type bytes: Integer number of bytes.

    @param digits: Number of digits to display after the decimal point.
    @type digits: Integer value, typically 2-5.

    @return: String, formatted for sensible display.
    """
    if bytes is None:
        raise ValueError("Cannot display byte value of None.")
    bytes = float(bytes)
    if math.fabs(bytes) < BYTES_PER_KBYTE:
        fmt = "%.0f bytes"
        value = bytes
    elif math.fabs(bytes) < BYTES_PER_MBYTE:
        fmt = "%." + "%d" % digits + "f kB"
        value = bytes / BYTES_PER_KBYTE
    elif math.fabs(bytes) < BYTES_PER_GBYTE:
        fmt = "%." + "%d" % digits + "f MB"
        value = bytes / BYTES_PER_MBYTE
    else:
        fmt = "%." + "%d" % digits + "f GB"
        value = bytes / BYTES_PER_GBYTE
    return fmt % value

##################################
# getFunctionReference() function
##################################

def getFunctionReference(module, function):
    """
    Gets a reference to a named function.

    This does some hokey-pokey to get back a reference to a dynamically named
    function.  For instance, say you wanted to get a reference to the
    C{os.path.isdir} function.  You could use::

        myfunc = getFunctionReference("os.path", "isdir")

    Although we won't bomb out directly, behavior is pretty much undefined if
    you pass in C{None} or C{""} for either C{module} or C{function}.

    The only validation we enforce is that whatever we get back must be
    callable.

    I derived this code based on the internals of the Python unittest
    implementation.  I don't claim to completely understand how it works.

    @param module: Name of module associated with function.
    @type module: Something like "os.path" or "CedarBackup3.util"

    @param function: Name of function
    @type function: Something like "isdir" or "getUidGid"

    @return: Reference to function associated with name.

    @raise ImportError: If the function cannot be found.
    @raise ValueError: If the resulting reference is not callable.

    @copyright: Some of this code, prior to customization, was originally part
    of the Python 2.3 codebase.  Python code is copyright (c) 2001, 2002 Python
    Software Foundation; All Rights Reserved.
    """
    parts = []
    if module is not None and module != "":
        parts = module.split(".")
    if function is not None and function != "":
        parts.append(function)
    copy = parts[:]
    while copy:
        try:
            module = __import__(".".join(copy))
            break
        except ImportError:
            del copy[-1]
            if not copy:
                raise
    parts = parts[1:]
    obj = module
    for part in parts:
        obj = getattr(obj, part)
    if not isinstance(obj, collections.Callable):
        raise ValueError("Reference to %s.%s is not callable." % (module, function))
    return obj

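For illustration, a minimal sketch of the same dynamic-lookup idea using the standard C{importlib} module (the snake_case helper name below is hypothetical, not part of Cedar Backup):

```python
import importlib

def get_function_reference(module, function):
    # Import the module by dotted name, then walk the attribute chain.
    # importlib.import_module() already handles dotted package names, so the
    # manual __import__ walk from util.py is not needed here.
    obj = importlib.import_module(module)
    for part in function.split("."):
        obj = getattr(obj, part)
    if not callable(obj):
        raise ValueError("Reference to %s.%s is not callable." % (module, function))
    return obj

myfunc = get_function_reference("os.path", "isdir")
print(myfunc("/"))  # True on any system with a root directory
```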
#######################
# getUidGid() function
#######################

def getUidGid(user, group):
    """
    Get the uid/gid associated with a user/group pair.

    This is a no-op if user/group functionality is not available on the platform.

    @param user: User name
    @type user: User name as a string

    @param group: Group name
    @type group: Group name as a string

    @return: Tuple C{(uid, gid)} matching passed-in user and group.
    @raise ValueError: If the ownership user/group values are invalid
    """
    if _UID_GID_AVAILABLE:
        try:
            uid = pwd.getpwnam(user)[2]
            gid = grp.getgrnam(group)[2]
            return (uid, gid)
        except Exception as e:
            logger.debug("Error looking up uid and gid for [%s:%s]: %s", user, group, e)
            raise ValueError("Unable to look up uid and gid for passed-in user/group.")
    else:
        return (0, 0)

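A standalone sketch of the same lookup using the standard C{pwd} and C{grp} modules (UNIX only; the snake_case name is illustrative):

```python
import pwd
import grp

def get_uid_gid(user, group):
    # Field [2] of a pwd/grp database entry is the numeric uid/gid.
    try:
        uid = pwd.getpwnam(user)[2]
        gid = grp.getgrnam(group)[2]
        return (uid, gid)
    except KeyError as e:
        raise ValueError("Unable to look up uid and gid for [%s:%s]: %s" % (user, group, e))

print(get_uid_gid("root", "root"))  # (0, 0) on typical Linux systems
```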
#############################
# changeOwnership() function
#############################

def changeOwnership(path, user, group):
    """
    Changes ownership of path to match the user and group.

    This is a no-op if user/group functionality is not available on the
    platform, or if either the passed-in user or group is C{None}.  Further, we
    won't even try to do it unless running as root, since it's unlikely to work.

    @param path: Path whose ownership to change.
    @param user: User which owns file.
    @param group: Group which owns file.
    """
    if _UID_GID_AVAILABLE:
        if user is None or group is None:
            logger.debug("User or group is None, so not attempting to change owner on [%s].", path)
        elif not isRunningAsRoot():
            logger.debug("Not root, so not attempting to change owner on [%s].", path)
        else:
            try:
                (uid, gid) = getUidGid(user, group)
                os.chown(path, uid, gid)
            except Exception as e:
                logger.error("Error changing ownership of [%s]: %s", path, e)

#############################
# isRunningAsRoot() function
#############################

def isRunningAsRoot():
    """
    Indicates whether the program is running as the root user.
    """
    return os.getuid() == 0

##############################
# splitCommandLine() function
##############################

def splitCommandLine(commandLine):
    """
    Splits a command line string into a list of arguments.

    Unfortunately, there is no "standard" way to parse a command line string,
    and it's actually not an easy problem to solve portably (essentially, we
    have to emulate the shell argument-processing logic).  This code only
    respects double quotes (C{"}) for grouping arguments, not single quotes
    (C{'}).  Make sure you take this into account when building your command
    line.

    Incidentally, I found this particular parsing method while digging around
    in Google Groups, and I tweaked it for my own use.

    @param commandLine: Command line string
    @type commandLine: String, i.e. "cback3 --verbose stage store"

    @return: List of arguments, suitable for passing to C{popen2}.

    @raise ValueError: If the command line is None.
    """
    if commandLine is None:
        raise ValueError("Cannot split command line of None.")
    fields = re.findall('[^ "]+|"[^"]+"', commandLine)
    fields = [field.replace('"', '') for field in fields]
    return fields

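The double-quote-only grouping is easy to see in a standalone version of the regex; note that the standard library's C{shlex.split} handles both quote styles, if that behavior is ever preferred:

```python
import re
import shlex

def split_command_line(command_line):
    # Group by double quotes only, then strip the quote characters themselves.
    fields = re.findall(r'[^ "]+|"[^"]+"', command_line)
    return [field.replace('"', '') for field in fields]

print(split_command_line('cback3 --verbose "stage store"'))
# ['cback3', '--verbose', 'stage store']

print(shlex.split("cback3 'stage store'"))  # shlex also honors single quotes
# ['cback3', 'stage store']
```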
############################
# resolveCommand() function
############################

def resolveCommand(command):
    """
    Resolves the real path to a command through the path resolver mechanism.

    Both extensions and standard Cedar Backup functionality need a way to
    resolve the "real" location of various executables.  Normally, they assume
    that these executables are on the system path, but some callers need to
    specify an alternate location.

    Ideally, we want to handle this configuration in a central location.  The
    Cedar Backup path resolver mechanism (a singleton called
    L{PathResolverSingleton}) provides the central location to store the
    mappings.  This function wraps access to the singleton, and is what all
    functions (extensions or standard functionality) should call if they need
    to find a command.

    The passed-in command must actually be a list, in the standard form used by
    all existing Cedar Backup code (something like C{["svnlook", ]}).  The
    lookup will actually be done on the first element in the list, and the
    returned command will always be in list form as well.

    If the passed-in command can't be resolved or no mapping exists, then the
    command itself will be returned unchanged.  This way, we neatly fall back
    on default behavior if we have no sensible alternative.

    @param command: Command to resolve.
    @type command: List form of command, i.e. C{["svnlook", ]}.

    @return: Path to command or just command itself if no mapping exists.
    """
    singleton = PathResolverSingleton.getInstance()
    name = command[0]
    result = command[:]
    result[0] = singleton.lookup(name, name)
    return result

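A self-contained sketch of the lookup, with a plain dictionary standing in for L{PathResolverSingleton} (the class below is illustrative, not the real singleton):

```python
class PathResolver:
    """Toy stand-in for the real path resolver singleton."""
    def __init__(self, mapping=None):
        self._mapping = dict(mapping or {})
    def lookup(self, name, default=None):
        # Return the configured path for name, or the default if no mapping exists.
        return self._mapping.get(name, default)

def resolve_command(resolver, command):
    result = command[:]  # copy, so the caller's list is untouched
    result[0] = resolver.lookup(result[0], result[0])
    return result

resolver = PathResolver({"svnlook": "/usr/local/bin/svnlook"})
print(resolve_command(resolver, ["svnlook", "help"]))  # ['/usr/local/bin/svnlook', 'help']
print(resolve_command(resolver, ["tar", "-cf"]))       # ['tar', '-cf'] -- no mapping, unchanged
```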
############################
# executeCommand() function
############################

def executeCommand(command, args, returnOutput=False, ignoreStderr=False, doNotLog=False, outputFile=None):
    """
    Executes a shell command, hopefully in a safe way.

    This function exists to replace direct calls to C{os.popen} in the Cedar
    Backup code.  It's not safe to call a function such as C{os.popen()} with
    untrusted arguments, since that can cause problems if the string contains
    non-safe variables or other constructs (imagine that the argument is
    C{$WHATEVER}, but C{$WHATEVER} contains something like C{"; rm -fR ~/;
    echo"} in the current environment).

    Instead, it's safer to pass a list of arguments in the style supported by
    C{popen2} or C{popen4}.  This function actually uses a specialized C{Pipe}
    class implemented using either C{subprocess.Popen} or C{popen2.Popen4}.

    Under the normal case, this function will return a tuple of C{(status,
    None)} where the status is the wait-encoded return status of the call per
    the C{popen2.Popen4} documentation.  If C{returnOutput} is passed in as
    C{True}, the function will return a tuple of C{(status, output)} where
    C{output} is a list of strings, one entry per line in the output from the
    command.  Output is always logged to the C{outputLogger.info()} target,
    regardless of whether it's returned.

    By default, C{stdout} and C{stderr} will be intermingled in the output.
    However, if you pass in C{ignoreStderr=True}, then only C{stdout} will be
    included in the output.

    The C{doNotLog} parameter exists so that callers can force the function to
    not log command output to the debug log.  Normally, you would want to log.
    However, if you're using this function to write huge output files (i.e.
    database backups written to C{stdout}) then you might want to avoid putting
    all that information into the debug log.

    The C{outputFile} parameter exists to make it easier for a caller to push
    output into a file, i.e. as a substitute for redirection to a file.  If
    this value is passed in, each time a line of output is generated, it will
    be written to the file using C{outputFile.write()}.  At the end, the file
    descriptor will be flushed using C{outputFile.flush()}.  The caller
    maintains responsibility for closing the file object appropriately.

    @note: I know that it's a bit confusing that the command and the arguments
    are both lists.  I could have just required the caller to pass in one big
    list.  However, I think it makes some sense to keep the command (the
    constant part of what we're executing, i.e. C{"scp -B"}) separate from its
    arguments, even if they both end up looking kind of similar.

    @note: You cannot redirect output via shell constructs (i.e. C{>file},
    C{2>/dev/null}, etc.) using this function.  The redirection string would be
    passed to the command just like any other argument.  However, you can
    implement the equivalent to redirection using C{ignoreStderr} and
    C{outputFile}, as discussed above.

    @note: The operating system environment is partially sanitized before
    the command is invoked.  See L{sanitizeEnvironment} for details.

    @param command: Shell command to execute
    @type command: List of individual arguments that make up the command

    @param args: List of arguments to the command
    @type args: List of additional arguments to the command

    @param returnOutput: Indicates whether to return the output of the command
    @type returnOutput: Boolean C{True} or C{False}

    @param ignoreStderr: Whether stderr should be discarded
    @type ignoreStderr: Boolean C{True} or C{False}

    @param doNotLog: Indicates that output should not be logged.
    @type doNotLog: Boolean C{True} or C{False}

    @param outputFile: File object that all output should be written to.
    @type outputFile: File object as returned from C{open()}, configured for binary write

    @return: Tuple of C{(result, output)} as described above.
    """
    logger.debug("Executing command %s with args %s.", command, args)
    outputLogger.info("Executing command %s with args %s.", command, args)
    if doNotLog:
        logger.debug("Note: output will not be logged, per the doNotLog flag.")
        outputLogger.info("Note: output will not be logged, per the doNotLog flag.")
    output = []
    fields = command[:]  # make sure to copy it so we don't destroy it
    fields.extend(args)
    try:
        sanitizeEnvironment()  # make sure we have a consistent environment
        try:
            pipe = Pipe(fields, ignoreStderr=ignoreStderr)
        except OSError:
            # On some platforms (i.e. Cygwin) this intermittently fails the first time we do it.
            # So, we attempt it a second time and if that works, we just go on as usual.
            # The problem appears to be that we sometimes get a bad stderr file descriptor.
            pipe = Pipe(fields, ignoreStderr=ignoreStderr)
        while True:
            line = pipe.stdout.readline()
            if not line:
                break
            if returnOutput:
                output.append(line.decode("utf-8"))
            if outputFile is not None:
                outputFile.write(line)
            if not doNotLog:
                outputLogger.info(line.decode("utf-8")[:-1])  # this way the log will (hopefully) get updated in realtime
        if outputFile is not None:
            try:  # note, not every file-like object can be flushed
                outputFile.flush()
            except:
                pass
        if returnOutput:
            return (pipe.wait(), output)
        else:
            return (pipe.wait(), None)
    except OSError as e:
        try:
            if returnOutput:
                if output != []:
                    return (pipe.wait(), output)
                else:
                    return (pipe.wait(), [e, ])
            else:
                return (pipe.wait(), None)
        except UnboundLocalError:  # pipe not set
            if returnOutput:
                return (256, [])
            else:
                return (256, None)

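The core loop is roughly equivalent to the following C{subprocess}-based sketch (simplified: no logging, no Cygwin retry, no output file, and the helper name is illustrative):

```python
import subprocess

def execute_command(command, args, return_output=False):
    # Build the argument list and run it without a shell, so untrusted
    # arguments cannot inject shell constructs like "; rm -fR ~/".
    fields = command[:] + args
    pipe = subprocess.Popen(fields, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    output = []
    for line in pipe.stdout:  # read incrementally, line by line
        if return_output:
            output.append(line.decode("utf-8"))
    status = pipe.wait()
    return (status, output if return_output else None)

status, output = execute_command(["echo"], ["hello world"], return_output=True)
print(status, output)  # 0 ['hello world\n']
```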
##############################
# calculateFileAge() function
##############################

def calculateFileAge(path):
    """
    Calculates the age (in days) of a file.

    The "age" of a file is the amount of time since the file was last used, per
    the most recent of the file's C{st_atime} and C{st_mtime} values.

    Technically, we only intend this function to work with files, but it will
    probably work with anything on the filesystem.

    @param path: Path to a file on disk.

    @return: Age of the file in days (possibly fractional).
    @raise OSError: If the file doesn't exist.
    """
    currentTime = int(time.time())
    fileStats = os.stat(path)
    lastUse = max(fileStats.st_atime, fileStats.st_mtime)  # "most recent" is "largest"
    ageInSeconds = currentTime - lastUse
    ageInDays = ageInSeconds / SECONDS_PER_DAY
    return ageInDays

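Standalone, with the C{SECONDS_PER_DAY} constant spelled out:

```python
import os
import time
import tempfile

SECONDS_PER_DAY = 60.0 * 60.0 * 24.0

def calculate_file_age(path):
    stats = os.stat(path)
    last_use = max(stats.st_atime, stats.st_mtime)  # "most recent" is "largest"
    return (time.time() - last_use) / SECONDS_PER_DAY

with tempfile.NamedTemporaryFile() as f:
    age = calculate_file_age(f.name)
    print(0.0 <= age < 1.0)  # a freshly created file is less than a day old
```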
###################
# mount() function
###################

def mount(devicePath, mountPoint, fsType):
    """
    Mounts the indicated device at the indicated mount point.

    For instance, to mount a CD, you might use device path C{/dev/cdrw}, mount
    point C{/media/cdrw} and filesystem type C{iso9660}.  You can safely use
    any filesystem type that is supported by C{mount} on your platform.  If the
    type is C{None}, we'll attempt to let C{mount} auto-detect it.  This may or
    may not work on all systems.

    @note: This only works on platforms that have a concept of "mounting" a
    filesystem through a command-line C{"mount"} command, like UNIXes.  It
    won't work on Windows.

    @param devicePath: Path of device to be mounted.
    @param mountPoint: Path that device should be mounted at.
    @param fsType: Type of the filesystem assumed to be available via the device.

    @raise IOError: If the device cannot be mounted.
    """
    if fsType is None:
        args = [devicePath, mountPoint]
    else:
        args = ["-t", fsType, devicePath, mountPoint]
    command = resolveCommand(MOUNT_COMMAND)
    result = executeCommand(command, args, returnOutput=False, ignoreStderr=True)[0]
    if result != 0:
        raise IOError("Error [%d] mounting [%s] at [%s] as [%s]." % (result, devicePath, mountPoint, fsType))

#####################
# unmount() function
#####################

def unmount(mountPoint, removeAfter=False, attempts=1, waitSeconds=0):
    """
    Unmounts whatever device is mounted at the indicated mount point.

    Sometimes, it might not be possible to unmount the mount point immediately,
    if there are still files open there.  Use the C{attempts} and
    C{waitSeconds} arguments to indicate how many unmount attempts to make and
    how many seconds to wait between attempts.  If you pass in zero attempts,
    no attempts will be made (duh).

    If the indicated mount point is not really a mount point per
    C{os.path.ismount()}, then it will be ignored.  This seems to be a safer
    check than looking through C{/etc/mtab}, since C{ismount()} is already in
    the Python standard library and is documented as working on all POSIX
    systems.

    If C{removeAfter} is C{True}, then the mount point will be removed using
    C{os.rmdir()} after the unmount action succeeds.  If for some reason the
    mount point is not a directory, then it will not be removed.

    @note: This only works on platforms that have a concept of "mounting" a
    filesystem through a command-line C{"mount"} command, like UNIXes.  It
    won't work on Windows.

    @param mountPoint: Mount point to be unmounted.
    @param removeAfter: Remove the mount point after unmounting it.
    @param attempts: Number of times to attempt the unmount.
    @param waitSeconds: Number of seconds to wait between repeated attempts.

    @raise IOError: If the mount point is still mounted after attempts are exhausted.
    """
    if os.path.ismount(mountPoint):
        for attempt in range(0, attempts):
            logger.debug("Making attempt %d to unmount [%s].", attempt, mountPoint)
            command = resolveCommand(UMOUNT_COMMAND)
            result = executeCommand(command, [mountPoint, ], returnOutput=False, ignoreStderr=True)[0]
            if result != 0:
                logger.error("Error [%d] unmounting [%s] on attempt %d.", result, mountPoint, attempt)
            elif os.path.ismount(mountPoint):
                logger.error("After attempt %d, [%s] is still mounted.", attempt, mountPoint)
            else:
                logger.debug("Successfully unmounted [%s] on attempt %d.", mountPoint, attempt)
                break  # this will cause us to skip the loop else: clause
            if attempt+1 < attempts:  # i.e. this isn't the last attempt
                if waitSeconds > 0:
                    logger.info("Sleeping %d second(s) before next unmount attempt.", waitSeconds)
                    time.sleep(waitSeconds)
        else:
            if os.path.ismount(mountPoint):
                raise IOError("Unable to unmount [%s] after %d attempts." % (mountPoint, attempts))
            logger.info("Mount point [%s] seems to have finally gone away.", mountPoint)
    if os.path.isdir(mountPoint) and removeAfter:
        logger.debug("Removing mount point [%s].", mountPoint)
        os.rmdir(mountPoint)

###########################
# deviceMounted() function
###########################

def deviceMounted(devicePath):
    """
    Indicates whether a specific filesystem device is currently mounted.

    We determine whether the device is mounted by looking through the system's
    C{mtab} file.  This file shows every currently-mounted filesystem, ordered
    by device.  We only do the check if the C{mtab} file exists and is
    readable.  Otherwise, we assume that the device is not mounted.

    @note: This only works on platforms that have a concept of an mtab file
    to show mounted volumes, like UNIXes.  It won't work on Windows.

    @param devicePath: Path of device to be checked

    @return: True if device is mounted, false otherwise.
    """
    if os.path.exists(MTAB_FILE) and os.access(MTAB_FILE, os.R_OK):
        realPath = os.path.realpath(devicePath)
        with open(MTAB_FILE) as f:
            lines = f.readlines()
        for line in lines:
            (mountDevice, mountPoint, remainder) = line.split(None, 2)
            if mountDevice in [devicePath, realPath, ]:
                logger.debug("Device [%s] is mounted at [%s].", devicePath, mountPoint)
                return True
    return False

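The C{mtab} parsing is simple to demonstrate against canned lines (fields are whitespace-separated: device, mount point, then everything else; the helper name is illustrative):

```python
def find_mount_point(mtab_lines, device_path):
    # Return the mount point for the device, or None if it is not mounted.
    for line in mtab_lines:
        (mount_device, mount_point, _remainder) = line.split(None, 2)
        if mount_device == device_path:
            return mount_point
    return None

sample_mtab = [
    "/dev/sda1 / ext4 rw,relatime 0 0",
    "/dev/sr0 /media/cdrom iso9660 ro 0 0",
]
print(find_mount_point(sample_mtab, "/dev/sr0"))   # /media/cdrom
print(find_mount_point(sample_mtab, "/dev/sdb1"))  # None
```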
########################
# encodePath() function
########################

def encodePath(path):
    """
    Safely encodes a filesystem path as a Unicode string, converting bytes to
    the filesystem encoding if necessary.

    @param path: Path to encode
    @return: Path, as a string, encoded appropriately
    @raise ValueError: If the path cannot be encoded properly.
    @see: http://lucumr.pocoo.org/2013/7/2/the-updated-guide-to-unicode/
    """
    if path is None:
        return path
    try:
        if isinstance(path, bytes):
            encoding = sys.getfilesystemencoding() or sys.getdefaultencoding()
            path = path.decode(encoding, "surrogateescape")  # to match what os.listdir() does
        return path
    except UnicodeError as e:
        raise ValueError("Path could not be safely encoded as %s: %s" % (encoding, str(e)))

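A standalone version of the byte-path handling, matching what C{os.listdir()} does for undecodable names (the snake_case name is illustrative):

```python
import sys

def encode_path(path):
    if path is None:
        return path
    if isinstance(path, bytes):
        encoding = sys.getfilesystemencoding() or sys.getdefaultencoding()
        # surrogateescape round-trips bytes that are not valid in the encoding
        return path.decode(encoding, "surrogateescape")
    return path

print(encode_path(b"/tmp/backup.log"))   # /tmp/backup.log
print(encode_path("/already/a/string"))  # /already/a/string
```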
########################
# nullDevice() function
########################

def nullDevice():
    """
    Attempts to portably return the null device on this system.

    The null device is something like C{/dev/null} on a UNIX system.  The name
    varies on other platforms.
    """
    return os.devnull

##############################
# deriveDayOfWeek() function
##############################

def deriveDayOfWeek(dayName):
    """
    Converts English day name to numeric day of week as from C{time.localtime}.

    For instance, the day C{monday} would be converted to the number C{0}.

    @param dayName: Day of week to convert
    @type dayName: string, i.e. C{"monday"}, C{"tuesday"}, etc.

    @returns: Integer, where Monday is 0 and Sunday is 6; or -1 if no conversion is possible.
    """
    if dayName.lower() == "monday":
        return 0
    elif dayName.lower() == "tuesday":
        return 1
    elif dayName.lower() == "wednesday":
        return 2
    elif dayName.lower() == "thursday":
        return 3
    elif dayName.lower() == "friday":
        return 4
    elif dayName.lower() == "saturday":
        return 5
    elif dayName.lower() == "sunday":
        return 6
    else:
        return -1  # What else can we do??  Throw an exception, I guess.

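The if/elif ladder is equivalent to an index lookup into an ordered list of day names, sketched here standalone:

```python
DAY_NAMES = ["monday", "tuesday", "wednesday", "thursday", "friday", "saturday", "sunday"]

def derive_day_of_week(day_name):
    # Monday is 0 and Sunday is 6, matching time.localtime().tm_wday.
    try:
        return DAY_NAMES.index(day_name.lower())
    except ValueError:
        return -1

print(derive_day_of_week("Monday"))   # 0
print(derive_day_of_week("sunday"))   # 6
print(derive_day_of_week("someday"))  # -1
```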
###########################
# isStartOfWeek() function
###########################

def isStartOfWeek(startingDay):
    """
    Indicates whether "today" is the backup starting day per configuration.

    If the current day's English name matches the indicated starting day, then
    today is a starting day.

    @param startingDay: Configured starting day.
    @type startingDay: string, i.e. C{"monday"}, C{"tuesday"}, etc.

    @return: Boolean indicating whether today is the starting day.
    """
    value = time.localtime().tm_wday == deriveDayOfWeek(startingDay)
    if value:
        logger.debug("Today is the start of the week.")
    else:
        logger.debug("Today is NOT the start of the week.")
    return value

    1772 1773 ################################# 1774 # buildNormalizedPath() function 1775 ################################# 1776 1777 -def buildNormalizedPath(path):
    1778 """ 1779 Returns a "normalized" path based on a path name. 1780 1781 A normalized path is a representation of a path that is also a valid file 1782 name. To make a valid file name out of a complete path, we have to convert 1783 or remove some characters that are significant to the filesystem -- in 1784 particular, the path separator and any leading C{'.'} character (which would 1785 cause the file to be hidden in a file listing). 1786 1787 Note that this is a one-way transformation -- you can't safely derive the 1788 original path from the normalized path. 1789 1790 To normalize a path, we begin by looking at the first character. If the 1791 first character is C{'/'} or C{'\\'}, it gets removed. If the first 1792 character is C{'.'}, it gets converted to C{'_'}. Then, we look through the 1793 rest of the path and convert all remaining C{'/'} or C{'\\'} characters 1794 C{'-'}, and all remaining whitespace characters to C{'_'}. 1795 1796 As a special case, a path consisting only of a single C{'/'} or C{'\\'} 1797 character will be converted to C{'-'}. 1798 1799 @param path: Path to normalize 1800 1801 @return: Normalized path as described above. 1802 1803 @raise ValueError: If the path is None 1804 """ 1805 if path is None: 1806 raise ValueError("Cannot normalize path None.") 1807 elif len(path) == 0: 1808 return path 1809 elif path == "/" or path == "\\": 1810 return "-" 1811 else: 1812 normalized = path 1813 normalized = re.sub(r"^\/", "", normalized) # remove leading '/' 1814 normalized = re.sub(r"^\\", "", normalized) # remove leading '\' 1815 normalized = re.sub(r"^\.", "_", normalized) # convert leading '.' to '_' so file won't be hidden 1816 normalized = re.sub(r"\/", "-", normalized) # convert all '/' characters to '-' 1817 normalized = re.sub(r"\\", "-", normalized) # convert all '\' characters to '-' 1818 normalized = re.sub(r"\s", "_", normalized) # convert all whitespace to '_' 1819 return normalized
    1820
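The substitutions are easiest to follow with a couple of worked examples, sketched here standalone (the combined C{[/\\]} character classes behave the same as the separate substitutions):

```python
import re

def build_normalized_path(path):
    if path is None:
        raise ValueError("Cannot normalize path None.")
    if path in ("/", "\\"):
        return "-"
    normalized = re.sub(r"^[/\\]", "", path)        # drop a single leading separator
    normalized = re.sub(r"^\.", "_", normalized)    # a leading '.' would hide the file
    normalized = re.sub(r"[/\\]", "-", normalized)  # remaining separators become '-'
    normalized = re.sub(r"\s", "_", normalized)     # whitespace becomes '_'
    return normalized

print(build_normalized_path("/var/log/cback3.log"))  # var-log-cback3.log
print(build_normalized_path(".hidden file"))         # _hidden_file
print(build_normalized_path("/"))                    # -
```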
#################################
# sanitizeEnvironment() function
#################################

def sanitizeEnvironment():
    """
    Sanitizes the operating system environment.

    The operating system environment is contained in C{os.environ}.  This
    method sanitizes the contents of that dictionary.

    Currently, all it does is reset the locale (removing C{$LC_*}) and set the
    default language (C{$LANG}) to L{DEFAULT_LANGUAGE}.  This way, we can count
    on consistent localization regardless of what the end-user has configured.
    This is important for code that needs to parse program output.

    The C{os.environ} dictionary is modified in-place.  If C{$LANG} is already
    set to the proper value, it is not re-set, so we can avoid the memory leaks
    that are documented to occur on BSD-based systems.

    @return: Copy of the sanitized environment.
    """
    for var in LOCALE_VARS:
        if var in os.environ:
            del os.environ[var]
    if LANG_VAR in os.environ:
        if os.environ[LANG_VAR] != DEFAULT_LANGUAGE:  # no need to reset if it exists (avoid leaks on BSD systems)
            os.environ[LANG_VAR] = DEFAULT_LANGUAGE
    return os.environ.copy()

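A standalone sketch of the same idea; the exact C{LOCALE_VARS} list and C{DEFAULT_LANGUAGE} value below are illustrative assumptions, not the module's real constants:

```python
import os

LOCALE_VARS = ["LC_ALL", "LC_CTYPE", "LC_MESSAGES", "LC_COLLATE", "LC_TIME"]  # assumed subset
LANG_VAR = "LANG"
DEFAULT_LANGUAGE = "C"  # assumed default

def sanitize_environment():
    for var in LOCALE_VARS:
        os.environ.pop(var, None)  # remove locale overrides
    # Only re-set $LANG if it differs, avoiding the putenv leaks seen on BSDs.
    if os.environ.get(LANG_VAR) != DEFAULT_LANGUAGE:
        os.environ[LANG_VAR] = DEFAULT_LANGUAGE
    return os.environ.copy()

env = sanitize_environment()
print(env["LANG"])      # C
print("LC_ALL" in env)  # False
```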
#########################
# checkUnique() function
#########################

def checkUnique(prefix, values):
    """
    Checks that all values are unique.

    The values list is checked for duplicate values.  If there are
    duplicates, an exception is thrown.  All duplicate values are listed in
    the exception.

    @param prefix: Prefix to use in the thrown exception
    @param values: List of values to check

    @raise ValueError: If there are duplicates in the list
    """
    values.sort()  # note: this sorts the caller's list in place
    duplicates = []
    for i in range(1, len(values)):
        if values[i-1] == values[i]:
            duplicates.append(values[i])
    if duplicates:
        raise ValueError("%s %s" % (prefix, duplicates))

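Behavior in both the clean and duplicate cases; this sketch sorts a copy, leaving the caller's list untouched (unlike the in-place C{values.sort()} above):

```python
def check_unique(prefix, values):
    ordered = sorted(values)  # sort a copy so the caller's list is not reordered
    duplicates = [ordered[i] for i in range(1, len(ordered)) if ordered[i - 1] == ordered[i]]
    if duplicates:
        raise ValueError("%s %s" % (prefix, duplicates))

check_unique("Duplicate collect dirs:", ["/etc", "/home"])  # no exception
try:
    check_unique("Duplicate collect dirs:", ["/home", "/etc", "/home"])
except ValueError as e:
    print(e)  # Duplicate collect dirs: ['/home']
```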
#######################################
# parseCommaSeparatedString() function
#######################################

def parseCommaSeparatedString(commaString):
    """
    Parses a list of values out of a comma-separated string.

    The items in the list are split by comma, and then have whitespace
    stripped.  As a special case, if C{commaString} is C{None}, then C{None}
    will be returned.

    @param commaString: List of values in comma-separated string format.
    @return: Values from commaString split into a list, or C{None}.
    """
    if commaString is None:
        return None
    else:
        pass1 = commaString.split(",")
        pass2 = []
        for item in pass1:
            item = item.strip()
            if len(item) > 0:
                pass2.append(item)
        return pass2

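An equivalent one-pass version, showing the whitespace stripping and empty-item filtering:

```python
def parse_comma_separated_string(comma_string):
    if comma_string is None:
        return None
    # Split on commas, strip whitespace, and drop items that end up empty.
    return [item.strip() for item in comma_string.split(",") if item.strip()]

print(parse_comma_separated_string(" a, b ,, c "))  # ['a', 'b', 'c']
print(parse_comma_separated_string(None))           # None
```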

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.filesystem-module.html0000664000175000017500000004260412657665544027014 0ustar pronovicpronovic00000000000000 CedarBackup3.filesystem
    Package CedarBackup3 :: Module filesystem

    Module filesystem


    Provides filesystem-related objects.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Classes
      FilesystemList
    Represents a list of filesystem items.
      BackupFileList
    List of files to be backed up.
      PurgeItemList
    List of files and directories to be purged.
      SpanItem
    Item returned by BackupFileList.generateSpan.
Functions
     
    normalizeDir(path)
    Normalizes a directory name.
     
    compareContents(path1, path2, verbose=False)
    Compares the contents of two directories to see if they are equivalent.
     
    compareDigestMaps(digest1, digest2, verbose=False)
    Compares two digest maps and throws an exception if they differ.
Variables
      logger = logging.getLogger("CedarBackup3.log.filesystem")
      __package__ = 'CedarBackup3'
Function Details

    normalizeDir(path)


    Normalizes a directory name.

    For our purposes, a directory name is normalized by removing the trailing path separator, if any. This is important because we want directories to appear within lists in a consistent way, although from the user's perspective passing in /path/to/dir/ and /path/to/dir are equivalent.

    Parameters:
    • path (String representing a path on disk) - Path to be normalized.
    Returns:
    Normalized path, which should be equivalent to the original.

    compareContents(path1, path2, verbose=False)


    Compares the contents of two directories to see if they are equivalent.

The two directories are recursively compared. First, we check whether they contain exactly the same set of files. Then, we check that every file has exactly the same contents in both directories.

    This is all relatively simple to implement through the magic of BackupFileList.generateDigestMap, which knows how to strip a path prefix off the front of each entry in the mapping it generates. This makes our comparison as simple as creating a list for each path, then generating a digest map for each path and comparing the two.

    If no exception is thrown, the two directories are considered identical.

    If the verbose flag is True, then an alternate (but slower) method is used so that any thrown exception can indicate exactly which file caused the comparison to fail. The thrown ValueError exception distinguishes between the directories containing different files, and containing the same files with differing content.

    Parameters:
    • path1 (String representing a path on disk) - First path to compare.
• path2 (String representing a path on disk) - Second path to compare.
    • verbose (Boolean) - Indicates whether a verbose response should be given.
    Raises:
    • ValueError - If a directory doesn't exist or can't be read.
    • ValueError - If the two directories are not equivalent.
    • IOError - If there is an unusual problem reading the directories.

    Note: Symlinks are not followed for the purposes of this comparison.

    compareDigestMaps(digest1, digest2, verbose=False)


    Compares two digest maps and throws an exception if they differ.

    Parameters:
    • digest1 (Digest as returned from BackupFileList.generateDigestMap()) - First digest to compare.
    • digest2 (Digest as returned from BackupFileList.generateDigestMap()) - Second digest to compare.
    • verbose (Boolean) - Indicates whether a verbose response should be given.
    Raises:
    • ValueError - If the two directories are not equivalent.

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.cli._ManagedActionItem-class.html0000664000175000017500000004636712657665544030700 0ustar pronovicpronovic00000000000000 CedarBackup3.cli._ManagedActionItem
    Package CedarBackup3 :: Module cli :: Class _ManagedActionItem

    Class _ManagedActionItem


    object --+
             |
            _ManagedActionItem
    

    Class representing a single action to be executed on a managed peer.

    This class represents a single named action to be executed, and understands how to execute that action.

    Actions to be executed on a managed peer rely on peer configuration and on the full-backup flag. All other configuration takes place on the remote peer itself.


    Note: The comparison operators for this class have been implemented to only compare based on the index and SORT_ORDER value, and ignore all other values. This is so that the action set list can be easily sorted first by type (_ActionItem before _ManagedActionItem) and then by index within type.

Instance Methods
     
    __init__(self, index, name, remotePeers)
    Default constructor.
     
    __eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.
     
    __lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.
     
    __gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
     
    executeAction(self, configPath, options, config)
    Executes the managed action associated with an item.
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Class Variables
      SORT_ORDER = 1
    Defines a sort order to order properly between types.
    Properties

    Inherited from object: __class__

    Method Details

    __init__(self, index, name, remotePeers)
    (Constructor)

    source code 

    Default constructor.

    Parameters:
    • index - Index of the item (or None).
    • name - Name of the action that is being executed.
    • remotePeers - List of remote peers on which to execute the action.
    Overrides: object.__init__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator. The only thing we compare is the item's index.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.
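    The delegation described here — a retained Python 2-style `__cmp__` with the Python 3 rich comparisons implemented in terms of it — can be sketched as follows (a simplified stand-in class, not the real `_ManagedActionItem`):

    ```python
    class _Sketch:
        """Hypothetical stand-in illustrating the __cmp__ delegation pattern."""

        def __init__(self, index):
            self.index = index

        def __cmp__(self, other):
            # Original Python 2 semantics: -1/0/1, with None sorting first
            if other is None:
                return 1
            if self.index != other.index:
                return -1 if (self.index or 0) < (other.index or 0) else 1
            return 0

        def __eq__(self, other):
            return self.__cmp__(other) == 0

        def __lt__(self, other):
            return self.__cmp__(other) < 0

        def __gt__(self, other):
            return self.__cmp__(other) > 0
    ```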

    executeAction(self, configPath, options, config)

    source code 

    Executes the managed action associated with an item.

    Parameters:
    • configPath - Path to configuration file on disk.
    • options - Command-line options to be passed to action.
    • config - Parsed configuration to be passed to action.
    Raises:
    • Exception - If there is a problem executing the action.
    Notes:
    • Only options.full is actually used. The rest of the arguments exist to satisfy the ActionItem interface.
    • Errors here result in a message logged at ERROR level, but no exception is thrown. The analogy is the stage action, where a problem with one host should not kill the entire backup. Since we're logging an error, the administrator will get an email.
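    That policy — log per-peer failures at ERROR and keep going — can be sketched with a hypothetical helper (`execute_on_peers` is not part of the real API):

    ```python
    import logging

    logger = logging.getLogger("example")

    def execute_on_peers(peers, action):
        """Run action(peer) for each peer; a failure on one peer is logged
        at ERROR but does not abort the run. Returns the failure count."""
        failures = 0
        for peer in peers:
            try:
                action(peer)
            except Exception:
                failures += 1
                logger.error("Error executing action on peer %s", peer)
        return failures
    ```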

    CedarBackup3-3.1.6/doc/interface/toc-CedarBackup3.extend.sysinfo-module.html

    sysinfo

    Module sysinfo


    Functions

    executeAction

    Variables

    DPKG_COMMAND
    DPKG_PATH
    FDISK_COMMAND
    FDISK_PATH
    LS_COMMAND
    __package__
    logger

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.extend.capacity.LocalConfig-class.html

    CedarBackup3.extend.capacity.LocalConfig
    Package CedarBackup3 :: Package extend :: Module capacity :: Class LocalConfig

    Class LocalConfig

    source code

    object --+
             |
            LocalConfig
    

    Class representing this extension's configuration document.

    This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit specific configuration values to this extension. Third parties who need to read and write configuration related to this extension should access it through the constructor, validate and addConfig methods.


    Note: Lists within this class are "unordered" for equality comparisons.

    Instance Methods
     
    __init__(self, xmlData=None, xmlPath=None, validate=True)
    Initializes a configuration object.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    __eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    validate(self)
    Validates configuration represented by the object.
    source code
     
    addConfig(self, xmlDom, parentNode)
    Adds a <capacity> configuration section as the next child of a parent.
    source code
     
    _setCapacity(self, value)
    Property target used to set the capacity configuration value.
    source code
     
    _getCapacity(self)
    Property target used to get the capacity configuration value.
    source code
     
    _parseXmlData(self, xmlData)
    Internal method to parse an XML string into the object.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Static Methods
     
    _parseCapacity(parentNode)
    Parses a capacity configuration section.
    source code
     
    _readPercentageQuantity(parent, name)
    Read a percentage quantity value from an XML document.
    source code
     
    _addPercentageQuantity(xmlDom, parentNode, nodeName, percentageQuantity)
    Adds a text node as the next child of a parent, to contain a percentage quantity.
    source code
    Properties
      capacity
    Capacity configuration in terms of a CapacityConfig object.

    Inherited from object: __class__

    Method Details

    __init__(self, xmlData=None, xmlPath=None, validate=True)
    (Constructor)

    source code 

    Initializes a configuration object.

    If you initialize the object without passing either xmlData or xmlPath then configuration will be empty and will be invalid until it is filled in properly.

    No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded.

    Unless the validate argument is False, the LocalConfig.validate method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if validate is False, it might not be possible to parse the passed-in XML document if lower-level validations fail.

    Parameters:
    • xmlData (String data.) - XML data representing configuration.
    • xmlPath (Absolute path to a file on disk.) - Path to an XML file on disk.
    • validate (Boolean true/false.) - Validate the document after parsing it.
    Raises:
    • ValueError - If both xmlData and xmlPath are passed-in.
    • ValueError - If the XML data in xmlData or xmlPath cannot be parsed.
    • ValueError - If the parsed configuration document is not valid.
    Overrides: object.__init__

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to read in invalid configuration from disk.
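    A hedged usage sketch (the XML shape follows the fields documented for this extension; the commented-out calls assume the CedarBackup3 package is installed):

    ```python
    # Hypothetical capacity configuration document: only one of
    # max_percentage / min_bytes may be set, per validate().
    xml_data = """<?xml version="1.0"?>
    <cb_config>
       <capacity>
          <max_percentage>95.0</max_percentage>
       </capacity>
    </cb_config>
    """

    # from CedarBackup3.extend.capacity import LocalConfig
    # config = LocalConfig(xmlData=xml_data)   # parses, then validates by default
    # config.capacity.maxPercentage.percentage # the quantity as a float
    # Passing both xmlData and xmlPath raises ValueError
    ```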

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    validate(self)

    source code 

    Validates configuration represented by the object. There must be either a percentage or a byte capacity, but not both.

    Raises:
    • ValueError - If one of the validations fails.

    addConfig(self, xmlDom, parentNode)

    source code 

    Adds a <capacity> configuration section as the next child of a parent.

    Third parties should use this function to write configuration related to this extension.

    We add the following fields to the document:

      maxPercentage  //cb_config/capacity/max_percentage
      minBytes       //cb_config/capacity/min_bytes
    
    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent that the section should be appended to.
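    For reference, a configuration document carrying this section might look like the following sketch. The field layout comes from the XPath expressions above; the "620 MB" byte-quantity text is an assumption based on how other extensions express ByteQuantity values. Only one of max_percentage or min_bytes may be present, since validate rejects documents with both.

    ```xml
    <cb_config>
       <capacity>
          <min_bytes>620 MB</min_bytes>
       </capacity>
    </cb_config>
    ```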

    _setCapacity(self, value)

    source code 

    Property target used to set the capacity configuration value. If not None, the value must be a CapacityConfig object.

    Raises:
    • ValueError - If the value is not a CapacityConfig

    _parseXmlData(self, xmlData)

    source code 

    Internal method to parse an XML string into the object.

    This method parses the XML document into a DOM tree (xmlDom) and then calls a static method to parse the capacity configuration section.

    Parameters:
    • xmlData (String data) - XML data to be parsed
    Raises:
    • ValueError - If the XML cannot be successfully parsed.

    _parseCapacity(parentNode)
    Static Method

    source code 

    Parses a capacity configuration section.

    We read the following fields:

      maxPercentage  //cb_config/capacity/max_percentage
      minBytes       //cb_config/capacity/min_bytes
    
    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    CapacityConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _readPercentageQuantity(parent, name)
    Static Method

    source code 

    Read a percentage quantity value from an XML document.

    Parameters:
    • parent - Parent node to search beneath.
    • name - Name of node to search for.
    Returns:
    Percentage quantity parsed from XML document

    _addPercentageQuantity(xmlDom, parentNode, nodeName, percentageQuantity)
    Static Method

    source code 

    Adds a text node as the next child of a parent, to contain a percentage quantity.

    If the percentageQuantity is None, then no node will be created.

    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent node to create child for.
    • nodeName - Name of the new container node.
    • percentageQuantity - PercentageQuantity object to put into the XML document
    Returns:
    Reference to the newly-created node.

    Property Details

    capacity

    Capacity configuration in terms of a CapacityConfig object.

    Get Method:
    _getCapacity(self) - Property target used to get the capacity configuration value.
    Set Method:
    _setCapacity(self, value) - Property target used to set the capacity configuration value.

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.extend.capacity-pysrc.html

    CedarBackup3.extend.capacity
    Package CedarBackup3 :: Package extend :: Module capacity

    Source Code for Module CedarBackup3.extend.capacity

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2008,2010,2015 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python 3 (>= 3.4) 
     29  # Project  : Cedar Backup, release 3 
     30  # Purpose  : Provides an extension to check remaining media capacity. 
     31  # 
     32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     33   
     34  ######################################################################## 
     35  # Module documentation 
     36  ######################################################################## 
     37   
     38  """ 
     39  Provides an extension to check remaining media capacity. 
     40   
     41  Some users have asked for advance warning that their media is beginning to fill 
     42  up.  This is an extension that checks the current capacity of the media in the 
     43  writer, and prints a warning if the media is more than X% full, or has fewer 
     44  than X bytes of capacity remaining. 
     45   
     46  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     47  """ 
     48   
     49  ######################################################################## 
     50  # Imported modules 
     51  ######################################################################## 
     52   
     53  # System modules 
     54  import logging 
     55  from functools import total_ordering 
     56   
     57  # Cedar Backup modules 
     58  from CedarBackup3.util import displayBytes 
     59  from CedarBackup3.config import ByteQuantity, readByteQuantity, addByteQuantityNode 
     60  from CedarBackup3.xmlutil import createInputDom, addContainerNode, addStringNode 
     61  from CedarBackup3.xmlutil import readFirstChild, readString 
     62  from CedarBackup3.actions.util import createWriter, checkMediaState 
     63   
     64   
     65  ######################################################################## 
     66  # Module-wide constants and variables 
     67  ######################################################################## 
     68   
     69  logger = logging.getLogger("CedarBackup3.log.extend.capacity") 
    
########################################################################
# Percentage class definition
########################################################################

@total_ordering
class PercentageQuantity(object):

   """
   Class representing a percentage quantity.

   The percentage is maintained internally as a string so that issues of
   precision can be avoided.  It really isn't possible to store a floating
   point number here while being able to losslessly translate back and forth
   between XML and object representations.  (Perhaps the Python 2.4 Decimal
   class would have been an option, but I originally wanted to stay compatible
   with Python 2.3.)

   Even though the quantity is maintained as a string, the string must be a
   valid positive floating point number.  Technically, any floating point
   string format supported by Python is allowable.  However, it does not make
   sense to have a negative percentage in this context.

   @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__,
          quantity
   """

   def __init__(self, quantity=None):
      """
      Constructor for the C{PercentageQuantity} class.
      @param quantity: Percentage quantity, as a string (i.e. "99.9" or "12")
      @raise ValueError: If the quantity value is invalid.
      """
      self._quantity = None
      self.quantity = quantity

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "PercentageQuantity(%s)" % (self.quantity)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __eq__(self, other):
      """Equals operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) == 0

   def __lt__(self, other):
      """Less-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) < 0

   def __gt__(self, other):
      """Greater-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) > 0

   def __cmp__(self, other):
      """
      Original Python 2 comparison operator.
      Lists within this class are "unordered" for equality comparisons.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.quantity != other.quantity:
         if float(self.quantity or 0.0) < float(other.quantity or 0.0):
            return -1
         else:
            return 1
      return 0

   def _setQuantity(self, value):
      """
      Property target used to set the quantity.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      @raise ValueError: If the value is not a valid floating point number
      @raise ValueError: If the value is outside the range 0.0 to 100.0
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("Percentage must be a non-empty string.")
         floatValue = float(value)
         if floatValue < 0.0 or floatValue > 100.0:
            raise ValueError("Percentage must be a positive value from 0.0 to 100.0")
      self._quantity = value  # keep around string

   def _getQuantity(self):
      """
      Property target used to get the quantity.
      """
      return self._quantity

   def _getPercentage(self):
      """
      Property target used to get the quantity as a floating point number.
      If there is no quantity set, then a value of 0.0 is returned.
      """
      if self.quantity is not None:
         return float(self.quantity)
      return 0.0

   quantity = property(_getQuantity, _setQuantity, None, doc="Percentage value, as a string")
   percentage = property(_getPercentage, None, None, "Percentage value, as a floating point number.")

########################################################################
# CapacityConfig class definition
########################################################################

@total_ordering
class CapacityConfig(object):

   """
   Class representing capacity configuration.

   The following restrictions exist on data in this class:

      - The maximum percentage utilized must be a PercentageQuantity
      - The minimum bytes remaining must be a ByteQuantity

   @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__,
          maxPercentage, minBytes
   """

   def __init__(self, maxPercentage=None, minBytes=None):
      """
      Constructor for the C{CapacityConfig} class.

      @param maxPercentage: Maximum percentage of the media that may be utilized
      @param minBytes: Minimum number of free bytes that must be available
      """
      self._maxPercentage = None
      self._minBytes = None
      self.maxPercentage = maxPercentage
      self.minBytes = minBytes

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "CapacityConfig(%s, %s)" % (self.maxPercentage, self.minBytes)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __eq__(self, other):
      """Equals operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) == 0

   def __lt__(self, other):
      """Less-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) < 0

   def __gt__(self, other):
      """Greater-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) > 0

   def __cmp__(self, other):
      """
      Original Python 2 comparison operator.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.maxPercentage != other.maxPercentage:
         if (self.maxPercentage or PercentageQuantity()) < (other.maxPercentage or PercentageQuantity()):
            return -1
         else:
            return 1
      if self.minBytes != other.minBytes:
         if (self.minBytes or ByteQuantity()) < (other.minBytes or ByteQuantity()):
            return -1
         else:
            return 1
      return 0

   def _setMaxPercentage(self, value):
      """
      Property target used to set the maxPercentage value.
      If not C{None}, the value must be a C{PercentageQuantity} object.
      @raise ValueError: If the value is not a C{PercentageQuantity}
      """
      if value is None:
         self._maxPercentage = None
      else:
         if not isinstance(value, PercentageQuantity):
            raise ValueError("Value must be a C{PercentageQuantity} object.")
         self._maxPercentage = value

   def _getMaxPercentage(self):
      """
      Property target used to get the maxPercentage value.
      """
      return self._maxPercentage

   def _setMinBytes(self, value):
      """
      Property target used to set the bytes utilized value.
      If not C{None}, the value must be a C{ByteQuantity} object.
      @raise ValueError: If the value is not a C{ByteQuantity}
      """
      if value is None:
         self._minBytes = None
      else:
         if not isinstance(value, ByteQuantity):
            raise ValueError("Value must be a C{ByteQuantity} object.")
         self._minBytes = value

   def _getMinBytes(self):
      """
      Property target used to get the bytes remaining value.
      """
      return self._minBytes

   maxPercentage = property(_getMaxPercentage, _setMaxPercentage, None, "Maximum percentage of the media that may be utilized.")
   minBytes = property(_getMinBytes, _setMinBytes, None, "Minimum number of free bytes that must be available.")

########################################################################
# LocalConfig class definition
########################################################################

@total_ordering
class LocalConfig(object):

   """
   Class representing this extension's configuration document.

   This is not a general-purpose configuration object like the main Cedar
   Backup configuration object.  Instead, it just knows how to parse and emit
   specific configuration values to this extension.  Third parties who need to
   read and write configuration related to this extension should access it
   through the constructor, C{validate} and C{addConfig} methods.

   @note: Lists within this class are "unordered" for equality comparisons.

   @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__,
          capacity, validate, addConfig
   """

   def __init__(self, xmlData=None, xmlPath=None, validate=True):
      """
      Initializes a configuration object.

      If you initialize the object without passing either C{xmlData} or
      C{xmlPath} then configuration will be empty and will be invalid until it
      is filled in properly.

      No reference to the original XML data or original path is saved off by
      this class.  Once the data has been parsed (successfully or not) this
      original information is discarded.

      Unless the C{validate} argument is C{False}, the L{LocalConfig.validate}
      method will be called (with its default arguments) against configuration
      after successfully parsing any passed-in XML.  Keep in mind that even if
      C{validate} is C{False}, it might not be possible to parse the passed-in
      XML document if lower-level validations fail.

      @note: It is strongly suggested that the C{validate} option always be set
      to C{True} (the default) unless there is a specific need to read in
      invalid configuration from disk.

      @param xmlData: XML data representing configuration.
      @type xmlData: String data.

      @param xmlPath: Path to an XML file on disk.
      @type xmlPath: Absolute path to a file on disk.

      @param validate: Validate the document after parsing it.
      @type validate: Boolean true/false.

      @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in.
      @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed.
      @raise ValueError: If the parsed configuration document is not valid.
      """
      self._capacity = None
      self.capacity = None
      if xmlData is not None and xmlPath is not None:
         raise ValueError("Use either xmlData or xmlPath, but not both.")
      if xmlData is not None:
         self._parseXmlData(xmlData)
         if validate:
            self.validate()
      elif xmlPath is not None:
         with open(xmlPath) as f:
            xmlData = f.read()
         self._parseXmlData(xmlData)
         if validate:
            self.validate()

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "LocalConfig(%s)" % (self.capacity)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __eq__(self, other):
      """Equals operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) == 0

   def __lt__(self, other):
      """Less-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) < 0

   def __gt__(self, other):
      """Greater-than operator, implemented in terms of original Python 2 compare operator."""
      return self.__cmp__(other) > 0

   def __cmp__(self, other):
      """
      Original Python 2 comparison operator.
      Lists within this class are "unordered" for equality comparisons.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.capacity != other.capacity:
         if self.capacity < other.capacity:
            return -1
         else:
            return 1
      return 0

   def _setCapacity(self, value):
      """
      Property target used to set the capacity configuration value.
      If not C{None}, the value must be a C{CapacityConfig} object.
      @raise ValueError: If the value is not a C{CapacityConfig}
      """
      if value is None:
         self._capacity = None
      else:
         if not isinstance(value, CapacityConfig):
            raise ValueError("Value must be a C{CapacityConfig} object.")
         self._capacity = value

   def _getCapacity(self):
      """
      Property target used to get the capacity configuration value.
      """
      return self._capacity

   capacity = property(_getCapacity, _setCapacity, None, "Capacity configuration in terms of a C{CapacityConfig} object.")

   def validate(self):
      """
      Validates configuration represented by the object.
      There must be either a percentage or a byte capacity, but not both.
      @raise ValueError: If one of the validations fails.
      """
      if self.capacity is None:
         raise ValueError("Capacity section is required.")
      if self.capacity.maxPercentage is None and self.capacity.minBytes is None:
         raise ValueError("Must provide either max percentage or min bytes.")
      if self.capacity.maxPercentage is not None and self.capacity.minBytes is not None:
         raise ValueError("Must provide either max percentage or min bytes, but not both.")

   def addConfig(self, xmlDom, parentNode):
      """
      Adds a <capacity> configuration section as the next child of a parent.

      Third parties should use this function to write configuration related to
      this extension.

      We add the following fields to the document::

         maxPercentage  //cb_config/capacity/max_percentage
         minBytes       //cb_config/capacity/min_bytes

      @param xmlDom: DOM tree as from C{impl.createDocument()}.
      @param parentNode: Parent that the section should be appended to.
      """
      if self.capacity is not None:
         sectionNode = addContainerNode(xmlDom, parentNode, "capacity")
         LocalConfig._addPercentageQuantity(xmlDom, sectionNode, "max_percentage", self.capacity.maxPercentage)
         if self.capacity.minBytes is not None:  # because utility function fills in empty section on None
            addByteQuantityNode(xmlDom, sectionNode, "min_bytes", self.capacity.minBytes)

   def _parseXmlData(self, xmlData):
      """
      Internal method to parse an XML string into the object.

      This method parses the XML document into a DOM tree (C{xmlDom}) and then
      calls a static method to parse the capacity configuration section.

      @param xmlData: XML data to be parsed
      @type xmlData: String data

      @raise ValueError: If the XML cannot be successfully parsed.
      """
      (xmlDom, parentNode) = createInputDom(xmlData)
      self._capacity = LocalConfig._parseCapacity(parentNode)

   @staticmethod
   def _parseCapacity(parentNode):
      """
      Parses a capacity configuration section.

      We read the following fields::

         maxPercentage  //cb_config/capacity/max_percentage
         minBytes       //cb_config/capacity/min_bytes

      @param parentNode: Parent node to search beneath.

      @return: C{CapacityConfig} object or C{None} if the section does not exist.
      @raise ValueError: If some filled-in value is invalid.
      """
      capacity = None
      section = readFirstChild(parentNode, "capacity")
      if section is not None:
         capacity = CapacityConfig()
         capacity.maxPercentage = LocalConfig._readPercentageQuantity(section, "max_percentage")
         capacity.minBytes = readByteQuantity(section, "min_bytes")
      return capacity

   @staticmethod
   def _readPercentageQuantity(parent, name):
      """
      Read a percentage quantity value from an XML document.
      @param parent: Parent node to search beneath.
      @param name: Name of node to search for.
      @return: Percentage quantity parsed from XML document
      """
      quantity = readString(parent, name)
      if quantity is None:
         return None
      return PercentageQuantity(quantity)

   @staticmethod
   def _addPercentageQuantity(xmlDom, parentNode, nodeName, percentageQuantity):
      """
      Adds a text node as the next child of a parent, to contain a percentage quantity.

      If the C{percentageQuantity} is None, then no node will be created.

      @param xmlDom: DOM tree as from C{impl.createDocument()}.
      @param parentNode: Parent node to create child for.
      @param nodeName: Name of the new container node.
      @param percentageQuantity: PercentageQuantity object to put into the XML document

      @return: Reference to the newly-created node.
      """
      if percentageQuantity is not None:
         addStringNode(xmlDom, parentNode, nodeName, percentageQuantity.quantity)

########################################################################
# Public functions
########################################################################

###########################
# executeAction() function
###########################

def executeAction(configPath, options, config):
   """
   Executes the capacity action.

   @param configPath: Path to configuration file on disk.
   @type configPath: String representing a path on disk.

   @param options: Program command-line options.
   @type options: Options object.

   @param config: Program configuration.
   @type config: Config object.

   @raise ValueError: Under many generic error conditions
   @raise IOError: If there are I/O problems reading or writing files
   """
   logger.debug("Executing capacity extended action.")
   if config.options is None or config.store is None:
      raise ValueError("Cedar Backup configuration is not properly filled in.")
   local = LocalConfig(xmlPath=configPath)
   if config.store.checkMedia:
      checkMediaState(config.store)  # raises exception if media is not initialized
   capacity = createWriter(config).retrieveCapacity()
   logger.debug("Media capacity: %s", capacity)
   if local.capacity.maxPercentage is not None:
      if capacity.utilized > local.capacity.maxPercentage.percentage:
         logger.error("Media has reached capacity limit of %s%%: %.2f%% utilized",
                      local.capacity.maxPercentage.quantity, capacity.utilized)
   else:
      if capacity.bytesAvailable < local.capacity.minBytes:
         logger.error("Media has reached capacity limit of %s: only %s available",
                      local.capacity.minBytes, displayBytes(capacity.bytesAvailable))
   logger.info("Executed the capacity extended action successfully.")

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.extend.mbox.LocalConfig-class.html: CedarBackup3.extend.mbox.LocalConfig
    Package CedarBackup3 :: Package extend :: Module mbox :: Class LocalConfig

    Class LocalConfig

    source code

    object --+
             |
            LocalConfig
    

    Class representing this extension's configuration document.

    This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit Mbox-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, validate and addConfig methods.


    Note: Lists within this class are "unordered" for equality comparisons.
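    The constructor/validate/addConfig access pattern described above can be illustrated without the real class. Below is a minimal, stdlib-only sketch of the XML shape this class parses; the element names (collect_mode, compress_mode, file/abs_path, dir/abs_path) come from the method documentation further down this page, while the sample paths and the text() helper are purely hypothetical, not part of Cedar Backup.

```python
# Illustrative sketch (not the real LocalConfig): walk the <mbox> section
# using only the standard library.
from xml.dom.minidom import parseString

XML = """<cb_config>
  <mbox>
    <collect_mode>incr</collect_mode>
    <compress_mode>gzip</compress_mode>
    <file><abs_path>/home/user/mail/inbox</abs_path></file>
    <dir><abs_path>/home/user/mail/archive</abs_path></dir>
  </mbox>
</cb_config>"""

def text(parent, tag):
    """Return the text content of the first <tag> descendant, or None."""
    nodes = parent.getElementsByTagName(tag)
    return nodes[0].firstChild.data if nodes else None

dom = parseString(XML)
mbox = dom.getElementsByTagName("mbox")[0]
config = {
    "collectMode": text(mbox, "collect_mode"),
    "compressMode": text(mbox, "compress_mode"),
    "mboxFiles": [text(f, "abs_path") for f in mbox.getElementsByTagName("file")],
    "mboxDirs": [text(d, "abs_path") for d in mbox.getElementsByTagName("dir")],
}
print(config["collectMode"], config["mboxFiles"][0])
```

    The real class additionally validates the parsed values and can emit this section back out via addConfig.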

    Instance Methods
     
    __init__(self, xmlData=None, xmlPath=None, validate=True)
    Initializes a configuration object.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    __eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    validate(self)
    Validates configuration represented by the object.
    source code
     
    addConfig(self, xmlDom, parentNode)
    Adds an <mbox> configuration section as the next child of a parent.
    source code
     
    _setMbox(self, value)
    Property target used to set the mbox configuration value.
    source code
     
    _getMbox(self)
    Property target used to get the mbox configuration value.
    source code
     
    _parseXmlData(self, xmlData)
    Internal method to parse an XML string into the object.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Static Methods
     
    _parseMbox(parent)
    Parses an mbox configuration section.
    source code
     
    _parseMboxFiles(parent)
    Reads a list of MboxFile objects from immediately beneath the parent.
    source code
     
    _parseMboxDirs(parent)
    Reads a list of MboxDir objects from immediately beneath the parent.
    source code
     
    _parseExclusions(parentNode)
    Reads exclusions data from immediately beneath the parent.
    source code
     
    _addMboxFile(xmlDom, parentNode, mboxFile)
    Adds an mbox file container as the next child of a parent.
    source code
     
    _addMboxDir(xmlDom, parentNode, mboxDir)
    Adds an mbox directory container as the next child of a parent.
    source code
    Properties
      mbox
    Mbox configuration in terms of a MboxConfig object.

    Inherited from object: __class__

    Method Details

    __init__(self, xmlData=None, xmlPath=None, validate=True)
    (Constructor)

    source code 

    Initializes a configuration object.

    If you initialize the object without passing either xmlData or xmlPath then configuration will be empty and will be invalid until it is filled in properly.

    No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded.

    Unless the validate argument is False, the LocalConfig.validate method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if validate is False, it might not be possible to parse the passed-in XML document if lower-level validations fail.

    Parameters:
    • xmlData (String data.) - XML data representing configuration.
    • xmlPath (Absolute path to a file on disk.) - Path to an XML file on disk.
    • validate (Boolean true/false.) - Validate the document after parsing it.
    Raises:
    • ValueError - If both xmlData and xmlPath are passed-in.
    • ValueError - If the XML data in xmlData or xmlPath cannot be parsed.
    • ValueError - If the parsed configuration document is not valid.
    Overrides: object.__init__

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to read in invalid configuration from disk.

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.
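    The __cmp__/__eq__/__lt__ arrangement documented here follows the Python 3 porting pattern visible in this package's source listings: keep the old Python 2 __cmp__, define the rich comparisons in terms of it, and let functools.total_ordering derive the rest. A minimal, self-contained sketch (the Cmp class is illustrative, not a Cedar Backup class):

```python
from functools import total_ordering

@total_ordering
class Cmp:
    """Bridge a Python 2 style __cmp__ to Python 3 rich comparisons."""
    def __init__(self, value):
        self.value = value
    def __cmp__(self, other):
        if other is None:
            return 1                      # anything sorts after None, as in the real classes
        if self.value != other.value:
            return -1 if self.value < other.value else 1
        return 0
    def __eq__(self, other):
        return self.__cmp__(other) == 0
    def __lt__(self, other):
        return self.__cmp__(other) < 0

# total_ordering derives __le__, __gt__, __ge__ from __eq__ and __lt__
print(Cmp(1) < Cmp(2), Cmp(1) == Cmp(1), Cmp(2) >= Cmp(1))
```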

    validate(self)

    source code 

    Validates configuration represented by the object.

    Mbox configuration must be filled in. Within that, the collect mode and compress mode are both optional, but the combined list of mbox files and directories must contain at least one entry.

    Each configured file or directory must contain an absolute path, and then must be either able to take collect mode and compress mode configuration from the parent MboxConfig object, or must set each value on its own.

    Raises:
    • ValueError - If one of the validations fails.

    addConfig(self, xmlDom, parentNode)

    source code 

    Adds an <mbox> configuration section as the next child of a parent.

    Third parties should use this function to write configuration related to this extension.

    We add the following fields to the document:

      collectMode    //cb_config/mbox/collect_mode
      compressMode   //cb_config/mbox/compress_mode
    

    We also add groups of the following items, one list element per item:

      mboxFiles      //cb_config/mbox/file
      mboxDirs       //cb_config/mbox/dir
    

    The mbox files and mbox directories are added by _addMboxFile and _addMboxDir.

    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent that the section should be appended to.

    _setMbox(self, value)

    source code 

    Property target used to set the mbox configuration value. If not None, the value must be a MboxConfig object.

    Raises:
    • ValueError - If the value is not a MboxConfig

    _parseXmlData(self, xmlData)

    source code 

    Internal method to parse an XML string into the object.

    This method parses the XML document into a DOM tree (xmlDom) and then calls a static method to parse the mbox configuration section.

    Parameters:
    • xmlData (String data) - XML data to be parsed
    Raises:
    • ValueError - If the XML cannot be successfully parsed.

    _parseMbox(parent)
    Static Method

    source code 

    Parses an mbox configuration section.

    We read the following individual fields:

      collectMode    //cb_config/mbox/collect_mode
      compressMode   //cb_config/mbox/compress_mode
    

    We also read groups of the following item, one list element per item:

      mboxFiles      //cb_config/mbox/file
      mboxDirs       //cb_config/mbox/dir
    

    The mbox files are parsed by _parseMboxFiles and the mbox directories are parsed by _parseMboxDirs.

    Parameters:
    • parent - Parent node to search beneath.
    Returns:
    MboxConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseMboxFiles(parent)
    Static Method

    source code 

    Reads a list of MboxFile objects from immediately beneath the parent.

    We read the following individual fields:

      absolutePath            abs_path
      collectMode             collect_mode
      compressMode            compress_mode
    
    Parameters:
    • parent - Parent node to search beneath.
    Returns:
    List of MboxFile objects or None if none are found.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseMboxDirs(parent)
    Static Method

    source code 

    Reads a list of MboxDir objects from immediately beneath the parent.

    We read the following individual fields:

      absolutePath            abs_path
      collectMode             collect_mode
      compressMode            compress_mode
    

    We also read groups of the following items, one list element per item:

      relativeExcludePaths    exclude/rel_path
      excludePatterns         exclude/pattern
    

    The exclusions are parsed by _parseExclusions.

    Parameters:
    • parent - Parent node to search beneath.
    Returns:
    List of MboxDir objects or None if none are found.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseExclusions(parentNode)
    Static Method

    source code 

    Reads exclusions data from immediately beneath the parent.

    We read groups of the following items, one list element per item:

      relative    exclude/rel_path
      patterns    exclude/pattern
    

    If there are no items of a given kind (for instance, no relative path items), then None is returned for that element of the tuple.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    Tuple of (relative, patterns) exclusions.
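    A rough stdlib sketch of this documented behavior, including the None-for-empty convention in the returned tuple (parse_exclusions and the sample XML are illustrative, not the real static method):

```python
from xml.dom.minidom import parseString

def parse_exclusions(parent):
    # Collect exclude/rel_path and exclude/pattern text values; per the
    # documentation above, an empty list is reported as None in the tuple.
    rel = [n.firstChild.data for n in parent.getElementsByTagName("rel_path")]
    pat = [n.firstChild.data for n in parent.getElementsByTagName("pattern")]
    return (rel or None, pat or None)

dom = parseString("<dir><exclude><pattern>tmp$</pattern></exclude></dir>")
relative, patterns = parse_exclusions(dom.documentElement)
print(relative, patterns)
```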

    _addMboxFile(xmlDom, parentNode, mboxFile)
    Static Method

    source code 

    Adds an mbox file container as the next child of a parent.

    We add the following fields to the document:

      absolutePath            file/abs_path
      collectMode             file/collect_mode
      compressMode            file/compress_mode
    

    The <file> node itself is created as the next child of the parent node. This method only adds one mbox file node. The parent must loop for each mbox file in the MboxConfig object.

    If mboxFile is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent that the section should be appended to.
    • mboxFile - MboxFile to be added to the document.

    _addMboxDir(xmlDom, parentNode, mboxDir)
    Static Method

    source code 

    Adds an mbox directory container as the next child of a parent.

    We add the following fields to the document:

      absolutePath            dir/abs_path
      collectMode             dir/collect_mode
      compressMode            dir/compress_mode
    

    We also add groups of the following items, one list element per item:

      relativeExcludePaths    dir/exclude/rel_path
      excludePatterns         dir/exclude/pattern
    

    The <dir> node itself is created as the next child of the parent node. This method only adds one mbox directory node. The parent must loop for each mbox directory in the MboxConfig object.

    If mboxDir is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent that the section should be appended to.
    • mboxDir - MboxDir to be added to the document.

    Property Details

    mbox

    Mbox configuration in terms of a MboxConfig object.

    Get Method:
    _getMbox(self) - Property target used to get the mbox configuration value.
    Set Method:
    _setMbox(self, value) - Property target used to set the mbox configuration value.

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.extend.split-module.html: CedarBackup3.extend.split
    Package CedarBackup3 :: Package extend :: Module split

    Module split

    source code

    Provides an extension to split up large files in staging directories.

    When this extension is executed, it will look through the configured Cedar Backup staging directory for files exceeding a specified size limit, and split them into smaller files using the 'split' utility. Any directory that has already been split (as indicated by the cback.split file) will be ignored.

    This extension requires a new configuration section <split> and is intended to be run immediately after the standard stage action or immediately before the standard store action. Aside from its own configuration, it requires the options and staging configuration sections in the standard Cedar Backup configuration file.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>
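    The extension delegates the actual work to the 'split' utility, but the chunking it performs can be illustrated in pure Python. This is a simplified stand-in, not the extension's implementation: the numeric suffixes, sample file name, and sizes below are illustrative, and ownership and indicator-file handling are omitted.

```python
import os
import tempfile

def split_file(source_path, split_size):
    """Break a file into split_size-byte chunks (numeric suffixes here,
    rather than the alphabetic suffixes the real 'split' utility uses)."""
    chunks = []
    with open(source_path, "rb") as src:
        index = 0
        while True:
            data = src.read(split_size)
            if not data:
                break
            chunk = "%s_%05d" % (source_path, index)
            with open(chunk, "wb") as out:
                out.write(data)
            chunks.append(chunk)
            index += 1
    return chunks

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "stage.tar.gz")
    with open(path, "wb") as f:
        f.write(b"x" * 2500)
    parts = split_file(path, 1024)          # 2500 bytes in 1024-byte chunks
    sizes = [os.path.getsize(p) for p in parts]
    print(len(parts), sizes)
```

    The last chunk carries whatever remains after the full-size chunks, exactly as with 'split -b'.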

    Classes [hide private]
      SplitConfig
    Class representing split configuration.
      LocalConfig
    Class representing this extension's configuration document.
    Functions
     
    executeAction(configPath, options, config)
    Executes the split backup action.
    source code
     
    _splitDailyDir(dailyDir, sizeLimit, splitSize, backupUser, backupGroup)
    Splits large files in a daily staging directory.
    source code
     
    _splitFile(sourcePath, splitSize, backupUser, backupGroup, removeSource=False)
    Splits the source file into chunks of the indicated size.
    source code
    Variables
      logger = logging.getLogger("CedarBackup3.log.extend.split")
      SPLIT_COMMAND = ['split']
      SPLIT_INDICATOR = 'cback.split'
      __package__ = 'CedarBackup3.extend'
    Function Details

    executeAction(configPath, options, config)

    source code 

    Executes the split backup action.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If there are I/O problems reading or writing files

    _splitDailyDir(dailyDir, sizeLimit, splitSize, backupUser, backupGroup)

    source code 

    Splits large files in a daily staging directory.

    Files that match INDICATOR_PATTERNS (i.e. "cback.store", "cback.stage", etc.) are assumed to be indicator files and are ignored. All other files exceeding the size limit are split.

    Parameters:
    • dailyDir - Daily directory to split
    • sizeLimit - Size limit, in bytes
    • splitSize - Split size, in bytes
    • backupUser - User that target files should be owned by
    • backupGroup - Group that target files should be owned by
    Raises:
    • ValueError - If the daily staging directory does not exist.

    _splitFile(sourcePath, splitSize, backupUser, backupGroup, removeSource=False)

    source code 

    Splits the source file into chunks of the indicated size.

    The split files will be owned by the indicated backup user and group. If removeSource is True, then the source file will be removed after it is successfully split.

    Parameters:
    • sourcePath - Absolute path of the source file to split
    • splitSize - Split size, in bytes
    • backupUser - User that target files should be owned by
    • backupGroup - Group that target files should be owned by
    • removeSource - Indicates whether to remove the source file
    Raises:
    • IOError - If there is a problem accessing, splitting or removing the source file.

    CedarBackup3-3.1.6/doc/interface/toc-CedarBackup3.tools.amazons3-module.html: amazons3

    Module amazons3


    Classes

    Options

    Functions

    cli

    Variables

    AWS_COMMAND
    LONG_SWITCHES
    SHORT_SWITCHES
    logger

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.extend.capacity-module.html: CedarBackup3.extend.capacity
    Package CedarBackup3 :: Package extend :: Module capacity

    Module capacity

    source code

    Provides an extension to check remaining media capacity.

    Some users have asked for advance warning that their media is beginning to fill up. This is an extension that checks the current capacity of the media in the writer, and prints a warning if the media is more than X% full, or has fewer than X bytes of capacity remaining.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>
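    The check executeAction performs (shown in the source listing elsewhere in this dump) can be sketched as a small standalone function. This is a simplified illustration under assumed inputs, not the extension's code: the function name and parameters are hypothetical stand-ins for the CapacityConfig maxPercentage/minBytes settings and the writer's reported capacity.

```python
def capacity_warning(utilized_pct, bytes_available, max_percentage=None, min_bytes=None):
    """Warn when media is more than max_percentage full, or (when no
    percentage limit is configured) when fewer than min_bytes remain."""
    if max_percentage is not None:
        return utilized_pct > max_percentage
    if min_bytes is not None:
        return bytes_available < min_bytes
    return False

print(capacity_warning(97.5, 0, max_percentage=95.0))                 # percentage rule
print(capacity_warning(50.0, 100 * 1024, min_bytes=600 * 1024 * 1024))  # byte rule
```

    As in the real action, a configured percentage limit takes precedence over a byte limit.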

    Classes [hide private]
      PercentageQuantity
    Class representing a percentage quantity.
      CapacityConfig
    Class representing capacity configuration.
      LocalConfig
    Class representing this extension's configuration document.
    Functions
     
    executeAction(configPath, options, config)
    Executes the capacity action.
    source code
    Variables
      logger = logging.getLogger("CedarBackup3.log.extend.capacity")
      __package__ = 'CedarBackup3.extend'
    Function Details

    executeAction(configPath, options, config)

    source code 

    Executes the capacity action.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If there are I/O problems reading or writing files

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.extend.encrypt-pysrc.html: CedarBackup3.extend.encrypt
    Package CedarBackup3 :: Package extend :: Module encrypt

    Source Code for Module CedarBackup3.extend.encrypt

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2007,2010,2015 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python 3 (>= 3.4) 
     29  # Project  : Official Cedar Backup Extensions 
     30  # Purpose  : Provides an extension to encrypt staging directories. 
     31  # 
     32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     33   
     34  ######################################################################## 
     35  # Module documentation 
     36  ######################################################################## 
     37   
     38  """ 
     39  Provides an extension to encrypt staging directories. 
     40   
     41  When this extension is executed, all backed-up files in the configured Cedar 
     42  Backup staging directory will be encrypted using gpg.  Any directory which has 
     43  already been encrypted (as indicated by the C{cback.encrypt} file) will be 
     44  ignored. 
     45   
     46  This extension requires a new configuration section <encrypt> and is intended 
     47  to be run immediately after the standard stage action or immediately before the 
     48  standard store action.  Aside from its own configuration, it requires the 
     49  options and staging configuration sections in the standard Cedar Backup 
     50  configuration file. 
     51   
     52  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     53  """ 
     54   
     55  ######################################################################## 
     56  # Imported modules 
     57  ######################################################################## 
     58   
     59  # System modules 
     60  import os 
     61  import logging 
     62  from functools import total_ordering 
     63   
     64  # Cedar Backup modules 
     65  from CedarBackup3.util import resolveCommand, executeCommand, changeOwnership 
     66  from CedarBackup3.xmlutil import createInputDom, addContainerNode, addStringNode 
     67  from CedarBackup3.xmlutil import readFirstChild, readString 
     68  from CedarBackup3.actions.util import findDailyDirs, writeIndicatorFile, getBackupFiles 
     69   
     70   
     71  ######################################################################## 
     72  # Module-wide constants and variables 
     73  ######################################################################## 
     74   
     75  logger = logging.getLogger("CedarBackup3.log.extend.encrypt") 
     76   
     77  GPG_COMMAND = [ "gpg", ] 
     78  VALID_ENCRYPT_MODES = [ "gpg", ] 
     79  ENCRYPT_INDICATOR = "cback.encrypt" 
    
    80 81 82 ######################################################################## 83 # EncryptConfig class definition 84 ######################################################################## 85 86 @total_ordering 87 -class EncryptConfig(object):
    88 89 """ 90 Class representing encrypt configuration. 91 92 Encrypt configuration is used for encrypting staging directories. 93 94 The following restrictions exist on data in this class: 95 96 - The encrypt mode must be one of the values in L{VALID_ENCRYPT_MODES} 97 - The encrypt target value must be a non-empty string 98 99 @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, 100 encryptMode, encryptTarget 101 """ 102
    103 - def __init__(self, encryptMode=None, encryptTarget=None):
    104 """ 105 Constructor for the C{EncryptConfig} class. 106 107 @param encryptMode: Encryption mode 108 @param encryptTarget: Encryption target (for instance, GPG recipient) 109 110 @raise ValueError: If one of the values is invalid. 111 """ 112 self._encryptMode = None 113 self._encryptTarget = None 114 self.encryptMode = encryptMode 115 self.encryptTarget = encryptTarget
    116
    117 - def __repr__(self):
    118 """ 119 Official string representation for class instance. 120 """ 121 return "EncryptConfig(%s, %s)" % (self.encryptMode, self.encryptTarget)
    122
    123 - def __str__(self):
    124 """ 125 Informal string representation for class instance. 126 """ 127 return self.__repr__()
    128
    129 - def __eq__(self, other):
    130 """Equals operator, iplemented in terms of original Python 2 compare operator.""" 131 return self.__cmp__(other) == 0
    132
    133 - def __lt__(self, other):
    134 """Less-than operator, iplemented in terms of original Python 2 compare operator.""" 135 return self.__cmp__(other) < 0
    136
    137 - def __gt__(self, other):
    138 """Greater-than operator, iplemented in terms of original Python 2 compare operator.""" 139 return self.__cmp__(other) > 0
    140
    141 - def __cmp__(self, other):
    142 """ 143 Original Python 2 comparison operator. 144 Lists within this class are "unordered" for equality comparisons. 145 @param other: Other object to compare to. 146 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 147 """ 148 if other is None: 149 return 1 150 if self.encryptMode != other.encryptMode: 151 if str(self.encryptMode or "") < str(other.encryptMode or ""): 152 return -1 153 else: 154 return 1 155 if self.encryptTarget != other.encryptTarget: 156 if str(self.encryptTarget or "") < str(other.encryptTarget or ""): 157 return -1 158 else: 159 return 1 160 return 0
    161
    162 - def _setEncryptMode(self, value):
    163 """ 164 Property target used to set the encrypt mode. 165 If not C{None}, the mode must be one of the values in L{VALID_ENCRYPT_MODES}. 166 @raise ValueError: If the value is not valid. 167 """ 168 if value is not None: 169 if value not in VALID_ENCRYPT_MODES: 170 raise ValueError("Encrypt mode must be one of %s." % VALID_ENCRYPT_MODES) 171 self._encryptMode = value
    172
    173 - def _getEncryptMode(self):
    174 """ 175 Property target used to get the encrypt mode. 176 """ 177 return self._encryptMode
    178
    179 - def _setEncryptTarget(self, value):
    180 """ 181 Property target used to set the encrypt target. 182 """ 183 if value is not None: 184 if len(value) < 1: 185 raise ValueError("Encrypt target must be non-empty string.") 186 self._encryptTarget = value
    187
    188 - def _getEncryptTarget(self):
    189 """ 190 Property target used to get the encrypt target. 191 """ 192 return self._encryptTarget
    193 194 encryptMode = property(_getEncryptMode, _setEncryptMode, None, doc="Encrypt mode.") 195 encryptTarget = property(_getEncryptTarget, _setEncryptTarget, None, doc="Encrypt target (i.e. GPG recipient).")
    196
    197 198 ######################################################################## 199 # LocalConfig class definition 200 ######################################################################## 201 202 @total_ordering 203 -class LocalConfig(object):
    204 205 """ 206 Class representing this extension's configuration document. 207 208 This is not a general-purpose configuration object like the main Cedar 209 Backup configuration object. Instead, it just knows how to parse and emit 210 encrypt-specific configuration values. Third parties who need to read and 211 write configuration related to this extension should access it through the 212 constructor, C{validate} and C{addConfig} methods. 213 214 @note: Lists within this class are "unordered" for equality comparisons. 215 216 @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, 217 encrypt, validate, addConfig 218 """ 219
    220 - def __init__(self, xmlData=None, xmlPath=None, validate=True):
    221 """ 222 Initializes a configuration object. 223 224 If you initialize the object without passing either C{xmlData} or 225 C{xmlPath} then configuration will be empty and will be invalid until it 226 is filled in properly. 227 228 No reference to the original XML data or original path is saved off by 229 this class. Once the data has been parsed (successfully or not) this 230 original information is discarded. 231 232 Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} 233 method will be called (with its default arguments) against configuration 234 after successfully parsing any passed-in XML. Keep in mind that even if 235 C{validate} is C{False}, it might not be possible to parse the passed-in 236 XML document if lower-level validations fail. 237 238 @note: It is strongly suggested that the C{validate} option always be set 239 to C{True} (the default) unless there is a specific need to read in 240 invalid configuration from disk. 241 242 @param xmlData: XML data representing configuration. 243 @type xmlData: String data. 244 245 @param xmlPath: Path to an XML file on disk. 246 @type xmlPath: Absolute path to a file on disk. 247 248 @param validate: Validate the document after parsing it. 249 @type validate: Boolean true/false. 250 251 @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. 252 @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. 253 @raise ValueError: If the parsed configuration document is not valid. 254 """ 255 self._encrypt = None 256 self.encrypt = None 257 if xmlData is not None and xmlPath is not None: 258 raise ValueError("Use either xmlData or xmlPath, but not both.") 259 if xmlData is not None: 260 self._parseXmlData(xmlData) 261 if validate: 262 self.validate() 263 elif xmlPath is not None: 264 with open(xmlPath) as f: 265 xmlData = f.read() 266 self._parseXmlData(xmlData) 267 if validate: 268 self.validate()
    269
    270 - def __repr__(self):
    271 """ 272 Official string representation for class instance. 273 """ 274 return "LocalConfig(%s)" % (self.encrypt)
    275
    276 - def __str__(self):
    277 """ 278 Informal string representation for class instance. 279 """ 280 return self.__repr__()
    281
    282 - def __eq__(self, other):
    283 """Equals operator, iplemented in terms of original Python 2 compare operator.""" 284 return self.__cmp__(other) == 0
    285
    286 - def __lt__(self, other):
    287 """Less-than operator, iplemented in terms of original Python 2 compare operator.""" 288 return self.__cmp__(other) < 0
    289
    290 - def __gt__(self, other):
    291 """Greater-than operator, iplemented in terms of original Python 2 compare operator.""" 292 return self.__cmp__(other) > 0
    293
    294 - def __cmp__(self, other):
    295 """ 296 Original Python 2 comparison operator. 297 Lists within this class are "unordered" for equality comparisons. 298 @param other: Other object to compare to. 299 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 300 """ 301 if other is None: 302 return 1 303 if self.encrypt != other.encrypt: 304 if self.encrypt < other.encrypt: 305 return -1 306 else: 307 return 1 308 return 0
    309
    310 - def _setEncrypt(self, value):
    311 """ 312 Property target used to set the encrypt configuration value. 313 If not C{None}, the value must be a C{EncryptConfig} object. 314 @raise ValueError: If the value is not a C{EncryptConfig} 315 """ 316 if value is None: 317 self._encrypt = None 318 else: 319 if not isinstance(value, EncryptConfig): 320 raise ValueError("Value must be a C{EncryptConfig} object.") 321 self._encrypt = value
    322
    323 - def _getEncrypt(self):
    324 """ 325 Property target used to get the encrypt configuration value. 326 """ 327 return self._encrypt
    328 329 encrypt = property(_getEncrypt, _setEncrypt, None, "Encrypt configuration in terms of a C{EncryptConfig} object.") 330
    331 - def validate(self):
    332 """ 333 Validates configuration represented by the object. 334 335 Encrypt configuration must be filled in. Within that, both the encrypt 336 mode and encrypt target must be filled in. 337 338 @raise ValueError: If one of the validations fails. 339 """ 340 if self.encrypt is None: 341 raise ValueError("Encrypt section is required.") 342 if self.encrypt.encryptMode is None: 343 raise ValueError("Encrypt mode must be set.") 344 if self.encrypt.encryptTarget is None: 345 raise ValueError("Encrypt target must be set.")
    346
    347 - def addConfig(self, xmlDom, parentNode):
    348 """ 349 Adds an <encrypt> configuration section as the next child of a parent. 350 351 Third parties should use this function to write configuration related to 352 this extension. 353 354 We add the following fields to the document:: 355 356 encryptMode //cb_config/encrypt/encrypt_mode 357 encryptTarget //cb_config/encrypt/encrypt_target 358 359 @param xmlDom: DOM tree as from C{impl.createDocument()}. 360 @param parentNode: Parent that the section should be appended to. 361 """ 362 if self.encrypt is not None: 363 sectionNode = addContainerNode(xmlDom, parentNode, "encrypt") 364 addStringNode(xmlDom, sectionNode, "encrypt_mode", self.encrypt.encryptMode) 365 addStringNode(xmlDom, sectionNode, "encrypt_target", self.encrypt.encryptTarget)
    366
    367 - def _parseXmlData(self, xmlData):
    368 """ 369 Internal method to parse an XML string into the object. 370 371 This method parses the XML document into a DOM tree (C{xmlDom}) and then 372 calls a static method to parse the encrypt configuration section. 373 374 @param xmlData: XML data to be parsed 375 @type xmlData: String data 376 377 @raise ValueError: If the XML cannot be successfully parsed. 378 """ 379 (xmlDom, parentNode) = createInputDom(xmlData) 380 self._encrypt = LocalConfig._parseEncrypt(parentNode)
    381 382 @staticmethod
    383 - def _parseEncrypt(parent):
    384 """ 385 Parses an encrypt configuration section. 386 387 We read the following individual fields:: 388 389 encryptMode //cb_config/encrypt/encrypt_mode 390 encryptTarget //cb_config/encrypt/encrypt_target 391 392 @param parent: Parent node to search beneath. 393 394 @return: C{EncryptConfig} object or C{None} if the section does not exist. 395 @raise ValueError: If some filled-in value is invalid. 396 """ 397 encrypt = None 398 section = readFirstChild(parent, "encrypt") 399 if section is not None: 400 encrypt = EncryptConfig() 401 encrypt.encryptMode = readString(section, "encrypt_mode") 402 encrypt.encryptTarget = readString(section, "encrypt_target") 403 return encrypt
    404
    405 406 ######################################################################## 407 # Public functions 408 ######################################################################## 409 410 ########################### 411 # executeAction() function 412 ########################### 413 414 -def executeAction(configPath, options, config):
    415 """ 416 Executes the encrypt backup action. 417 418 @param configPath: Path to configuration file on disk. 419 @type configPath: String representing a path on disk. 420 421 @param options: Program command-line options. 422 @type options: Options object. 423 424 @param config: Program configuration. 425 @type config: Config object. 426 427 @raise ValueError: Under many generic error conditions 428 @raise IOError: If there are I/O problems reading or writing files 429 """ 430 logger.debug("Executing encrypt extended action.") 431 if config.options is None or config.stage is None: 432 raise ValueError("Cedar Backup configuration is not properly filled in.") 433 local = LocalConfig(xmlPath=configPath) 434 if local.encrypt.encryptMode not in ["gpg", ]: 435 raise ValueError("Unknown encrypt mode [%s]" % local.encrypt.encryptMode) 436 if local.encrypt.encryptMode == "gpg": 437 _confirmGpgRecipient(local.encrypt.encryptTarget) 438 dailyDirs = findDailyDirs(config.stage.targetDir, ENCRYPT_INDICATOR) 439 for dailyDir in dailyDirs: 440 _encryptDailyDir(dailyDir, local.encrypt.encryptMode, local.encrypt.encryptTarget, 441 config.options.backupUser, config.options.backupGroup) 442 writeIndicatorFile(dailyDir, ENCRYPT_INDICATOR, config.options.backupUser, config.options.backupGroup) 443 logger.info("Executed the encrypt extended action successfully.")
    444
    445 446 ############################## 447 # _encryptDailyDir() function 448 ############################## 449 450 -def _encryptDailyDir(dailyDir, encryptMode, encryptTarget, backupUser, backupGroup):
    451 """ 452 Encrypts the contents of a daily staging directory. 453 454 Indicator files are ignored. All other files are encrypted. The only valid 455 encrypt mode is C{"gpg"}. 456 457 @param dailyDir: Daily directory to encrypt 458 @param encryptMode: Encryption mode (only "gpg" is allowed) 459 @param encryptTarget: Encryption target (GPG recipient for "gpg" mode) 460 @param backupUser: User that target files should be owned by 461 @param backupGroup: Group that target files should be owned by 462 463 @raise ValueError: If the encrypt mode is not supported. 464 @raise ValueError: If the daily staging directory does not exist. 465 """ 466 logger.debug("Begin encrypting contents of [%s].", dailyDir) 467 fileList = getBackupFiles(dailyDir) # ignores indicator files 468 for path in fileList: 469 _encryptFile(path, encryptMode, encryptTarget, backupUser, backupGroup, removeSource=True) 470 logger.debug("Completed encrypting contents of [%s].", dailyDir)
    471
    472 473 ########################## 474 # _encryptFile() function 475 ########################## 476 477 -def _encryptFile(sourcePath, encryptMode, encryptTarget, backupUser, backupGroup, removeSource=False):
    478 """ 479 Encrypts the source file using the indicated mode. 480 481 The encrypted file will be owned by the indicated backup user and group. If 482 C{removeSource} is C{True}, then the source file will be removed after it is 483 successfully encrypted. 484 485 Currently, only the C{"gpg"} encrypt mode is supported. 486 487 @param sourcePath: Absolute path of the source file to encrypt 488 @param encryptMode: Encryption mode (only "gpg" is allowed) 489 @param encryptTarget: Encryption target (GPG recipient) 490 @param backupUser: User that target files should be owned by 491 @param backupGroup: Group that target files should be owned by 492 @param removeSource: Indicates whether to remove the source file 493 494 @return: Path to the newly-created encrypted file. 495 496 @raise ValueError: If an invalid encrypt mode is passed in. 497 @raise IOError: If there is a problem accessing, encrypting or removing the source file. 498 """ 499 if not os.path.exists(sourcePath): 500 raise ValueError("Source path [%s] does not exist." % sourcePath) 501 if encryptMode == 'gpg': 502 encryptedPath = _encryptFileWithGpg(sourcePath, recipient=encryptTarget) 503 else: 504 raise ValueError("Unknown encrypt mode [%s]" % encryptMode) 505 changeOwnership(encryptedPath, backupUser, backupGroup) 506 if removeSource: 507 if os.path.exists(sourcePath): 508 try: 509 os.remove(sourcePath) 510 logger.debug("Completed removing old file [%s].", sourcePath) 511 except: 512 raise IOError("Failed to remove file [%s] after encrypting it." % (sourcePath)) 513 return encryptedPath
    514
    515 516 ################################# 517 # _encryptFileWithGpg() function 518 ################################# 519 520 -def _encryptFileWithGpg(sourcePath, recipient):
    521 """ 522 Encrypts the indicated source file using GPG. 523 524 The encrypted file will be in GPG's binary output format and will have the 525 same name as the source file plus a C{".gpg"} extension. The source file 526 will not be modified or removed by this function call. 527 528 @param sourcePath: Absolute path of file to be encrypted. 529 @param recipient: Recipient name to be passed to GPG's C{"-r"} option 530 531 @return: Path to the newly-created encrypted file. 532 533 @raise IOError: If there is a problem encrypting the file. 534 """ 535 encryptedPath = "%s.gpg" % sourcePath 536 command = resolveCommand(GPG_COMMAND) 537 args = [ "--batch", "--yes", "-e", "-r", recipient, "-o", encryptedPath, sourcePath, ] 538 result = executeCommand(command, args)[0] 539 if result != 0: 540 raise IOError("Error [%d] calling gpg to encrypt [%s]." % (result, sourcePath)) 541 if not os.path.exists(encryptedPath): 542 raise IOError("After call to [%s], encrypted file [%s] does not exist." % (command, encryptedPath)) 543 logger.debug("Completed encrypting file [%s] to [%s].", sourcePath, encryptedPath) 544 return encryptedPath
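The helper below (hypothetical, not part of Cedar Backup) rebuilds the exact argument list that C{_encryptFileWithGpg()} passes to gpg, which makes the resulting command easy to inspect without actually invoking gpg:

```python
def gpgEncryptArgs(sourcePath, recipient):
    # Rebuilds the argument list assembled in _encryptFileWithGpg()
    # above; the output file is the source path plus a ".gpg" extension.
    encryptedPath = "%s.gpg" % sourcePath
    return ["--batch", "--yes", "-e", "-r", recipient, "-o", encryptedPath, sourcePath]

print(gpgEncryptArgs("/staging/2016/02/13/collect.tar.gz", "Backup User"))
```

Prepending the resolved gpg command yields the full invocation, e.g. C{gpg --batch --yes -e -r "Backup User" -o file.gpg file}.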
    545
546 547 ################################# 548 # _confirmGpgRecipient() function 549 ################################# 550 551 -def _confirmGpgRecipient(recipient):
    552 """ 553 Confirms that a recipient's public key is known to GPG. 554 Throws an exception if there is a problem, or returns normally otherwise. 555 @param recipient: Recipient name 556 @raise IOError: If the recipient's public key is not known to GPG. 557 """ 558 command = resolveCommand(GPG_COMMAND) 559 args = [ "--batch", "-k", recipient, ] # should use --with-colons if the output will be parsed 560 result = executeCommand(command, args)[0] 561 if result != 0: 562 raise IOError("GPG unable to find public key for [%s]." % recipient)
    563

CedarBackup3-3.1.6/doc/interface/CedarBackup3.actions.stage-pysrc.html
    Package CedarBackup3 :: Package actions :: Module stage

    Source Code for Module CedarBackup3.actions.stage

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2004-2008,2010,2015 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python 3 (>= 3.4) 
     29  # Project  : Cedar Backup, release 3 
     30  # Purpose  : Implements the standard 'stage' action. 
     31  # 
     32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     33   
     34  ######################################################################## 
     35  # Module documentation 
     36  ######################################################################## 
     37   
     38  """ 
     39  Implements the standard 'stage' action. 
     40  @sort: executeStage 
     41  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     42  """ 
     43   
     44   
     45  ######################################################################## 
     46  # Imported modules 
     47  ######################################################################## 
     48   
     49  # System modules 
     50  import os 
     51  import time 
     52  import logging 
     53   
     54  # Cedar Backup modules 
     55  from CedarBackup3.peer import RemotePeer, LocalPeer 
     56  from CedarBackup3.util import getUidGid, changeOwnership, isStartOfWeek, isRunningAsRoot 
     57  from CedarBackup3.actions.constants import DIR_TIME_FORMAT, STAGE_INDICATOR 
     58  from CedarBackup3.actions.util import writeIndicatorFile 
     59   
     60   
     61  ######################################################################## 
     62  # Module-wide constants and variables 
     63  ######################################################################## 
     64   
     65  logger = logging.getLogger("CedarBackup3.log.actions.stage") 
     66   
     67   
     68  ######################################################################## 
     69  # Public functions 
     70  ######################################################################## 
     71   
     72  ########################## 
     73  # executeStage() function 
     74  ########################## 
     75   
    
    76 -def executeStage(configPath, options, config):
77 """ 78 Executes the stage backup action. 79 80 @note: The daily directory is derived once and then we stick with it, just 81 in case a backup happens to span midnight. 82 83 @note: As portions of the stage action are completed, we will write various 84 indicator files so that it's obvious what actions have been completed. Each 85 peer gets a stage indicator in its collect directory, and then the master 86 gets a stage indicator in its daily staging directory. The store process 87 uses the master's stage indicator to decide whether a directory is ready to 88 be stored. Currently, nothing uses the indicator at each peer, and it 89 exists for reference only. 90 91 @param configPath: Path to configuration file on disk. 92 @type configPath: String representing a path on disk. 93 94 @param options: Program command-line options. 95 @type options: Options object. 96 97 @param config: Program configuration. 98 @type config: Config object. 99 100 @raise ValueError: Under many generic error conditions 101 @raise IOError: If there are problems reading or writing files.
102 """ 103 logger.debug("Executing the 'stage' action.") 104 if config.options is None or config.stage is None: 105 raise ValueError("Stage configuration is not properly filled in.") 106 dailyDir = _getDailyDir(config) 107 localPeers = _getLocalPeers(config) 108 remotePeers = _getRemotePeers(config) 109 allPeers = localPeers + remotePeers 110 stagingDirs = _createStagingDirs(config, dailyDir, allPeers) 111 for peer in allPeers: 112 logger.info("Staging peer [%s].", peer.name) 113 ignoreFailures = _getIgnoreFailuresFlag(options, config, peer) 114 if not peer.checkCollectIndicator(): 115 if not ignoreFailures: 116 logger.error("Peer [%s] was not ready to be staged.", peer.name) 117 else: 118 logger.info("Peer [%s] was not ready to be staged.", peer.name) 119 continue 120 logger.debug("Found collect indicator.") 121 targetDir = stagingDirs[peer.name] 122 if isRunningAsRoot(): 123 # Since we're running as root, we can change ownership 124 ownership = getUidGid(config.options.backupUser, config.options.backupGroup) 125 logger.debug("Using target dir [%s], ownership [%d:%d].", targetDir, ownership[0], ownership[1]) 126 else: 127 # Non-root cannot change ownership, so don't set it 128 ownership = None 129 logger.debug("Using target dir [%s], ownership [None].", targetDir) 130 try: 131 count = peer.stagePeer(targetDir=targetDir, ownership=ownership) # note: utilize effective user's default umask 132 logger.info("Staged %d files for peer [%s].", count, peer.name) 133 peer.writeStageIndicator() 134 except (ValueError, IOError, OSError) as e: 135 logger.error("Error staging [%s]: %s", peer.name, e) 136 writeIndicatorFile(dailyDir, STAGE_INDICATOR, config.options.backupUser, config.options.backupGroup) 137 logger.info("Executed the 'stage' action successfully.")
    138 139 140 ######################################################################## 141 # Private utility functions 142 ######################################################################## 143 144 ################################ 145 # _createStagingDirs() function 146 ################################ 147
    148 -def _createStagingDirs(config, dailyDir, peers):
    149 """ 150 Creates staging directories as required. 151 152 The main staging directory is the passed in daily directory, something like 153 C{staging/2002/05/23}. Then, individual peers get their own directories, 154 i.e. C{staging/2002/05/23/host}. 155 156 @param config: Config object. 157 @param dailyDir: Daily staging directory. 158 @param peers: List of all configured peers. 159 160 @return: Dictionary mapping peer name to staging directory. 161 """ 162 mapping = {} 163 if os.path.isdir(dailyDir): 164 logger.warning("Staging directory [%s] already existed.", dailyDir) 165 else: 166 try: 167 logger.debug("Creating staging directory [%s].", dailyDir) 168 os.makedirs(dailyDir) 169 for path in [ dailyDir, os.path.join(dailyDir, ".."), os.path.join(dailyDir, "..", ".."), ]: 170 changeOwnership(path, config.options.backupUser, config.options.backupGroup) 171 except Exception as e: 172 raise Exception("Unable to create staging directory: %s" % e) 173 for peer in peers: 174 peerDir = os.path.join(dailyDir, peer.name) 175 mapping[peer.name] = peerDir 176 if os.path.isdir(peerDir): 177 logger.warning("Peer staging directory [%s] already existed.", peerDir) 178 else: 179 try: 180 logger.debug("Creating peer staging directory [%s].", peerDir) 181 os.makedirs(peerDir) 182 changeOwnership(peerDir, config.options.backupUser, config.options.backupGroup) 183 except Exception as e: 184 raise Exception("Unable to create staging directory: %s" % e) 185 return mapping
    186 187 188 ######################################################################## 189 # Private attribute "getter" functions 190 ######################################################################## 191 192 #################################### 193 # _getIgnoreFailuresFlag() function 194 #################################### 195
    196 -def _getIgnoreFailuresFlag(options, config, peer):
    197 """ 198 Gets the ignore failures flag based on options, configuration, and peer. 199 @param options: Options object 200 @param config: Configuration object 201 @param peer: Peer to check 202 @return: Whether to ignore stage failures for this peer 203 """ 204 logger.debug("Ignore failure mode for this peer: %s", peer.ignoreFailureMode) 205 if peer.ignoreFailureMode is None or peer.ignoreFailureMode == "none": 206 return False 207 elif peer.ignoreFailureMode == "all": 208 return True 209 else: 210 if options.full or isStartOfWeek(config.options.startingDay): 211 return peer.ignoreFailureMode == "weekly" 212 else: 213 return peer.ignoreFailureMode == "daily"
    214 215 216 ########################## 217 # _getDailyDir() function 218 ########################## 219
    220 -def _getDailyDir(config):
    221 """ 222 Gets the daily staging directory. 223 224 This is just a directory in the form C{staging/YYYY/MM/DD}, i.e. 225 C{staging/2000/10/07}, except it will be an absolute path based on 226 C{config.stage.targetDir}. 227 228 @param config: Config object 229 230 @return: Path of daily staging directory. 231 """ 232 dailyDir = os.path.join(config.stage.targetDir, time.strftime(DIR_TIME_FORMAT)) 233 logger.debug("Daily staging directory is [%s].", dailyDir) 234 return dailyDir
    235 236 237 ############################ 238 # _getLocalPeers() function 239 ############################ 240
    241 -def _getLocalPeers(config):
    242 """ 243 Return a list of L{LocalPeer} objects based on configuration. 244 @param config: Config object. 245 @return: List of L{LocalPeer} objects. 246 """ 247 localPeers = [] 248 configPeers = None 249 if config.stage.hasPeers(): 250 logger.debug("Using list of local peers from stage configuration.") 251 configPeers = config.stage.localPeers 252 elif config.peers is not None and config.peers.hasPeers(): 253 logger.debug("Using list of local peers from peers configuration.") 254 configPeers = config.peers.localPeers 255 if configPeers is not None: 256 for peer in configPeers: 257 localPeer = LocalPeer(peer.name, peer.collectDir, peer.ignoreFailureMode) 258 localPeers.append(localPeer) 259 logger.debug("Found local peer: [%s]", localPeer.name) 260 return localPeers
    261 262 263 ############################# 264 # _getRemotePeers() function 265 ############################# 266
    267 -def _getRemotePeers(config):
    268 """ 269 Return a list of L{RemotePeer} objects based on configuration. 270 @param config: Config object. 271 @return: List of L{RemotePeer} objects. 272 """ 273 remotePeers = [] 274 configPeers = None 275 if config.stage.hasPeers(): 276 logger.debug("Using list of remote peers from stage configuration.") 277 configPeers = config.stage.remotePeers 278 elif config.peers is not None and config.peers.hasPeers(): 279 logger.debug("Using list of remote peers from peers configuration.") 280 configPeers = config.peers.remotePeers 281 if configPeers is not None: 282 for peer in configPeers: 283 remoteUser = _getRemoteUser(config, peer) 284 localUser = _getLocalUser(config) 285 rcpCommand = _getRcpCommand(config, peer) 286 remotePeer = RemotePeer(peer.name, peer.collectDir, config.options.workingDir, 287 remoteUser, rcpCommand, localUser, 288 ignoreFailureMode=peer.ignoreFailureMode) 289 remotePeers.append(remotePeer) 290 logger.debug("Found remote peer: [%s]", remotePeer.name) 291 return remotePeers
    292 293 294 ############################ 295 # _getRemoteUser() function 296 ############################ 297
    298 -def _getRemoteUser(config, remotePeer):
    299 """ 300 Gets the remote user associated with a remote peer. 301 Use peer's if possible, otherwise take from options section. 302 @param config: Config object. 303 @param remotePeer: Configuration-style remote peer object. 304 @return: Name of remote user associated with remote peer. 305 """ 306 if remotePeer.remoteUser is None: 307 return config.options.backupUser 308 return remotePeer.remoteUser
    309 310 311 ########################### 312 # _getLocalUser() function 313 ########################### 314
    315 -def _getLocalUser(config):
316 """ 317 Gets the local user that should be used for connecting to remote peers. 318 @param config: Config object. 319 @return: Name of local user that should be used 320 """ 321 if not isRunningAsRoot(): 322 return None 323 return config.options.backupUser
    324 325 326 ############################ 327 # _getRcpCommand() function 328 ############################ 329
    330 -def _getRcpCommand(config, remotePeer):
    331 """ 332 Gets the RCP command associated with a remote peer. 333 Use peer's if possible, otherwise take from options section. 334 @param config: Config object. 335 @param remotePeer: Configuration-style remote peer object. 336 @return: RCP command associated with remote peer. 337 """ 338 if remotePeer.rcpCommand is None: 339 return config.options.rcpCommand 340 return remotePeer.rcpCommand
    341

CedarBackup3-3.1.6/doc/interface/toc-CedarBackup3.actions.collect-module.html

    Module collect


    Functions

    executeCollect

    Variables

    __package__
    logger

CedarBackup3-3.1.6/doc/interface/CedarBackup3.release-pysrc.html
    Package CedarBackup3 :: Module release

    Source Code for Module CedarBackup3.release

     1  # -*- coding: iso-8859-1 -*- 
     2  # vim: set ft=python ts=3 sw=3 expandtab: 
     3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     4  # 
     5  #              C E D A R 
     6  #          S O L U T I O N S       "Software done right." 
     7  #           S O F T W A R E 
     8  # 
     9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    10  # 
    11  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
    12  # Language : Python 3 (>= 3.4) 
    13  # Project  : Cedar Backup, release 3 
    14  # Purpose  : Provides location to maintain release information. 
    15  # 
    16  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    17   
    18  """ 
    19  Provides location to maintain version information. 
    20   
    21  @sort: AUTHOR, EMAIL, COPYRIGHT, VERSION, DATE, URL 
    22   
    23  @var AUTHOR: Author of software. 
    24  @var EMAIL: Email address of author. 
    25  @var COPYRIGHT: Copyright date. 
    26  @var VERSION: Software version. 
    27  @var DATE: Software release date. 
    28  @var URL: URL of Cedar Backup webpage. 
    29   
    30  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
    31  """ 
    32   
    33  AUTHOR      = "Kenneth J. Pronovici" 
    34  EMAIL       = "pronovic@ieee.org" 
    35  COPYRIGHT   = "2004-2011,2013-2016" 
    36  VERSION     = "3.1.6" 
    37  DATE        = "13 Feb 2016" 
    38  URL         = "https://bitbucket.org/cedarsolutions/cedar-backup3" 
    39   
    

CedarBackup3-3.1.6/doc/interface/CedarBackup3-pysrc.html
    Package CedarBackup3

    Source Code for Package CedarBackup3

     1  # -*- coding: iso-8859-1 -*- 
     2  # vim: set ft=python ts=3 sw=3 expandtab: 
     3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     4  # 
     5  #              C E D A R 
     6  #          S O L U T I O N S       "Software done right." 
     7  #           S O F T W A R E 
     8  # 
     9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    10  # 
    11  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
    12  # Language : Python 3 (>= 3.4) 
    13  # Project  : Cedar Backup, release 3 
    14  # Purpose  : Provides package initialization 
    15  # 
    16  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    17   
    18  ######################################################################## 
    19  # Module documentation 
    20  ######################################################################## 
    21   
    22  """ 
    23  Implements local and remote backups to CD or DVD media. 
    24   
    25  Cedar Backup is a software package designed to manage system backups for a pool 
    26  of local and remote machines.  Cedar Backup understands how to back up 
    27  filesystem data as well as MySQL and PostgreSQL databases and Subversion 
    28  repositories.  It can also be easily extended to support other kinds of data 
    29  sources. 
    30   
    31  Cedar Backup is focused around weekly backups to a single CD or DVD disc, with 
    32  the expectation that the disc will be changed or overwritten at the beginning 
    33  of each week.  If your hardware is new enough, Cedar Backup can write 
    34  multisession discs, allowing you to add incremental data to a disc on a daily 
    35  basis. 
    36   
    37  Besides offering command-line utilities to manage the backup process, Cedar 
    38  Backup provides a well-organized library of backup-related functionality, 
    39  written in the Python programming language. 
    40   
    41  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
    42  """ 
    43   
    44   
    45  ######################################################################## 
    46  # Package initialization 
    47  ######################################################################## 
    48   
    49  # Using 'from CedarBackup3 import *' will just import the modules listed 
    50  # in the __all__ variable. 
    51   
    52  __all__ = [ 'actions', 'cli', 'config', 'extend', 'filesystem', 'knapsack', 
    53              'peer', 'release', 'tools', 'util', 'writers', ] 
    54   
    

CedarBackup3-3.1.6/doc/interface/CedarBackup3.actions.initialize-pysrc.html
    Package CedarBackup3 :: Package actions :: Module initialize

    Source Code for Module CedarBackup3.actions.initialize

     1  # -*- coding: iso-8859-1 -*- 
     2  # vim: set ft=python ts=3 sw=3 expandtab: 
     3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     4  # 
     5  #              C E D A R 
     6  #          S O L U T I O N S       "Software done right." 
     7  #           S O F T W A R E 
     8  # 
     9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    10  # 
    11  # Copyright (c) 2007,2010,2015 Kenneth J. Pronovici. 
    12  # All rights reserved. 
    13  # 
    14  # This program is free software; you can redistribute it and/or 
    15  # modify it under the terms of the GNU General Public License, 
    16  # Version 2, as published by the Free Software Foundation. 
    17  # 
    18  # This program is distributed in the hope that it will be useful, 
    19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
    20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
    21  # 
    22  # Copies of the GNU General Public License are available from 
    23  # the Free Software Foundation website, http://www.gnu.org/. 
    24  # 
    25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    26  # 
    27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
    28  # Language : Python 3 (>= 3.4) 
    29  # Project  : Cedar Backup, release 3 
    30  # Purpose  : Implements the standard 'initialize' action. 
    31  # 
    32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    33   
    34  ######################################################################## 
    35  # Module documentation 
    36  ######################################################################## 
    37   
    38  """ 
    39  Implements the standard 'initialize' action. 
    40  @sort: executeInitialize 
    41  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
    42  """ 
    43   
    44   
    45  ######################################################################## 
    46  # Imported modules 
    47  ######################################################################## 
    48   
    49  # System modules 
    50  import logging 
    51   
    52  # Cedar Backup modules 
    53  from CedarBackup3.actions.util import initializeMediaState 
    54   
    55   
    56  ######################################################################## 
    57  # Module-wide constants and variables 
    58  ######################################################################## 
    59   
    60  logger = logging.getLogger("CedarBackup3.log.actions.initialize") 
    61   
    62   
    63  ######################################################################## 
    64  # Public functions 
    65  ######################################################################## 
    66   
    67  ############################### 
    68  # executeInitialize() function 
    69  ############################### 
    70   
    
    71 -def executeInitialize(configPath, options, config):
    72 """ 73 Executes the initialize action. 74 75 The initialize action initializes the media currently in the writer 76 device so that Cedar Backup can recognize it later. This is an optional 77 step; it's only required if checkMedia is set on the store configuration. 78 79 @param configPath: Path to configuration file on disk. 80 @type configPath: String representing a path on disk. 81 82 @param options: Program command-line options. 83 @type options: Options object. 84 85 @param config: Program configuration. 86 @type config: Config object. 87 """ 88 logger.debug("Executing the 'initialize' action.") 89 if config.options is None or config.store is None: 90 raise ValueError("Store configuration is not properly filled in.") 91 initializeMediaState(config) 92 logger.info("Executed the 'initialize' action successfully.")
    93

CedarBackup3-3.1.6/doc/interface/toc-CedarBackup3.tools.span-module.html

    Module span


    Classes

    SpanOptions

    Functions

    cli

    Variables

    __package__
    logger

CedarBackup3-3.1.6/doc/interface/CedarBackup3.image-module.html
    Package CedarBackup3 :: Module image

    Module image


    Provides interface backwards compatibility.

    In Cedar Backup 2.10.0, a refactoring effort took place while adding code to support DVD hardware. All of the writer functionality was moved to the writers/ package. This mostly-empty file remains to preserve the Cedar Backup library interface.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Variables
      __package__ = 'CedarBackup3'
    CedarBackup3-3.1.6/doc/interface/CedarBackup3.xmlutil.Serializer-class.html0000664000175000017500000006535212657665545030264 0ustar pronovicpronovic00000000000000 CedarBackup3.xmlutil.Serializer
    Package CedarBackup3 :: Module xmlutil :: Class Serializer

    Class Serializer

    source code

    object --+
             |
            Serializer
    

    XML serializer class.

    This is a customized serializer that I hacked together based on what I found in the PyXML distribution. Basically, around release 2.7.0, the only reason I still had around a dependency on PyXML was for the PrettyPrint functionality, and that seemed pointless. So, I stripped the PrettyPrint code out of PyXML and hacked bits of it off until it did just what I needed and no more.

    This code started out being called PrintVisitor, but I decided it makes more sense just calling it a serializer. I've made nearly all of the methods private, and I've added a new high-level serialize() method rather than having clients call visit().

    Anyway, as a consequence of my hacking with it, this can't quite be called a complete XML serializer any more. I ripped out support for HTML and XHTML, and there is also no longer any support for namespaces (which I took out because this dragged along a lot of extra code, and Cedar Backup doesn't use namespaces). However, everything else should pretty much work as expected.
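    The indent-printing behavior described above can be approximated with the standard library alone. The sketch below is not Cedar Backup's actual implementation -- node handling is simplified to elements and text, and the sample XML is illustrative -- but it shows the same depth-first visit/serialize idea:

    ```python
    import io
    from xml.dom.minidom import parseString

    def serialize(xmlDom, stream, indent=3):
        """Depth-first visit of a DOM tree, writing indented XML (elements and text only)."""
        def visit(node, depth):
            pad = " " * (indent * depth)
            if node.nodeType == node.ELEMENT_NODE:
                stream.write("%s<%s>\n" % (pad, node.tagName))
                for child in node.childNodes:
                    visit(child, depth + 1)
                stream.write("%s</%s>\n" % (pad, node.tagName))
            elif node.nodeType == node.TEXT_NODE and node.data.strip():
                stream.write("%s%s\n" % (pad, node.data.strip()))
        stream.write('<?xml version="1.0" encoding="UTF-8"?>\n')
        visit(xmlDom.documentElement, 0)

    dom = parseString("<config><store><mediaType>cdrw-74</mediaType></store></config>")
    out = io.StringIO()
    serialize(dom, out)
    ```

    The real Serializer takes the stream and indent in its constructor rather than as serialize() arguments, and raises ValueError for unknown node types.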


    Copyright: This code, prior to customization, was part of the PyXML codebase, and before that was part of the 4DOM suite developed by Fourthought, Inc. In its original form, it was Copyright (c) 2000 Fourthought Inc, USA; All Rights Reserved.

    Instance Methods
     
    __init__(self, stream=sys.stdout, encoding='UTF-8', indent=3)
    Initialize a serializer.
    source code
     
    serialize(self, xmlDom)
    Serialize the passed-in XML document.
    source code
     
    _write(self, text) source code
     
    _tryIndent(self) source code
     
    _visit(self, node) source code
     
    _visitNodeList(self, node, exclude=None) source code
     
    _visitNamedNodeMap(self, node) source code
     
    _visitAttr(self, node) source code
     
    _visitProlog(self) source code
     
    _visitDocument(self, node) source code
     
    _visitDocumentFragment(self, node) source code
     
    _visitElement(self, node) source code
     
    _visitText(self, node) source code
     
    _visitDocumentType(self, doctype) source code
     
    _visitEntity(self, node)
    Visited from a NamedNodeMap in DocumentType
    source code
     
    _visitNotation(self, node)
    Visited from a NamedNodeMap in DocumentType
    source code
     
    _visitCDATASection(self, node) source code
     
    _visitComment(self, node) source code
     
    _visitEntityReference(self, node) source code
     
    _visitProcessingInstruction(self, node) source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Properties

    Inherited from object: __class__

    Method Details

    __init__(self, stream=sys.stdout, encoding='UTF-8', indent=3)
    (Constructor)

    source code 

    Initialize a serializer.

    Parameters:
    • stream - Stream to write output to.
    • encoding - Output encoding.
    • indent - Number of spaces to indent, as an integer
    Overrides: object.__init__

    serialize(self, xmlDom)

    source code 

    Serialize the passed-in XML document.

    Parameters:
    • xmlDom - XML DOM tree to serialize
    Raises:
    • ValueError - If there's an unknown node type in the document.

    _visit(self, node)

    source code 
    Raises:
    • ValueError - If there's an unknown node type in the document.

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.tools.span-pysrc.html0000664000175000017500000070663012657665546026613 0ustar pronovicpronovic00000000000000 CedarBackup3.tools.span
    Package CedarBackup3 :: Package tools :: Module span

    Source Code for Module CedarBackup3.tools.span

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2007-2008,2010,2015 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python 3 (>= 3.4) 
     29  # Project  : Cedar Backup, release 3 
     30  # Purpose  : Spans staged data among multiple discs 
     31  # 
     32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     33   
     34  ######################################################################## 
     35  # Notes 
     36  ######################################################################## 
     37   
     38  """ 
     39  Spans staged data among multiple discs 
     40   
     41  This is the Cedar Backup span tool.  It is intended for use by people who stage 
     42  more data than can fit on a single disc.  It allows a user to split staged data 
     43  among more than one disc.  It can't be an extension because it requires user 
     44  input when switching media. 
     45   
     46  Most configuration is taken from the Cedar Backup configuration file, 
     47  specifically the store section.  A few pieces of configuration are taken 
     48  directly from the user. 
     49   
     50  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     51  """ 
     52   
     53  ######################################################################## 
     54  # Imported modules and constants 
     55  ######################################################################## 
     56   
     57  # System modules 
     58  import sys 
     59  import os 
     60  import logging 
     61  import tempfile 
     62   
     63  # Cedar Backup modules 
     64  from CedarBackup3.release import AUTHOR, EMAIL, VERSION, DATE, COPYRIGHT 
     65  from CedarBackup3.util import displayBytes, convertSize, mount, unmount 
     66  from CedarBackup3.util import UNIT_SECTORS, UNIT_BYTES 
     67  from CedarBackup3.config import Config 
     68  from CedarBackup3.filesystem import BackupFileList, compareDigestMaps, normalizeDir 
     69  from CedarBackup3.cli import Options, setupLogging, setupPathResolver 
     70  from CedarBackup3.cli import DEFAULT_CONFIG, DEFAULT_LOGFILE, DEFAULT_OWNERSHIP, DEFAULT_MODE 
     71  from CedarBackup3.actions.constants import STORE_INDICATOR 
     72  from CedarBackup3.actions.util import createWriter 
     73  from CedarBackup3.actions.store import writeIndicatorFile 
     74  from CedarBackup3.actions.util import findDailyDirs 
     75  from CedarBackup3.util import Diagnostics 
     76   
     77   
     78  ######################################################################## 
     79  # Module-wide constants and variables 
     80  ######################################################################## 
     81   
     82  logger = logging.getLogger("CedarBackup3.log.tools.span") 
     83   
     84   
     85  ####################################################################### 
     86  # SpanOptions class 
     87  ####################################################################### 
     88   
    
    89 -class SpanOptions(Options):
    90 91 """ 92 Tool-specific command-line options. 93 94 Most of the cback3 command-line options are exactly what we need here -- 95 logfile path, permissions, verbosity, etc. However, we need to make a few 96 tweaks since we don't accept any actions. 97 98 Also, a few extra command line options that we accept are really ignored 99 underneath. I just don't care about that for a tool like this. 100 """ 101
    102 - def validate(self):
    103 """ 104 Validates command-line options represented by the object. 105 There are no validations here, because we don't use any actions. 106 @raise ValueError: If one of the validations fails. 107 """ 108 pass
    109 110 111 ####################################################################### 112 # Public functions 113 ####################################################################### 114 115 ################# 116 # cli() function 117 ################# 118
    119 -def cli():
    120 """ 121 Implements the command-line interface for the C{cback3-span} script. 122 123 Essentially, this is the "main routine" for the cback3-span script. It does 124 all of the argument processing for the script, and then also implements the 125 tool functionality. 126 127 This function looks pretty similar to C{CedarBackup3.cli.cli()}. It's not 128 easy to refactor this code to make it reusable and also readable, so I've 129 decided to just live with the duplication. 130 131 A different error code is returned for each type of failure: 132 133 - C{1}: The Python interpreter version is < 3.4 134 - C{2}: Error processing command-line arguments 135 - C{3}: Error configuring logging 136 - C{4}: Error parsing indicated configuration file 137 - C{5}: Backup was interrupted with a CTRL-C or similar 138 - C{6}: Error executing other parts of the script 139 140 @note: This script uses print rather than logging to the INFO level, because 141 it is interactive. Underlying Cedar Backup functionality uses the logging 142 mechanism exclusively. 143 144 @return: Error code as described above. 
145 """ 146 try: 147 if list(map(int, [sys.version_info[0], sys.version_info[1]])) < [3, 4]: 148 sys.stderr.write("Python 3 version 3.4 or greater required.\n") 149 return 1 150 except: 151 # sys.version_info isn't available before 2.0 152 sys.stderr.write("Python 3 version 3.4 or greater required.\n") 153 return 1 154 155 try: 156 options = SpanOptions(argumentList=sys.argv[1:]) 157 except Exception as e: 158 _usage() 159 sys.stderr.write(" *** Error: %s\n" % e) 160 return 2 161 162 if options.help: 163 _usage() 164 return 0 165 if options.version: 166 _version() 167 return 0 168 if options.diagnostics: 169 _diagnostics() 170 return 0 171 172 if options.stacktrace: 173 logfile = setupLogging(options) 174 else: 175 try: 176 logfile = setupLogging(options) 177 except Exception as e: 178 sys.stderr.write("Error setting up logging: %s\n" % e) 179 return 3 180 181 logger.info("Cedar Backup 'span' utility run started.") 182 logger.info("Options were [%s]", options) 183 logger.info("Logfile is [%s]", logfile) 184 185 if options.config is None: 186 logger.debug("Using default configuration file.") 187 configPath = DEFAULT_CONFIG 188 else: 189 logger.debug("Using user-supplied configuration file.") 190 configPath = options.config 191 192 try: 193 logger.info("Configuration path is [%s]", configPath) 194 config = Config(xmlPath=configPath) 195 setupPathResolver(config) 196 except Exception as e: 197 logger.error("Error reading or handling configuration: %s", e) 198 logger.info("Cedar Backup 'span' utility run completed with status 4.") 199 return 4 200 201 if options.stacktrace: 202 _executeAction(options, config) 203 else: 204 try: 205 _executeAction(options, config) 206 except KeyboardInterrupt: 207 logger.error("Backup interrupted.") 208 logger.info("Cedar Backup 'span' utility run completed with status 5.") 209 return 5 210 except Exception as e: 211 logger.error("Error executing backup: %s", e) 212 logger.info("Cedar Backup 'span' utility run completed with status 6.") 
213 return 6 214 215 logger.info("Cedar Backup 'span' utility run completed with status 0.") 216 return 0
    217 218 219 ####################################################################### 220 # Utility functions 221 ####################################################################### 222 223 #################### 224 # _usage() function 225 #################### 226
    227 -def _usage(fd=sys.stderr):
    228 """ 229 Prints usage information for the cback3-span script. 230 @param fd: File descriptor used to print information. 231 @note: The C{fd} is used rather than C{print} to facilitate unit testing. 232 """ 233 fd.write("\n") 234 fd.write(" Usage: cback3-span [switches]\n") 235 fd.write("\n") 236 fd.write(" Cedar Backup 'span' tool.\n") 237 fd.write("\n") 238 fd.write(" This Cedar Backup utility spans staged data between multiple discs.\n") 239 fd.write(" It is a utility, not an extension, and requires user interaction.\n") 240 fd.write("\n") 241 fd.write(" The following switches are accepted, mostly to set up underlying\n") 242 fd.write(" Cedar Backup functionality:\n") 243 fd.write("\n") 244 fd.write(" -h, --help Display this usage/help listing\n") 245 fd.write(" -V, --version Display version information\n") 246 fd.write(" -b, --verbose Print verbose output as well as logging to disk\n") 247 fd.write(" -c, --config Path to config file (default: %s)\n" % DEFAULT_CONFIG) 248 fd.write(" -l, --logfile Path to logfile (default: %s)\n" % DEFAULT_LOGFILE) 249 fd.write(" -o, --owner Logfile ownership, user:group (default: %s:%s)\n" % (DEFAULT_OWNERSHIP[0], DEFAULT_OWNERSHIP[1])) 250 fd.write(" -m, --mode Octal logfile permissions mode (default: %o)\n" % DEFAULT_MODE) 251 fd.write(" -O, --output Record some sub-command (i.e. tar) output to the log\n") 252 fd.write(" -d, --debug Write debugging information to the log (implies --output)\n") 253 fd.write(" -s, --stack Dump a Python stack trace instead of swallowing exceptions\n") 254 fd.write("\n")
    255 256 257 ###################### 258 # _version() function 259 ###################### 260
    261 -def _version(fd=sys.stdout):
    262 """ 263 Prints version information for the cback3-span script. 264 @param fd: File descriptor used to print information. 265 @note: The C{fd} is used rather than C{print} to facilitate unit testing. 266 """ 267 fd.write("\n") 268 fd.write(" Cedar Backup 'span' tool.\n") 269 fd.write(" Included with Cedar Backup version %s, released %s.\n" % (VERSION, DATE)) 270 fd.write("\n") 271 fd.write(" Copyright (c) %s %s <%s>.\n" % (COPYRIGHT, AUTHOR, EMAIL)) 272 fd.write(" See CREDITS for a list of included code and other contributors.\n") 273 fd.write(" This is free software; there is NO warranty. See the\n") 274 fd.write(" GNU General Public License version 2 for copying conditions.\n") 275 fd.write("\n") 276 fd.write(" Use the --help option for usage information.\n") 277 fd.write("\n")
    278 279 280 ########################## 281 # _diagnostics() function 282 ########################## 283
    284 -def _diagnostics(fd=sys.stdout):
    285 """ 286 Prints runtime diagnostics information. 287 @param fd: File descriptor used to print information. 288 @note: The C{fd} is used rather than C{print} to facilitate unit testing. 289 """ 290 fd.write("\n") 291 fd.write("Diagnostics:\n") 292 fd.write("\n") 293 Diagnostics().printDiagnostics(fd=fd, prefix=" ") 294 fd.write("\n")
    295 296 297 ############################ 298 # _executeAction() function 299 ############################ 300
    301 -def _executeAction(options, config):
    302 """ 303 Implements the guts of the cback3-span tool. 304 305 @param options: Program command-line options. 306 @type options: SpanOptions object. 307 308 @param config: Program configuration. 309 @type config: Config object. 310 311 @raise Exception: Under many generic error conditions 312 """ 313 print("") 314 print("================================================") 315 print(" Cedar Backup 'span' tool") 316 print("================================================") 317 print("") 318 print("This is the Cedar Backup span tool. It is used to split up staging") 319 print("data when that staging data does not fit onto a single disc.") 320 print("") 321 print("This utility operates using Cedar Backup configuration. Configuration") 322 print("specifies which staging directory to look at and which writer device") 323 print("and media type to use.") 324 print("") 325 if not _getYesNoAnswer("Continue?", default="Y"): 326 return 327 print("===") 328 329 print("") 330 print("Cedar Backup store configuration looks like this:") 331 print("") 332 print(" Source Directory...: %s" % config.store.sourceDir) 333 print(" Media Type.........: %s" % config.store.mediaType) 334 print(" Device Type........: %s" % config.store.deviceType) 335 print(" Device Path........: %s" % config.store.devicePath) 336 print(" Device SCSI ID.....: %s" % config.store.deviceScsiId) 337 print(" Drive Speed........: %s" % config.store.driveSpeed) 338 print(" Check Data Flag....: %s" % config.store.checkData) 339 print(" No Eject Flag......: %s" % config.store.noEject) 340 print("") 341 if not _getYesNoAnswer("Is this OK?", default="Y"): 342 return 343 print("===") 344 345 (writer, mediaCapacity) = _getWriter(config) 346 347 print("") 348 print("Please wait, indexing the source directory (this may take a while)...") 349 (dailyDirs, fileList) = _findDailyDirs(config.store.sourceDir) 350 print("===") 351 352 print("") 353 print("The following daily staging directories have not yet been written to 
disc:") 354 print("") 355 for dailyDir in dailyDirs: 356 print(" %s" % dailyDir) 357 358 totalSize = fileList.totalSize() 359 print("") 360 print("The total size of the data in these directories is %s." % displayBytes(totalSize)) 361 print("") 362 if not _getYesNoAnswer("Continue?", default="Y"): 363 return 364 print("===") 365 366 print("") 367 print("Based on configuration, the capacity of your media is %s." % displayBytes(mediaCapacity)) 368 369 print("") 370 print("Since estimates are not perfect and there is some uncertainty in") 371 print("media capacity calculations, it is good to have a \"cushion\",") 372 print("a percentage of capacity to set aside. The cushion reduces the") 373 print("capacity of your media, so a 1.5% cushion leaves 98.5% remaining.") 374 print("") 375 cushion = _getFloat("What cushion percentage?", default=4.5) 376 print("===") 377 378 realCapacity = ((100.0 - cushion)/100.0) * mediaCapacity 379 minimumDiscs = (totalSize/realCapacity) + 1 380 print("") 381 print("The real capacity, taking into account the %.2f%% cushion, is %s." % (cushion, displayBytes(realCapacity))) 382 print("It will take at least %d disc(s) to store your %s of data."
% (minimumDiscs, displayBytes(totalSize))) 383 print("") 384 if not _getYesNoAnswer("Continue?", default="Y"): 385 return 386 print("===") 387 388 happy = False 389 while not happy: 390 print("") 391 print("Which algorithm do you want to use to span your data across") 392 print("multiple discs?") 393 print("") 394 print("The following algorithms are available:") 395 print("") 396 print(" first....: The \"first-fit\" algorithm") 397 print(" best.....: The \"best-fit\" algorithm") 398 print(" worst....: The \"worst-fit\" algorithm") 399 print(" alternate: The \"alternate-fit\" algorithm") 400 print("") 401 print("If you don't like the results you will have a chance to try a") 402 print("different one later.") 403 print("") 404 algorithm = _getChoiceAnswer("Which algorithm?", "worst", [ "first", "best", "worst", "alternate", ]) 405 print("===") 406 407 print("") 408 print("Please wait, generating file lists (this may take a while)...") 409 spanSet = fileList.generateSpan(capacity=realCapacity, algorithm="%s_fit" % algorithm) 410 print("===") 411 412 print("") 413 print("Using the \"%s-fit\" algorithm, Cedar Backup can split your data" % algorithm) 414 print("into %d discs." 
% len(spanSet)) 415 print("") 416 counter = 0 417 for item in spanSet: 418 counter += 1 419 print("Disc %d: %d files, %s, %.2f%% utilization" % (counter, len(item.fileList), 420 displayBytes(item.size), item.utilization)) 421 print("") 422 if _getYesNoAnswer("Accept this solution?", default="Y"): 423 happy = True 424 print("===") 425 426 counter = 0 427 for spanItem in spanSet: 428 counter += 1 429 if counter == 1: 430 print("") 431 _getReturn("Please place the first disc in your backup device.\nPress return when ready.") 432 print("===") 433 else: 434 print("") 435 _getReturn("Please replace the disc in your backup device.\nPress return when ready.") 436 print("===") 437 _writeDisc(config, writer, spanItem) 438 439 _writeStoreIndicator(config, dailyDirs) 440 441 print("") 442 print("Completed writing all discs.")
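    The cushion arithmetic above is simple enough to check in isolation. This sketch repeats the same calculation with illustrative numbers; it uses math.ceil where the tool computes an integer-truncated quotient plus one, which agrees whenever the data size is not an exact multiple of the capacity:

    ```python
    import math

    def discEstimate(totalSize, mediaCapacity, cushion):
        """Reduce capacity by a percentage cushion, then estimate discs required."""
        realCapacity = ((100.0 - cushion) / 100.0) * mediaCapacity
        return realCapacity, math.ceil(totalSize / realCapacity)

    # Illustrative: 1.6 GB staged, 700 MB media, the default 4.5% cushion
    realCapacity, discs = discEstimate(1.6 * 1024 ** 3, 700 * 1024 ** 2, 4.5)
    ```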
    443 444 445 ############################ 446 # _findDailyDirs() function 447 ############################ 448
    449 -def _findDailyDirs(stagingDir):
    450 """ 451 Returns a list of all daily staging directories that have not yet been 452 stored. 453 454 The store indicator file C{cback.store} will be written to a daily staging 455 directory once that directory is written to disc. So, this function looks 456 at each daily staging directory within the configured staging directory, and 457 returns a list of those which do not contain the indicator file. 458 459 Returned is a tuple containing two items: a list of daily staging 460 directories, and a BackupFileList containing all files among those staging 461 directories. 462 463 @param stagingDir: Configured staging directory 464 465 @return: Tuple (staging dirs, backup file list) 466 """ 467 results = findDailyDirs(stagingDir, STORE_INDICATOR) 468 fileList = BackupFileList() 469 for item in results: 470 fileList.addDirContents(item) 471 return (results, fileList)
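    The indicator-file filtering described above is easy to sketch standalone. The real findDailyDirs in CedarBackup3.actions.util walks dated year/month/day directories; this flat single-level version is a simplification for illustration:

    ```python
    import os
    import tempfile

    STORE_INDICATOR = "cback.store"  # indicator file name, taken from the docstring above

    def findUnstoredDirs(stagingDir):
        """Return subdirectories of stagingDir that lack the store indicator file."""
        return sorted(
            os.path.join(stagingDir, entry)
            for entry in os.listdir(stagingDir)
            if os.path.isdir(os.path.join(stagingDir, entry))
            and not os.path.exists(os.path.join(stagingDir, entry, STORE_INDICATOR))
        )

    with tempfile.TemporaryDirectory() as staging:
        for day in ("12", "13"):
            os.mkdir(os.path.join(staging, day))
        open(os.path.join(staging, "12", STORE_INDICATOR), "w").close()  # "12" already stored
        unstored = findUnstoredDirs(staging)  # only "13" remains unstored
    ```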
    472 473 474 ################################## 475 # _writeStoreIndicator() function 476 ################################## 477
    478 -def _writeStoreIndicator(config, dailyDirs):
    479 """ 480 Writes a store indicator file into daily directories. 481 482 @param config: Config object. 483 @param dailyDirs: List of daily directories 484 """ 485 for dailyDir in dailyDirs: 486 writeIndicatorFile(dailyDir, STORE_INDICATOR, 487 config.options.backupUser, 488 config.options.backupGroup)
    489 490 491 ######################## 492 # _getWriter() function 493 ######################## 494
    495 -def _getWriter(config):
    496 """ 497 Gets a writer and media capacity from store configuration. 498 Returned is a writer and a media capacity in bytes. 499 @param config: Cedar Backup configuration 500 @return: Tuple of (writer, mediaCapacity) 501 """ 502 writer = createWriter(config) 503 mediaCapacity = convertSize(writer.media.capacity, UNIT_SECTORS, UNIT_BYTES) 504 return (writer, mediaCapacity)
    505 506 507 ######################## 508 # _writeDisc() function 509 ######################## 510
    511 -def _writeDisc(config, writer, spanItem):
    512 """ 513 Writes a span item to disc. 514 @param config: Cedar Backup configuration 515 @param writer: Writer to use 516 @param spanItem: Span item to write 517 """ 518 print("") 519 _discInitializeImage(config, writer, spanItem) 520 _discWriteImage(config, writer) 521 _discConsistencyCheck(config, writer, spanItem) 522 print("Write process is complete.") 523 print("===")
    524
    525 -def _discInitializeImage(config, writer, spanItem):
    526 """ 527 Initialize an ISO image for a span item. 528 @param config: Cedar Backup configuration 529 @param writer: Writer to use 530 @param spanItem: Span item to write 531 """ 532 complete = False 533 while not complete: 534 try: 535 print("Initializing image...") 536 writer.initializeImage(newDisc=True, tmpdir=config.options.workingDir) 537 for path in spanItem.fileList: 538 graftPoint = os.path.dirname(path.replace(config.store.sourceDir, "", 1)) 539 writer.addImageEntry(path, graftPoint) 540 complete = True 541 except KeyboardInterrupt as e: 542 raise e 543 except Exception as e: 544 logger.error("Failed to initialize image: %s", e) 545 if not _getYesNoAnswer("Retry initialization step?", default="Y"): 546 raise e 547 print("Ok, attempting retry.") 548 print("===") 549 print("Completed initializing image.")
    550
    551 -def _discWriteImage(config, writer):
    552 """ 553 Writes an ISO image for a span item. 554 @param config: Cedar Backup configuration 555 @param writer: Writer to use 556 """ 557 complete = False 558 while not complete: 559 try: 560 print("Writing image to disc...") 561 writer.writeImage() 562 complete = True 563 except KeyboardInterrupt as e: 564 raise e 565 except Exception as e: 566 logger.error("Failed to write image: %s", e) 567 if not _getYesNoAnswer("Retry this step?", default="Y"): 568 raise e 569 print("Ok, attempting retry.") 570 _getReturn("Please replace media if needed.\nPress return when ready.") 571 print("===") 572 print("Completed writing image.")
    573
    574 -def _discConsistencyCheck(config, writer, spanItem):
    575 """ 576 Run a consistency check on an ISO image for a span item. 577 @param config: Cedar Backup configuration 578 @param writer: Writer to use 579 @param spanItem: Span item to write 580 """ 581 if config.store.checkData: 582 complete = False 583 while not complete: 584 try: 585 print("Running consistency check...") 586 _consistencyCheck(config, spanItem.fileList) 587 complete = True 588 except KeyboardInterrupt as e: 589 raise e 590 except Exception as e: 591 logger.error("Consistency check failed: %s", e) 592 if not _getYesNoAnswer("Retry the consistency check?", default="Y"): 593 raise e 594 if _getYesNoAnswer("Rewrite the disc first?", default="N"): 595 print("Ok, attempting retry.") 596 _getReturn("Please replace the disc in your backup device.\nPress return when ready.") 597 print("===") 598 _discWriteImage(config, writer) 599 else: 600 print("Ok, attempting retry.") 601 print("===") 602 print("Completed consistency check.")
    603 604 605 ############################### 606 # _consistencyCheck() function 607 ############################### 608
    609 -def _consistencyCheck(config, fileList):
    610 """ 611 Runs a consistency check against media in the backup device. 612 613 The function mounts the device at a temporary mount point in the working 614 directory, and then compares the passed-in file list's digest map with the 615 one generated from the disc. The two lists should be identical. 616 617 If no exceptions are thrown, there were no problems with the consistency 618 check. 619 620 @warning: The implementation of this function is very UNIX-specific. 621 622 @param config: Config object. 623 @param fileList: BackupFileList whose contents to check against 624 625 @raise ValueError: If the check fails 626 @raise IOError: If there is a problem working with the media. 627 """ 628 logger.debug("Running consistency check.") 629 mountPoint = tempfile.mkdtemp(dir=config.options.workingDir) 630 try: 631 mount(config.store.devicePath, mountPoint, "iso9660") 632 discList = BackupFileList() 633 discList.addDirContents(mountPoint) 634 sourceList = BackupFileList() 635 sourceList.extend(fileList) 636 discListDigest = discList.generateDigestMap(stripPrefix=normalizeDir(mountPoint)) 637 sourceListDigest = sourceList.generateDigestMap(stripPrefix=normalizeDir(config.store.sourceDir)) 638 compareDigestMaps(sourceListDigest, discListDigest, verbose=True) 639 logger.info("Consistency check completed. No problems found.") 640 finally: 641 unmount(mountPoint, True, 5, 1) # try 5 times, and remove mount point when done
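    The digest-map comparison at the heart of the check can be sketched standalone. The real generateDigestMap and compareDigestMaps belong to CedarBackup3.filesystem and strip configured prefixes; the SHA-1 choice below is an assumption for illustration, as is the relative-path keying:

    ```python
    import hashlib
    import os
    import tempfile

    def generateDigestMap(baseDir):
        """Map each file path (relative to baseDir) to a digest of its contents."""
        digests = {}
        for root, dirs, files in os.walk(baseDir):
            for name in files:
                path = os.path.join(root, name)
                with open(path, "rb") as f:
                    digests[os.path.relpath(path, baseDir)] = hashlib.sha1(f.read()).hexdigest()
        return digests

    # Identical trees (the staging source vs. the mounted disc) yield identical maps
    with tempfile.TemporaryDirectory() as source, tempfile.TemporaryDirectory() as mounted:
        for d in (source, mounted):
            with open(os.path.join(d, "file.txt"), "wb") as f:
                f.write(b"staged data")
        consistent = generateDigestMap(source) == generateDigestMap(mounted)
    ```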
    642 643 644 ######################################################################### 645 # User interface utilities 646 ######################################################################## 647
    648 -def _getYesNoAnswer(prompt, default):
    649 """ 650 Get a yes/no answer from the user. 651 The default will be placed at the end of the prompt. 652 A "Y" or "y" is considered yes, anything else no. 653 A blank (empty) response results in the default. 654 @param prompt: Prompt to show. 655 @param default: Default to set if the result is blank 656 @return: Boolean true/false corresponding to Y/N 657 """ 658 if default == "Y": 659 prompt = "%s [Y/n]: " % prompt 660 else: 661 prompt = "%s [y/N]: " % prompt 662 answer = input(prompt) 663 if answer in [ None, "", ]: 664 answer = default 665 if answer[0] in [ "Y", "y", ]: 666 return True 667 else: 668 return False
    669
    670 -def _getChoiceAnswer(prompt, default, validChoices):
    671 """ 672 Get a particular choice from the user. 673 The default will be placed at the end of the prompt. 674 The function loops until getting a valid choice. 675 A blank (empty) response results in the default. 676 @param prompt: Prompt to show. 677 @param default: Default to set if the result is None or blank. 678 @param validChoices: List of valid choices (strings) 679 @return: Valid choice from user. 680 """ 681 prompt = "%s [%s]: " % (prompt, default) 682 answer = input(prompt) 683 if answer in [ None, "", ]: 684 answer = default 685 while answer not in validChoices: 686 print("Choice must be one of %s" % validChoices) 687 answer = input(prompt) 688 return answer
    689
    690 -def _getFloat(prompt, default):
    691 """ 692 Get a floating point number from the user. 693 The default will be placed at the end of the prompt. 694 The function loops until getting a valid floating point number. 695 A blank (empty) response results in the default. 696 @param prompt: Prompt to show. 697 @param default: Default to set if the result is None or blank. 698 @return: Floating point number from user 699 """ 700 prompt = "%s [%.2f]: " % (prompt, default) 701 while True: 702 answer = input(prompt) 703 if answer in [ None, "" ]: 704 return default 705 else: 706 try: 707 return float(answer) 708 except ValueError: 709 print("Enter a floating point number.")
    710
    711 -def _getReturn(prompt):
    712 """ 713 Get a return key from the user. 714 @param prompt: Prompt to show. 715 """ 716 input(prompt)
    717 718 719 ######################################################################### 720 # Main routine 721 ######################################################################## 722 723 if __name__ == "__main__": 724 sys.exit(cli()) 725

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.config.StageConfig-class.html0000664000175000017500000010060212657665544030076 0ustar pronovicpronovic00000000000000 CedarBackup3.config.StageConfig
    Package CedarBackup3 :: Module config :: Class StageConfig

    Class StageConfig

    source code

    object --+
             |
            StageConfig
    

    Class representing a Cedar Backup stage configuration.

    The following restrictions exist on data in this class:

    • The target directory must be an absolute path
    • The list of local peers must contain only LocalPeer objects
    • The list of remote peers must contain only RemotePeer objects

    Note: Lists within this class are "unordered" for equality comparisons.
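    The absolute-path restriction above is enforced through the property setters listed below. A minimal standalone sketch of that pattern (hypothetical, not the real class, which also validates the peer lists):

    ```python
    import os

    class StageConfigSketch:
        """Illustrates the validated-property pattern; not the real StageConfig."""

        def __init__(self, targetDir=None):
            self.targetDir = targetDir  # routed through the validating setter

        @property
        def targetDir(self):
            """Directory to stage files into, by peer name."""
            return self._targetDir

        @targetDir.setter
        def targetDir(self, value):
            if value is not None and not os.path.isabs(value):
                raise ValueError("Target directory must be an absolute path.")
            self._targetDir = value

    config = StageConfigSketch(targetDir="/opt/backup/stage")
    try:
        StageConfigSketch(targetDir="relative/path")
        rejected = False
    except ValueError:
        rejected = True  # relative paths are refused at assignment time
    ```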

    Instance Methods
     
    __init__(self, targetDir=None, localPeers=None, remotePeers=None)
    Constructor for the StageConfig class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    __eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    hasPeers(self)
    Indicates whether any peers are filled into this object.
    source code
     
    _setTargetDir(self, value)
    Property target used to set the target directory.
    source code
     
    _getTargetDir(self)
    Property target used to get the target directory.
    source code
     
    _setLocalPeers(self, value)
    Property target used to set the local peers list.
    source code
     
    _getLocalPeers(self)
    Property target used to get the local peers list.
    source code
     
    _setRemotePeers(self, value)
    Property target used to set the remote peers list.
    source code
     
    _getRemotePeers(self)
    Property target used to get the remote peers list.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties
      targetDir
    Directory to stage files into, by peer name.
      localPeers
    List of local peers.
      remotePeers
    List of remote peers.

    Inherited from object: __class__

Method Details

    __init__(self, targetDir=None, localPeers=None, remotePeers=None)
    (Constructor)

    source code 

    Constructor for the StageConfig class.

    Parameters:
    • targetDir - Directory to stage files into, by peer name.
    • localPeers - List of local peers.
    • remotePeers - List of remote peers.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    hasPeers(self)

    source code 

    Indicates whether any peers are filled into this object.

    Returns:
    Boolean true if any local or remote peers are filled in, false otherwise.

    _setTargetDir(self, value)

    source code 

    Property target used to set the target directory. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setLocalPeers(self, value)

    source code 

    Property target used to set the local peers list. Either the value must be None or each element must be a LocalPeer.

    Raises:
• ValueError - If an element of the value is not a LocalPeer.

    _setRemotePeers(self, value)

    source code 

    Property target used to set the remote peers list. Either the value must be None or each element must be a RemotePeer.

    Raises:
    • ValueError - If the value is not a RemotePeer

Property Details

    targetDir

    Directory to stage files into, by peer name.

    Get Method:
    _getTargetDir(self) - Property target used to get the target directory.
    Set Method:
    _setTargetDir(self, value) - Property target used to set the target directory.

    localPeers

    List of local peers.

    Get Method:
    _getLocalPeers(self) - Property target used to get the local peers list.
    Set Method:
    _setLocalPeers(self, value) - Property target used to set the local peers list.

    remotePeers

    List of remote peers.

    Get Method:
    _getRemotePeers(self) - Property target used to get the remote peers list.
    Set Method:
    _setRemotePeers(self, value) - Property target used to set the remote peers list.

CedarBackup3-3.1.6/doc/interface/CedarBackup3.writer-module.html: CedarBackup3.writer
    Package CedarBackup3 :: Module writer

    Module writer

    source code

    Provides interface backwards compatibility.

    In Cedar Backup 2.10.0, a refactoring effort took place while adding code to support DVD hardware. All of the writer functionality was moved to the writers/ package. This mostly-empty file remains to preserve the Cedar Backup library interface.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Variables
      __package__ = 'CedarBackup3'
CedarBackup3-3.1.6/doc/interface/CedarBackup3.config.CollectFile-class.html: CedarBackup3.config.CollectFile
    Package CedarBackup3 :: Module config :: Class CollectFile

    Class CollectFile

    source code

    object --+
             |
            CollectFile
    

    Class representing a Cedar Backup collect file.

    The following restrictions exist on data in this class:

Instance Methods
     
    __init__(self, absolutePath=None, collectMode=None, archiveMode=None)
    Constructor for the CollectFile class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    __eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    _setAbsolutePath(self, value)
    Property target used to set the absolute path.
    source code
     
    _getAbsolutePath(self)
    Property target used to get the absolute path.
    source code
     
    _setCollectMode(self, value)
    Property target used to set the collect mode.
    source code
     
    _getCollectMode(self)
    Property target used to get the collect mode.
    source code
     
    _setArchiveMode(self, value)
    Property target used to set the archive mode.
    source code
     
    _getArchiveMode(self)
    Property target used to get the archive mode.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties
      absolutePath
    Absolute path of the file to collect.
      collectMode
    Overridden collect mode for this file.
      archiveMode
    Overridden archive mode for this file.

    Inherited from object: __class__

Method Details

    __init__(self, absolutePath=None, collectMode=None, archiveMode=None)
    (Constructor)

    source code 

    Constructor for the CollectFile class.

    Parameters:
    • absolutePath - Absolute path of the file to collect.
    • collectMode - Overridden collect mode for this file.
    • archiveMode - Overridden archive mode for this file.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setAbsolutePath(self, value)

    source code 

    Property target used to set the absolute path. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setCollectMode(self, value)

    source code 

    Property target used to set the collect mode. If not None, the mode must be one of the values in VALID_COLLECT_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setArchiveMode(self, value)

    source code 

    Property target used to set the archive mode. If not None, the mode must be one of the values in VALID_ARCHIVE_MODES.

    Raises:
    • ValueError - If the value is not valid.

Property Details

    absolutePath

    Absolute path of the file to collect.

    Get Method:
    _getAbsolutePath(self) - Property target used to get the absolute path.
    Set Method:
    _setAbsolutePath(self, value) - Property target used to set the absolute path.

    collectMode

    Overridden collect mode for this file.

    Get Method:
    _getCollectMode(self) - Property target used to get the collect mode.
    Set Method:
    _setCollectMode(self, value) - Property target used to set the collect mode.

    archiveMode

    Overridden archive mode for this file.

    Get Method:
    _getArchiveMode(self) - Property target used to get the archive mode.
    Set Method:
    _setArchiveMode(self, value) - Property target used to set the archive mode.

CedarBackup3-3.1.6/doc/interface/CedarBackup3.knapsack-pysrc.html: CedarBackup3.knapsack
    Package CedarBackup3 :: Module knapsack

    Source Code for Module CedarBackup3.knapsack

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2004-2005,2010,2015 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python 3 (>= 3.4) 
     29  # Project  : Cedar Backup, release 3 
     30  # Purpose  : Provides knapsack algorithms used for "fit" decisions 
     31  # 
     32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     33   
     34  ######## 
     35  # Notes 
     36  ######## 
     37   
     38  """ 
     39  Provides the implementation for various knapsack algorithms. 
     40   
     41  Knapsack algorithms are "fit" algorithms, used to take a set of "things" and 
     42  decide on the optimal way to fit them into some container.  The focus of this 
     43  code is to fit files onto a disc, although the interface (in terms of item, 
     44  item size and capacity size, with no units) is generic enough that it can 
     45  be applied to items other than files. 
     46   
     47  All of the algorithms implemented below assume that "optimal" means "use up as 
     48  much of the disc's capacity as possible", but each produces slightly different 
     49  results.  For instance, the best fit and first fit algorithms tend to include 
     50  fewer files than the worst fit and alternate fit algorithms, even if they use 
     51  the disc space more efficiently. 
     52   
     53  Usually, for a given set of circumstances, it will be obvious to a human which 
     54  algorithm is the right one to use, based on trade-offs between number of files 
     55  included and ideal space utilization.  It's a little more difficult to do this 
     56  programmatically.  For Cedar Backup's purposes (i.e. trying to fit a small 
     57  number of collect-directory tarfiles onto a disc), worst-fit is probably the 
     58  best choice if the goal is to include as many of the collect directories as 
     59  possible. 
     60   
     61  @sort: firstFit, bestFit, worstFit, alternateFit 
     62   
     63  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     64  """ 
     65   
     66  ####################################################################### 
     67  # Public functions 
     68  ####################################################################### 
     69   
     70  ###################### 
     71  # firstFit() function 
     72  ###################### 
     73   
    
 74  def firstFit(items, capacity):
 75
 76     """
 77     Implements the first-fit knapsack algorithm.
 78
 79     The first-fit algorithm proceeds through an unsorted list of items until
 80     running out of items or meeting capacity exactly.  If capacity is exceeded,
 81     the item that caused capacity to be exceeded is thrown away and the next one
 82     is tried.  This algorithm generally performs more poorly than the other
 83     algorithms both in terms of capacity utilization and item utilization, but
 84     can be as much as an order of magnitude faster on large lists of items
 85     because it doesn't require any sorting.
 86
 87     The "size" values in the items and capacity arguments must be comparable,
 88     but they are unitless from the perspective of this function.  Zero-sized
 89     items and capacity are considered degenerate cases.  If capacity is zero,
 90     no items fit, period, even if the items list contains zero-sized items.
 91
 92     The dictionary is indexed by its key, and then includes its key.  This
 93     seems kind of strange on first glance.  It works this way to facilitate
 94     easy sorting of the list on key if needed.
 95
 96     The function assumes that the list of items may be used destructively, if
 97     needed.  This avoids the overhead of having the function make a copy of the
 98     list, if this is not required.  Callers should pass C{items.copy()} if they
 99     do not want their version of the list modified.
100
101     The function returns a list of chosen items and the unitless amount of
102     capacity used by the items.
103
104     @param items: Items to operate on
105     @type items: dictionary, keyed on item, of C{(item, size)} tuples, item as string and size as integer
106
107     @param capacity: Capacity of container to fit to
108     @type capacity: integer
109
110     @returns: Tuple C{(items, used)} as described above
111     """
112
113     # Use dict since insert into dict is faster than list append
114     included = { }
115
116     # Search the list as it stands (arbitrary order)
117     used = 0
118     remaining = capacity
119     for key in list(items.keys()):
120        if remaining == 0:
121           break
122        if remaining - items[key][1] >= 0:
123           included[key] = None
124           used += items[key][1]
125           remaining -= items[key][1]
126
127     # Return results
128     return (list(included.keys()), used)
129
130
131  #####################
132  # bestFit() function
133  #####################
134
135  def bestFit(items, capacity):
136
137     """
138     Implements the best-fit knapsack algorithm.
139
140     The best-fit algorithm proceeds through a sorted list of items (sorted from
141     largest to smallest) until running out of items or meeting capacity exactly.
142     If capacity is exceeded, the item that caused capacity to be exceeded is
143     thrown away and the next one is tried.  The algorithm effectively includes
144     the minimum number of items possible in its search for optimal capacity
145     utilization.  For large lists of mixed-size items, it's not unusual to see
146     the algorithm achieve 100% capacity utilization by including fewer than 1%
147     of the items.  Probably because it often has to look at fewer of the items
148     before completing, it tends to be a little faster than the worst-fit or
149     alternate-fit algorithms.
150
151     The "size" values in the items and capacity arguments must be comparable,
152     but they are unitless from the perspective of this function.  Zero-sized
153     items and capacity are considered degenerate cases.  If capacity is zero,
154     no items fit, period, even if the items list contains zero-sized items.
155
156     The dictionary is indexed by its key, and then includes its key.  This
157     seems kind of strange on first glance.  It works this way to facilitate
158     easy sorting of the list on key if needed.
159
160     The function assumes that the list of items may be used destructively, if
161     needed.  This avoids the overhead of having the function make a copy of the
162     list, if this is not required.  Callers should pass C{items.copy()} if they
163     do not want their version of the list modified.
164
165     The function returns a list of chosen items and the unitless amount of
166     capacity used by the items.
167
168     @param items: Items to operate on
169     @type items: dictionary, keyed on item, of C{(item, size)} tuples, item as string and size as integer
170
171     @param capacity: Capacity of container to fit to
172     @type capacity: integer
173
174     @returns: Tuple C{(items, used)} as described above
175     """
176
177     # Use dict since insert into dict is faster than list append
178     included = { }
179
180     # Sort the list from largest to smallest
181     itemlist = list(items.items())
182     itemlist.sort(key=lambda x: x[1][1], reverse=True) # sort descending
183     keys = []
184     for item in itemlist:
185        keys.append(item[0])
186
187     # Search the list
188     used = 0
189     remaining = capacity
190     for key in keys:
191        if remaining == 0:
192           break
193        if remaining - items[key][1] >= 0:
194           included[key] = None
195           used += items[key][1]
196           remaining -= items[key][1]
197
198     # Return the results
199     return (list(included.keys()), used)
200
201
202  ######################
203  # worstFit() function
204  ######################
205
206  def worstFit(items, capacity):
207
208     """
209     Implements the worst-fit knapsack algorithm.
210
211     The worst-fit algorithm proceeds through a sorted list of items (sorted
212     from smallest to largest) until running out of items or meeting capacity
213     exactly.  If capacity is exceeded, the item that caused capacity to be
214     exceeded is thrown away and the next one is tried.  The algorithm
215     effectively includes the maximum number of items possible in its search for
216     optimal capacity utilization.  It tends to be somewhat slower than either
217     the best-fit or alternate-fit algorithm, probably because on average it has
218     to look at more items before completing.
219
220     The "size" values in the items and capacity arguments must be comparable,
221     but they are unitless from the perspective of this function.  Zero-sized
222     items and capacity are considered degenerate cases.  If capacity is zero,
223     no items fit, period, even if the items list contains zero-sized items.
224
225     The dictionary is indexed by its key, and then includes its key.  This
226     seems kind of strange on first glance.  It works this way to facilitate
227     easy sorting of the list on key if needed.
228
229     The function assumes that the list of items may be used destructively, if
230     needed.  This avoids the overhead of having the function make a copy of the
231     list, if this is not required.  Callers should pass C{items.copy()} if they
232     do not want their version of the list modified.
233
234     The function returns a list of chosen items and the unitless amount of
235     capacity used by the items.
236
237     @param items: Items to operate on
238     @type items: dictionary, keyed on item, of C{(item, size)} tuples, item as string and size as integer
239
240     @param capacity: Capacity of container to fit to
241     @type capacity: integer
242
243     @returns: Tuple C{(items, used)} as described above
244     """
245
246     # Use dict since insert into dict is faster than list append
247     included = { }
248
249     # Sort the list from smallest to largest
250     itemlist = list(items.items())
251     itemlist.sort(key=lambda x: x[1][1]) # sort ascending
252     keys = []
253     for item in itemlist:
254        keys.append(item[0])
255
256     # Search the list
257     used = 0
258     remaining = capacity
259     for key in keys:
260        if remaining == 0:
261           break
262        if remaining - items[key][1] >= 0:
263           included[key] = None
264           used += items[key][1]
265           remaining -= items[key][1]
266
267     # Return results
268     return (list(included.keys()), used)
269
270
271  ##########################
272  # alternateFit() function
273  ##########################
274
275  def alternateFit(items, capacity):
276
277     """
278     Implements the alternate-fit knapsack algorithm.
279
280     This algorithm (which I'm calling "alternate-fit" as in "alternate from one
281     to the other") tries to balance small and large items to achieve better
282     end-of-disk performance.  Instead of just working one direction through a
283     list, it alternately works from the start and end of a sorted list (sorted
284     from smallest to largest), throwing away any item which causes capacity to
285     be exceeded.  The algorithm tends to be slower than the best-fit and
286     first-fit algorithms, and slightly faster than the worst-fit algorithm,
287     probably because of the number of items it considers on average before
288     completing.  It often achieves slightly better capacity utilization than the
289     worst-fit algorithm, while including slightly fewer items.
290
291     The "size" values in the items and capacity arguments must be comparable,
292     but they are unitless from the perspective of this function.  Zero-sized
293     items and capacity are considered degenerate cases.  If capacity is zero,
294     no items fit, period, even if the items list contains zero-sized items.
295
296     The dictionary is indexed by its key, and then includes its key.  This
297     seems kind of strange on first glance.  It works this way to facilitate
298     easy sorting of the list on key if needed.
299
300     The function assumes that the list of items may be used destructively, if
301     needed.  This avoids the overhead of having the function make a copy of the
302     list, if this is not required.  Callers should pass C{items.copy()} if they
303     do not want their version of the list modified.
304
305     The function returns a list of chosen items and the unitless amount of
306     capacity used by the items.
307
308     @param items: Items to operate on
309     @type items: dictionary, keyed on item, of C{(item, size)} tuples, item as string and size as integer
310
311     @param capacity: Capacity of container to fit to
312     @type capacity: integer
313
314     @returns: Tuple C{(items, used)} as described above
315     """
316
317     # Use dict since insert into dict is faster than list append
318     included = { }
319
320     # Sort the list from smallest to largest
321     itemlist = list(items.items())
322     itemlist.sort(key=lambda x: x[1][1]) # sort ascending
323     keys = []
324     for item in itemlist:
325        keys.append(item[0])
326
327     # Search the list
328     used = 0
329     remaining = capacity
330
331     front = keys[0:len(keys)//2]
332     back = keys[len(keys)//2:len(keys)]
333     back.reverse()
334
335     i = 0
336     j = 0
337
338     while remaining > 0 and (i < len(front) or j < len(back)):
339        if i < len(front):
340           if remaining - items[front[i]][1] >= 0:
341              included[front[i]] = None
342              used += items[front[i]][1]
343              remaining -= items[front[i]][1]
344           i += 1
345        if j < len(back):
346           if remaining - items[back[j]][1] >= 0:
347              included[back[j]] = None
348              used += items[back[j]][1]
349              remaining -= items[back[j]][1]
350           j += 1
351
352     # Return results
353     return (list(included.keys()), used)
354
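The module docstring above recommends worst-fit for Cedar Backup's use case (fitting a small number of collect-directory tarfiles onto a disc). Here is a condensed, self-contained restatement of the worstFit() selection loop, demonstrated on a small item dictionary using the documented {key: (item, size)} interface; the tarfile names are made up for illustration:

```python
def worst_fit(items, capacity):
    # Sort keys smallest-first, then greedily include anything that
    # still fits, skipping items that would exceed capacity.
    included = {}
    keys = [k for k, _ in sorted(items.items(), key=lambda x: x[1][1])]
    used = 0
    remaining = capacity
    for key in keys:
        if remaining == 0:
            break
        if remaining - items[key][1] >= 0:
            included[key] = None
            used += items[key][1]
            remaining -= items[key][1]
    return (list(included.keys()), used)

# Hypothetical collect-directory tarfiles with unitless sizes.
items = {
    "etc.tar.gz": ("etc.tar.gz", 10),
    "home.tar.gz": ("home.tar.gz", 40),
    "var.tar.gz": ("var.tar.gz", 25),
}
chosen, used = worst_fit(items, 50)
print(chosen, used)  # smallest items first: ['etc.tar.gz', 'var.tar.gz'] 35
```

Note how worst-fit maximizes the number of included items (two of three here), while best-fit on the same input would take home.tar.gz first and include only two items with different utilization.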

CedarBackup3-3.1.6/doc/interface/CedarBackup3.actions-module.html: CedarBackup3.actions
    Package CedarBackup3 :: Package actions

    Package actions

    source code

    Cedar Backup actions.

This package contains code related to the official Cedar Backup actions (collect, stage, store, purge, rebuild, and validate).

    The action modules consist of mostly "glue" code that uses other lower-level functionality to actually implement a backup. There is one module for each high-level backup action, plus a module that provides shared constants.

All of the public action functions implement the Cedar Backup Extension Architecture Interface, i.e. the same interface that extensions implement.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Submodules

Variables
      __package__ = None
CedarBackup3-3.1.6/doc/interface/toc-CedarBackup3.peer-module.html: peer

    Module peer


    Classes

    LocalPeer
    RemotePeer

    Variables

    DEF_CBACK_COMMAND
    DEF_COLLECT_INDICATOR
    DEF_RCP_COMMAND
    DEF_RSH_COMMAND
    DEF_STAGE_INDICATOR
    SU_COMMAND
    __package__
    logger

CedarBackup3-3.1.6/doc/interface/CedarBackup3.config.ActionHook-class.html: CedarBackup3.config.ActionHook
    Package CedarBackup3 :: Module config :: Class ActionHook

    Class ActionHook

    source code

    object --+
             |
            ActionHook
    
    Known Subclasses:

    Class representing a hook associated with an action.

    A hook associated with an action is a shell command to be executed either before or after a named action is executed.

    The following restrictions exist on data in this class:

    • The action name must be a non-empty string matching ACTION_NAME_REGEX
    • The shell command must be a non-empty string.

    The internal before and after instance variables are always set to False in this parent class.
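The action-name restriction above can be checked with a small sketch. The page says the name must match ACTION_NAME_REGEX and consist only of lower-case letters and digits; the exact regex is not shown here, so the pattern below is an assumption based on that stated rule, not the library's actual constant:

```python
import re

# Assumed pattern derived from the documented rule: non-empty,
# lower-case letters and digits only.
ACTION_NAME_REGEX = r"^[a-z0-9]+$"

def valid_action(name):
    return name is not None and re.match(ACTION_NAME_REGEX, name) is not None

print(valid_action("collect"))   # True
print(valid_action("Store!"))    # False: upper case and punctuation
print(valid_action(""))          # False: empty string is rejected
```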

Instance Methods
     
    __init__(self, action=None, command=None)
    Constructor for the ActionHook class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    __eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    _setAction(self, value)
    Property target used to set the action name.
    source code
     
    _getAction(self)
    Property target used to get the action name.
    source code
     
    _setCommand(self, value)
    Property target used to set the command.
    source code
     
    _getCommand(self)
    Property target used to get the command.
    source code
     
    _getBefore(self)
    Property target used to get the before flag.
    source code
     
    _getAfter(self)
    Property target used to get the after flag.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties
      action
    Action this hook is associated with.
      command
    Shell command to execute.
      before
    Indicates whether command should be executed before action.
      after
    Indicates whether command should be executed after action.

    Inherited from object: __class__

Method Details

    __init__(self, action=None, command=None)
    (Constructor)

    source code 

    Constructor for the ActionHook class.

    Parameters:
    • action - Action this hook is associated with
    • command - Shell command to execute
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setAction(self, value)

    source code 

    Property target used to set the action name. The value must be a non-empty string if it is not None. It must also consist only of lower-case letters and digits.

    Raises:
    • ValueError - If the value is an empty string.

    _setCommand(self, value)

    source code 

    Property target used to set the command. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

Property Details

    action

    Action this hook is associated with.

    Get Method:
    _getAction(self) - Property target used to get the action name.
    Set Method:
    _setAction(self, value) - Property target used to set the action name.

    command

    Shell command to execute.

    Get Method:
    _getCommand(self) - Property target used to get the command.
    Set Method:
    _setCommand(self, value) - Property target used to set the command.

    before

    Indicates whether command should be executed before action.

    Get Method:
    _getBefore(self) - Property target used to get the before flag.

    after

    Indicates whether command should be executed after action.

    Get Method:
    _getAfter(self) - Property target used to get the after flag.

CedarBackup3-3.1.6/doc/interface/CedarBackup3.tools-pysrc.html: CedarBackup3.tools
    Package CedarBackup3 :: Package tools

    Source Code for Package CedarBackup3.tools

     1  # -*- coding: iso-8859-1 -*- 
     2  # vim: set ft=python ts=3 sw=3 expandtab: 
     3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     4  # 
     5  #              C E D A R 
     6  #          S O L U T I O N S       "Software done right." 
     7  #           S O F T W A R E 
     8  # 
     9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    10  # 
    11  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
    12  # Language : Python 3 (>= 3.4) 
    13  # Project  : Official Cedar Backup Tools 
    14  # Purpose  : Provides package initialization 
    15  # 
    16  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    17   
    18  ######################################################################## 
    19  # Module documentation 
    20  ######################################################################## 
    21   
    22  """ 
    23  Official Cedar Backup Tools 
    24   
    25  This package provides official Cedar Backup tools.  Tools are things that feel 
    26  a little like extensions, but don't fit the normal mold of extensions.  For 
    27  instance, they might not be intended to run from cron, or might need to interact 
    28  dynamically with the user (i.e. accept user input). 
    29   
    30  Tools are usually scripts that are run directly from the command line, just 
    31  like the main C{cback3} script.  Like the C{cback3} script, the majority of a 
    32  tool is implemented in a .py module, and then the script just invokes the 
    33  module's C{cli()} function.  The actual scripts for tools are distributed in 
    34  the util/ directory. 
    35   
    36  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
    37  """ 
    38   
    39   
    40  ######################################################################## 
    41  # Package initialization 
    42  ######################################################################## 
    43   
    44  # Using 'from CedarBackup3.tools import *' will just import the modules listed 
    45  # in the __all__ variable. 
    46   
    47  __all__ = [ 'span', 'amazons3', ] 
    48   
    

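    The comment in the listing above explains that `from CedarBackup3.tools import *` imports only the modules named in `__all__`. A self-contained sketch of that mechanism, using a throwaway module registered under the hypothetical name "tools" so the star-import can resolve it:

    ```python
    import sys
    import types

    # Build a fake package object mimicking the __init__ shown above.
    tools = types.ModuleType("tools")
    tools.span = "span module"
    tools.amazons3 = "amazons3 module"
    tools.secret = "helper that should not be exported"
    tools.__all__ = ["span", "amazons3"]

    # Star-import resolves through sys.modules, so register the fake module.
    sys.modules["tools"] = tools
    ns = {}
    exec("from tools import *", ns)
    del sys.modules["tools"]

    # Only the names listed in __all__ were copied into the namespace.
    assert "secret" not in ns
    ```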
    CedarBackup3-3.1.6/doc/interface/CedarBackup3.release-module.html (CedarBackup3.release)
    Package CedarBackup3 :: Module release

    Module release

    source code

    Provides location to maintain version information.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Variables
      AUTHOR = 'Kenneth J. Pronovici'
    Author of software.
      EMAIL = 'pronovic@ieee.org'
    Email address of author.
      COPYRIGHT = '2004-2011,2013-2016'
    Copyright date.
      VERSION = '3.1.6'
    Software version.
      DATE = '13 Feb 2016'
    Software release date.
      URL = 'https://bitbucket.org/cedarsolutions/cedar-backup3'
    URL of Cedar Backup webpage.
      __package__ = None
    CedarBackup3-3.1.6/doc/interface/CedarBackup3.tools.span-module.html (CedarBackup3.tools.span)
    Package CedarBackup3 :: Package tools :: Module span

    Module span

    source code

    Spans staged data among multiple discs

    This is the Cedar Backup span tool. It is intended for use by people who stage more data than can fit on a single disc. It allows a user to split staged data among more than one disc. It can't be an extension because it requires user input when switching media.

    Most configuration is taken from the Cedar Backup configuration file, specifically the store section. A few pieces of configuration are taken directly from the user.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Classes
      SpanOptions
    Tool-specific command-line options.
    Functions
     
    cli()
    Implements the command-line interface for the cback3-span script.
    source code
     
    _usage(fd=sys.stdout)
    Prints usage information for the cback3-span script.
    source code
     
    _version(fd=sys.stdout)
    Prints version information for the cback3-span script.
    source code
     
    _diagnostics(fd=sys.stdout)
    Prints runtime diagnostics information.
    source code
     
    _executeAction(options, config)
    Implements the guts of the cback3-span tool.
    source code
     
    _findDailyDirs(stagingDir)
    Returns a list of all daily staging directories that have not yet been stored.
    source code
     
    _writeStoreIndicator(config, dailyDirs)
    Writes a store indicator file into daily directories.
    source code
     
    _getWriter(config)
    Gets a writer and media capacity from store configuration.
    source code
     
    _writeDisc(config, writer, spanItem)
    Writes a span item to disc.
    source code
     
    _discInitializeImage(config, writer, spanItem)
    Initialize an ISO image for a span item.
    source code
     
    _discWriteImage(config, writer)
    Writes an ISO image for a span item.
    source code
     
    _discConsistencyCheck(config, writer, spanItem)
    Run a consistency check on an ISO image for a span item.
    source code
     
    _consistencyCheck(config, fileList)
    Runs a consistency check against media in the backup device.
    source code
     
    _getYesNoAnswer(prompt, default)
    Get a yes/no answer from the user.
    source code
     
    _getChoiceAnswer(prompt, default, validChoices)
    Get a particular choice from the user.
    source code
     
    _getFloat(prompt, default)
    Get a floating point number from the user.
    source code
     
    _getReturn(prompt)
    Get a return key from the user.
    source code
    Variables
      logger = logging.getLogger("CedarBackup3.log.tools.span")
      __package__ = 'CedarBackup3.tools'
    Function Details

    cli()

    source code 

    Implements the command-line interface for the cback3-span script.

    Essentially, this is the "main routine" for the cback3-span script. It does all of the argument processing for the script, and then also implements the tool functionality.

    This function looks pretty similar to CedarBackup3.cli.cli(). It's not easy to refactor this code to make it reusable and also readable, so I've decided to just live with the duplication.

    A different error code is returned for each type of failure:

    • 1: The Python interpreter version is < 3.4
    • 2: Error processing command-line arguments
    • 3: Error configuring logging
    • 4: Error parsing indicated configuration file
    • 5: Backup was interrupted with a CTRL-C or similar
    • 6: Error executing other parts of the script
    Returns:
    Error code as described above.

    Note: This script uses print rather than logging to the INFO level, because it is interactive. Underlying Cedar Backup functionality uses the logging mechanism exclusively.
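    A wrapper script that invokes cback3-span can translate these documented exit codes into messages. The mapping below comes directly from the list above; the helper name is illustrative:

    ```python
    # Exit-code meanings for cback3-span, as documented above.
    EXIT_MEANINGS = {
        0: "success",
        1: "Python interpreter version is < 3.4",
        2: "error processing command-line arguments",
        3: "error configuring logging",
        4: "error parsing indicated configuration file",
        5: "backup was interrupted (CTRL-C or similar)",
        6: "error executing other parts of the script",
    }

    def describe_exit(status):
        """Map a cback3-span exit status to a human-readable description."""
        return EXIT_MEANINGS.get(status, "unknown failure")
    ```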

    _usage(fd=sys.stdout)

    source code 

    Prints usage information for the cback3-span script.

    Parameters:
    • fd - File descriptor used to print information.

    Note: The fd is used rather than print to facilitate unit testing.
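    The note above is worth illustrating: because these functions take an fd parameter instead of calling print directly, a unit test can pass an in-memory buffer and inspect the output. A sketch with a simplified usage function (the real _usage prints more):

    ```python
    import io
    import sys

    def usage(fd=sys.stdout):
        # Write to whatever file-like object the caller supplies.
        fd.write("Usage: cback3-span [switches]\n")

    # In a test, substitute an in-memory buffer for stdout.
    buf = io.StringIO()
    usage(fd=buf)
    assert buf.getvalue().startswith("Usage:")
    ```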

    _version(fd=sys.stdout)

    source code 

    Prints version information for the cback3-span script.

    Parameters:
    • fd - File descriptor used to print information.

    Note: The fd is used rather than print to facilitate unit testing.

    _diagnostics(fd=sys.stdout)

    source code 

    Prints runtime diagnostics information.

    Parameters:
    • fd - File descriptor used to print information.

    Note: The fd is used rather than print to facilitate unit testing.

    _executeAction(options, config)

    source code 

    Implements the guts of the cback3-span tool.

    Parameters:
    • options (SpanOptions object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • Exception - Under many generic error conditions

    _findDailyDirs(stagingDir)

    source code 

    Returns a list of all daily staging directories that have not yet been stored.

    The store indicator file cback.store will be written to a daily staging directory once that directory is written to disc. So, this function looks at each daily staging directory within the configured staging directory, and returns a list of those which do not contain the indicator file.

    Returned is a tuple containing two items: a list of daily staging directories, and a BackupFileList containing all files among those staging directories.

    Parameters:
    • stagingDir - Configured staging directory
    Returns:
    Tuple (staging dirs, backup file list)
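    The indicator-file scan described above can be sketched as follows. This is a simplified illustration, not the real implementation, and it returns only the directory list rather than the (dirs, BackupFileList) tuple:

    ```python
    import os

    INDICATOR = "cback.store"  # store indicator filename, per the description above

    def find_unstored_daily_dirs(staging_dir):
        """Return daily staging directories that lack the store indicator file."""
        unstored = []
        for entry in sorted(os.listdir(staging_dir)):
            daily = os.path.join(staging_dir, entry)
            # A directory without the indicator file has not yet been written to disc.
            if os.path.isdir(daily) and not os.path.exists(os.path.join(daily, INDICATOR)):
                unstored.append(daily)
        return unstored
    ```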

    _writeStoreIndicator(config, dailyDirs)

    source code 

    Writes a store indicator file into daily directories.

    Parameters:
    • config - Config object.
    • dailyDirs - List of daily directories

    _getWriter(config)

    source code 

    Gets a writer and media capacity from store configuration. Returned is a writer and a media capacity in bytes.

    Parameters:
    • config - Cedar Backup configuration
    Returns:
    Tuple of (writer, mediaCapacity)

    _writeDisc(config, writer, spanItem)

    source code 

    Writes a span item to disc.

    Parameters:
    • config - Cedar Backup configuration
    • writer - Writer to use
    • spanItem - Span item to write

    _discInitializeImage(config, writer, spanItem)

    source code 

    Initialize an ISO image for a span item.

    Parameters:
    • config - Cedar Backup configuration
    • writer - Writer to use
    • spanItem - Span item to write

    _discWriteImage(config, writer)

    source code 

    Writes an ISO image for a span item.

    Parameters:
    • config - Cedar Backup configuration
    • writer - Writer to use

    _discConsistencyCheck(config, writer, spanItem)

    source code 

    Run a consistency check on an ISO image for a span item.

    Parameters:
    • config - Cedar Backup configuration
    • writer - Writer to use
    • spanItem - Span item to write

    _consistencyCheck(config, fileList)

    source code 

    Runs a consistency check against media in the backup device.

    The function mounts the device at a temporary mount point in the working directory, and then compares the passed-in file list's digest map with the one generated from the disc. The two lists should be identical.

    If no exceptions are thrown, there were no problems with the consistency check.

    Parameters:
    • config - Config object.
    • fileList - BackupFileList whose contents to check against
    Raises:
    • ValueError - If the check fails
    • IOError - If there is a problem working with the media.

    Warning: The implementation of this function is very UNIX-specific.
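    The digest-map comparison described above can be sketched like this. In Cedar Backup the maps come from BackupFileList digest generation; here they are plain {path: digest} dictionaries, and the function name is illustrative:

    ```python
    def check_consistency(expected, actual):
        """Raise ValueError when two digest maps differ, mirroring the check above."""
        missing = set(expected) - set(actual)
        if missing:
            raise ValueError("Files missing from media: %s" % sorted(missing))
        mismatched = [path for path in expected if expected[path] != actual[path]]
        if mismatched:
            raise ValueError("Digest mismatch for: %s" % sorted(mismatched))
        # No exception means the media matches the staged files.
    ```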

    _getYesNoAnswer(prompt, default)

    source code 

    Get a yes/no answer from the user. The default will be placed at the end of the prompt. A "Y" or "y" is considered yes, anything else no. A blank (empty) response results in the default.

    Parameters:
    • prompt - Prompt to show.
    • default - Default to set if the result is blank
    Returns:
    Boolean true/false corresponding to Y/N
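    The answer-interpretation rule above ("Y"/"y" means yes, anything else no, blank means the default) is easy to capture in a small pure function, shown here with a hypothetical name and without the interactive prompt loop:

    ```python
    def interpret_yes_no(response, default):
        """Blank response -> default; 'Y' or 'y' -> True; anything else -> False."""
        response = response.strip()
        if response == "":
            response = default
        return response in ("Y", "y")
    ```

    Separating the interpretation from the input loop makes the rule trivially testable.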

    _getChoiceAnswer(prompt, default, validChoices)

    source code 

    Get a particular choice from the user. The default will be placed at the end of the prompt. The function loops until getting a valid choice. A blank (empty) response results in the default.

    Parameters:
    • prompt - Prompt to show.
    • default - Default to set if the result is None or blank.
    • validChoices - List of valid choices (strings)
    Returns:
    Valid choice from user.

    _getFloat(prompt, default)

    source code 

    Get a floating point number from the user. The default will be placed at the end of the prompt. The function loops until getting a valid floating point number. A blank (empty) response results in the default.

    Parameters:
    • prompt - Prompt to show.
    • default - Default to set if the result is None or blank.
    Returns:
    Floating point number from user
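    The loop-until-valid behavior described above can be sketched as follows. The input_fn parameter is an assumption added here so the loop can be driven by a test instead of a real terminal:

    ```python
    def get_float(prompt, default, input_fn=input):
        """Loop until a valid float is entered; a blank response yields the default."""
        while True:
            response = input_fn("%s [%s]: " % (prompt, default)).strip()
            if response == "":
                return float(default)
            try:
                return float(response)
            except ValueError:
                continue  # invalid input: ask again
    ```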

    _getReturn(prompt)

    source code 

    Get a return key from the user.

    Parameters:
    • prompt - Prompt to show.

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.cli-pysrc.html (CedarBackup3.cli)
    Package CedarBackup3 :: Module cli

    Source Code for Module CedarBackup3.cli

       1  # -*- coding: iso-8859-1 -*- 
       2  # vim: set ft=python ts=3 sw=3 expandtab: 
       3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
       4  # 
       5  #              C E D A R 
       6  #          S O L U T I O N S       "Software done right." 
       7  #           S O F T W A R E 
       8  # 
       9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      10  # 
      11  # Copyright (c) 2004-2007,2010,2015 Kenneth J. Pronovici. 
      12  # All rights reserved. 
      13  # 
      14  # This program is free software; you can redistribute it and/or 
      15  # modify it under the terms of the GNU General Public License, 
      16  # Version 2, as published by the Free Software Foundation. 
      17  # 
      18  # This program is distributed in the hope that it will be useful, 
      19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
      20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
      21  # 
      22  # Copies of the GNU General Public License are available from 
      23  # the Free Software Foundation website, http://www.gnu.org/. 
      24  # 
      25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      26  # 
      27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
      28  # Language : Python 3 (>= 3.4) 
      29  # Project  : Cedar Backup, release 3 
      30  # Purpose  : Provides command-line interface implementation. 
      31  # 
      32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      33   
      34  ######################################################################## 
      35  # Module documentation 
      36  ######################################################################## 
      37   
      38  """ 
      39  Provides command-line interface implementation for the cback3 script. 
      40   
      41  Summary 
      42  ======= 
      43   
      44     The functionality in this module encapsulates the command-line interface for 
      45     the cback3 script.  The cback3 script itself is very short, basically just an 
       46     invocation of one function implemented here.  That, in turn, makes it 
      47     simpler to validate the command line interface (for instance, it's easier to 
      48     run pychecker against a module, and unit tests are easier, too). 
      49   
      50     The objects and functions implemented in this module are probably not useful 
      51     to any code external to Cedar Backup.   Anyone else implementing their own 
      52     command-line interface would have to reimplement (or at least enhance) all 
      53     of this anyway. 
      54   
      55  Backwards Compatibility 
      56  ======================= 
      57   
      58     The command line interface has changed between Cedar Backup 1.x and Cedar 
      59     Backup 2.x.  Some new switches have been added, and the actions have become 
      60     simple arguments rather than switches (which is a much more standard command 
      61     line format).  Old 1.x command lines are generally no longer valid. 
      62   
      63  @var DEFAULT_CONFIG: The default configuration file. 
      64  @var DEFAULT_LOGFILE: The default log file path. 
      65  @var DEFAULT_OWNERSHIP: Default ownership for the logfile. 
      66  @var DEFAULT_MODE: Default file permissions mode on the logfile. 
      67  @var VALID_ACTIONS: List of valid actions. 
      68  @var COMBINE_ACTIONS: List of actions which can be combined with other actions. 
      69  @var NONCOMBINE_ACTIONS: List of actions which cannot be combined with other actions. 
      70   
      71  @sort: cli, Options, DEFAULT_CONFIG, DEFAULT_LOGFILE, DEFAULT_OWNERSHIP, 
      72         DEFAULT_MODE, VALID_ACTIONS, COMBINE_ACTIONS, NONCOMBINE_ACTIONS 
      73   
      74  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
      75  """ 
      76   
      77  ######################################################################## 
      78  # Imported modules 
      79  ######################################################################## 
      80   
      81  # System modules 
      82  import sys 
      83  import os 
      84  import logging 
      85  import getopt 
      86  from functools import total_ordering 
      87   
      88  # Cedar Backup modules 
      89  from CedarBackup3.release import AUTHOR, EMAIL, VERSION, DATE, COPYRIGHT 
      90  from CedarBackup3.customize import customizeOverrides 
      91  from CedarBackup3.util import DirectedGraph, PathResolverSingleton 
      92  from CedarBackup3.util import sortDict, splitCommandLine, executeCommand, getFunctionReference 
      93  from CedarBackup3.util import getUidGid, encodePath, Diagnostics 
      94  from CedarBackup3.config import Config 
      95  from CedarBackup3.peer import RemotePeer 
      96  from CedarBackup3.actions.collect import executeCollect 
      97  from CedarBackup3.actions.stage import executeStage 
      98  from CedarBackup3.actions.store import executeStore 
      99  from CedarBackup3.actions.purge import executePurge 
     100  from CedarBackup3.actions.rebuild import executeRebuild 
     101  from CedarBackup3.actions.validate import executeValidate 
     102  from CedarBackup3.actions.initialize import executeInitialize 
     103   
     104   
     105  ######################################################################## 
     106  # Module-wide constants and variables 
     107  ######################################################################## 
     108   
     109  logger = logging.getLogger("CedarBackup3.log.cli") 
     110   
     111  DISK_LOG_FORMAT    = "%(asctime)s --> [%(levelname)-7s] %(message)s" 
     112  DISK_OUTPUT_FORMAT = "%(message)s" 
     113  SCREEN_LOG_FORMAT  = "%(message)s" 
     114  SCREEN_LOG_STREAM  = sys.stdout 
     115  DATE_FORMAT        = "%Y-%m-%dT%H:%M:%S %Z" 
     116   
     117  DEFAULT_CONFIG     = "/etc/cback3.conf" 
     118  DEFAULT_LOGFILE    = "/var/log/cback3.log" 
     119  DEFAULT_OWNERSHIP  = [ "root", "adm", ] 
     120  DEFAULT_MODE       = 0o640 
     121   
     122  REBUILD_INDEX      = 0        # can't run with anything else, anyway 
     123  VALIDATE_INDEX     = 0        # can't run with anything else, anyway 
     124  INITIALIZE_INDEX   = 0        # can't run with anything else, anyway 
     125  COLLECT_INDEX      = 100 
     126  STAGE_INDEX        = 200 
     127  STORE_INDEX        = 300 
     128  PURGE_INDEX        = 400 
     129   
     130  VALID_ACTIONS      = [ "collect", "stage", "store", "purge", "rebuild", "validate", "initialize", "all", ] 
     131  COMBINE_ACTIONS    = [ "collect", "stage", "store", "purge", ] 
     132  NONCOMBINE_ACTIONS = [ "rebuild", "validate", "initialize", "all", ] 
     133   
     134  SHORT_SWITCHES     = "hVbqc:fMNl:o:m:OdsD" 
     135  LONG_SWITCHES      = [ 'help', 'version', 'verbose', 'quiet', 
     136                         'config=', 'full', 'managed', 'managed-only', 
     137                         'logfile=', 'owner=', 'mode=', 
     138                         'output', 'debug', 'stack', 'diagnostics', ] 
    
      139   
      140   
      141  ####################################################################### 
      142  # Public functions 
      143  ####################################################################### 
      144   
      145  ################# 
      146  # cli() function 
      147  ################# 
      148   
      149  def cli(): 
      150     """ 
      151     Implements the command-line interface for the C{cback3} script. 
      152   
      153     Essentially, this is the "main routine" for the cback3 script.  It does all 
      154     of the argument processing for the script, and then sets about executing the 
      155     indicated actions. 
      156   
      157     As a general rule, only the actions indicated on the command line will be 
      158     executed.  We will accept any of the built-in actions and any of the 
      159     configured extended actions (which makes action list verification a two- 
      160     step process). 
      161   
      162     The C{'all'} action has a special meaning: it means that the built-in set of 
      163     actions (collect, stage, store, purge) will all be executed, in that order. 
      164     Extended actions will be ignored as part of the C{'all'} action. 
      165   
      166     Raised exceptions always result in an immediate return.  Otherwise, we 
      167     generally return when all specified actions have been completed.  Actions 
      168     are ignored if the help, version or validate flags are set. 
      169   
      170     A different error code is returned for each type of failure: 
      171   
      172        - C{1}: The Python interpreter version is < 3.4 
      173        - C{2}: Error processing command-line arguments 
      174        - C{3}: Error configuring logging 
      175        - C{4}: Error parsing indicated configuration file 
      176        - C{5}: Backup was interrupted with a CTRL-C or similar 
      177        - C{6}: Error executing specified backup actions 
      178   
      179     @note: This function contains a good amount of logging at the INFO level, 
      180     because this is the right place to document high-level flow of control (i.e. 
      181     what the command-line options were, what config file was being used, etc.) 
      182   
      183     @note: We assume that anything that I{must} be seen on the screen is logged 
      184     at the ERROR level.  Errors that occur before logging can be configured are 
      185     written to C{sys.stderr}. 
      186   
      187     @return: Error code as described above. 
      188     """ 
      189     try: 
      190        if list(map(int, [sys.version_info[0], sys.version_info[1]])) < [3, 4]: 
      191           sys.stderr.write("Python 3 version 3.4 or greater required.\n") 
      192           return 1 
      193     except: 
      194        # sys.version_info isn't available before 2.0 
      195        sys.stderr.write("Python 3 version 3.4 or greater required.\n") 
      196        return 1 
      197   
      198     try: 
      199        options = Options(argumentList=sys.argv[1:]) 
      200        logger.info("Specified command-line actions: %s", options.actions) 
      201     except Exception as e: 
      202        _usage() 
      203        sys.stderr.write(" *** Error: %s\n" % e) 
      204        return 2 
      205   
      206     if options.help: 
      207        _usage() 
      208        return 0 
      209     if options.version: 
      210        _version() 
      211        return 0 
      212     if options.diagnostics: 
      213        _diagnostics() 
      214        return 0 
      215   
      216     if options.stacktrace: 
      217        logfile = setupLogging(options) 
      218     else: 
      219        try: 
      220           logfile = setupLogging(options) 
      221        except Exception as e: 
      222           sys.stderr.write("Error setting up logging: %s\n" % e) 
      223           return 3 
      224   
      225     logger.info("Cedar Backup run started.") 
      226     logger.info("Options were [%s]", options) 
      227     logger.info("Logfile is [%s]", logfile) 
      228     Diagnostics().logDiagnostics(method=logger.info) 
      229   
      230     if options.config is None: 
      231        logger.debug("Using default configuration file.") 
      232        configPath = DEFAULT_CONFIG 
      233     else: 
      234        logger.debug("Using user-supplied configuration file.") 
      235        configPath = options.config 
      236   
      237     executeLocal = True 
      238     executeManaged = False 
      239     if options.managedOnly: 
      240        executeLocal = False 
      241        executeManaged = True 
      242     if options.managed: 
      243        executeManaged = True 
      244     logger.debug("Execute local actions: %s", executeLocal) 
      245     logger.debug("Execute managed actions: %s", executeManaged) 
      246   
      247     try: 
      248        logger.info("Configuration path is [%s]", configPath) 
      249        config = Config(xmlPath=configPath) 
      250        customizeOverrides(config) 
      251        setupPathResolver(config) 
      252        actionSet = _ActionSet(options.actions, config.extensions, config.options, 
      253                               config.peers, executeManaged, executeLocal) 
      254     except Exception as e: 
      255        logger.error("Error reading or handling configuration: %s", e) 
      256        logger.info("Cedar Backup run completed with status 4.") 
      257        return 4 
      258   
      259     if options.stacktrace: 
      260        actionSet.executeActions(configPath, options, config) 
      261     else: 
      262        try: 
      263           actionSet.executeActions(configPath, options, config) 
      264        except KeyboardInterrupt: 
      265           logger.error("Backup interrupted.") 
      266           logger.info("Cedar Backup run completed with status 5.") 
      267           return 5 
      268        except Exception as e: 
      269           logger.error("Error executing backup: %s", e) 
      270           logger.info("Cedar Backup run completed with status 6.") 
      271           return 6 
      272   
      273     logger.info("Cedar Backup run completed with status 0.") 
      274     return 0 
      275   
      276   
      277  ######################################################################## 
      278  # Action-related class definition 
      279  ######################################################################## 
      280   
      281  #################### 
      282  # _ActionItem class 
      283  #################### 
      284   
      285  @total_ordering 
      286  class _ActionItem(object): 
      287   
      288     """ 
      289     Class representing a single action to be executed. 
      290   
      291     This class represents a single named action to be executed, and understands 
      292     how to execute that action. 
      293   
      294     The built-in actions will use only the options and config values.  We also 
      295     pass in the config path so that extension modules can re-parse configuration 
      296     if they want to, to add in extra information. 
      297   
      298     This class is also where pre-action and post-action hooks are executed.  An 
      299     action item is instantiated in terms of optional pre- and post-action hook 
      300     objects (config.ActionHook), which are then executed at the appropriate time 
      301     (if set). 
      302   
      303     @note: The comparison operators for this class have been implemented to only 
      304     compare based on the index and SORT_ORDER value, and ignore all other 
      305     values.  This is so that the action set list can be easily sorted first by 
      306     type (_ActionItem before _ManagedActionItem) and then by index within type. 
      307   
      308     @cvar SORT_ORDER: Defines a sort order to order properly between types. 
      309     """ 
      310   
      311     SORT_ORDER = 0 
      312   
      313     def __init__(self, index, name, preHooks, postHooks, function): 
      314        """ 
      315        Default constructor. 
      316   
      317        It's OK to pass C{None} for C{index}, C{preHooks} or C{postHooks}, but not 
      318        for C{name}. 
      319   
      320        @param index: Index of the item (or C{None}). 
      321        @param name: Name of the action that is being executed. 
      322        @param preHooks: List of pre-action hooks in terms of an C{ActionHook} object, or C{None}. 
      323        @param postHooks: List of post-action hooks in terms of an C{ActionHook} object, or C{None}. 
      324        @param function: Reference to function associated with item. 
      325        """ 
      326        self.index = index 
      327        self.name = name 
      328        self.preHooks = preHooks 
      329        self.postHooks = postHooks 
      330        self.function = function 
      331   
      332     def __eq__(self, other): 
      333        """Equals operator, implemented in terms of original Python 2 compare operator.""" 
      334        return self.__cmp__(other) == 0 
      335   
      336     def __lt__(self, other): 
      337        """Less-than operator, implemented in terms of original Python 2 compare operator.""" 
      338        return self.__cmp__(other) < 0 
      339   
      340     def __gt__(self, other): 
      341        """Greater-than operator, implemented in terms of original Python 2 compare operator.""" 
      342        return self.__cmp__(other) > 0 
      343   
      344     def __cmp__(self, other): 
      345        """ 
      346        Original Python 2 comparison operator. 
      347        The only thing we compare is the item's index. 
      348        @param other: Other object to compare to. 
      349        @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
      350        """ 
      351        if other is None: 
      352           return 1 
      353        if self.index != other.index: 
      354           if int(self.index or 0) < int(other.index or 0): 
      355              return -1 
      356           else: 
      357              return 1 
      358        else: 
      359           if self.SORT_ORDER != other.SORT_ORDER: 
      360              if int(self.SORT_ORDER or 0) < int(other.SORT_ORDER or 0): 
      361                 return -1 
      362              else: 
      363                 return 1 
      364        return 0 
      365   
      366     def executeAction(self, configPath, options, config): 
      367        """ 
      368        Executes the action associated with an item, including hooks. 
      369   
      370        See class notes for more details on how the action is executed. 
      371   
      372        @param configPath: Path to configuration file on disk. 
      373        @param options: Command-line options to be passed to action. 
      374        @param config: Parsed configuration to be passed to action. 
      375   
      376        @raise Exception: If there is a problem executing the action. 
      377        """ 
      378        logger.debug("Executing [%s] action.", self.name) 
      379        if self.preHooks is not None: 
      380           for hook in self.preHooks: 
      381              self._executeHook("pre-action", hook) 
      382        self._executeAction(configPath, options, config) 
      383        if self.postHooks is not None: 
      384           for hook in self.postHooks: 
      385              self._executeHook("post-action", hook) 
      386   
      387     def _executeAction(self, configPath, options, config): 
      388        """ 
      389        Executes the action, specifically the function associated with the action. 
      390        @param configPath: Path to configuration file on disk. 
      391        @param options: Command-line options to be passed to action. 
      392        @param config: Parsed configuration to be passed to action. 
      393        """ 
      394        name = "%s.%s" % (self.function.__module__, self.function.__name__) 
      395        logger.debug("Calling action function [%s], execution index [%d]", name, self.index) 
      396        self.function(configPath, options, config) 
      397   
      398     def _executeHook(self, type, hook): # pylint: disable=W0622,R0201 
      399        """ 
      400        Executes a hook command via L{util.executeCommand()}. 
      401        @param type: String describing the type of hook, for logging. 
      402        @param hook: Hook, in terms of a C{ActionHook} object. 
      403        """ 
      404        fields = splitCommandLine(hook.command) 
      405        logger.debug("Executing %s hook for action [%s]: %s", type, hook.action, fields[0:1]) 
      406        result = executeCommand(command=fields[0:1], args=fields[1:])[0] 
      407        if result != 0: 
      408           raise IOError("Error (%d) executing %s hook for action [%s]: %s" % (result, type, hook.action, fields[0:1])) 
      409   
      410   
      411  ########################### 
      412  # _ManagedActionItem class 
      413  ########################### 
      414   
      415  @total_ordering 
      416  class _ManagedActionItem(object): 
      417   
      418     """ 
      419     Class representing a single action to be executed on a managed peer. 
      420   
      421     This class represents a single named action to be executed, and understands 
      422     how to execute that action. 
      423   
      424     Actions to be executed on a managed peer rely on peer configuration and 
      425     on the full-backup flag.  All other configuration takes place on the remote 
      426     peer itself. 
      427   
      428     @note: The comparison operators for this class have been implemented to only 
      429     compare based on the index and SORT_ORDER value, and ignore all other 
      430     values.  This is so that the action set list can be easily sorted first by 
      431     type (_ActionItem before _ManagedActionItem) and then by index within type. 
      432   
      433     @cvar SORT_ORDER: Defines a sort order to order properly between types. 
      434     """ 
      435   
      436     SORT_ORDER = 1 
      437   
      438     def __init__(self, index, name, remotePeers): 
      439        """ 
      440        Default constructor. 
      441   
      442        @param index: Index of the item (or C{None}). 
      443        @param name: Name of the action that is being executed. 
      444        @param remotePeers: List of remote peers on which to execute the action. 
      445        """ 
      446        self.index = index 
      447        self.name = name 
      448        self.remotePeers = remotePeers 
      449   
      450     def __eq__(self, other): 
      451        """Equals operator, implemented in terms of original Python 2 compare operator.""" 
      452        return self.__cmp__(other) == 0 
      453   
      454     def __lt__(self, other): 
      455        """Less-than operator, implemented in terms of original Python 2 compare operator.""" 
      456        return self.__cmp__(other) < 0 
      457   
      458     def __gt__(self, other): 
      459        """Greater-than operator, implemented in terms of original Python 2 compare operator.""" 
      460        return self.__cmp__(other) > 0 
      461   
      462     def __cmp__(self, other): 
      463        """ 
      464        Original Python 2 comparison operator. 
      465        The only thing we compare is the item's index. 
      466        @param other: Other object to compare to. 
      467        @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
      468        """ 
      469        if other is None: 
      470           return 1 
      471        if self.index != other.index: 
      472           if int(self.index or 0) < int(other.index or 0): 
      473              return -1 
      474           else: 
      475              return 1 
      476        else: 
      477           if self.SORT_ORDER != other.SORT_ORDER: 
      478              if int(self.SORT_ORDER or 0) < int(other.SORT_ORDER or 0): 
      479                 return -1 
      480              else: 
      481                 return 1 
      482        return 0 
      483   
    484 - def executeAction(self, configPath, options, config):
    485 """ 486 Executes the managed action associated with an item. 487 488 @note: Only options.full is actually used. The rest of the arguments 489 exist to satisfy the ActionItem iterface. 490 491 @note: Errors here result in a message logged to ERROR, but no thrown 492 exception. The analogy is the stage action where a problem with one host 493 should not kill the entire backup. Since we're logging an error, the 494 administrator will get an email. 495 496 @param configPath: Path to configuration file on disk. 497 @param options: Command-line options to be passed to action. 498 @param config: Parsed configuration to be passed to action. 499 500 @raise Exception: If there is a problem executing the action. 501 """ 502 for peer in self.remotePeers: 503 logger.debug("Executing managed action [%s] on peer [%s].", self.name, peer.name) 504 try: 505 peer.executeManagedAction(self.name, options.full) 506 except IOError as e: 507 logger.error(e) # log the message and go on, so we don't kill the backup

###################
# _ActionSet class
###################

class _ActionSet(object):

   """
   Class representing a set of local actions to be executed.

   This class does four different things.  First, it ensures that the actions
   specified on the command line are sensible.  The command line can only list
   either built-in actions or extended actions specified in configuration.
   Also, certain actions (in L{NONCOMBINE_ACTIONS}) cannot be combined with
   other actions.

   Second, the class enforces an execution order on the specified actions.  Any
   time actions are combined on the command line (either built-in actions or
   extended actions), we must make sure they get executed in a sensible order.

   Third, the class ensures that any pre-action or post-action hooks are
   scheduled and executed appropriately.  Hooks are configured by building a
   dictionary mapping between hook action name and command.  Pre-action hooks
   are executed immediately before their associated action, and post-action
   hooks are executed immediately after their associated action.

   Finally, the class properly interleaves local and managed actions so that
   the same action gets executed first locally and then on managed peers.

   @sort: __init__, executeActions
   """

   def __init__(self, actions, extensions, options, peers, managed, local):
      """
      Constructor for the C{_ActionSet} class.

      This is kind of ugly, because the constructor has to set up a lot of data
      before being able to do anything useful.  The following data structures
      are initialized based on the input:

         - C{extensionNames}: List of extensions available in configuration
         - C{preHookMap}: Mapping from action name to list of C{PreActionHook}
         - C{postHookMap}: Mapping from action name to list of C{PostActionHook}
         - C{functionMap}: Mapping from action name to Python function
         - C{indexMap}: Mapping from action name to execution index
         - C{peerMap}: Mapping from action name to set of C{RemotePeer}
         - C{actionMap}: Mapping from action name to C{_ActionItem}

      Once these data structures are set up, the command line is validated to
      make sure only valid actions have been requested, and in a sensible
      combination.  Then, all of the data is used to build C{self.actionSet},
      the set of action items to be executed by C{executeActions()}.  This list
      might contain either C{_ActionItem} or C{_ManagedActionItem} objects.

      @param actions: Names of actions specified on the command line.
      @param extensions: Extended action configuration (i.e. config.extensions)
      @param options: Options configuration (i.e. config.options)
      @param peers: Peers configuration (i.e. config.peers)
      @param managed: Whether to include managed actions in the set
      @param local: Whether to include local actions in the set

      @raise ValueError: If one of the specified actions is invalid.
      """
      extensionNames = _ActionSet._deriveExtensionNames(extensions)
      (preHookMap, postHookMap) = _ActionSet._buildHookMaps(options.hooks)
      functionMap = _ActionSet._buildFunctionMap(extensions)
      indexMap = _ActionSet._buildIndexMap(extensions)
      peerMap = _ActionSet._buildPeerMap(options, peers)
      actionMap = _ActionSet._buildActionMap(managed, local, extensionNames, functionMap,
                                             indexMap, preHookMap, postHookMap, peerMap)
      _ActionSet._validateActions(actions, extensionNames)
      self.actionSet = _ActionSet._buildActionSet(actions, actionMap)

   @staticmethod
   def _deriveExtensionNames(extensions):
      """
      Builds a list of extended actions that are available in configuration.
      @param extensions: Extended action configuration (i.e. config.extensions)
      @return: List of extended action names.
      """
      extensionNames = []
      if extensions is not None and extensions.actions is not None:
         for action in extensions.actions:
            extensionNames.append(action.name)
      return extensionNames

   @staticmethod
   def _buildHookMaps(hooks):
      """
      Builds two mappings from action name to configured C{ActionHook}.
      @param hooks: List of pre- and post-action hooks (i.e. config.options.hooks)
      @return: Tuple of (pre-hook dictionary, post-hook dictionary).
      """
      preHookMap = {}
      postHookMap = {}
      if hooks is not None:
         for hook in hooks:
            if hook.before:
               if hook.action not in preHookMap:
                  preHookMap[hook.action] = []
               preHookMap[hook.action].append(hook)
            elif hook.after:
               if hook.action not in postHookMap:
                  postHookMap[hook.action] = []
               postHookMap[hook.action].append(hook)
      return (preHookMap, postHookMap)

   @staticmethod
   def _buildFunctionMap(extensions):
      """
      Builds a mapping from named action to action function.
      @param extensions: Extended action configuration (i.e. config.extensions)
      @return: Dictionary mapping action to function.
      """
      functionMap = {}
      functionMap['rebuild'] = executeRebuild
      functionMap['validate'] = executeValidate
      functionMap['initialize'] = executeInitialize
      functionMap['collect'] = executeCollect
      functionMap['stage'] = executeStage
      functionMap['store'] = executeStore
      functionMap['purge'] = executePurge
      if extensions is not None and extensions.actions is not None:
         for action in extensions.actions:
            functionMap[action.name] = getFunctionReference(action.module, action.function)
      return functionMap

   @staticmethod
   def _buildIndexMap(extensions):
      """
      Builds a mapping from action name to proper execution index.

      If extensions configuration is C{None}, or there are no configured
      extended actions, the ordering dictionary will only include the built-in
      actions and their standard indices.

      Otherwise, if the extensions order mode is C{None} or C{"index"}, actions
      will be scheduled by explicit index; and if the extensions order mode is
      C{"dependency"}, actions will be scheduled using a dependency graph.

      @param extensions: Extended action configuration (i.e. config.extensions)

      @return: Dictionary mapping action name to integer execution index.
      """
      indexMap = {}
      if extensions is None or extensions.actions is None or extensions.actions == []:
         logger.info("Action ordering will use 'index' order mode.")
         indexMap['rebuild'] = REBUILD_INDEX
         indexMap['validate'] = VALIDATE_INDEX
         indexMap['initialize'] = INITIALIZE_INDEX
         indexMap['collect'] = COLLECT_INDEX
         indexMap['stage'] = STAGE_INDEX
         indexMap['store'] = STORE_INDEX
         indexMap['purge'] = PURGE_INDEX
         logger.debug("Completed filling in action indices for built-in actions.")
         logger.info("Action order will be: %s", sortDict(indexMap))
      else:
         if extensions.orderMode is None or extensions.orderMode == "index":
            logger.info("Action ordering will use 'index' order mode.")
            indexMap['rebuild'] = REBUILD_INDEX
            indexMap['validate'] = VALIDATE_INDEX
            indexMap['initialize'] = INITIALIZE_INDEX
            indexMap['collect'] = COLLECT_INDEX
            indexMap['stage'] = STAGE_INDEX
            indexMap['store'] = STORE_INDEX
            indexMap['purge'] = PURGE_INDEX
            logger.debug("Completed filling in action indices for built-in actions.")
            for action in extensions.actions:
               indexMap[action.name] = action.index
            logger.debug("Completed filling in action indices for extended actions.")
            logger.info("Action order will be: %s", sortDict(indexMap))
         else:
            logger.info("Action ordering will use 'dependency' order mode.")
            graph = DirectedGraph("dependencies")
            graph.createVertex("rebuild")
            graph.createVertex("validate")
            graph.createVertex("initialize")
            graph.createVertex("collect")
            graph.createVertex("stage")
            graph.createVertex("store")
            graph.createVertex("purge")
            for action in extensions.actions:
               graph.createVertex(action.name)
            graph.createEdge("collect", "stage")   # collect must run before stage, store or purge
            graph.createEdge("collect", "store")
            graph.createEdge("collect", "purge")
            graph.createEdge("stage", "store")     # stage must run before store or purge
            graph.createEdge("stage", "purge")
            graph.createEdge("store", "purge")     # store must run before purge
            for action in extensions.actions:
               if action.dependencies.beforeList is not None:
                  for vertex in action.dependencies.beforeList:
                     try:
                        graph.createEdge(action.name, vertex)   # actions that this action must be run before
                     except ValueError:
                        logger.error("Dependency [%s] on extension [%s] is unknown.", vertex, action.name)
                        raise ValueError("Unable to determine proper action order due to invalid dependency.")
               if action.dependencies.afterList is not None:
                  for vertex in action.dependencies.afterList:
                     try:
                        graph.createEdge(vertex, action.name)   # actions that this action must be run after
                     except ValueError:
                        logger.error("Dependency [%s] on extension [%s] is unknown.", vertex, action.name)
                        raise ValueError("Unable to determine proper action order due to invalid dependency.")
            try:
               ordering = graph.topologicalSort()
               indexMap = dict([(ordering[i], i+1) for i in range(0, len(ordering))])
               logger.info("Action order will be: %s", ordering)
            except ValueError:
               logger.error("Unable to determine proper action order due to dependency recursion.")
               logger.error("Extensions configuration is invalid (check for loops).")
               raise ValueError("Unable to determine proper action order due to dependency recursion.")
      return indexMap
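The "dependency" order mode can be sketched with the standard library's C{graphlib} (Python 3.9+) in place of the project's C{DirectedGraph}.  The built-in precedence edges are the ones listed in the code above; the variable names are illustrative.  Each edge says the first action must run before the second, and the resulting topological order becomes the index map.

```python
from graphlib import TopologicalSorter, CycleError

# Built-in precedence: (before, after) pairs, as in the code above
edges = [("collect", "stage"), ("collect", "store"), ("collect", "purge"),
         ("stage", "store"), ("stage", "purge"), ("store", "purge")]

ts = TopologicalSorter()
for action in ["rebuild", "validate", "initialize", "collect", "stage", "store", "purge"]:
    ts.add(action)                 # register every vertex, even isolated ones
for before, after in edges:
    ts.add(after, before)          # 'after' depends on 'before'

try:
    ordering = list(ts.static_order())
    indexMap = {name: i + 1 for i, name in enumerate(ordering)}
except CycleError:
    # graphlib reports loops directly, analogous to the ValueError above
    raise ValueError("Unable to determine proper action order due to dependency recursion.")
```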

   @staticmethod
   def _buildActionMap(managed, local, extensionNames, functionMap, indexMap, preHookMap, postHookMap, peerMap):
      """
      Builds a mapping from action name to list of action items.

      We build either C{_ActionItem} or C{_ManagedActionItem} objects here.

      In most cases, the mapping from action name to C{_ActionItem} is 1:1.
      The exception is the "all" action, which is a special case.  However, a
      list is returned in all cases, just for consistency later.  Each
      C{_ActionItem} will be created with a proper function reference and index
      value for execution ordering.

      The mapping from action name to C{_ManagedActionItem} is always 1:1.
      Each managed action item contains a list of peers on which the action
      should be executed.

      @param managed: Whether to include managed actions in the set
      @param local: Whether to include local actions in the set
      @param extensionNames: List of valid extended action names
      @param functionMap: Dictionary mapping action name to Python function
      @param indexMap: Dictionary mapping action name to integer execution index
      @param preHookMap: Dictionary mapping action name to pre hooks (if any) for the action
      @param postHookMap: Dictionary mapping action name to post hooks (if any) for the action
      @param peerMap: Dictionary mapping action name to list of remote peers on which to execute the action

      @return: Dictionary mapping action name to list of C{_ActionItem} objects.
      """
      actionMap = {}
      for name in extensionNames + VALID_ACTIONS:
         if name != 'all':  # do this one later
            function = functionMap[name]
            index = indexMap[name]
            actionMap[name] = []
            if local:
               (preHooks, postHooks) = _ActionSet._deriveHooks(name, preHookMap, postHookMap)
               actionMap[name].append(_ActionItem(index, name, preHooks, postHooks, function))
            if managed:
               if name in peerMap:
                  actionMap[name].append(_ManagedActionItem(index, name, peerMap[name]))
      actionMap['all'] = actionMap['collect'] + actionMap['stage'] + actionMap['store'] + actionMap['purge']
      return actionMap

   @staticmethod
   def _buildPeerMap(options, peers):
      """
      Builds a mapping from action name to list of remote peers.

      There will be one entry in the mapping for each managed action.  If there
      are no managed peers, the mapping will be empty.  Only managed actions
      will be listed in the mapping.

      @param options: Options configuration (i.e. config.options)
      @param peers: Peers configuration (i.e. config.peers)
      @return: Dictionary mapping managed action name to list of C{RemotePeer}.
      """
      peerMap = {}
      if peers is not None:
         if peers.remotePeers is not None:
            for peer in peers.remotePeers:
               if peer.managed:
                  remoteUser = _ActionSet._getRemoteUser(options, peer)
                  rshCommand = _ActionSet._getRshCommand(options, peer)
                  cbackCommand = _ActionSet._getCbackCommand(options, peer)
                  managedActions = _ActionSet._getManagedActions(options, peer)
                  remotePeer = RemotePeer(peer.name, None, options.workingDir, remoteUser, None,
                                          options.backupUser, rshCommand, cbackCommand)
                  if managedActions is not None:
                     for managedAction in managedActions:
                        if managedAction in peerMap:
                           if remotePeer not in peerMap[managedAction]:
                              peerMap[managedAction].append(remotePeer)
                        else:
                           peerMap[managedAction] = [ remotePeer, ]
      return peerMap

   @staticmethod
   def _deriveHooks(action, preHookDict, postHookDict):
      """
      Derives the pre- and post-action hooks, if any, associated with a named action.
      @param action: Name of action to look up
      @param preHookDict: Dictionary mapping action name to pre-action hooks
      @param postHookDict: Dictionary mapping action name to post-action hooks
      @return: Tuple (preHooks, postHooks) per mapping, with None values if there is no hook.
      """
      preHooks = None
      postHooks = None
      if action in preHookDict:
         preHooks = preHookDict[action]
      if action in postHookDict:
         postHooks = postHookDict[action]
      return (preHooks, postHooks)

   @staticmethod
   def _validateActions(actions, extensionNames):
      """
      Validates that the set of specified actions is sensible.

      Any specified action must either be a built-in action or must be among
      the extended actions defined in configuration.  The actions from within
      L{NONCOMBINE_ACTIONS} may not be combined with other actions.

      @param actions: Names of actions specified on the command line.
      @param extensionNames: Names of extensions specified in configuration.

      @raise ValueError: If one or more configured actions are not valid.
      """
      if actions is None or actions == []:
         raise ValueError("No actions specified.")
      for action in actions:
         if action not in VALID_ACTIONS and action not in extensionNames:
            raise ValueError("Action [%s] is not a valid action or extended action." % action)
      for action in NONCOMBINE_ACTIONS:
         if action in actions and actions != [ action, ]:
            raise ValueError("Action [%s] may not be combined with other actions." % action)

   @staticmethod
   def _buildActionSet(actions, actionMap):
      """
      Builds the set of actions to be executed.

      The set of actions is built in the proper order, so C{executeActions} can
      spin through the set without thinking about it.  Since we've already validated
      that the set of actions is sensible, we don't take any precautions here to
      make sure things are combined properly.  If the action is listed, it will
      be "scheduled" for execution.

      @param actions: Names of actions specified on the command line.
      @param actionMap: Dictionary mapping action name to C{_ActionItem} object.

      @return: Set of action items in proper order.
      """
      actionSet = []
      for action in actions:
         actionSet.extend(actionMap[action])
      actionSet.sort()  # sort the actions in order by index
      return actionSet

   def executeActions(self, configPath, options, config):
      """
      Executes all actions and extended actions, in the proper order.

      Each action (whether built-in or extension) is executed in an identical
      manner.  The built-in actions will use only the options and config
      values.  We also pass in the config path so that extension modules can
      re-parse configuration if they want to, to add in extra information.

      @param configPath: Path to configuration file on disk.
      @param options: Command-line options to be passed to action functions.
      @param config: Parsed configuration to be passed to action functions.

      @raise Exception: If there is a problem executing the actions.
      """
      logger.debug("Executing local actions.")
      for actionItem in self.actionSet:
         actionItem.executeAction(configPath, options, config)

   @staticmethod
   def _getRemoteUser(options, remotePeer):
      """
      Gets the remote user associated with a remote peer.
      Uses the peer's value if set, otherwise falls back to the options section.
      @param options: OptionsConfig object, as from config.options
      @param remotePeer: Configuration-style remote peer object.
      @return: Name of remote user associated with remote peer.
      """
      if remotePeer.remoteUser is None:
         return options.backupUser
      return remotePeer.remoteUser

   @staticmethod
   def _getRshCommand(options, remotePeer):
      """
      Gets the RSH command associated with a remote peer.
      Uses the peer's value if set, otherwise falls back to the options section.
      @param options: OptionsConfig object, as from config.options
      @param remotePeer: Configuration-style remote peer object.
      @return: RSH command associated with remote peer.
      """
      if remotePeer.rshCommand is None:
         return options.rshCommand
      return remotePeer.rshCommand

   @staticmethod
   def _getCbackCommand(options, remotePeer):
      """
      Gets the cback command associated with a remote peer.
      Uses the peer's value if set, otherwise falls back to the options section.
      @param options: OptionsConfig object, as from config.options
      @param remotePeer: Configuration-style remote peer object.
      @return: cback command associated with remote peer.
      """
      if remotePeer.cbackCommand is None:
         return options.cbackCommand
      return remotePeer.cbackCommand

   @staticmethod
   def _getManagedActions(options, remotePeer):
      """
      Gets the managed actions list associated with a remote peer.
      Uses the peer's value if set, otherwise falls back to the options section.
      @param options: OptionsConfig object, as from config.options
      @param remotePeer: Configuration-style remote peer object.
      @return: Set of managed actions associated with remote peer.
      """
      if remotePeer.managedActions is None:
         return options.managedActions
      return remotePeer.managedActions


#######################################################################
# Utility functions
#######################################################################

####################
# _usage() function
####################

def _usage(fd=sys.stderr):
   """
   Prints usage information for the cback3 script.
   @param fd: File descriptor used to print information.
   @note: The C{fd} is used rather than C{print} to facilitate unit testing.
   """
   fd.write("\n")
   fd.write(" Usage: cback3 [switches] action(s)\n")
   fd.write("\n")
   fd.write(" The following switches are accepted:\n")
   fd.write("\n")
   fd.write("   -h, --help          Display this usage/help listing\n")
   fd.write("   -V, --version       Display version information\n")
   fd.write("   -b, --verbose       Print verbose output as well as logging to disk\n")
   fd.write("   -q, --quiet         Run quietly (display no output to the screen)\n")
   fd.write("   -c, --config        Path to config file (default: %s)\n" % DEFAULT_CONFIG)
   fd.write("   -f, --full          Perform a full backup, regardless of configuration\n")
   fd.write("   -M, --managed       Include managed clients when executing actions\n")
   fd.write("   -N, --managed-only  Include ONLY managed clients when executing actions\n")
   fd.write("   -l, --logfile       Path to logfile (default: %s)\n" % DEFAULT_LOGFILE)
   fd.write("   -o, --owner         Logfile ownership, user:group (default: %s:%s)\n" % (DEFAULT_OWNERSHIP[0], DEFAULT_OWNERSHIP[1]))
   fd.write("   -m, --mode          Octal logfile permissions mode (default: %o)\n" % DEFAULT_MODE)
   fd.write("   -O, --output        Record some sub-command (i.e. cdrecord) output to the log\n")
   fd.write("   -d, --debug         Write debugging information to the log (implies --output)\n")
   fd.write("   -s, --stack         Dump a Python stack trace instead of swallowing exceptions\n")  # exactly 80 characters in width!
   fd.write("   -D, --diagnostics   Print runtime diagnostics to the screen and exit\n")
   fd.write("\n")
   fd.write(" The following actions may be specified:\n")
   fd.write("\n")
   fd.write("   all         Take all normal actions (collect, stage, store, purge)\n")
   fd.write("   collect     Take the collect action\n")
   fd.write("   stage       Take the stage action\n")
   fd.write("   store       Take the store action\n")
   fd.write("   purge       Take the purge action\n")
   fd.write("   rebuild     Rebuild \"this week's\" disc if possible\n")
   fd.write("   validate    Validate configuration only\n")
   fd.write("   initialize  Initialize media for use with Cedar Backup\n")
   fd.write("\n")
   fd.write(" You may also specify extended actions that have been defined in\n")
   fd.write(" configuration.\n")
   fd.write("\n")
   fd.write(" You must specify at least one action to take.  More than one of\n")
   fd.write(" the \"collect\", \"stage\", \"store\" or \"purge\" actions and/or\n")
   fd.write(" extended actions may be specified in any arbitrary order; they\n")
   fd.write(" will be executed in a sensible order.  The \"all\", \"rebuild\",\n")
   fd.write(" \"validate\", and \"initialize\" actions may not be combined with\n")
   fd.write(" other actions.\n")
   fd.write("\n")


######################
# _version() function
######################

def _version(fd=sys.stdout):
   """
   Prints version information for the cback3 script.
   @param fd: File descriptor used to print information.
   @note: The C{fd} is used rather than C{print} to facilitate unit testing.
   """
   fd.write("\n")
   fd.write(" Cedar Backup version %s, released %s.\n" % (VERSION, DATE))
   fd.write("\n")
   fd.write(" Copyright (c) %s %s <%s>.\n" % (COPYRIGHT, AUTHOR, EMAIL))
   fd.write(" See CREDITS for a list of included code and other contributors.\n")
   fd.write(" This is free software; there is NO warranty.  See the\n")
   fd.write(" GNU General Public License version 2 for copying conditions.\n")
   fd.write("\n")
   fd.write(" Use the --help option for usage information.\n")
   fd.write("\n")


##########################
# _diagnostics() function
##########################

def _diagnostics(fd=sys.stdout):
   """
   Prints runtime diagnostics information.
   @param fd: File descriptor used to print information.
   @note: The C{fd} is used rather than C{print} to facilitate unit testing.
   """
   fd.write("\n")
   fd.write("Diagnostics:\n")
   fd.write("\n")
   Diagnostics().printDiagnostics(fd=fd, prefix="   ")
   fd.write("\n")


##########################
# setupLogging() function
##########################

def setupLogging(options):
   """
   Sets up logging based on command-line options.

   There are two kinds of logging: flow logging and output logging.  Output
   logging contains information about system commands executed by Cedar Backup,
   for instance the calls to C{mkisofs} or C{mount}, etc.  Flow logging
   contains error and informational messages used to understand program flow.
   Flow log messages and output log messages are written to two different
   logger targets (C{CedarBackup3.log} and C{CedarBackup3.output}).  Flow log
   messages are written at the ERROR, INFO and DEBUG log levels, while output
   log messages are generally only written at the INFO log level.

   By default, output logging is disabled.  When the C{options.output} or
   C{options.debug} flags are set, output logging will be written to the
   configured logfile.  Output logging is never written to the screen.

   By default, flow logging is enabled at the ERROR level to the screen and at
   the INFO level to the configured logfile.  If the C{options.quiet} flag is
   set, flow logging is enabled at the INFO level to the configured logfile
   only (i.e. no output will be sent to the screen).  If the C{options.verbose}
   flag is set, flow logging is enabled at the INFO level to both the screen
   and the configured logfile.  If the C{options.debug} flag is set, flow
   logging is enabled at the DEBUG level to both the screen and the configured
   logfile.

   @param options: Command-line options.
   @type options: L{Options} object

   @return: Path to logfile on disk.
   """
   logfile = _setupLogfile(options)
   _setupFlowLogging(logfile, options)
   _setupOutputLogging(logfile, options)
   return logfile

def _setupLogfile(options):
   """
   Sets up and creates the logfile as needed.

   If the logfile already exists on disk, it will be left as-is, under the
   assumption that it was created with appropriate ownership and permissions.
   If the logfile does not exist on disk, it will be created as an empty file.
   Ownership and permissions will remain at their defaults unless user/group
   and/or mode are set in the options.  We ignore errors setting the indicated
   user and group.

   @note: This function is vulnerable to a race condition.  If the log file
   does not exist when the function is run, it will attempt to create the file
   as safely as possible (using C{O_CREAT}).  If two processes attempt to
   create the file at the same time, then one of them will fail.  In practice,
   this shouldn't really be a problem, but it might happen occasionally if two
   instances of cback3 run concurrently or if cback3 collides with logrotate or
   something.

   @param options: Command-line options.

   @return: Path to logfile on disk.
   """
   if options.logfile is None:
      logfile = DEFAULT_LOGFILE
   else:
      logfile = options.logfile
   if not os.path.exists(logfile):
      mode = DEFAULT_MODE if options.mode is None else options.mode
      orig = os.umask(0)  # Per os.open(), "When computing mode, the current umask value is first masked out"
      try:
         fd = os.open(logfile, os.O_RDWR|os.O_CREAT|os.O_APPEND, mode)
         with os.fdopen(fd, "a+") as f:
            f.write("")
      finally:
         os.umask(orig)
      try:
         if options.owner is None or len(options.owner) < 2:
            (uid, gid) = getUidGid(DEFAULT_OWNERSHIP[0], DEFAULT_OWNERSHIP[1])
         else:
            (uid, gid) = getUidGid(options.owner[0], options.owner[1])
         os.chown(logfile, uid, gid)
      except: pass
   return logfile

def _setupFlowLogging(logfile, options):
   """
   Sets up flow logging.
   @param logfile: Path to logfile on disk.
   @param options: Command-line options.
   """
   flowLogger = logging.getLogger("CedarBackup3.log")
   flowLogger.setLevel(logging.DEBUG)  # let the logger see all messages
   _setupDiskFlowLogging(flowLogger, logfile, options)
   _setupScreenFlowLogging(flowLogger, options)

def _setupOutputLogging(logfile, options):
   """
   Sets up command output logging.
   @param logfile: Path to logfile on disk.
   @param options: Command-line options.
   """
   outputLogger = logging.getLogger("CedarBackup3.output")
   outputLogger.setLevel(logging.DEBUG)  # let the logger see all messages
   _setupDiskOutputLogging(outputLogger, logfile, options)

def _setupDiskFlowLogging(flowLogger, logfile, options):
   """
   Sets up on-disk flow logging.
   @param flowLogger: Python flow logger object.
   @param logfile: Path to logfile on disk.
   @param options: Command-line options.
   """
   formatter = logging.Formatter(fmt=DISK_LOG_FORMAT, datefmt=DATE_FORMAT)
   handler = logging.FileHandler(logfile, mode="a")
   handler.setFormatter(formatter)
   if options.debug:
      handler.setLevel(logging.DEBUG)
   else:
      handler.setLevel(logging.INFO)
   flowLogger.addHandler(handler)

def _setupScreenFlowLogging(flowLogger, options):
   """
   Sets up on-screen flow logging.
   @param flowLogger: Python flow logger object.
   @param options: Command-line options.
   """
   formatter = logging.Formatter(fmt=SCREEN_LOG_FORMAT)
   handler = logging.StreamHandler(SCREEN_LOG_STREAM)
   handler.setFormatter(formatter)
   if options.quiet:
      handler.setLevel(logging.CRITICAL)  # effectively turn it off
   elif options.verbose:
      if options.debug:
         handler.setLevel(logging.DEBUG)
      else:
         handler.setLevel(logging.INFO)
   else:
      handler.setLevel(logging.ERROR)
   flowLogger.addHandler(handler)

def _setupDiskOutputLogging(outputLogger, logfile, options):
   """
   Sets up on-disk command output logging.
   @param outputLogger: Python command output logger object.
   @param logfile: Path to logfile on disk.
   @param options: Command-line options.
   """
   formatter = logging.Formatter(fmt=DISK_OUTPUT_FORMAT, datefmt=DATE_FORMAT)
   handler = logging.FileHandler(logfile, mode="a")
   handler.setFormatter(formatter)
   if options.debug or options.output:
      handler.setLevel(logging.DEBUG)
   else:
      handler.setLevel(logging.CRITICAL)  # effectively turn it off
   outputLogger.addHandler(handler)


###############################
# setupPathResolver() function
###############################

def setupPathResolver(config):
   """
   Sets up the path resolver singleton based on configuration.

   Cedar Backup's path resolver is implemented in terms of a singleton, the
   L{PathResolverSingleton} class.  This function takes options configuration,
   converts it into the dictionary form needed by the singleton, and then
   initializes the singleton.  After that, any function that needs to resolve
   the path of a command can use the singleton.

   @param config: Configuration
   @type config: L{Config} object
   """
   mapping = {}
   if config.options.overrides is not None:
      for override in config.options.overrides:
         mapping[override.command] = override.absolutePath
   singleton = PathResolverSingleton()
   singleton.fill(mapping)
    1210
#########################################################################
# Options class definition
#########################################################################

@total_ordering
class Options(object):

    ######################
    # Class documentation
    ######################

    """
    Class representing command-line options for the cback3 script.

    The C{Options} class is a Python object representation of the command-line
    options of the cback3 script.

    The object representation is two-way: a command line string or a list of
    command-line arguments can be used to create an C{Options} object, and then
    changes to the object can be propagated back to a list of command-line
    arguments or to a command-line string.  An C{Options} object can even be
    created from scratch programmatically (if you have a need for that).

    There are two main levels of validation in the C{Options} class.  The
    first is field-level validation.  Field-level validation comes into play
    when a given field in an object is assigned to or updated.  We use
    Python's C{property} functionality to enforce specific validations on
    field values, and in some places we even use customized list classes to
    enforce validations on list members.  You should expect to catch a
    C{ValueError} exception when making assignments to fields if you are
    programmatically filling an object.

    The second level of validation is post-completion validation.  Certain
    validations don't make sense until an object representation of options is
    fully "complete".  We don't want these validations to apply all of the
    time, because it would make building up a valid object from scratch a
    real pain.  For instance, we might have to do things in the right order
    to keep from throwing exceptions, etc.

    All of these post-completion validations are encapsulated in the
    L{Options.validate} method.  This method can be called at any time by a
    client, and will always be called immediately after creating an
    C{Options} object from a command line and before exporting an C{Options}
    object back to a command line.  This way, we get acceptable ease-of-use
    but we also don't accept or emit invalid command lines.

    @note: Lists within this class are "unordered" for equality comparisons.

    @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__
    """

    ##############
    # Constructor
    ##############
    def __init__(self, argumentList=None, argumentString=None, validate=True):
        """
        Initializes an options object.

        If you initialize the object without passing either C{argumentList} or
        C{argumentString}, the object will be empty and will be invalid until
        it is filled in properly.

        No reference to the original arguments is saved off by this class.
        Once the data has been parsed (successfully or not) this original
        information is discarded.

        The argument list is assumed to be a list of arguments, not including
        the name of the command, something like C{sys.argv[1:]}.  If you pass
        C{sys.argv} instead, things are not going to work.

        The argument string will be parsed into an argument list by the
        L{util.splitCommandLine} function (see the documentation for that
        function for some important notes about its limitations).  There is an
        assumption that the resulting list will be equivalent to
        C{sys.argv[1:]}, just like C{argumentList}.

        Unless the C{validate} argument is C{False}, the L{Options.validate}
        method will be called (with its default arguments) after successfully
        parsing any passed-in command line.  This validation ensures that
        appropriate actions, etc. have been specified.  Keep in mind that even
        if C{validate} is C{False}, it might not be possible to parse the
        passed-in command line, so an exception might still be raised.

        @note: The command line format is specified by the L{_usage} function.
        Call L{_usage} to see a usage statement for the cback3 script.

        @note: It is strongly suggested that the C{validate} option always be
        set to C{True} (the default) unless there is a specific need to read
        in invalid command line arguments.

        @param argumentList: Command line for a program.
        @type argumentList: List of arguments, i.e. C{sys.argv}

        @param argumentString: Command line for a program.
        @type argumentString: String, i.e. "cback3 --verbose stage store"

        @param validate: Validate the command line after parsing it.
        @type validate: Boolean true/false.

        @raise getopt.GetoptError: If the command-line arguments could not be parsed.
        @raise ValueError: If the command-line arguments are invalid.
        """
        self._help = False
        self._version = False
        self._verbose = False
        self._quiet = False
        self._config = None
        self._full = False
        self._managed = False
        self._managedOnly = False
        self._logfile = None
        self._owner = None
        self._mode = None
        self._output = False
        self._debug = False
        self._stacktrace = False
        self._diagnostics = False
        self._actions = None
        self.actions = []  # initialize to an empty list; remainder are OK
        if argumentList is not None and argumentString is not None:
            raise ValueError("Use either argumentList or argumentString, but not both.")
        if argumentString is not None:
            argumentList = splitCommandLine(argumentString)
        if argumentList is not None:
            self._parseArgumentList(argumentList)
            if validate:
                self.validate()
    #########################
    # String representations
    #########################

    def __repr__(self):
        """
        Official string representation for class instance.
        """
        return self.buildArgumentString(validate=False)

    def __str__(self):
        """
        Informal string representation for class instance.
        """
        return self.__repr__()
    #############################
    # Standard comparison method
    #############################

    def __eq__(self, other):
        """Equals operator, implemented in terms of original Python 2 compare operator."""
        return self.__cmp__(other) == 0

    def __lt__(self, other):
        """Less-than operator, implemented in terms of original Python 2 compare operator."""
        return self.__cmp__(other) < 0

    def __gt__(self, other):
        """Greater-than operator, implemented in terms of original Python 2 compare operator."""
        return self.__cmp__(other) > 0

    def __cmp__(self, other):
        """
        Original Python 2 comparison operator.
        Lists within this class are "unordered" for equality comparisons.
        @param other: Other object to compare to.
        @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
        """
        if other is None:
            return 1
        if self.help != other.help:
            if self.help < other.help:
                return -1
            else:
                return 1
        if self.version != other.version:
            if self.version < other.version:
                return -1
            else:
                return 1
        if self.verbose != other.verbose:
            if self.verbose < other.verbose:
                return -1
            else:
                return 1
        if self.quiet != other.quiet:
            if self.quiet < other.quiet:
                return -1
            else:
                return 1
        if self.config != other.config:
            if self.config < other.config:
                return -1
            else:
                return 1
        if self.full != other.full:
            if self.full < other.full:
                return -1
            else:
                return 1
        if self.managed != other.managed:
            if self.managed < other.managed:
                return -1
            else:
                return 1
        if self.managedOnly != other.managedOnly:
            if self.managedOnly < other.managedOnly:
                return -1
            else:
                return 1
        if self.logfile != other.logfile:
            if str(self.logfile or "") < str(other.logfile or ""):
                return -1
            else:
                return 1
        if self.owner != other.owner:
            if str(self.owner or "") < str(other.owner or ""):
                return -1
            else:
                return 1
        if self.mode != other.mode:
            if int(self.mode or 0) < int(other.mode or 0):
                return -1
            else:
                return 1
        if self.output != other.output:
            if self.output < other.output:
                return -1
            else:
                return 1
        if self.debug != other.debug:
            if self.debug < other.debug:
                return -1
            else:
                return 1
        if self.stacktrace != other.stacktrace:
            if self.stacktrace < other.stacktrace:
                return -1
            else:
                return 1
        if self.diagnostics != other.diagnostics:
            if self.diagnostics < other.diagnostics:
                return -1
            else:
                return 1
        if self.actions != other.actions:
            if self.actions < other.actions:
                return -1
            else:
                return 1
        return 0
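The comparison idiom here is worth isolating: a single Python 2 style __cmp__ keeps the field-by-field routine in one place, while @total_ordering derives the remaining rich comparisons from __eq__ and __lt__.  The class and field names below are illustrative, not part of Cedar Backup:

```python
from functools import total_ordering

# Sketch of the comparison idiom used above: __cmp__ returns -1/0/1, the
# rich-comparison methods delegate to it, and @total_ordering fills in the
# rest (__le__, __ge__, __gt__, etc.).
@total_ordering
class Flags:
    def __init__(self, verbose=False, quiet=False):
        self.verbose = verbose
        self.quiet = quiet

    def __cmp__(self, other):
        if other is None:
            return 1  # any object sorts after None
        for field in ("verbose", "quiet"):
            mine, theirs = getattr(self, field), getattr(other, field)
            if mine != theirs:
                return -1 if mine < theirs else 1
        return 0

    def __eq__(self, other):
        return self.__cmp__(other) == 0

    def __lt__(self, other):
        return self.__cmp__(other) < 0
```

This keeps the Python 2 comparison logic intact under Python 3, which is presumably why the ported code is structured this way.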
    #############
    # Properties
    #############

    def _setHelp(self, value):
        """
        Property target used to set the help flag.
        No validations, but we normalize the value to C{True} or C{False}.
        """
        if value:
            self._help = True
        else:
            self._help = False

    def _getHelp(self):
        """Property target used to get the help flag."""
        return self._help

    def _setVersion(self, value):
        """
        Property target used to set the version flag.
        No validations, but we normalize the value to C{True} or C{False}.
        """
        if value:
            self._version = True
        else:
            self._version = False

    def _getVersion(self):
        """Property target used to get the version flag."""
        return self._version

    def _setVerbose(self, value):
        """
        Property target used to set the verbose flag.
        No validations, but we normalize the value to C{True} or C{False}.
        """
        if value:
            self._verbose = True
        else:
            self._verbose = False

    def _getVerbose(self):
        """Property target used to get the verbose flag."""
        return self._verbose

    def _setQuiet(self, value):
        """
        Property target used to set the quiet flag.
        No validations, but we normalize the value to C{True} or C{False}.
        """
        if value:
            self._quiet = True
        else:
            self._quiet = False

    def _getQuiet(self):
        """Property target used to get the quiet flag."""
        return self._quiet

    def _setConfig(self, value):
        """Property target used to set the config parameter."""
        if value is not None:
            if len(value) < 1:
                raise ValueError("The config parameter must be a non-empty string.")
        self._config = value

    def _getConfig(self):
        """Property target used to get the config parameter."""
        return self._config

    def _setFull(self, value):
        """
        Property target used to set the full flag.
        No validations, but we normalize the value to C{True} or C{False}.
        """
        if value:
            self._full = True
        else:
            self._full = False

    def _getFull(self):
        """Property target used to get the full flag."""
        return self._full

    def _setManaged(self, value):
        """
        Property target used to set the managed flag.
        No validations, but we normalize the value to C{True} or C{False}.
        """
        if value:
            self._managed = True
        else:
            self._managed = False

    def _getManaged(self):
        """Property target used to get the managed flag."""
        return self._managed

    def _setManagedOnly(self, value):
        """
        Property target used to set the managedOnly flag.
        No validations, but we normalize the value to C{True} or C{False}.
        """
        if value:
            self._managedOnly = True
        else:
            self._managedOnly = False

    def _getManagedOnly(self):
        """Property target used to get the managedOnly flag."""
        return self._managedOnly

    def _setLogfile(self, value):
        """
        Property target used to set the logfile parameter.
        @raise ValueError: If the value cannot be encoded properly.
        """
        if value is not None:
            if len(value) < 1:
                raise ValueError("The logfile parameter must be a non-empty string.")
        self._logfile = encodePath(value)

    def _getLogfile(self):
        """Property target used to get the logfile parameter."""
        return self._logfile

    def _setOwner(self, value):
        """
        Property target used to set the owner parameter.
        If not C{None}, the owner must be a C{(user,group)} tuple or list.
        Strings (and inherited children of strings) are explicitly disallowed.
        The value will be normalized to a tuple.
        @raise ValueError: If the value is not valid.
        """
        if value is None:
            self._owner = None
        else:
            if isinstance(value, str):
                raise ValueError("Must specify user and group tuple for owner parameter.")
            if len(value) != 2:
                raise ValueError("Must specify user and group tuple for owner parameter.")
            if len(value[0]) < 1 or len(value[1]) < 1:
                raise ValueError("User and group tuple values must be non-empty strings.")
            self._owner = (value[0], value[1])

    def _getOwner(self):
        """
        Property target used to get the owner parameter.
        The parameter is a tuple of C{(user, group)}.
        """
        return self._owner

    def _setMode(self, value):
        """Property target used to set the mode parameter."""
        if value is None:
            self._mode = None
        else:
            try:
                if isinstance(value, str):
                    value = int(value, 8)
                else:
                    value = int(value)
            except TypeError:
                raise ValueError("Mode must be an octal integer >= 0, i.e. 644.")
            if value < 0:
                raise ValueError("Mode must be an octal integer >= 0, i.e. 644.")
            self._mode = value
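The subtlety in _setMode() is that string input is interpreted as octal, so a user typing "644" on the command line gets mode 0o644, not decimal 644.  A self-contained sketch of just that rule (the function name is illustrative):

```python
# Hedged sketch of the mode-parsing rule in _setMode(): strings are read
# as octal (so "644" means 0o644), while numeric input passes through.
def parse_mode(value):
    if isinstance(value, str):
        mode = int(value, 8)  # "644" -> 0o644 (420 decimal)
    else:
        mode = int(value)
    if mode < 0:
        raise ValueError("Mode must be an octal integer >= 0, i.e. 644.")
    return mode
```

The round trip matters: buildArgumentList() later re-emits the mode with "%o", so the same "644" string comes back out.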
    def _getMode(self):
        """Property target used to get the mode parameter."""
        return self._mode

    def _setOutput(self, value):
        """
        Property target used to set the output flag.
        No validations, but we normalize the value to C{True} or C{False}.
        """
        if value:
            self._output = True
        else:
            self._output = False

    def _getOutput(self):
        """Property target used to get the output flag."""
        return self._output

    def _setDebug(self, value):
        """
        Property target used to set the debug flag.
        No validations, but we normalize the value to C{True} or C{False}.
        """
        if value:
            self._debug = True
        else:
            self._debug = False

    def _getDebug(self):
        """Property target used to get the debug flag."""
        return self._debug

    def _setStacktrace(self, value):
        """
        Property target used to set the stacktrace flag.
        No validations, but we normalize the value to C{True} or C{False}.
        """
        if value:
            self._stacktrace = True
        else:
            self._stacktrace = False

    def _getStacktrace(self):
        """Property target used to get the stacktrace flag."""
        return self._stacktrace

    def _setDiagnostics(self, value):
        """
        Property target used to set the diagnostics flag.
        No validations, but we normalize the value to C{True} or C{False}.
        """
        if value:
            self._diagnostics = True
        else:
            self._diagnostics = False

    def _getDiagnostics(self):
        """Property target used to get the diagnostics flag."""
        return self._diagnostics

    def _setActions(self, value):
        """
        Property target used to set the actions list.
        We don't restrict the contents of actions.  They're validated somewhere else.
        @raise ValueError: If the value is not valid.
        """
        if value is None:
            self._actions = None
        else:
            try:
                saved = self._actions
                self._actions = []
                self._actions.extend(value)
            except Exception as e:
                self._actions = saved
                raise e

    def _getActions(self):
        """Property target used to get the actions list."""
        return self._actions

    help = property(_getHelp, _setHelp, None, "Command-line help (C{-h,--help}) flag.")
    version = property(_getVersion, _setVersion, None, "Command-line version (C{-V,--version}) flag.")
    verbose = property(_getVerbose, _setVerbose, None, "Command-line verbose (C{-b,--verbose}) flag.")
    quiet = property(_getQuiet, _setQuiet, None, "Command-line quiet (C{-q,--quiet}) flag.")
    config = property(_getConfig, _setConfig, None, "Command-line configuration file (C{-c,--config}) parameter.")
    full = property(_getFull, _setFull, None, "Command-line full-backup (C{-f,--full}) flag.")
    managed = property(_getManaged, _setManaged, None, "Command-line managed (C{-M,--managed}) flag.")
    managedOnly = property(_getManagedOnly, _setManagedOnly, None, "Command-line managed-only (C{-N,--managed-only}) flag.")
    logfile = property(_getLogfile, _setLogfile, None, "Command-line logfile (C{-l,--logfile}) parameter.")
    owner = property(_getOwner, _setOwner, None, "Command-line owner (C{-o,--owner}) parameter, as tuple C{(user,group)}.")
    mode = property(_getMode, _setMode, None, "Command-line mode (C{-m,--mode}) parameter.")
    output = property(_getOutput, _setOutput, None, "Command-line output (C{-O,--output}) flag.")
    debug = property(_getDebug, _setDebug, None, "Command-line debug (C{-d,--debug}) flag.")
    stacktrace = property(_getStacktrace, _setStacktrace, None, "Command-line stacktrace (C{-s,--stack}) flag.")
    diagnostics = property(_getDiagnostics, _setDiagnostics, None, "Command-line diagnostics (C{-D,--diagnostics}) flag.")
    actions = property(_getActions, _setActions, None, "Command-line actions list.")

    ##################
    # Utility methods
    ##################
    def validate(self):
        """
        Validates command-line options represented by the object.

        Unless C{--help} or C{--version} are supplied, at least one action
        must be specified.  Other validations (as for allowed values for
        particular options) will be taken care of at assignment time by the
        properties functionality.

        @note: The command line format is specified by the L{_usage} function.
        Call L{_usage} to see a usage statement for the cback3 script.

        @raise ValueError: If one of the validations fails.
        """
        if not self.help and not self.version and not self.diagnostics:
            if self.actions is None or len(self.actions) == 0:
                raise ValueError("At least one action must be specified.")
        if self.managed and self.managedOnly:
            raise ValueError("The --managed and --managed-only options may not be combined.")
    def buildArgumentList(self, validate=True):
        """
        Extracts options into a list of command-line arguments.

        The original order of the various arguments (if, indeed, the object
        was initialized with a command-line) is not preserved in this
        generated argument list.  Besides that, the argument list is
        normalized to use the long option names (i.e. --version rather than
        -V).  The resulting list will be suitable for passing back to the
        constructor in the C{argumentList} parameter.  Unlike
        L{buildArgumentString}, string arguments are not quoted here, because
        there is no need for it.

        Unless the C{validate} parameter is C{False}, the L{Options.validate}
        method will be called (with its default arguments) against the options
        before extracting the command line.  If the options are not valid,
        then an argument list will not be extracted.

        @note: It is strongly suggested that the C{validate} option always be
        set to C{True} (the default) unless there is a specific need to
        extract an invalid command line.

        @param validate: Validate the options before extracting the command line.
        @type validate: Boolean true/false.

        @return: List representation of command-line arguments.
        @raise ValueError: If options within the object are invalid.
        """
        if validate:
            self.validate()
        argumentList = []
        if self._help:
            argumentList.append("--help")
        if self.version:
            argumentList.append("--version")
        if self.verbose:
            argumentList.append("--verbose")
        if self.quiet:
            argumentList.append("--quiet")
        if self.config is not None:
            argumentList.append("--config")
            argumentList.append(self.config)
        if self.full:
            argumentList.append("--full")
        if self.managed:
            argumentList.append("--managed")
        if self.managedOnly:
            argumentList.append("--managed-only")
        if self.logfile is not None:
            argumentList.append("--logfile")
            argumentList.append(self.logfile)
        if self.owner is not None:
            argumentList.append("--owner")
            argumentList.append("%s:%s" % (self.owner[0], self.owner[1]))
        if self.mode is not None:
            argumentList.append("--mode")
            argumentList.append("%o" % self.mode)
        if self.output:
            argumentList.append("--output")
        if self.debug:
            argumentList.append("--debug")
        if self.stacktrace:
            argumentList.append("--stack")
        if self.diagnostics:
            argumentList.append("--diagnostics")
        if self.actions is not None:
            for action in self.actions:
                argumentList.append(action)
        return argumentList
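The shape of buildArgumentList() is easy to see with a stripped-down stand-in (the names below are illustrative, not the real Options API): flags become long switches, parameters follow their switch as a separate element, and actions come last, unswitched.

```python
# Illustrative round trip in the spirit of buildArgumentList(): long switch
# names only, switch parameters as separate list elements, actions at the end.
def build_argument_list(verbose=False, config=None, actions=()):
    argumentList = []
    if verbose:
        argumentList.append("--verbose")
    if config is not None:
        argumentList.extend(["--config", config])
    argumentList.extend(actions)  # actions come last, unswitched
    return argumentList
```

Keeping parameters as separate elements (rather than "--config=path") is what makes the list safe to hand straight back to a getopt-style parser.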
    def buildArgumentString(self, validate=True):
        """
        Extracts options into a string of command-line arguments.

        The original order of the various arguments (if, indeed, the object
        was initialized with a command-line) is not preserved in this
        generated argument string.  Besides that, the argument string is
        normalized to use the long option names (i.e. --version rather than
        -V) and to quote all string arguments with double quotes (C{"}).  The
        resulting string will be suitable for passing back to the constructor
        in the C{argumentString} parameter.

        Unless the C{validate} parameter is C{False}, the L{Options.validate}
        method will be called (with its default arguments) against the options
        before extracting the command line.  If the options are not valid,
        then an argument string will not be extracted.

        @note: It is strongly suggested that the C{validate} option always be
        set to C{True} (the default) unless there is a specific need to
        extract an invalid command line.

        @param validate: Validate the options before extracting the command line.
        @type validate: Boolean true/false.

        @return: String representation of command-line arguments.
        @raise ValueError: If options within the object are invalid.
        """
        if validate:
            self.validate()
        argumentString = ""
        if self._help:
            argumentString += "--help "
        if self.version:
            argumentString += "--version "
        if self.verbose:
            argumentString += "--verbose "
        if self.quiet:
            argumentString += "--quiet "
        if self.config is not None:
            argumentString += "--config \"%s\" " % self.config
        if self.full:
            argumentString += "--full "
        if self.managed:
            argumentString += "--managed "
        if self.managedOnly:
            argumentString += "--managed-only "
        if self.logfile is not None:
            argumentString += "--logfile \"%s\" " % self.logfile
        if self.owner is not None:
            argumentString += "--owner \"%s:%s\" " % (self.owner[0], self.owner[1])
        if self.mode is not None:
            argumentString += "--mode %o " % self.mode
        if self.output:
            argumentString += "--output "
        if self.debug:
            argumentString += "--debug "
        if self.stacktrace:
            argumentString += "--stack "
        if self.diagnostics:
            argumentString += "--diagnostics "
        if self.actions is not None:
            for action in self.actions:
                argumentString += "\"%s\" " % action
        return argumentString
    def _parseArgumentList(self, argumentList):
        """
        Internal method to parse a list of command-line arguments.

        Most of the validation we do here has to do with whether the
        arguments can be parsed and whether any values which exist are valid.
        We don't do any validation as to whether required elements exist or
        whether elements exist in the proper combination (instead, that's the
        job of the L{validate} method).

        For any of the options which supply parameters, if the option is
        duplicated with long and short switches (i.e. C{-l} and a
        C{--logfile}) then the long switch is used.  If the same option is
        duplicated with the same switch (long or short), then the last entry
        on the command line is used.

        @param argumentList: List of arguments to a command.
        @type argumentList: List of arguments to a command, i.e. C{sys.argv[1:]}

        @raise ValueError: If the argument list cannot be successfully parsed.
        """
        switches = {}
        opts, self.actions = getopt.getopt(argumentList, SHORT_SWITCHES, LONG_SWITCHES)
        for o, a in opts:  # push the switches into a hash
            switches[o] = a
        if "-h" in switches or "--help" in switches:
            self.help = True
        if "-V" in switches or "--version" in switches:
            self.version = True
        if "-b" in switches or "--verbose" in switches:
            self.verbose = True
        if "-q" in switches or "--quiet" in switches:
            self.quiet = True
        if "-c" in switches:
            self.config = switches["-c"]
        if "--config" in switches:
            self.config = switches["--config"]
        if "-f" in switches or "--full" in switches:
            self.full = True
        if "-M" in switches or "--managed" in switches:
            self.managed = True
        if "-N" in switches or "--managed-only" in switches:
            self.managedOnly = True
        if "-l" in switches:
            self.logfile = switches["-l"]
        if "--logfile" in switches:
            self.logfile = switches["--logfile"]
        if "-o" in switches:
            self.owner = switches["-o"].split(":", 1)
        if "--owner" in switches:
            self.owner = switches["--owner"].split(":", 1)
        if "-m" in switches:
            self.mode = switches["-m"]
        if "--mode" in switches:
            self.mode = switches["--mode"]
        if "-O" in switches or "--output" in switches:
            self.output = True
        if "-d" in switches or "--debug" in switches:
            self.debug = True
        if "-s" in switches or "--stack" in switches:
            self.stacktrace = True
        if "-D" in switches or "--diagnostics" in switches:
            self.diagnostics = True
#########################################################################
# Main routine
#########################################################################

if __name__ == "__main__":
    result = cli()
    sys.exit(result)

CedarBackup3-3.1.6/doc/interface/CedarBackup3.writers.dvdwriter._ImageProperties-class.html: CedarBackup3.writers.dvdwriter._ImageProperties
    Package CedarBackup3 :: Package writers :: Module dvdwriter :: Class _ImageProperties

    Class _ImageProperties


    object --+
             |
            _ImageProperties
    

    Simple value object to hold image properties for DvdWriter.

Instance Methods

__init__(self)
x.__init__(...) initializes x; see help(type(x)) for signature

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

Properties

    Inherited from object: __class__

Method Details

    __init__(self)
    (Constructor)


    x.__init__(...) initializes x; see help(type(x)) for signature

    Overrides: object.__init__
    (inherited documentation)

CedarBackup3-3.1.6/doc/interface/CedarBackup3.extend.encrypt.EncryptConfig-class.html: CedarBackup3.extend.encrypt.EncryptConfig
    Package CedarBackup3 :: Package extend :: Module encrypt :: Class EncryptConfig

    Class EncryptConfig


    object --+
             |
            EncryptConfig
    

    Class representing encrypt configuration.

    Encrypt configuration is used for encrypting staging directories.

    The following restrictions exist on data in this class:

    • The encrypt mode must be one of the values in VALID_ENCRYPT_MODES
    • The encrypt target value must be a non-empty string
Instance Methods

__init__(self, encryptMode=None, encryptTarget=None)
Constructor for the EncryptConfig class.

__repr__(self)
Official string representation for class instance.

__str__(self)
Informal string representation for class instance.

__cmp__(self, other)
Original Python 2 comparison operator.

__eq__(self, other)
Equals operator, implemented in terms of original Python 2 compare operator.

__lt__(self, other)
Less-than operator, implemented in terms of original Python 2 compare operator.

__gt__(self, other)
Greater-than operator, implemented in terms of original Python 2 compare operator.

_setEncryptMode(self, value)
Property target used to set the encrypt mode.

_getEncryptMode(self)
Property target used to get the encrypt mode.

_setEncryptTarget(self, value)
Property target used to set the encrypt target.

_getEncryptTarget(self)
Property target used to get the encrypt target.

__ge__(x, y)
x>=y

__le__(x, y)
x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties
      encryptMode
    Encrypt mode.
      encryptTarget
Encrypt target (i.e. GPG recipient).

    Inherited from object: __class__

Method Details

    __init__(self, encryptMode=None, encryptTarget=None)
    (Constructor)


    Constructor for the EncryptConfig class.

    Parameters:
    • encryptMode - Encryption mode
    • encryptTarget - Encryption target (for instance, GPG recipient)
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)


    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)


    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)


    Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setEncryptMode(self, value)

    source code 

    Property target used to set the encrypt mode. If not None, the mode must be one of the values in VALID_ENCRYPT_MODES.

    Raises:
    • ValueError - If the value is not valid.
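    The property-target idiom documented above (a private setter that validates, a private getter that reads, and a `property` tying them together) can be sketched as follows. `EncryptSketch` and its one-element mode list are hypothetical illustrations, not the actual `EncryptConfig` implementation or the real `VALID_ENCRYPT_MODES` contents.

    ```python
    class EncryptSketch:
        """Illustration of the property-target idiom: validation lives in
        the _set method, and the public attribute is a property."""

        VALID_ENCRYPT_MODES = ["gpg"]  # assumed contents, for illustration only

        def __init__(self):
            self._encryptMode = None

        def _setEncryptMode(self, value):
            # Property target used to set the encrypt mode; if not None,
            # the mode must be one of the values in VALID_ENCRYPT_MODES.
            if value is not None and value not in self.VALID_ENCRYPT_MODES:
                raise ValueError("Encrypt mode must be one of %s." % self.VALID_ENCRYPT_MODES)
            self._encryptMode = value

        def _getEncryptMode(self):
            # Property target used to get the encrypt mode.
            return self._encryptMode

        encryptMode = property(_getEncryptMode, _setEncryptMode, None, "Encrypt mode.")
    ```

    Assigning an invalid mode raises ValueError before any internal state changes, which is why the documented setters can promise validated configuration.
    
    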

    Property Details

    encryptMode

    Encrypt mode.

    Get Method:
    _getEncryptMode(self) - Property target used to get the encrypt mode.
    Set Method:
    _setEncryptMode(self, value) - Property target used to set the encrypt mode.

    encryptTarget

    Encrypt target (i.e. GPG recipient).

    Get Method:
    _getEncryptTarget(self) - Property target used to get the encrypt target.
    Set Method:
    _setEncryptTarget(self, value) - Property target used to set the encrypt target.

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.writers.cdwriter.MediaCapacity-class.html0000664000175000017500000005125512657665545032470 0ustar pronovicpronovic00000000000000 CedarBackup3.writers.cdwriter.MediaCapacity
    Package CedarBackup3 :: Package writers :: Module cdwriter :: Class MediaCapacity

    Class MediaCapacity

    source code

    object --+
             |
            MediaCapacity
    

    Class encapsulating information about CD media capacity.

    Space used includes the required media lead-in (unless the disk is unused). Space available attempts to provide a picture of how many bytes are available for data storage, including any required lead-in.

    The boundaries value is either None (if multisession discs are not supported or if the disc has no boundaries) or in exactly the form provided by cdrecord -msinfo. It can be passed as-is to the IsoImage class.

    Instance Methods
     
    __init__(self, bytesUsed, bytesAvailable, boundaries)
    Initializes a capacity object.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    _getBytesUsed(self)
    Property target to get the bytes-used value.
    source code
     
    _getBytesAvailable(self)
    Property target to get the bytes-available value.
    source code
     
    _getBoundaries(self)
    Property target to get the boundaries tuple.
    source code
     
    _getTotalCapacity(self)
    Property target to get the total capacity (used + available).
    source code
     
    _getUtilized(self)
    Property target to get the percent of capacity which is utilized.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __subclasshook__

    Properties
      bytesUsed
    Space used on disc, in bytes.
      bytesAvailable
    Space available on disc, in bytes.
      boundaries
    Session disc boundaries, in terms of ISO sectors.
      totalCapacity
    Total capacity of the disc, in bytes.
      utilized
    Percentage of the total capacity which is utilized.

    Inherited from object: __class__

    Method Details

    __init__(self, bytesUsed, bytesAvailable, boundaries)
    (Constructor)

    source code 

    Initializes a capacity object.

    Raises:
    • IndexError - If the boundaries tuple does not have enough elements.
    • ValueError - If the boundaries values are not integers.
    • ValueError - If the bytes used and available values are not floats.
    Overrides: object.__init__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    Property Details

    bytesUsed

    Space used on disc, in bytes.

    Get Method:
    _getBytesUsed(self) - Property target to get the bytes-used value.

    bytesAvailable

    Space available on disc, in bytes.

    Get Method:
    _getBytesAvailable(self) - Property target to get the bytes-available value.

    boundaries

    Session disc boundaries, in terms of ISO sectors.

    Get Method:
    _getBoundaries(self) - Property target to get the boundaries tuple.

    totalCapacity

    Total capacity of the disc, in bytes.

    Get Method:
    _getTotalCapacity(self) - Property target to get the total capacity (used + available).

    utilized

    Percentage of the total capacity which is utilized.

    Get Method:
    _getUtilized(self) - Property target to get the percent of capacity which is utilized.
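    The capacity arithmetic implied by these properties (total capacity is used plus available, utilization is the used fraction of the total) can be sketched with a small helper. `utilized_percentage` is hypothetical, not the `MediaCapacity` implementation, and the zero-total guard is an assumption.

    ```python
    def utilized_percentage(bytes_used, bytes_available):
        """Sketch of the documented capacity math: totalCapacity is
        bytesUsed + bytesAvailable, and utilized is the percentage of
        that total which is used."""
        total = bytes_used + bytes_available
        if total == 0.0:
            return 0.0  # assumed behavior for an empty/unreadable disc
        return (bytes_used / total) * 100.0
    ```
    
    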

    CedarBackup3-3.1.6/doc/interface/CedarBackup3-module.html: CedarBackup3
    Package CedarBackup3

    Package CedarBackup3

    source code

    Implements local and remote backups to CD or DVD media.

    Cedar Backup is a software package designed to manage system backups for a pool of local and remote machines. Cedar Backup understands how to back up filesystem data as well as MySQL and PostgreSQL databases and Subversion repositories. It can also be easily extended to support other kinds of data sources.

    Cedar Backup is focused around weekly backups to a single CD or DVD disc, with the expectation that the disc will be changed or overwritten at the beginning of each week. If your hardware is new enough, Cedar Backup can write multisession discs, allowing you to add incremental data to a disc on a daily basis.

    Besides offering command-line utilities to manage the backup process, Cedar Backup provides a well-organized library of backup-related functionality, written in the Python programming language.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Submodules

    Variables
      __package__ = None
    CedarBackup3-3.1.6/doc/interface/CedarBackup3.extend.mysql-pysrc.html: CedarBackup3.extend.mysql
    Package CedarBackup3 :: Package extend :: Module mysql

    Source Code for Module CedarBackup3.extend.mysql

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2005,2010,2015 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python 3 (>= 3.4) 
     29  # Project  : Official Cedar Backup Extensions 
     30  # Purpose  : Provides an extension to back up MySQL databases. 
     31  # 
     32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     33   
     34  ######################################################################## 
     35  # Module documentation 
     36  ######################################################################## 
     37   
     38  """ 
     39  Provides an extension to back up MySQL databases. 
     40   
     41  This is a Cedar Backup extension used to back up MySQL databases via the Cedar 
     42  Backup command line.  It requires a new configuration section <mysql> and is 
     43  intended to be run either immediately before or immediately after the standard 
     44  collect action.  Aside from its own configuration, it requires the options and 
     45  collect configuration sections in the standard Cedar Backup configuration file. 
     46   
     47  The backup is done via the C{mysqldump} command included with the MySQL 
     48  product.  Output can be compressed using C{gzip} or C{bzip2}.  Administrators 
     49  can configure the extension either to back up all databases or to back up only 
     50  specific databases.  Note that this code always produces a full backup.  There 
     51  is currently no facility for making incremental backups.  If/when someone has a 
     52  need for this and can describe how to do it, I'll update this extension or 
     53  provide another. 
     54   
     55  The extension assumes that all configured databases can be backed up by a 
     56  single user.  Often, the "root" database user will be used.  An alternative is 
     57  to create a separate MySQL "backup" user and grant that user rights to read 
     58  (but not write) various databases as needed.  This second option is probably 
     59  the best choice. 
     60   
     61  The extension accepts a username and password in configuration.  However, you 
     62  probably do not want to provide those values in Cedar Backup configuration. 
     63  This is because Cedar Backup will provide these values to C{mysqldump} via the 
     64  command-line C{--user} and C{--password} switches, which will be visible to 
     65  other users in the process listing. 
     66   
     67  Instead, you should configure the username and password in one of MySQL's 
     68  configuration files.  Typically, that would be done by putting a stanza like 
     69  this in C{/root/.my.cnf}:: 
     70   
     71     [mysqldump] 
     72     user     = root 
     73     password = <secret> 
     74   
     75  Regardless of whether you are using C{~/.my.cnf} or C{/etc/cback3.conf} to store 
     76  database login and password information, you should be careful about who is 
     77  allowed to view that information.  Typically, this means locking down 
     78  permissions so that only the file owner can read the file contents (i.e. use 
     79  mode C{0600}). 
     80   
     81  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     82  """ 
     83   
     84  ######################################################################## 
     85  # Imported modules 
     86  ######################################################################## 
     87   
     88  # System modules 
     89  import os 
     90  import logging 
     91  from gzip import GzipFile 
     92  from bz2 import BZ2File 
     93  from functools import total_ordering 
     94   
     95  # Cedar Backup modules 
     96  from CedarBackup3.xmlutil import createInputDom, addContainerNode, addStringNode, addBooleanNode 
     97  from CedarBackup3.xmlutil import readFirstChild, readString, readStringList, readBoolean 
     98  from CedarBackup3.config import VALID_COMPRESS_MODES 
     99  from CedarBackup3.util import resolveCommand, executeCommand 
    100  from CedarBackup3.util import ObjectTypeList, changeOwnership 
    101   
    102   
    103  ######################################################################## 
    104  # Module-wide constants and variables 
    105  ######################################################################## 
    106   
    107  logger = logging.getLogger("CedarBackup3.log.extend.mysql") 
    108  MYSQLDUMP_COMMAND = [ "mysqldump", ] 
    
    109 110 111 ######################################################################## 112 # MysqlConfig class definition 113 ######################################################################## 114 115 @total_ordering 116 -class MysqlConfig(object):
    117 118 """ 119 Class representing MySQL configuration. 120 121 The MySQL configuration information is used for backing up MySQL databases. 122 123 The following restrictions exist on data in this class: 124 125 - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. 126 - The 'all' flag must be 'Y' if no databases are defined. 127 - The 'all' flag must be 'N' if any databases are defined. 128 - Any values in the databases list must be strings. 129 130 @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, user, 131 password, all, databases 132 """ 133
    134 - def __init__(self, user=None, password=None, compressMode=None, all=None, databases=None): # pylint: disable=W0622
    135 """ 136 Constructor for the C{MysqlConfig} class. 137 138 @param user: User to execute backup as. 139 @param password: Password associated with user. 140 @param compressMode: Compress mode for backed-up files. 141 @param all: Indicates whether to back up all databases. 142 @param databases: List of databases to back up. 143 """ 144 self._user = None 145 self._password = None 146 self._compressMode = None 147 self._all = None 148 self._databases = None 149 self.user = user 150 self.password = password 151 self.compressMode = compressMode 152 self.all = all 153 self.databases = databases
    154
    155 - def __repr__(self):
    156 """ 157 Official string representation for class instance. 158 """ 159 return "MysqlConfig(%s, %s, %s, %s)" % (self.user, self.password, self.all, self.databases)
    160
    161 - def __str__(self):
    162 """ 163 Informal string representation for class instance. 164 """ 165 return self.__repr__()
    166
    167 - def __eq__(self, other):
    168 """Equals operator, iplemented in terms of original Python 2 compare operator.""" 169 return self.__cmp__(other) == 0
    170
    171 - def __lt__(self, other):
    172 """Less-than operator, iplemented in terms of original Python 2 compare operator.""" 173 return self.__cmp__(other) < 0
    174
    175 - def __gt__(self, other):
    176 """Greater-than operator, iplemented in terms of original Python 2 compare operator.""" 177 return self.__cmp__(other) > 0
    178
    179 - def __cmp__(self, other):
    180 """ 181 Original Python 2 comparison operator. 182 @param other: Other object to compare to. 183 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 184 """ 185 if other is None: 186 return 1 187 if self.user != other.user: 188 if str(self.user or "") < str(other.user or ""): 189 return -1 190 else: 191 return 1 192 if self.password != other.password: 193 if str(self.password or "") < str(other.password or ""): 194 return -1 195 else: 196 return 1 197 if self.compressMode != other.compressMode: 198 if str(self.compressMode or "") < str(other.compressMode or ""): 199 return -1 200 else: 201 return 1 202 if self.all != other.all: 203 if self.all < other.all: 204 return -1 205 else: 206 return 1 207 if self.databases != other.databases: 208 if self.databases < other.databases: 209 return -1 210 else: 211 return 1 212 return 0
    213
    214 - def _setUser(self, value):
    215 """ 216 Property target used to set the user value. 217 """ 218 if value is not None: 219 if len(value) < 1: 220 raise ValueError("User must be non-empty string.") 221 self._user = value
    222
    223 - def _getUser(self):
    224 """ 225 Property target used to get the user value. 226 """ 227 return self._user
    228
    229 - def _setPassword(self, value):
    230 """ 231 Property target used to set the password value. 232 """ 233 if value is not None: 234 if len(value) < 1: 235 raise ValueError("Password must be non-empty string.") 236 self._password = value
    237
    238 - def _getPassword(self):
    239 """ 240 Property target used to get the password value. 241 """ 242 return self._password
    243
    244 - def _setCompressMode(self, value):
    245 """ 246 Property target used to set the compress mode. 247 If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. 248 @raise ValueError: If the value is not valid. 249 """ 250 if value is not None: 251 if value not in VALID_COMPRESS_MODES: 252 raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) 253 self._compressMode = value
    254
    255 - def _getCompressMode(self):
    256 """ 257 Property target used to get the compress mode. 258 """ 259 return self._compressMode
    260
    261 - def _setAll(self, value):
    262 """ 263 Property target used to set the 'all' flag. 264 No validations, but we normalize the value to C{True} or C{False}. 265 """ 266 if value: 267 self._all = True 268 else: 269 self._all = False
    270
    271 - def _getAll(self):
    272 """ 273 Property target used to get the 'all' flag. 274 """ 275 return self._all
    276
    277 - def _setDatabases(self, value):
    278 """ 279 Property target used to set the databases list. 280 Either the value must be C{None} or each element must be a string. 281 @raise ValueError: If the value is not a string. 282 """ 283 if value is None: 284 self._databases = None 285 else: 286 for database in value: 287 if len(database) < 1: 288 raise ValueError("Each database must be a non-empty string.") 289 try: 290 saved = self._databases 291 self._databases = ObjectTypeList(str, "string") 292 self._databases.extend(value) 293 except Exception as e: 294 self._databases = saved 295 raise e
    296
    297 - def _getDatabases(self):
    298 """ 299 Property target used to get the databases list. 300 """ 301 return self._databases
    302 303 user = property(_getUser, _setUser, None, "User to execute backup as.") 304 password = property(_getPassword, _setPassword, None, "Password associated with user.") 305 compressMode = property(_getCompressMode, _setCompressMode, None, "Compress mode to be used for backed-up files.") 306 all = property(_getAll, _setAll, None, "Indicates whether to back up all databases.") 307 databases = property(_getDatabases, _setDatabases, None, "List of databases to back up.") 308
    309 310 ######################################################################## 311 # LocalConfig class definition 312 ######################################################################## 313 314 @total_ordering 315 -class LocalConfig(object):
    316 317 """ 318 Class representing this extension's configuration document. 319 320 This is not a general-purpose configuration object like the main Cedar 321 Backup configuration object. Instead, it just knows how to parse and emit 322 MySQL-specific configuration values. Third parties who need to read and 323 write configuration related to this extension should access it through the 324 constructor, C{validate} and C{addConfig} methods. 325 326 @note: Lists within this class are "unordered" for equality comparisons. 327 328 @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, mysql, 329 validate, addConfig 330 """ 331
    332 - def __init__(self, xmlData=None, xmlPath=None, validate=True):
    333 """ 334 Initializes a configuration object. 335 336 If you initialize the object without passing either C{xmlData} or 337 C{xmlPath} then configuration will be empty and will be invalid until it 338 is filled in properly. 339 340 No reference to the original XML data or original path is saved off by 341 this class. Once the data has been parsed (successfully or not) this 342 original information is discarded. 343 344 Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} 345 method will be called (with its default arguments) against configuration 346 after successfully parsing any passed-in XML. Keep in mind that even if 347 C{validate} is C{False}, it might not be possible to parse the passed-in 348 XML document if lower-level validations fail. 349 350 @note: It is strongly suggested that the C{validate} option always be set 351 to C{True} (the default) unless there is a specific need to read in 352 invalid configuration from disk. 353 354 @param xmlData: XML data representing configuration. 355 @type xmlData: String data. 356 357 @param xmlPath: Path to an XML file on disk. 358 @type xmlPath: Absolute path to a file on disk. 359 360 @param validate: Validate the document after parsing it. 361 @type validate: Boolean true/false. 362 363 @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. 364 @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. 365 @raise ValueError: If the parsed configuration document is not valid. 366 """ 367 self._mysql = None 368 self.mysql = None 369 if xmlData is not None and xmlPath is not None: 370 raise ValueError("Use either xmlData or xmlPath, but not both.") 371 if xmlData is not None: 372 self._parseXmlData(xmlData) 373 if validate: 374 self.validate() 375 elif xmlPath is not None: 376 with open(xmlPath) as f: 377 xmlData = f.read() 378 self._parseXmlData(xmlData) 379 if validate: 380 self.validate()
    381
    382 - def __repr__(self):
    383 """ 384 Official string representation for class instance. 385 """ 386 return "LocalConfig(%s)" % (self.mysql)
    387
    388 - def __str__(self):
    389 """ 390 Informal string representation for class instance. 391 """ 392 return self.__repr__()
    393
    394 - def __eq__(self, other):
    395 """Equals operator, iplemented in terms of original Python 2 compare operator.""" 396 return self.__cmp__(other) == 0
    397
    398 - def __lt__(self, other):
    399 """Less-than operator, iplemented in terms of original Python 2 compare operator.""" 400 return self.__cmp__(other) < 0
    401
    402 - def __gt__(self, other):
    403 """Greater-than operator, iplemented in terms of original Python 2 compare operator.""" 404 return self.__cmp__(other) > 0
    405
    406 - def __cmp__(self, other):
    407 """ 408 Original Python 2 comparison operator. 409 Lists within this class are "unordered" for equality comparisons. 410 @param other: Other object to compare to. 411 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 412 """ 413 if other is None: 414 return 1 415 if self.mysql != other.mysql: 416 if self.mysql < other.mysql: 417 return -1 418 else: 419 return 1 420 return 0
    421
    422 - def _setMysql(self, value):
    423 """ 424 Property target used to set the mysql configuration value. 425 If not C{None}, the value must be a C{MysqlConfig} object. 426 @raise ValueError: If the value is not a C{MysqlConfig} 427 """ 428 if value is None: 429 self._mysql = None 430 else: 431 if not isinstance(value, MysqlConfig): 432 raise ValueError("Value must be a C{MysqlConfig} object.") 433 self._mysql = value
    434
    435 - def _getMysql(self):
    436 """ 437 Property target used to get the mysql configuration value. 438 """ 439 return self._mysql
    440 441 mysql = property(_getMysql, _setMysql, None, "Mysql configuration in terms of a C{MysqlConfig} object.") 442
    443 - def validate(self):
    444 """ 445 Validates configuration represented by the object. 446 447 The compress mode must be filled in. Then, if the 'all' flag I{is} set, 448 no databases are allowed, and if the 'all' flag is I{not} set, at least 449 one database is required. 450 451 @raise ValueError: If one of the validations fails. 452 """ 453 if self.mysql is None: 454 raise ValueError("Mysql section is required.") 455 if self.mysql.compressMode is None: 456 raise ValueError("Compress mode value is required.") 457 if self.mysql.all: 458 if self.mysql.databases is not None and self.mysql.databases != []: 459 raise ValueError("Databases cannot be specified if 'all' flag is set.") 460 else: 461 if self.mysql.databases is None or len(self.mysql.databases) < 1: 462 raise ValueError("At least one MySQL database must be indicated if 'all' flag is not set.")
    463
    464 - def addConfig(self, xmlDom, parentNode):
    465 """ 466 Adds a <mysql> configuration section as the next child of a parent. 467 468 Third parties should use this function to write configuration related to 469 this extension. 470 471 We add the following fields to the document:: 472 473 user //cb_config/mysql/user 474 password //cb_config/mysql/password 475 compressMode //cb_config/mysql/compress_mode 476 all //cb_config/mysql/all 477 478 We also add groups of the following items, one list element per 479 item:: 480 481 database //cb_config/mysql/database 482 483 @param xmlDom: DOM tree as from C{impl.createDocument()}. 484 @param parentNode: Parent that the section should be appended to. 485 """ 486 if self.mysql is not None: 487 sectionNode = addContainerNode(xmlDom, parentNode, "mysql") 488 addStringNode(xmlDom, sectionNode, "user", self.mysql.user) 489 addStringNode(xmlDom, sectionNode, "password", self.mysql.password) 490 addStringNode(xmlDom, sectionNode, "compress_mode", self.mysql.compressMode) 491 addBooleanNode(xmlDom, sectionNode, "all", self.mysql.all) 492 if self.mysql.databases is not None: 493 for database in self.mysql.databases: 494 addStringNode(xmlDom, sectionNode, "database", database)
    495
    496 - def _parseXmlData(self, xmlData):
    497 """ 498 Internal method to parse an XML string into the object. 499 500 This method parses the XML document into a DOM tree (C{xmlDom}) and then 501 calls a static method to parse the mysql configuration section. 502 503 @param xmlData: XML data to be parsed 504 @type xmlData: String data 505 506 @raise ValueError: If the XML cannot be successfully parsed. 507 """ 508 (xmlDom, parentNode) = createInputDom(xmlData) 509 self._mysql = LocalConfig._parseMysql(parentNode)
    510 511 @staticmethod
    512 - def _parseMysql(parentNode):
    513 """ 514 Parses a mysql configuration section. 515 516 We read the following fields:: 517 518 user //cb_config/mysql/user 519 password //cb_config/mysql/password 520 compressMode //cb_config/mysql/compress_mode 521 all //cb_config/mysql/all 522 523 We also read groups of the following item, one list element per 524 item:: 525 526 databases //cb_config/mysql/database 527 528 @param parentNode: Parent node to search beneath. 529 530 @return: C{MysqlConfig} object or C{None} if the section does not exist. 531 @raise ValueError: If some filled-in value is invalid. 532 """ 533 mysql = None 534 section = readFirstChild(parentNode, "mysql") 535 if section is not None: 536 mysql = MysqlConfig() 537 mysql.user = readString(section, "user") 538 mysql.password = readString(section, "password") 539 mysql.compressMode = readString(section, "compress_mode") 540 mysql.all = readBoolean(section, "all") 541 mysql.databases = readStringList(section, "database") 542 return mysql
    543
    544 545 ######################################################################## 546 # Public functions 547 ######################################################################## 548 549 ########################### 550 # executeAction() function 551 ########################### 552 553 -def executeAction(configPath, options, config):
    554 """ 555 Executes the MySQL backup action. 556 557 @param configPath: Path to configuration file on disk. 558 @type configPath: String representing a path on disk. 559 560 @param options: Program command-line options. 561 @type options: Options object. 562 563 @param config: Program configuration. 564 @type config: Config object. 565 566 @raise ValueError: Under many generic error conditions 567 @raise IOError: If a backup could not be written for some reason. 568 """ 569 logger.debug("Executing MySQL extended action.") 570 if config.options is None or config.collect is None: 571 raise ValueError("Cedar Backup configuration is not properly filled in.") 572 local = LocalConfig(xmlPath=configPath) 573 if local.mysql.all: 574 logger.info("Backing up all databases.") 575 _backupDatabase(config.collect.targetDir, local.mysql.compressMode, local.mysql.user, local.mysql.password, 576 config.options.backupUser, config.options.backupGroup, None) 577 else: 578 logger.debug("Backing up %d individual databases.", len(local.mysql.databases)) 579 for database in local.mysql.databases: 580 logger.info("Backing up database [%s].", database) 581 _backupDatabase(config.collect.targetDir, local.mysql.compressMode, local.mysql.user, local.mysql.password, 582 config.options.backupUser, config.options.backupGroup, database) 583 logger.info("Executed the MySQL extended action successfully.")
    584
    585 -def _backupDatabase(targetDir, compressMode, user, password, backupUser, backupGroup, database=None):
    586 """ 587 Backs up an individual MySQL database, or all databases. 588 589 This internal method wraps the public method and adds some functionality, 590 like figuring out a filename, etc. 591 592 @param targetDir: Directory into which backups should be written. 593 @param compressMode: Compress mode to be used for backed-up files. 594 @param user: User to use for connecting to the database (if any). 595 @param password: Password associated with user (if any). 596 @param backupUser: User to own resulting file. 597 @param backupGroup: Group to own resulting file. 598 @param database: Name of database, or C{None} for all databases. 599 600 @return: Name of the generated backup file. 601 602 @raise ValueError: If some value is missing or invalid. 603 @raise IOError: If there is a problem executing the MySQL dump. 604 """ 605 (outputFile, filename) = _getOutputFile(targetDir, database, compressMode) 606 with outputFile: 607 backupDatabase(user, password, outputFile, database) 608 if not os.path.exists(filename): 609 raise IOError("Dump file [%s] does not seem to exist after backup completed." % filename) 610 changeOwnership(filename, backupUser, backupGroup)
    611
    612 #pylint: disable=R0204 613 -def _getOutputFile(targetDir, database, compressMode):
    614 """ 615 Opens the output file used for saving the MySQL dump. 616 617 The filename is either C{"mysqldump.txt"} or C{"mysqldump-<database>.txt"}. The 618 C{".bz2"} extension is added if C{compress} is C{True}. 619 620 @param targetDir: Target directory to write file in. 621 @param database: Name of the database (if any) 622 @param compressMode: Compress mode to be used for backed-up files. 623 624 @return: Tuple of (Output file object, filename), file opened in binary mode for use with executeCommand() 625 """ 626 if database is None: 627 filename = os.path.join(targetDir, "mysqldump.txt") 628 else: 629 filename = os.path.join(targetDir, "mysqldump-%s.txt" % database) 630 if compressMode == "gzip": 631 filename = "%s.gz" % filename 632 outputFile = GzipFile(filename, "wb") 633 elif compressMode == "bzip2": 634 filename = "%s.bz2" % filename 635 outputFile = BZ2File(filename, "wb") 636 else: 637 outputFile = open(filename, "wb") 638 logger.debug("MySQL dump file will be [%s].", filename) 639 return (outputFile, filename)
    640
    641   
    642  ############################ 
    643  # backupDatabase() function 
    644  ############################ 
    645   
    646 -def backupDatabase(user, password, backupFile, database=None):
    647     """ 
    648     Backs up an individual MySQL database, or all databases. 
    649   
    650     This function backs up either a named local MySQL database or all local 
    651     MySQL databases, using the passed-in user and password (if provided) for 
    652     connectivity. This function call I{always} results in a full backup. There is 
    653     no facility for incremental backups. 
    654   
    655     The backup data will be written into the passed-in backup file. Normally, 
    656     this would be an object as returned from C{open()}, but it is possible to 
    657     use something like a C{GzipFile} to write compressed output. The caller is 
    658     responsible for closing the passed-in backup file. 
    659   
    660     Often, the "root" database user will be used when backing up all databases. 
    661     An alternative is to create a separate MySQL "backup" user and grant that 
    662     user rights to read (but not write) all of the databases that will be backed 
    663     up. 
    664   
    665     This function accepts a username and password. However, you probably do not 
    666     want to pass those values in. This is because they will be provided to 
    667     C{mysqldump} via the command-line C{--user} and C{--password} switches, 
    668     which will be visible to other users in the process listing. 
    669   
    670     Instead, you should configure the username and password in one of MySQL's 
    671     configuration files. Typically, this would be done by putting a stanza like 
    672     this in C{/root/.my.cnf}, to provide C{mysqldump} with the root database 
    673     username and its password:: 
    674   
    675        [mysqldump] 
    676        user = root 
    677        password = <secret> 
    678   
    679     If you are executing this function as some system user other than root, then 
    680     the C{.my.cnf} file would be placed in the home directory of that user. In 
    681     either case, make sure to set restrictive permissions (typically, mode 
    682     C{0600}) on C{.my.cnf} to make sure that other users cannot read the file. 
    683   
    684     @param user: User to use for connecting to the database (if any) 
    685     @type user: String representing MySQL username, or C{None} 
    686   
    687     @param password: Password associated with user (if any) 
    688     @type password: String representing MySQL password, or C{None} 
    689   
    690     @param backupFile: File to use for writing the backup 
    691     @type backupFile: Python file object as from C{open()} 
    692   
    693     @param database: Name of the database to be backed up. 
    694     @type database: String representing database name, or C{None} for all databases. 
    695   
    696     @raise ValueError: If some value is missing or invalid. 
    697     @raise IOError: If there is a problem executing the MySQL dump. 
    698     """ 
    699     args = [ "-all", "--flush-logs", "--opt", ] 
    700     if user is not None: 
    701        logger.warning("Warning: MySQL username will be visible in process listing (consider using ~/.my.cnf).") 
    702        args.append("--user=%s" % user) 
    703     if password is not None: 
    704        logger.warning("Warning: MySQL password will be visible in process listing (consider using ~/.my.cnf).") 
    705        args.append("--password=%s" % password) 
    706     if database is None: 
    707        args.insert(0, "--all-databases") 
    708     else: 
    709        args.insert(0, "--databases") 
    710        args.append(database) 
    711     command = resolveCommand(MYSQLDUMP_COMMAND) 
    712     result = executeCommand(command, args, returnOutput=False, ignoreStderr=True, doNotLog=True, outputFile=backupFile)[0] 
    713     if result != 0: 
    714        if database is None: 
    715           raise IOError("Error [%d] executing MySQL database dump for all databases." % result) 
    716        else: 
    717           raise IOError("Error [%d] executing MySQL database dump for database [%s]." % (result, database)) 
    718  
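The credential warning above shows up directly in the argument handling: user and password are appended to the mysqldump command line only when explicitly passed. A standalone sketch of that argument assembly (buildMysqldumpArgs is a hypothetical helper for illustration, not part of Cedar Backup):

```python
# Hypothetical helper mirroring how backupDatabase() assembles the
# mysqldump argument list; it is not part of the Cedar Backup library.
def buildMysqldumpArgs(user=None, password=None, database=None):
    args = ["-all", "--flush-logs", "--opt"]
    if user is not None:
        args.append("--user=%s" % user)          # visible in process listing
    if password is not None:
        args.append("--password=%s" % password)  # visible in process listing
    if database is None:
        args.insert(0, "--all-databases")        # dump every local database
    else:
        args.insert(0, "--databases")            # dump just the named database
        args.append(database)
    return args

print(buildMysqldumpArgs())                  # no credentials, all databases
print(buildMysqldumpArgs(database="mydb"))   # single named database
```

This makes it easy to see why configuring credentials in .my.cnf is preferable: omitting user and password keeps them off the command line entirely.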

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.config.PurgeConfig-class.html0000664000175000017500000005513312657665544030125 0ustar pronovicpronovic00000000000000 CedarBackup3.config.PurgeConfig
    Package CedarBackup3 :: Module config :: Class PurgeConfig

    Class PurgeConfig

    source code

    object --+
             |
            PurgeConfig
    

    Class representing a Cedar Backup purge configuration.

    The following restrictions exist on data in this class:

    • The purge directory list must be a list of PurgeDir objects.

    For the purgeDirs list, validation is accomplished through the util.ObjectTypeList list implementation that overrides common list methods and transparently ensures that each element is a PurgeDir.
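The type-checking list idea can be reduced to a few lines. This TypedList is an illustrative stand-in, not the actual util.ObjectTypeList implementation, whose interface may differ:

```python
# Illustrative sketch of a type-checking list in the spirit of
# util.ObjectTypeList: mutators are overridden so that every element
# must be an instance of a single configured type.
class TypedList(list):
    def __init__(self, objectType, objectName):
        super().__init__()
        self._objectType = objectType
        self._objectName = objectName

    def append(self, item):
        if not isinstance(item, self._objectType):
            raise ValueError("Item must be a %s." % self._objectName)
        super().append(item)

    def extend(self, items):
        for item in items:   # route everything through the checked append
            self.append(item)

dirs = TypedList(str, "string")   # the real list would require PurgeDir, not str
dirs.append("/var/log/old")
try:
    dirs.append(42)               # wrong type, rejected
except ValueError:
    pass
```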


    Note: Lists within this class are "unordered" for equality comparisons.

    Instance Methods [hide private]
     
    __init__(self, purgeDirs=None)
Constructor for the PurgeConfig class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    __eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    _setPurgeDirs(self, value)
    Property target used to set the purge dirs list.
    source code
     
    _getPurgeDirs(self)
    Property target used to get the purge dirs list.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties [hide private]
      purgeDirs
    List of directories to purge.

    Inherited from object: __class__

    Method Details [hide private]

    __init__(self, purgeDirs=None)
    (Constructor)

    source code 

Constructor for the PurgeConfig class.

    Parameters:
    • purgeDirs - List of purge directories.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setPurgeDirs(self, value)

    source code 

    Property target used to set the purge dirs list. Either the value must be None or each element must be a PurgeDir.

    Raises:
    • ValueError - If the value is not a PurgeDir

    Property Details [hide private]

    purgeDirs

    List of directories to purge.

    Get Method:
    _getPurgeDirs(self) - Property target used to get the purge dirs list.
    Set Method:
    _setPurgeDirs(self, value) - Property target used to set the purge dirs list.

    CedarBackup3-3.1.6/doc/interface/toc-CedarBackup3.writers-module.html0000664000175000017500000000216212657665544027105 0ustar pronovicpronovic00000000000000 writers

    Module writers


    Variables


    [hide private] CedarBackup3-3.1.6/doc/interface/CedarBackup3.action-pysrc.html0000664000175000017500000004520712657665545025763 0ustar pronovicpronovic00000000000000 CedarBackup3.action
    Package CedarBackup3 :: Module action

    Source Code for Module CedarBackup3.action

     1  # -*- coding: iso-8859-1 -*- 
     2  # vim: set ft=python ts=3 sw=3 expandtab: 
     3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     4  # 
     5  #              C E D A R 
     6  #          S O L U T I O N S       "Software done right." 
     7  #           S O F T W A R E 
     8  # 
     9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    10  # 
    11  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
    12  # Language : Python 3 (>= 3.4) 
    13  # Project  : Cedar Backup, release 3 
    14  # Purpose  : Provides implementation of various backup-related actions. 
    15  # 
    16  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    17   
    18  ######################################################################## 
    19  # Module documentation 
    20  ######################################################################## 
    21   
    22  """ 
    23  Provides interface backwards compatibility. 
    24   
    25  In Cedar Backup 2.10.0, a refactoring effort took place to reorganize the code 
    26  for the standard actions.  The code formerly in action.py was split into 
    27  various other files in the CedarBackup3.actions package.  This mostly-empty 
    28  file remains to preserve the Cedar Backup library interface. 
    29   
    30  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
    31  """ 
    32   
    33  ######################################################################## 
    34  # Imported modules 
    35  ######################################################################## 
    36   
    37  # pylint: disable=W0611 
    38  from CedarBackup3.actions.collect import executeCollect 
    39  from CedarBackup3.actions.stage import executeStage 
    40  from CedarBackup3.actions.store import executeStore 
    41  from CedarBackup3.actions.purge import executePurge 
    42  from CedarBackup3.actions.rebuild import executeRebuild 
    43  from CedarBackup3.actions.validate import executeValidate 
    44   
    

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.customize-pysrc.html0000664000175000017500000006635712657665546026542 0ustar pronovicpronovic00000000000000 CedarBackup3.customize
    Package CedarBackup3 :: Module customize

    Source Code for Module CedarBackup3.customize

     1  # -*- coding: iso-8859-1 -*- 
     2  # vim: set ft=python ts=3 sw=3 expandtab: 
     3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     4  # 
     5  #              C E D A R 
     6  #          S O L U T I O N S       "Software done right." 
     7  #           S O F T W A R E 
     8  # 
     9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    10  # 
    11  # Copyright (c) 2010,2015 Kenneth J. Pronovici. 
    12  # All rights reserved. 
    13  # 
    14  # This program is free software; you can redistribute it and/or 
    15  # modify it under the terms of the GNU General Public License, 
    16  # Version 2, as published by the Free Software Foundation. 
    17  # 
    18  # This program is distributed in the hope that it will be useful, 
    19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
    20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
    21  # 
    22  # Copies of the GNU General Public License are available from 
    23  # the Free Software Foundation website, http://www.gnu.org/. 
    24  # 
    25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    26  # 
    27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
    28  # Language : Python 3 (>= 3.4) 
    29  # Project  : Cedar Backup, release 3 
    30  # Purpose  : Implements customized behavior. 
    31  # 
    32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    33   
    34  ######################################################################## 
    35  # Module documentation 
    36  ######################################################################## 
    37   
    38  """ 
    39  Implements customized behavior. 
    40   
    41  Some behaviors need to vary when packaged for certain platforms.  For instance, 
    42  while Cedar Backup generally uses cdrecord and mkisofs, Debian ships compatible 
    43  utilities called wodim and genisoimage. I want there to be one single place 
    44  where Cedar Backup is patched for Debian, rather than having to maintain a 
    45  variety of patches in different places. 
    46   
    47  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
    48  """ 
    49   
    50  ######################################################################## 
    51  # Imported modules 
    52  ######################################################################## 
    53   
    54  # System modules 
    55  import logging 
    56   
    57   
    58  ######################################################################## 
    59  # Module-wide constants and variables 
    60  ######################################################################## 
    61   
    62  logger = logging.getLogger("CedarBackup3.log.customize") 
    63   
    64  PLATFORM = "standard" 
    65  #PLATFORM = "debian" 
    66   
    67  DEBIAN_CDRECORD = "/usr/bin/wodim" 
    68  DEBIAN_MKISOFS = "/usr/bin/genisoimage" 
    69   
    70   
    71  ####################################################################### 
    72  # Public functions 
    73  ####################################################################### 
    74   
    75  ################################ 
    76  # customizeOverrides() function 
    77  ################################ 
    78   
    
    79 -def customizeOverrides(config, platform=PLATFORM):
     80     """ 
     81     Modify command overrides based on the configured platform. 
     82   
     83     On some platforms, we want to add command overrides to configuration. Each 
     84     override will only be added if the configuration does not already contain an 
     85     override with the same name. That way, the user still has a way to choose 
     86     their own version of the command if they want. 
     87   
     88     @param config: Configuration to modify 
     89     @param platform: Platform that is in use 
     90     """ 
     91     if platform == "debian": 
     92        logger.info("Overriding cdrecord for Debian platform: %s", DEBIAN_CDRECORD) 
     93        config.options.addOverride("cdrecord", DEBIAN_CDRECORD) 
     94        logger.info("Overriding mkisofs for Debian platform: %s", DEBIAN_MKISOFS) 
     95        config.options.addOverride("mkisofs", DEBIAN_MKISOFS) 
     96  
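The only-if-absent rule described in the docstring can be sketched with a plain dictionary. The addOverride function here is a hypothetical stand-in; in Cedar Backup the override lives on the options configuration object:

```python
# Hypothetical sketch of the override rule: a platform default is added
# only when the user has not already configured their own override.
def addOverride(overrides, command, absolutePath):
    if command not in overrides:
        overrides[command] = absolutePath

overrides = {"cdrecord": "/opt/local/bin/cdrecord"}        # user's own choice
addOverride(overrides, "cdrecord", "/usr/bin/wodim")       # kept as-is
addOverride(overrides, "mkisofs", "/usr/bin/genisoimage")  # added
print(overrides)
```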

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.extend.encrypt.LocalConfig-class.html0000664000175000017500000010360512657665544031600 0ustar pronovicpronovic00000000000000 CedarBackup3.extend.encrypt.LocalConfig
    Package CedarBackup3 :: Package extend :: Module encrypt :: Class LocalConfig

    Class LocalConfig

    source code

    object --+
             |
            LocalConfig
    

    Class representing this extension's configuration document.

    This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit encrypt-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, validate and addConfig methods.


    Note: Lists within this class are "unordered" for equality comparisons.

    Instance Methods [hide private]
     
    __init__(self, xmlData=None, xmlPath=None, validate=True)
    Initializes a configuration object.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    __eq__(self, other)
Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    validate(self)
    Validates configuration represented by the object.
    source code
     
    addConfig(self, xmlDom, parentNode)
    Adds an <encrypt> configuration section as the next child of a parent.
    source code
     
    _setEncrypt(self, value)
    Property target used to set the encrypt configuration value.
    source code
     
    _getEncrypt(self)
    Property target used to get the encrypt configuration value.
    source code
     
    _parseXmlData(self, xmlData)
    Internal method to parse an XML string into the object.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Static Methods [hide private]
     
    _parseEncrypt(parent)
    Parses an encrypt configuration section.
    source code
    Properties [hide private]
      encrypt
Encrypt configuration in terms of an EncryptConfig object.

    Inherited from object: __class__

    Method Details [hide private]

    __init__(self, xmlData=None, xmlPath=None, validate=True)
    (Constructor)

    source code 

    Initializes a configuration object.

If you initialize the object without passing either xmlData or xmlPath, then the configuration will be empty and invalid until it is filled in properly.

    No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded.

    Unless the validate argument is False, the LocalConfig.validate method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if validate is False, it might not be possible to parse the passed-in XML document if lower-level validations fail.

    Parameters:
    • xmlData (String data.) - XML data representing configuration.
    • xmlPath (Absolute path to a file on disk.) - Path to an XML file on disk.
    • validate (Boolean true/false.) - Validate the document after parsing it.
    Raises:
    • ValueError - If both xmlData and xmlPath are passed-in.
    • ValueError - If the XML data in xmlData or xmlPath cannot be parsed.
    • ValueError - If the parsed configuration document is not valid.
    Overrides: object.__init__

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to read in invalid configuration from disk.

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    validate(self)

    source code 

    Validates configuration represented by the object.

    Encrypt configuration must be filled in. Within that, both the encrypt mode and encrypt target must be filled in.

    Raises:
    • ValueError - If one of the validations fails.

    addConfig(self, xmlDom, parentNode)

    source code 

    Adds an <encrypt> configuration section as the next child of a parent.

    Third parties should use this function to write configuration related to this extension.

    We add the following fields to the document:

      encryptMode    //cb_config/encrypt/encrypt_mode
      encryptTarget  //cb_config/encrypt/encrypt_target
    
    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent that the section should be appended to.

    _setEncrypt(self, value)

    source code 

Property target used to set the encrypt configuration value. If not None, the value must be an EncryptConfig object.

    Raises:
• ValueError - If the value is not an EncryptConfig

    _parseXmlData(self, xmlData)

    source code 

    Internal method to parse an XML string into the object.

    This method parses the XML document into a DOM tree (xmlDom) and then calls a static method to parse the encrypt configuration section.

    Parameters:
    • xmlData (String data) - XML data to be parsed
    Raises:
    • ValueError - If the XML cannot be successfully parsed.

    _parseEncrypt(parent)
    Static Method

    source code 

    Parses an encrypt configuration section.

    We read the following individual fields:

      encryptMode    //cb_config/encrypt/encrypt_mode
      encryptTarget  //cb_config/encrypt/encrypt_target
    
    Parameters:
    • parent - Parent node to search beneath.
    Returns:
    EncryptConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    Property Details [hide private]

    encrypt

Encrypt configuration in terms of an EncryptConfig object.

    Get Method:
    _getEncrypt(self) - Property target used to get the encrypt configuration value.
    Set Method:
    _setEncrypt(self, value) - Property target used to set the encrypt configuration value.

    CedarBackup3-3.1.6/doc/interface/toc-CedarBackup3.actions.util-module.html0000664000175000017500000000466712657665544030036 0ustar pronovicpronovic00000000000000 util

    Module util


    Functions

    buildMediaLabel
    checkMediaState
    createWriter
    findDailyDirs
    getBackupFiles
    initializeMediaState
    writeIndicatorFile

    Variables

    MEDIA_LABEL_PREFIX
    __package__
    logger

    [hide private] CedarBackup3-3.1.6/doc/interface/CedarBackup3.extend.subversion-pysrc.html0000664000175000017500000232760212657665547030201 0ustar pronovicpronovic00000000000000 CedarBackup3.extend.subversion
    Package CedarBackup3 :: Package extend :: Module subversion

    Source Code for Module CedarBackup3.extend.subversion

       1  # -*- coding: iso-8859-1 -*- 
       2  # vim: set ft=python ts=3 sw=3 expandtab: 
       3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
       4  # 
       5  #              C E D A R 
       6  #          S O L U T I O N S       "Software done right." 
       7  #           S O F T W A R E 
       8  # 
       9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      10  # 
      11  # Copyright (c) 2005,2007,2010,2015 Kenneth J. Pronovici. 
      12  # All rights reserved. 
      13  # 
      14  # This program is free software; you can redistribute it and/or 
      15  # modify it under the terms of the GNU General Public License, 
      16  # Version 2, as published by the Free Software Foundation. 
      17  # 
      18  # This program is distributed in the hope that it will be useful, 
      19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
      20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
      21  # 
      22  # Copies of the GNU General Public License are available from 
      23  # the Free Software Foundation website, http://www.gnu.org/. 
      24  # 
      25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      26  # 
      27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
      28  # Language : Python 3 (>= 3.4) 
      29  # Project  : Official Cedar Backup Extensions 
      30  # Purpose  : Provides an extension to back up Subversion repositories. 
      31  # 
      32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      33   
      34  ######################################################################## 
      35  # Module documentation 
      36  ######################################################################## 
      37   
      38  """ 
      39  Provides an extension to back up Subversion repositories. 
      40   
      41  This is a Cedar Backup extension used to back up Subversion repositories via 
   42  the Cedar Backup command line.  Each Subversion repository can be backed up using 
      43  the same collect modes allowed for filesystems in the standard Cedar Backup 
      44  collect action: weekly, daily, incremental. 
      45   
      46  This extension requires a new configuration section <subversion> and is 
      47  intended to be run either immediately before or immediately after the standard 
      48  collect action.  Aside from its own configuration, it requires the options and 
      49  collect configuration sections in the standard Cedar Backup configuration file. 
      50   
      51  There are two different kinds of Subversion repositories at this writing: BDB 
      52  (Berkeley Database) and FSFS (a "filesystem within a filesystem").  Although 
      53  the repository type can be specified in configuration, that information is just 
      54  kept around for reference.  It doesn't affect the backup.  Both kinds of 
      55  repositories are backed up in the same way, using C{svnadmin dump} in an 
      56  incremental mode. 
      57   
      58  It turns out that FSFS repositories can also be backed up just like any 
      59  other filesystem directory.  If you would rather do that, then use the normal 
      60  collect action.  This is probably simpler, although it carries its own 
      61  advantages and disadvantages (plus you will have to be careful to exclude 
      62  the working directories Subversion uses when building an update to commit). 
      63  Check the Subversion documentation for more information. 
      64   
      65  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
      66  """ 
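Per the docstring, repositories are dumped incrementally with svnadmin dump. A hypothetical sketch of such a command line follows; the exact flags the extension passes live later in this module and may differ:

```python
# Hypothetical sketch of an incremental "svnadmin dump" command line,
# as described in the module docstring; not the extension's actual code.
def buildDumpArgs(repositoryPath, startRevision, endRevision):
    return ["dump", "--quiet",
            "-r%d:%d" % (startRevision, endRevision),  # revision range
            "--incremental", repositoryPath]

print(buildDumpArgs("/srv/svn/repo", 10, 42))
```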
      67   
      68  ######################################################################## 
      69  # Imported modules 
      70  ######################################################################## 
      71   
      72  # System modules 
      73  import os 
      74  import logging 
      75  import pickle 
      76  from bz2 import BZ2File 
      77  from gzip import GzipFile 
      78  from functools import total_ordering 
      79   
      80  # Cedar Backup modules 
      81  from CedarBackup3.xmlutil import createInputDom, addContainerNode, addStringNode 
      82  from CedarBackup3.xmlutil import isElement, readChildren, readFirstChild, readString, readStringList 
      83  from CedarBackup3.config import VALID_COLLECT_MODES, VALID_COMPRESS_MODES 
      84  from CedarBackup3.filesystem import FilesystemList 
      85  from CedarBackup3.util import UnorderedList, RegexList 
      86  from CedarBackup3.util import isStartOfWeek, buildNormalizedPath 
      87  from CedarBackup3.util import resolveCommand, executeCommand 
      88  from CedarBackup3.util import ObjectTypeList, encodePath, changeOwnership 
      89   
      90   
      91  ######################################################################## 
      92  # Module-wide constants and variables 
      93  ######################################################################## 
      94   
      95  logger = logging.getLogger("CedarBackup3.log.extend.subversion") 
      96   
      97  SVNLOOK_COMMAND      = [ "svnlook", ] 
      98  SVNADMIN_COMMAND     = [ "svnadmin", ] 
      99   
     100  REVISION_PATH_EXTENSION = "svnlast" 
    
    101   
    102   
    103  ######################################################################## 
    104  # RepositoryDir class definition 
    105  ######################################################################## 
    106   
    107  @total_ordering 
    108 -class RepositoryDir(object):
    109   
    110     """ 
    111     Class representing a Subversion repository directory. 
    112   
    113     A repository directory is a directory that contains one or more Subversion 
    114     repositories. 
    115   
    116     The following restrictions exist on data in this class: 
    117   
    118        - The directory path must be absolute. 
    119        - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. 
    120        - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. 
    121   
    122     The repository type value is kept around just for reference. It doesn't 
    123     affect the behavior of the backup. 
    124   
    125     Relative exclusions are allowed here. However, there is no configured 
    126     ignore file, because repository dir backups are not recursive. 
    127   
    128     @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, 
    129            directoryPath, collectMode, compressMode 
    130     """ 
    131  
    132 - def __init__(self, repositoryType=None, directoryPath=None, collectMode=None, compressMode=None, 133 relativeExcludePaths=None, excludePatterns=None):
    134 """ 135 Constructor for the C{RepositoryDir} class. 136 137 @param repositoryType: Type of repository, for reference 138 @param directoryPath: Absolute path of the Subversion parent directory 139 @param collectMode: Overridden collect mode for this directory. 140 @param compressMode: Overridden compression mode for this directory. 141 @param relativeExcludePaths: List of relative paths to exclude. 142 @param excludePatterns: List of regular expression patterns to exclude 143 """ 144 self._repositoryType = None 145 self._directoryPath = None 146 self._collectMode = None 147 self._compressMode = None 148 self._relativeExcludePaths = None 149 self._excludePatterns = None 150 self.repositoryType = repositoryType 151 self.directoryPath = directoryPath 152 self.collectMode = collectMode 153 self.compressMode = compressMode 154 self.relativeExcludePaths = relativeExcludePaths 155 self.excludePatterns = excludePatterns
    156
    157 - def __repr__(self):
    158 """ 159 Official string representation for class instance. 160 """ 161 return "RepositoryDir(%s, %s, %s, %s, %s, %s)" % (self.repositoryType, self.directoryPath, self.collectMode, 162 self.compressMode, self.relativeExcludePaths, self.excludePatterns)
    163
    164 - def __str__(self):
    165 """ 166 Informal string representation for class instance. 167 """ 168 return self.__repr__()
    169
    170 - def __eq__(self, other):
171 """Equals operator, implemented in terms of original Python 2 compare operator.""" 172 return self.__cmp__(other) == 0
    173
    174 - def __lt__(self, other):
175 """Less-than operator, implemented in terms of original Python 2 compare operator.""" 176 return self.__cmp__(other) < 0
    177
    178 - def __gt__(self, other):
179 """Greater-than operator, implemented in terms of original Python 2 compare operator.""" 180 return self.__cmp__(other) > 0
    181
    182 - def __cmp__(self, other):
    183 """ 184 Original Python 2 comparison operator. 185 @param other: Other object to compare to. 186 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 187 """ 188 if other is None: 189 return 1 190 if self.repositoryType != other.repositoryType: 191 if str(self.repositoryType or "") < str(other.repositoryType or ""): 192 return -1 193 else: 194 return 1 195 if self.directoryPath != other.directoryPath: 196 if str(self.directoryPath or "") < str(other.directoryPath or ""): 197 return -1 198 else: 199 return 1 200 if self.collectMode != other.collectMode: 201 if str(self.collectMode or "") < str(other.collectMode or ""): 202 return -1 203 else: 204 return 1 205 if self.compressMode != other.compressMode: 206 if str(self.compressMode or "") < str(other.compressMode or ""): 207 return -1 208 else: 209 return 1 210 if self.relativeExcludePaths != other.relativeExcludePaths: 211 if self.relativeExcludePaths < other.relativeExcludePaths: 212 return -1 213 else: 214 return 1 215 if self.excludePatterns != other.excludePatterns: 216 if self.excludePatterns < other.excludePatterns: 217 return -1 218 else: 219 return 1 220 return 0
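The comparison pattern used throughout these classes, a Python 2 style __cmp__ kept alive via hand-written rich operators plus @total_ordering, can be reduced to a minimal standalone sketch (the Comparable class is illustrative only):

```python
# Minimal sketch of the RepositoryDir comparison pattern: rich operators
# defined in terms of a Python 2 style __cmp__, with functools.total_ordering
# deriving the remaining operators (__le__, __ge__, and so on).
from functools import total_ordering

@total_ordering
class Comparable:
    def __init__(self, value):
        self.value = value
    def __cmp__(self, other):
        if other is None:
            return 1                      # anything sorts after None
        if self.value != other.value:
            # str(... or "") tolerates None values, as in the code above
            return -1 if str(self.value or "") < str(other.value or "") else 1
        return 0
    def __eq__(self, other):
        return self.__cmp__(other) == 0
    def __lt__(self, other):
        return self.__cmp__(other) < 0
```

Because total_ordering only needs __eq__ plus one ordering operator, the explicit __gt__ in the real code is belt-and-suspenders rather than strictly required.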
    221
    222 - def _setRepositoryType(self, value):
        """
        Property target used to set the repository type.
        There is no validation; this value is kept around just for reference.
        """
        self._repositoryType = value

    def _getRepositoryType(self):
        """
        Property target used to get the repository type.
        """
        return self._repositoryType

    def _setDirectoryPath(self, value):
        """
        Property target used to set the directory path.
        The value must be an absolute path if it is not C{None}.
        It does not have to exist on disk at the time of assignment.
        @raise ValueError: If the value is not an absolute path.
        @raise ValueError: If the value cannot be encoded properly.
        """
        if value is not None:
            if not os.path.isabs(value):
                raise ValueError("Repository path must be an absolute path.")
        self._directoryPath = encodePath(value)

    def _getDirectoryPath(self):
        """
        Property target used to get the repository path.
        """
        return self._directoryPath

    def _setCollectMode(self, value):
        """
        Property target used to set the collect mode.
        If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}.
        @raise ValueError: If the value is not valid.
        """
        if value is not None:
            if value not in VALID_COLLECT_MODES:
                raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES)
        self._collectMode = value

    def _getCollectMode(self):
        """
        Property target used to get the collect mode.
        """
        return self._collectMode

    def _setCompressMode(self, value):
        """
        Property target used to set the compress mode.
        If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}.
        @raise ValueError: If the value is not valid.
        """
        if value is not None:
            if value not in VALID_COMPRESS_MODES:
                raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES)
        self._compressMode = value

    def _getCompressMode(self):
        """
        Property target used to get the compress mode.
        """
        return self._compressMode

    def _setRelativeExcludePaths(self, value):
        """
        Property target used to set the relative exclude paths list.
        Elements do not have to exist on disk at the time of assignment.
        """
        if value is None:
            self._relativeExcludePaths = None
        else:
            try:
                saved = self._relativeExcludePaths
                self._relativeExcludePaths = UnorderedList()
                self._relativeExcludePaths.extend(value)
            except Exception as e:
                self._relativeExcludePaths = saved
                raise e

    def _getRelativeExcludePaths(self):
        """
        Property target used to get the relative exclude paths list.
        """
        return self._relativeExcludePaths

    def _setExcludePatterns(self, value):
        """
        Property target used to set the exclude patterns list.
        """
        if value is None:
            self._excludePatterns = None
        else:
            try:
                saved = self._excludePatterns
                self._excludePatterns = RegexList()
                self._excludePatterns.extend(value)
            except Exception as e:
                self._excludePatterns = saved
                raise e

    def _getExcludePatterns(self):
        """
        Property target used to get the exclude patterns list.
        """
        return self._excludePatterns

    repositoryType = property(_getRepositoryType, _setRepositoryType, None, doc="Type of this repository, for reference.")
    directoryPath = property(_getDirectoryPath, _setDirectoryPath, None, doc="Absolute path of the Subversion parent directory.")
    collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this repository.")
    compressMode = property(_getCompressMode, _setCompressMode, None, doc="Overridden compress mode for this repository.")
    relativeExcludePaths = property(_getRelativeExcludePaths, _setRelativeExcludePaths, None, "List of relative paths to exclude.")
    excludePatterns = property(_getExcludePatterns, _setExcludePatterns, None, "List of regular expression patterns to exclude.")
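The exclude-list setters above follow a save-and-restore idiom: stash the old list, build a fresh validating list, and restore the previous value if C{extend()} raises. A minimal standalone sketch of that idiom, using a hypothetical C{RegexList} stand-in (the real C{UnorderedList}/C{RegexList} live in C{CedarBackup3.util}):

```python
import re

class RegexList(list):
    """Toy list that only accepts strings that compile as regular expressions."""
    def extend(self, values):
        for v in values:
            re.compile(v)  # raises re.error for an invalid pattern
        super().extend(values)

class Holder:
    """Hypothetical owner class demonstrating the save-and-restore setter."""
    def __init__(self):
        self._patterns = None

    def _setPatterns(self, value):
        # Same idiom as _setExcludePatterns above: if validation fails
        # while building the new list, the previous list is restored.
        if value is None:
            self._patterns = None
        else:
            try:
                saved = self._patterns
                self._patterns = RegexList()
                self._patterns.extend(value)
            except Exception as e:
                self._patterns = saved
                raise e

    def _getPatterns(self):
        return self._patterns

    patterns = property(_getPatterns, _setPatterns)
```

On a failed assignment the property keeps its prior value, so partially-applied configuration never leaks out.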
    338 339 ######################################################################## 340 # Repository class definition 341 ######################################################################## 342 343 @total_ordering 344 -class Repository(object):
    345 346 """ 347 Class representing generic Subversion repository configuration.. 348 349 The following restrictions exist on data in this class: 350 351 - The respository path must be absolute. 352 - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. 353 - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. 354 355 The repository type value is kept around just for reference. It doesn't 356 affect the behavior of the backup. 357 358 @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, 359 repositoryPath, collectMode, compressMode 360 """ 361
    362 - def __init__(self, repositoryType=None, repositoryPath=None, collectMode=None, compressMode=None):
    363 """ 364 Constructor for the C{Repository} class. 365 366 @param repositoryType: Type of repository, for reference 367 @param repositoryPath: Absolute path to a Subversion repository on disk. 368 @param collectMode: Overridden collect mode for this directory. 369 @param compressMode: Overridden compression mode for this directory. 370 """ 371 self._repositoryType = None 372 self._repositoryPath = None 373 self._collectMode = None 374 self._compressMode = None 375 self.repositoryType = repositoryType 376 self.repositoryPath = repositoryPath 377 self.collectMode = collectMode 378 self.compressMode = compressMode
    379
    380 - def __repr__(self):
    381 """ 382 Official string representation for class instance. 383 """ 384 return "Repository(%s, %s, %s, %s)" % (self.repositoryType, self.repositoryPath, self.collectMode, self.compressMode)
    385
    386 - def __str__(self):
    387 """ 388 Informal string representation for class instance. 389 """ 390 return self.__repr__()
    391
    392 - def __eq__(self, other):
    393 """Equals operator, iplemented in terms of original Python 2 compare operator.""" 394 return self.__cmp__(other) == 0
    395
    396 - def __lt__(self, other):
    397 """Less-than operator, iplemented in terms of original Python 2 compare operator.""" 398 return self.__cmp__(other) < 0
    399
    400 - def __gt__(self, other):
    401 """Greater-than operator, iplemented in terms of original Python 2 compare operator.""" 402 return self.__cmp__(other) > 0
    403
    404 - def __cmp__(self, other):
    405 """ 406 Original Python 2 comparison operator. 407 @param other: Other object to compare to. 408 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 409 """ 410 if other is None: 411 return 1 412 if self.repositoryType != other.repositoryType: 413 if str(self.repositoryType or "") < str(other.repositoryType or ""): 414 return -1 415 else: 416 return 1 417 if self.repositoryPath != other.repositoryPath: 418 if str(self.repositoryPath or "") < str(other.repositoryPath or ""): 419 return -1 420 else: 421 return 1 422 if self.collectMode != other.collectMode: 423 if str(self.collectMode or "") < str(other.collectMode or ""): 424 return -1 425 else: 426 return 1 427 if self.compressMode != other.compressMode: 428 if str(self.compressMode or "") < str(other.compressMode or ""): 429 return -1 430 else: 431 return 1 432 return 0
    433
    434 - def _setRepositoryType(self, value):
    435 """ 436 Property target used to set the repository type. 437 There is no validation; this value is kept around just for reference. 438 """ 439 self._repositoryType = value
    440
    441 - def _getRepositoryType(self):
    442 """ 443 Property target used to get the repository type. 444 """ 445 return self._repositoryType
    446
    447 - def _setRepositoryPath(self, value):
    448 """ 449 Property target used to set the repository path. 450 The value must be an absolute path if it is not C{None}. 451 It does not have to exist on disk at the time of assignment. 452 @raise ValueError: If the value is not an absolute path. 453 @raise ValueError: If the value cannot be encoded properly. 454 """ 455 if value is not None: 456 if not os.path.isabs(value): 457 raise ValueError("Repository path must be an absolute path.") 458 self._repositoryPath = encodePath(value)
    459
    460 - def _getRepositoryPath(self):
    461 """ 462 Property target used to get the repository path. 463 """ 464 return self._repositoryPath
    465
    466 - def _setCollectMode(self, value):
    467 """ 468 Property target used to set the collect mode. 469 If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}. 470 @raise ValueError: If the value is not valid. 471 """ 472 if value is not None: 473 if value not in VALID_COLLECT_MODES: 474 raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES) 475 self._collectMode = value
    476
    477 - def _getCollectMode(self):
    478 """ 479 Property target used to get the collect mode. 480 """ 481 return self._collectMode
    482
    483 - def _setCompressMode(self, value):
    484 """ 485 Property target used to set the compress mode. 486 If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. 487 @raise ValueError: If the value is not valid. 488 """ 489 if value is not None: 490 if value not in VALID_COMPRESS_MODES: 491 raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) 492 self._compressMode = value
    493
    494 - def _getCompressMode(self):
    495 """ 496 Property target used to get the compress mode. 497 """ 498 return self._compressMode
    499 500 repositoryType = property(_getRepositoryType, _setRepositoryType, None, doc="Type of this repository, for reference.") 501 repositoryPath = property(_getRepositoryPath, _setRepositoryPath, None, doc="Path to the repository to collect.") 502 collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this repository.") 503 compressMode = property(_getCompressMode, _setCompressMode, None, doc="Overridden compress mode for this repository.")
    504
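C{Repository} keeps the original Python 2 C{__cmp__} and layers C{__eq__}/C{__lt__}/C{__gt__} on top of it, letting C{functools.total_ordering} derive the remaining rich comparisons. A minimal sketch of the same pattern with a hypothetical two-field class:

```python
from functools import total_ordering

@total_ordering
class Pair:
    """Sketch of the Python 2 __cmp__ pattern used by Repository above."""
    def __init__(self, a=None, b=None):
        self.a = a
        self.b = b

    def __cmp__(self, other):
        # Field-by-field compare; None sorts before any value because
        # str(x or "") turns None into the empty string, as above.
        if other is None:
            return 1
        for mine, theirs in ((self.a, other.a), (self.b, other.b)):
            if mine != theirs:
                return -1 if str(mine or "") < str(theirs or "") else 1
        return 0

    def __eq__(self, other):
        return self.__cmp__(other) == 0

    def __lt__(self, other):
        return self.__cmp__(other) < 0
```

With C{__eq__} and C{__lt__} defined, C{@total_ordering} fills in C{__le__}, C{__gt__}, and C{__ge__}, so the Python 3 port only had to express two operators per class in terms of the legacy C{__cmp__}.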
########################################################################
# SubversionConfig class definition
########################################################################

@total_ordering
class SubversionConfig(object):

    """
    Class representing Subversion configuration.

    Subversion configuration is used for backing up Subversion repositories.

    The following restrictions exist on data in this class:

        - The collect mode must be one of the values in L{VALID_COLLECT_MODES}.
        - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}.
        - The repositories list must be a list of C{Repository} objects.
        - The repositoryDirs list must be a list of C{RepositoryDir} objects.

    For the two lists, validation is accomplished through the
    L{util.ObjectTypeList} list implementation that overrides common list
    methods and transparently ensures that each element has the correct type.

    @note: Lists within this class are "unordered" for equality comparisons.

    @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__,
           collectMode, compressMode, repositories
    """

    def __init__(self, collectMode=None, compressMode=None, repositories=None, repositoryDirs=None):
        """
        Constructor for the C{SubversionConfig} class.

        @param collectMode: Default collect mode.
        @param compressMode: Default compress mode.
        @param repositories: List of Subversion repositories to back up.
        @param repositoryDirs: List of Subversion parent directories to back up.

        @raise ValueError: If one of the values is invalid.
        """
        self._collectMode = None
        self._compressMode = None
        self._repositories = None
        self._repositoryDirs = None
        self.collectMode = collectMode
        self.compressMode = compressMode
        self.repositories = repositories
        self.repositoryDirs = repositoryDirs

    def __repr__(self):
        """
        Official string representation for class instance.
        """
        return "SubversionConfig(%s, %s, %s, %s)" % (self.collectMode, self.compressMode, self.repositories, self.repositoryDirs)

    def __str__(self):
        """
        Informal string representation for class instance.
        """
        return self.__repr__()

    def __eq__(self, other):
        """Equals operator, implemented in terms of original Python 2 compare operator."""
        return self.__cmp__(other) == 0

    def __lt__(self, other):
        """Less-than operator, implemented in terms of original Python 2 compare operator."""
        return self.__cmp__(other) < 0

    def __gt__(self, other):
        """Greater-than operator, implemented in terms of original Python 2 compare operator."""
        return self.__cmp__(other) > 0

    def __cmp__(self, other):
        """
        Original Python 2 comparison operator.
        Lists within this class are "unordered" for equality comparisons.
        @param other: Other object to compare to.
        @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
        """
        if other is None:
            return 1
        if self.collectMode != other.collectMode:
            if str(self.collectMode or "") < str(other.collectMode or ""):
                return -1
            else:
                return 1
        if self.compressMode != other.compressMode:
            if str(self.compressMode or "") < str(other.compressMode or ""):
                return -1
            else:
                return 1
        if self.repositories != other.repositories:
            if self.repositories < other.repositories:
                return -1
            else:
                return 1
        if self.repositoryDirs != other.repositoryDirs:
            if self.repositoryDirs < other.repositoryDirs:
                return -1
            else:
                return 1
        return 0

    def _setCollectMode(self, value):
        """
        Property target used to set the collect mode.
        If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}.
        @raise ValueError: If the value is not valid.
        """
        if value is not None:
            if value not in VALID_COLLECT_MODES:
                raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES)
        self._collectMode = value

    def _getCollectMode(self):
        """
        Property target used to get the collect mode.
        """
        return self._collectMode

    def _setCompressMode(self, value):
        """
        Property target used to set the compress mode.
        If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}.
        @raise ValueError: If the value is not valid.
        """
        if value is not None:
            if value not in VALID_COMPRESS_MODES:
                raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES)
        self._compressMode = value

    def _getCompressMode(self):
        """
        Property target used to get the compress mode.
        """
        return self._compressMode

    def _setRepositories(self, value):
        """
        Property target used to set the repositories list.
        Either the value must be C{None} or each element must be a C{Repository}.
        @raise ValueError: If the value is not a C{Repository}.
        """
        if value is None:
            self._repositories = None
        else:
            try:
                saved = self._repositories
                self._repositories = ObjectTypeList(Repository, "Repository")
                self._repositories.extend(value)
            except Exception as e:
                self._repositories = saved
                raise e

    def _getRepositories(self):
        """
        Property target used to get the repositories list.
        """
        return self._repositories

    def _setRepositoryDirs(self, value):
        """
        Property target used to set the repositoryDirs list.
        Either the value must be C{None} or each element must be a C{RepositoryDir}.
        @raise ValueError: If the value is not a C{RepositoryDir}.
        """
        if value is None:
            self._repositoryDirs = None
        else:
            try:
                saved = self._repositoryDirs
                self._repositoryDirs = ObjectTypeList(RepositoryDir, "RepositoryDir")
                self._repositoryDirs.extend(value)
            except Exception as e:
                self._repositoryDirs = saved
                raise e

    def _getRepositoryDirs(self):
        """
        Property target used to get the repositoryDirs list.
        """
        return self._repositoryDirs

    collectMode = property(_getCollectMode, _setCollectMode, None, doc="Default collect mode.")
    compressMode = property(_getCompressMode, _setCompressMode, None, doc="Default compress mode.")
    repositories = property(_getRepositories, _setRepositories, None, doc="List of Subversion repositories to back up.")
    repositoryDirs = property(_getRepositoryDirs, _setRepositoryDirs, None, doc="List of Subversion parent directories to back up.")
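The repositories and repositoryDirs setters lean on C{util.ObjectTypeList} to reject wrongly-typed elements transparently. A rough sketch of the core idea (the real class overrides more list methods than shown here):

```python
class ObjectTypeList(list):
    """Minimal sketch of a type-checking list like util.ObjectTypeList:
    every element appended or extended must be an instance of objectType."""
    def __init__(self, objectType, objectName):
        super().__init__()
        self.objectType = objectType    # required element type
        self.objectName = objectName    # human-readable name for errors

    def append(self, item):
        if not isinstance(item, self.objectType):
            raise ValueError("Item must be a %s object." % self.objectName)
        super().append(item)

    def extend(self, items):
        # Route bulk insertion through append() so every element is checked.
        for item in items:
            self.append(item)
```

Because the check lives in the list itself, callers of C{SubversionConfig.repositories} get type validation for free, whether they assign a whole list or append to an existing one.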
    695 696 ######################################################################## 697 # LocalConfig class definition 698 ######################################################################## 699 700 @total_ordering 701 -class LocalConfig(object):
    702 703 """ 704 Class representing this extension's configuration document. 705 706 This is not a general-purpose configuration object like the main Cedar 707 Backup configuration object. Instead, it just knows how to parse and emit 708 Subversion-specific configuration values. Third parties who need to read 709 and write configuration related to this extension should access it through 710 the constructor, C{validate} and C{addConfig} methods. 711 712 @note: Lists within this class are "unordered" for equality comparisons. 713 714 @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, 715 subversion, validate, addConfig 716 """ 717
    718 - def __init__(self, xmlData=None, xmlPath=None, validate=True):
    719 """ 720 Initializes a configuration object. 721 722 If you initialize the object without passing either C{xmlData} or 723 C{xmlPath} then configuration will be empty and will be invalid until it 724 is filled in properly. 725 726 No reference to the original XML data or original path is saved off by 727 this class. Once the data has been parsed (successfully or not) this 728 original information is discarded. 729 730 Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} 731 method will be called (with its default arguments) against configuration 732 after successfully parsing any passed-in XML. Keep in mind that even if 733 C{validate} is C{False}, it might not be possible to parse the passed-in 734 XML document if lower-level validations fail. 735 736 @note: It is strongly suggested that the C{validate} option always be set 737 to C{True} (the default) unless there is a specific need to read in 738 invalid configuration from disk. 739 740 @param xmlData: XML data representing configuration. 741 @type xmlData: String data. 742 743 @param xmlPath: Path to an XML file on disk. 744 @type xmlPath: Absolute path to a file on disk. 745 746 @param validate: Validate the document after parsing it. 747 @type validate: Boolean true/false. 748 749 @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. 750 @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. 751 @raise ValueError: If the parsed configuration document is not valid. 752 """ 753 self._subversion = None 754 self.subversion = None 755 if xmlData is not None and xmlPath is not None: 756 raise ValueError("Use either xmlData or xmlPath, but not both.") 757 if xmlData is not None: 758 self._parseXmlData(xmlData) 759 if validate: 760 self.validate() 761 elif xmlPath is not None: 762 with open(xmlPath) as f: 763 xmlData = f.read() 764 self._parseXmlData(xmlData) 765 if validate: 766 self.validate()
    767
    768 - def __repr__(self):
    769 """ 770 Official string representation for class instance. 771 """ 772 return "LocalConfig(%s)" % (self.subversion)
    773
    774 - def __str__(self):
    775 """ 776 Informal string representation for class instance. 777 """ 778 return self.__repr__()
    779
    780 - def __eq__(self, other):
    781 """Equals operator, iplemented in terms of original Python 2 compare operator.""" 782 return self.__cmp__(other) == 0
    783
    784 - def __lt__(self, other):
    785 """Less-than operator, iplemented in terms of original Python 2 compare operator.""" 786 return self.__cmp__(other) < 0
    787
    788 - def __gt__(self, other):
    789 """Greater-than operator, iplemented in terms of original Python 2 compare operator.""" 790 return self.__cmp__(other) > 0
    791
    792 - def __cmp__(self, other):
    793 """ 794 Original Python 2 comparison operator. 795 Lists within this class are "unordered" for equality comparisons. 796 @param other: Other object to compare to. 797 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 798 """ 799 if other is None: 800 return 1 801 if self.subversion != other.subversion: 802 if self.subversion < other.subversion: 803 return -1 804 else: 805 return 1 806 return 0
    807
    808 - def _setSubversion(self, value):
    809 """ 810 Property target used to set the subversion configuration value. 811 If not C{None}, the value must be a C{SubversionConfig} object. 812 @raise ValueError: If the value is not a C{SubversionConfig} 813 """ 814 if value is None: 815 self._subversion = None 816 else: 817 if not isinstance(value, SubversionConfig): 818 raise ValueError("Value must be a C{SubversionConfig} object.") 819 self._subversion = value
    820
    821 - def _getSubversion(self):
    822 """ 823 Property target used to get the subversion configuration value. 824 """ 825 return self._subversion
    826 827 subversion = property(_getSubversion, _setSubversion, None, "Subversion configuration in terms of a C{SubversionConfig} object.") 828
    829 - def validate(self):
    830 """ 831 Validates configuration represented by the object. 832 833 Subversion configuration must be filled in. Within that, the collect 834 mode and compress mode are both optional, but the list of repositories 835 must contain at least one entry. 836 837 Each repository must contain a repository path, and then must be either 838 able to take collect mode and compress mode configuration from the parent 839 C{SubversionConfig} object, or must set each value on its own. 840 841 @raise ValueError: If one of the validations fails. 842 """ 843 if self.subversion is None: 844 raise ValueError("Subversion section is required.") 845 if ((self.subversion.repositories is None or len(self.subversion.repositories) < 1) and 846 (self.subversion.repositoryDirs is None or len(self.subversion.repositoryDirs) <1)): 847 raise ValueError("At least one Subversion repository must be configured.") 848 if self.subversion.repositories is not None: 849 for repository in self.subversion.repositories: 850 if repository.repositoryPath is None: 851 raise ValueError("Each repository must set a repository path.") 852 if self.subversion.collectMode is None and repository.collectMode is None: 853 raise ValueError("Collect mode must either be set in parent section or individual repository.") 854 if self.subversion.compressMode is None and repository.compressMode is None: 855 raise ValueError("Compress mode must either be set in parent section or individual repository.") 856 if self.subversion.repositoryDirs is not None: 857 for repositoryDir in self.subversion.repositoryDirs: 858 if repositoryDir.directoryPath is None: 859 raise ValueError("Each repository directory must set a directory path.") 860 if self.subversion.collectMode is None and repositoryDir.collectMode is None: 861 raise ValueError("Collect mode must either be set in parent section or repository directory.") 862 if self.subversion.compressMode is None and repositoryDir.compressMode is None: 863 raise ValueError("Compress mode 
must either be set in parent section or repository directory.")
    864
    865 - def addConfig(self, xmlDom, parentNode):
    866 """ 867 Adds a <subversion> configuration section as the next child of a parent. 868 869 Third parties should use this function to write configuration related to 870 this extension. 871 872 We add the following fields to the document:: 873 874 collectMode //cb_config/subversion/collectMode 875 compressMode //cb_config/subversion/compressMode 876 877 We also add groups of the following items, one list element per 878 item:: 879 880 repository //cb_config/subversion/repository 881 repository_dir //cb_config/subversion/repository_dir 882 883 @param xmlDom: DOM tree as from C{impl.createDocument()}. 884 @param parentNode: Parent that the section should be appended to. 885 """ 886 if self.subversion is not None: 887 sectionNode = addContainerNode(xmlDom, parentNode, "subversion") 888 addStringNode(xmlDom, sectionNode, "collect_mode", self.subversion.collectMode) 889 addStringNode(xmlDom, sectionNode, "compress_mode", self.subversion.compressMode) 890 if self.subversion.repositories is not None: 891 for repository in self.subversion.repositories: 892 LocalConfig._addRepository(xmlDom, sectionNode, repository) 893 if self.subversion.repositoryDirs is not None: 894 for repositoryDir in self.subversion.repositoryDirs: 895 LocalConfig._addRepositoryDir(xmlDom, sectionNode, repositoryDir)
    896
    897 - def _parseXmlData(self, xmlData):
    898 """ 899 Internal method to parse an XML string into the object. 900 901 This method parses the XML document into a DOM tree (C{xmlDom}) and then 902 calls a static method to parse the subversion configuration section. 903 904 @param xmlData: XML data to be parsed 905 @type xmlData: String data 906 907 @raise ValueError: If the XML cannot be successfully parsed. 908 """ 909 (xmlDom, parentNode) = createInputDom(xmlData) 910 self._subversion = LocalConfig._parseSubversion(parentNode)
    911 912 @staticmethod
    913 - def _parseSubversion(parent):
    914 """ 915 Parses a subversion configuration section. 916 917 We read the following individual fields:: 918 919 collectMode //cb_config/subversion/collect_mode 920 compressMode //cb_config/subversion/compress_mode 921 922 We also read groups of the following item, one list element per 923 item:: 924 925 repositories //cb_config/subversion/repository 926 repository_dirs //cb_config/subversion/repository_dir 927 928 The repositories are parsed by L{_parseRepositories}, and the repository 929 dirs are parsed by L{_parseRepositoryDirs}. 930 931 @param parent: Parent node to search beneath. 932 933 @return: C{SubversionConfig} object or C{None} if the section does not exist. 934 @raise ValueError: If some filled-in value is invalid. 935 """ 936 subversion = None 937 section = readFirstChild(parent, "subversion") 938 if section is not None: 939 subversion = SubversionConfig() 940 subversion.collectMode = readString(section, "collect_mode") 941 subversion.compressMode = readString(section, "compress_mode") 942 subversion.repositories = LocalConfig._parseRepositories(section) 943 subversion.repositoryDirs = LocalConfig._parseRepositoryDirs(section) 944 return subversion
    945 946 @staticmethod
    947 - def _parseRepositories(parent):
    948 """ 949 Reads a list of C{Repository} objects from immediately beneath the parent. 950 951 We read the following individual fields:: 952 953 repositoryType type 954 repositoryPath abs_path 955 collectMode collect_mode 956 compressMode compess_mode 957 958 The type field is optional, and its value is kept around only for 959 reference. 960 961 @param parent: Parent node to search beneath. 962 963 @return: List of C{Repository} objects or C{None} if none are found. 964 @raise ValueError: If some filled-in value is invalid. 965 """ 966 lst = [] 967 for entry in readChildren(parent, "repository"): 968 if isElement(entry): 969 repository = Repository() 970 repository.repositoryType = readString(entry, "type") 971 repository.repositoryPath = readString(entry, "abs_path") 972 repository.collectMode = readString(entry, "collect_mode") 973 repository.compressMode = readString(entry, "compress_mode") 974 lst.append(repository) 975 if lst == []: 976 lst = None 977 return lst
    978 979 @staticmethod
    980 - def _addRepository(xmlDom, parentNode, repository):
    981 """ 982 Adds a repository container as the next child of a parent. 983 984 We add the following fields to the document:: 985 986 repositoryType repository/type 987 repositoryPath repository/abs_path 988 collectMode repository/collect_mode 989 compressMode repository/compress_mode 990 991 The <repository> node itself is created as the next child of the parent 992 node. This method only adds one repository node. The parent must loop 993 for each repository in the C{SubversionConfig} object. 994 995 If C{repository} is C{None}, this method call will be a no-op. 996 997 @param xmlDom: DOM tree as from C{impl.createDocument()}. 998 @param parentNode: Parent that the section should be appended to. 999 @param repository: Repository to be added to the document. 1000 """ 1001 if repository is not None: 1002 sectionNode = addContainerNode(xmlDom, parentNode, "repository") 1003 addStringNode(xmlDom, sectionNode, "type", repository.repositoryType) 1004 addStringNode(xmlDom, sectionNode, "abs_path", repository.repositoryPath) 1005 addStringNode(xmlDom, sectionNode, "collect_mode", repository.collectMode) 1006 addStringNode(xmlDom, sectionNode, "compress_mode", repository.compressMode)
    1007 1008 @staticmethod
    1009 - def _parseRepositoryDirs(parent):
    1010 """ 1011 Reads a list of C{RepositoryDir} objects from immediately beneath the parent. 1012 1013 We read the following individual fields:: 1014 1015 repositoryType type 1016 directoryPath abs_path 1017 collectMode collect_mode 1018 compressMode compess_mode 1019 1020 We also read groups of the following items, one list element per 1021 item:: 1022 1023 relativeExcludePaths exclude/rel_path 1024 excludePatterns exclude/pattern 1025 1026 The exclusions are parsed by L{_parseExclusions}. 1027 1028 The type field is optional, and its value is kept around only for 1029 reference. 1030 1031 @param parent: Parent node to search beneath. 1032 1033 @return: List of C{RepositoryDir} objects or C{None} if none are found. 1034 @raise ValueError: If some filled-in value is invalid. 1035 """ 1036 lst = [] 1037 for entry in readChildren(parent, "repository_dir"): 1038 if isElement(entry): 1039 repositoryDir = RepositoryDir() 1040 repositoryDir.repositoryType = readString(entry, "type") 1041 repositoryDir.directoryPath = readString(entry, "abs_path") 1042 repositoryDir.collectMode = readString(entry, "collect_mode") 1043 repositoryDir.compressMode = readString(entry, "compress_mode") 1044 (repositoryDir.relativeExcludePaths, repositoryDir.excludePatterns) = LocalConfig._parseExclusions(entry) 1045 lst.append(repositoryDir) 1046 if lst == []: 1047 lst = None 1048 return lst
    1049 1050 @staticmethod
    1051 - def _parseExclusions(parentNode):
    1052 """ 1053 Reads exclusions data from immediately beneath the parent. 1054 1055 We read groups of the following items, one list element per item:: 1056 1057 relative exclude/rel_path 1058 patterns exclude/pattern 1059 1060 If there are none of some pattern (i.e. no relative path items) then 1061 C{None} will be returned for that item in the tuple. 1062 1063 @param parentNode: Parent node to search beneath. 1064 1065 @return: Tuple of (relative, patterns) exclusions. 1066 """ 1067 section = readFirstChild(parentNode, "exclude") 1068 if section is None: 1069 return (None, None) 1070 else: 1071 relative = readStringList(section, "rel_path") 1072 patterns = readStringList(section, "pattern") 1073 return (relative, patterns)
    1074 1075 @staticmethod
    1076 - def _addRepositoryDir(xmlDom, parentNode, repositoryDir):
    1077 """ 1078 Adds a repository dir container as the next child of a parent. 1079 1080 We add the following fields to the document:: 1081 1082 repositoryType repository_dir/type 1083 directoryPath repository_dir/abs_path 1084 collectMode repository_dir/collect_mode 1085 compressMode repository_dir/compress_mode 1086 1087 We also add groups of the following items, one list element per item:: 1088 1089 relativeExcludePaths dir/exclude/rel_path 1090 excludePatterns dir/exclude/pattern 1091 1092 The <repository_dir> node itself is created as the next child of the 1093 parent node. This method only adds one repository node. The parent must 1094 loop for each repository dir in the C{SubversionConfig} object. 1095 1096 If C{repositoryDir} is C{None}, this method call will be a no-op. 1097 1098 @param xmlDom: DOM tree as from C{impl.createDocument()}. 1099 @param parentNode: Parent that the section should be appended to. 1100 @param repositoryDir: Repository dir to be added to the document. 
1101 """ 1102 if repositoryDir is not None: 1103 sectionNode = addContainerNode(xmlDom, parentNode, "repository_dir") 1104 addStringNode(xmlDom, sectionNode, "type", repositoryDir.repositoryType) 1105 addStringNode(xmlDom, sectionNode, "abs_path", repositoryDir.directoryPath) 1106 addStringNode(xmlDom, sectionNode, "collect_mode", repositoryDir.collectMode) 1107 addStringNode(xmlDom, sectionNode, "compress_mode", repositoryDir.compressMode) 1108 if ((repositoryDir.relativeExcludePaths is not None and repositoryDir.relativeExcludePaths != []) or 1109 (repositoryDir.excludePatterns is not None and repositoryDir.excludePatterns != [])): 1110 excludeNode = addContainerNode(xmlDom, sectionNode, "exclude") 1111 if repositoryDir.relativeExcludePaths is not None: 1112 for relativePath in repositoryDir.relativeExcludePaths: 1113 addStringNode(xmlDom, excludeNode, "rel_path", relativePath) 1114 if repositoryDir.excludePatterns is not None: 1115 for pattern in repositoryDir.excludePatterns: 1116 addStringNode(xmlDom, excludeNode, "pattern", pattern)
    1117
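For reference, the serializer above produces XML shaped like the following. This standalone minidom sketch builds an equivalent fragment directly, rather than through the module's addContainerNode()/addStringNode() helpers, and the field values are made-up examples:

```python
from xml.dom.minidom import getDOMImplementation

impl = getDOMImplementation()
doc = impl.createDocument(None, "subversion", None)

# Build a <repository_dir> node with the same field mapping described above.
section = doc.createElement("repository_dir")
doc.documentElement.appendChild(section)
for tag, value in [("type", "FSFS"), ("abs_path", "/var/svn"),
                   ("collect_mode", "incr"), ("compress_mode", "gzip")]:
    node = doc.createElement(tag)
    node.appendChild(doc.createTextNode(value))
    section.appendChild(node)

print(doc.documentElement.toxml())
```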
########################################################################
# Public functions
########################################################################

###########################
# executeAction() function
###########################

def executeAction(configPath, options, config):
    """
    Executes the Subversion backup action.

    @param configPath: Path to configuration file on disk.
    @type configPath: String representing a path on disk.

    @param options: Program command-line options.
    @type options: Options object.

    @param config: Program configuration.
    @type config: Config object.

    @raise ValueError: Under many generic error conditions
    @raise IOError: If a backup could not be written for some reason.
    """
    logger.debug("Executing Subversion extended action.")
    if config.options is None or config.collect is None:
        raise ValueError("Cedar Backup configuration is not properly filled in.")
    local = LocalConfig(xmlPath=configPath)
    todayIsStart = isStartOfWeek(config.options.startingDay)
    fullBackup = options.full or todayIsStart
    logger.debug("Full backup flag is [%s]", fullBackup)
    if local.subversion.repositories is not None:
        for repository in local.subversion.repositories:
            _backupRepository(config, local, todayIsStart, fullBackup, repository)
    if local.subversion.repositoryDirs is not None:
        for repositoryDir in local.subversion.repositoryDirs:
            logger.debug("Working with repository directory [%s].", repositoryDir.directoryPath)
            for repositoryPath in _getRepositoryPaths(repositoryDir):
                repository = Repository(repositoryDir.repositoryType, repositoryPath,
                                        repositoryDir.collectMode, repositoryDir.compressMode)
                _backupRepository(config, local, todayIsStart, fullBackup, repository)
            logger.info("Completed backing up Subversion repository directory [%s].", repositoryDir.directoryPath)
    logger.info("Executed the Subversion extended action successfully.")

def _getCollectMode(local, repository):
    """
    Gets the collect mode that should be used for a repository.
    Uses the repository's collect mode if set, otherwise takes the value from the subversion section.
    @param local: LocalConfig object.
    @param repository: Repository object.
    @return: Collect mode to use.
    """
    if repository.collectMode is None:
        collectMode = local.subversion.collectMode
    else:
        collectMode = repository.collectMode
    logger.debug("Collect mode is [%s]", collectMode)
    return collectMode

def _getCompressMode(local, repository):
    """
    Gets the compress mode that should be used for a repository.
    Uses the repository's compress mode if set, otherwise takes the value from the subversion section.
    @param local: LocalConfig object.
    @param repository: Repository object.
    @return: Compress mode to use.
    """
    if repository.compressMode is None:
        compressMode = local.subversion.compressMode
    else:
        compressMode = repository.compressMode
    logger.debug("Compress mode is [%s]", compressMode)
    return compressMode

def _getRevisionPath(config, repository):
    """
    Gets the path to the revision file associated with a repository.
    @param config: Config object.
    @param repository: Repository object.
    @return: Absolute path to the revision file associated with the repository.
    """
    normalized = buildNormalizedPath(repository.repositoryPath)
    filename = "%s.%s" % (normalized, REVISION_PATH_EXTENSION)
    revisionPath = os.path.join(config.options.workingDir, filename)
    logger.debug("Revision file path is [%s]", revisionPath)
    return revisionPath

def _getBackupPath(config, repositoryPath, compressMode, startRevision, endRevision):
    """
    Gets the backup file path (including correct extension) associated with a repository.
    @param config: Config object.
    @param repositoryPath: Path to the indicated repository.
    @param compressMode: Compress mode to use for this repository.
    @param startRevision: Starting repository revision.
    @param endRevision: Ending repository revision.
    @return: Absolute path to the backup file associated with the repository.
    """
    normalizedPath = buildNormalizedPath(repositoryPath)
    filename = "svndump-%d:%d-%s.txt" % (startRevision, endRevision, normalizedPath)
    if compressMode == 'gzip':
        filename = "%s.gz" % filename
    elif compressMode == 'bzip2':
        filename = "%s.bz2" % filename
    backupPath = os.path.join(config.collect.targetDir, filename)
    logger.debug("Backup file path is [%s]", backupPath)
    return backupPath

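The naming scheme above can be exercised in isolation. This standalone helper (hypothetical, not part of the module) mirrors the format string and extension logic:

```python
def backup_filename(normalized, start, end, compress_mode):
    """Mirror of the svndump naming logic: base name plus optional compression suffix."""
    filename = "svndump-%d:%d-%s.txt" % (start, end, normalized)
    if compress_mode == "gzip":
        filename = "%s.gz" % filename
    elif compress_mode == "bzip2":
        filename = "%s.bz2" % filename
    return filename

# e.g. backup_filename("opt-svn-repo", 0, 42, "gzip") -> "svndump-0:42-opt-svn-repo.txt.gz"
```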
def _getRepositoryPaths(repositoryDir):
    """
    Gets a list of child repository paths within a repository directory.
    @param repositoryDir: RepositoryDirectory object.
    @return: List of absolute paths to child repositories.
    """
    (excludePaths, excludePatterns) = _getExclusions(repositoryDir)
    fsList = FilesystemList()
    fsList.excludeFiles = True
    fsList.excludeLinks = True
    fsList.excludePaths = excludePaths
    fsList.excludePatterns = excludePatterns
    fsList.addDirContents(path=repositoryDir.directoryPath, recursive=False, addSelf=False)
    return fsList

def _getExclusions(repositoryDir):
    """
    Gets exclusions (files and patterns) associated with a repository directory.

    The returned files value is a list of absolute paths to be excluded from the
    backup for a given directory.  It is derived from the repository directory's
    relative exclude paths.

    The returned patterns value is a list of patterns to be excluded from the
    backup for a given directory.  It is derived from the repository directory's
    list of patterns.

    @param repositoryDir: Repository directory object.

    @return: Tuple (files, patterns) indicating what to exclude.
    """
    paths = []
    if repositoryDir.relativeExcludePaths is not None:
        for relativePath in repositoryDir.relativeExcludePaths:
            paths.append(os.path.join(repositoryDir.directoryPath, relativePath))
    patterns = []
    if repositoryDir.excludePatterns is not None:
        patterns.extend(repositoryDir.excludePatterns)
    logger.debug("Exclude paths: %s", paths)
    logger.debug("Exclude patterns: %s", patterns)
    return (paths, patterns)

def _backupRepository(config, local, todayIsStart, fullBackup, repository):
    """
    Backs up an individual Subversion repository.

    This internal method wraps the public methods and adds some functionality
    to work better with the extended action itself.

    @param config: Cedar Backup configuration.
    @param local: Local configuration.
    @param todayIsStart: Indicates whether today is start of week.
    @param fullBackup: Full backup flag.
    @param repository: Repository to operate on.

    @raise ValueError: If some value is missing or invalid.
    @raise IOError: If there is a problem executing the Subversion dump.
    """
    logger.debug("Working with repository [%s]", repository.repositoryPath)
    logger.debug("Repository type is [%s]", repository.repositoryType)
    collectMode = _getCollectMode(local, repository)
    compressMode = _getCompressMode(local, repository)
    revisionPath = _getRevisionPath(config, repository)
    if not (fullBackup or (collectMode in ['daily', 'incr', ]) or (collectMode == 'weekly' and todayIsStart)):
        logger.debug("Repository will not be backed up, per collect mode.")
        return
    logger.debug("Repository meets criteria to be backed up today.")
    if collectMode != "incr" or fullBackup:
        startRevision = 0
        endRevision = getYoungestRevision(repository.repositoryPath)
        logger.debug("Using full backup, revision: (%d, %d).", startRevision, endRevision)
    else:
        startRevision = _loadLastRevision(revisionPath) + 1
        endRevision = getYoungestRevision(repository.repositoryPath)
        if startRevision > endRevision:
            logger.info("No need to back up repository [%s]; no new revisions.", repository.repositoryPath)
            return
        logger.debug("Using incremental backup, revision: (%d, %d).", startRevision, endRevision)
    backupPath = _getBackupPath(config, repository.repositoryPath, compressMode, startRevision, endRevision)
    with _getOutputFile(backupPath, compressMode) as outputFile:
        backupRepository(repository.repositoryPath, outputFile, startRevision, endRevision)
    if not os.path.exists(backupPath):
        raise IOError("Dump file [%s] does not seem to exist after backup completed." % backupPath)
    changeOwnership(backupPath, config.options.backupUser, config.options.backupGroup)
    if collectMode == "incr":
        _writeLastRevision(config, revisionPath, endRevision)
    logger.info("Completed backing up Subversion repository [%s].", repository.repositoryPath)

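The full-versus-incremental decision above reduces to a small pure function. This standalone sketch (hypothetical helper name, not part of the module) captures the revision-range selection:

```python
def revision_range(collect_mode, full_backup, last_revision, youngest):
    """Return (start, end) revisions to dump, or None if nothing new.

    Mirrors the logic above: anything other than an incremental run dumps
    from revision 0; an incremental run resumes after the last saved revision.
    """
    if collect_mode != "incr" or full_backup:
        return (0, youngest)
    start = last_revision + 1  # last_revision is -1 when no state file exists
    if start > youngest:
        return None  # no new revisions since the previous backup
    return (start, youngest)

# e.g. revision_range("incr", False, 41, 45) -> (42, 45)
```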
def _getOutputFile(backupPath, compressMode):
    """
    Opens the output file used for saving the Subversion dump.

    If the compress mode is "gzip", we'll open a C{GzipFile}, and if the
    compress mode is "bzip2", we'll open a C{BZ2File}.  Otherwise, we'll just
    return an object from the normal C{open()} method.

    @param backupPath: Path to file to open.
    @param compressMode: Compress mode of file ("none", "gzip", "bzip2").

    @return: Output file object, opened in binary mode for use with executeCommand().
    """
    if compressMode == "gzip":
        return GzipFile(backupPath, "wb")
    elif compressMode == "bzip2":
        return BZ2File(backupPath, "wb")
    else:
        return open(backupPath, "wb")

def _loadLastRevision(revisionPath):
    """
    Loads the indicated revision file from disk into an integer.

    If we can't load the revision file successfully (either because it doesn't
    exist or for some other reason), then a revision of -1 will be returned -
    but the condition will be logged.  This way, we err on the side of backing
    up too much, because anyone using this will presumably be adding 1 to the
    revision, so they don't duplicate any backups.

    @param revisionPath: Path to the revision file on disk.

    @return: Integer representing last backed-up revision, -1 on error or if none can be read.
    """
    if not os.path.isfile(revisionPath):
        startRevision = -1
        logger.debug("Revision file [%s] does not exist on disk.", revisionPath)
    else:
        try:
            with open(revisionPath, "rb") as f:
                startRevision = pickle.load(f, fix_imports=True)  # be compatible with Python 2
            logger.debug("Loaded revision file [%s] from disk: %d.", revisionPath, startRevision)
        except Exception as e:
            startRevision = -1
            logger.error("Failed loading revision file [%s] from disk: %s", revisionPath, e)
    return startRevision

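The revision file is just a pickled integer. A minimal round-trip sketch (standalone, using a temporary file rather than the configured working directory) looks like this:

```python
import os
import pickle
import tempfile

# Write the revision with protocol 0 and fix_imports=True, matching the
# module's choice for compatibility with files written by Python 2.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    pickle.dump(42, f, 0, fix_imports=True)

# Read it back; a missing or corrupt file would fall back to -1 instead.
try:
    with open(path, "rb") as f:
        revision = pickle.load(f, fix_imports=True)
except Exception:
    revision = -1
os.remove(path)
print(revision)  # -> 42
```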
def _writeLastRevision(config, revisionPath, endRevision):
    """
    Writes the end revision to the indicated revision file on disk.

    If we can't write the revision file successfully for any reason, we'll log
    the condition but won't throw an exception.

    @param config: Config object.
    @param revisionPath: Path to the revision file on disk.
    @param endRevision: Last revision backed up on this run.
    """
    try:
        with open(revisionPath, "wb") as f:
            pickle.dump(endRevision, f, 0, fix_imports=True)
        changeOwnership(revisionPath, config.options.backupUser, config.options.backupGroup)
        logger.debug("Wrote new revision file [%s] to disk: %d.", revisionPath, endRevision)
    except Exception as e:
        logger.error("Failed to write revision file [%s] to disk: %s", revisionPath, e)

##############################
# backupRepository() function
##############################

def backupRepository(repositoryPath, backupFile, startRevision=None, endRevision=None):
    """
    Backs up an individual Subversion repository.

    The starting and ending revision values control an incremental backup.  If
    the starting revision is not passed in, then revision zero (the start of the
    repository) is assumed.  If the ending revision is not passed in, then the
    youngest revision in the database will be used as the endpoint.

    The backup data will be written into the passed-in backup file.  Normally,
    this would be an object as returned from C{open}, but it is possible to use
    something like a C{GzipFile} to write compressed output.  The caller is
    responsible for closing the passed-in backup file.

    @note: This function should either be run as root or as the owner of the
    Subversion repository.

    @note: It is apparently I{not} a good idea to interrupt this function.
    Sometimes, this leaves the repository in a "wedged" state, which requires
    recovery using C{svnadmin recover}.

    @param repositoryPath: Path to Subversion repository to back up
    @type repositoryPath: String path representing Subversion repository on disk.

    @param backupFile: Python file object to use for writing backup.
    @type backupFile: Python file object as from C{open()}.

    @param startRevision: Starting repository revision to back up (for incremental backups)
    @type startRevision: Integer value >= 0.

    @param endRevision: Ending repository revision to back up (for incremental backups)
    @type endRevision: Integer value >= 0.

    @raise ValueError: If some value is missing or invalid.
    @raise IOError: If there is a problem executing the Subversion dump.
    """
    if startRevision is None:
        startRevision = 0
    if endRevision is None:
        endRevision = getYoungestRevision(repositoryPath)
    if int(startRevision) < 0:
        raise ValueError("Start revision must be >= 0.")
    if int(endRevision) < 0:
        raise ValueError("End revision must be >= 0.")
    if int(startRevision) > int(endRevision):
        raise ValueError("Start revision must be <= end revision.")
    args = ["dump", "--quiet", "-r%s:%s" % (startRevision, endRevision), "--incremental", repositoryPath, ]
    command = resolveCommand(SVNADMIN_COMMAND)
    result = executeCommand(command, args, returnOutput=False, ignoreStderr=True, doNotLog=True, outputFile=backupFile)[0]
    if result != 0:
        raise IOError("Error [%d] executing Subversion dump for repository [%s]." % (result, repositoryPath))
    logger.debug("Completed dumping subversion repository [%s].", repositoryPath)

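The command assembled above is a plain `svnadmin dump`. This standalone sketch (a hypothetical helper, not the module's API) shows the exact argument list for a given revision range:

```python
def svnadmin_dump_args(repository_path, start, end):
    """Build the argument list used for an incremental svnadmin dump."""
    return ["dump", "--quiet", "-r%s:%s" % (start, end), "--incremental", repository_path]

# e.g. svnadmin_dump_args("/var/svn/repo", 0, 100)
# -> ['dump', '--quiet', '-r0:100', '--incremental', '/var/svn/repo']
```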
#################################
# getYoungestRevision() function
#################################

def getYoungestRevision(repositoryPath):
    """
    Gets the youngest (newest) revision in a Subversion repository using C{svnlook}.

    @note: This function should either be run as root or as the owner of the
    Subversion repository.

    @param repositoryPath: Path to Subversion repository to look in.
    @type repositoryPath: String path representing Subversion repository on disk.

    @return: Youngest revision as an integer.

    @raise ValueError: If there is a problem parsing the C{svnlook} output.
    @raise IOError: If there is a problem executing the C{svnlook} command.
    """
    args = ['youngest', repositoryPath, ]
    command = resolveCommand(SVNLOOK_COMMAND)
    (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
    if result != 0:
        raise IOError("Error [%d] executing 'svnlook youngest' for repository [%s]." % (result, repositoryPath))
    if len(output) != 1:
        raise ValueError("Unable to parse 'svnlook youngest' output.")
    return int(output[0])

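The parsing contract above is easy to exercise without a live repository. This sketch (a hypothetical helper mirroring the checks in the function) validates captured output lines:

```python
def parse_youngest(output_lines):
    """Expect exactly one line from 'svnlook youngest' and return it as an int."""
    if len(output_lines) != 1:
        raise ValueError("Unable to parse 'svnlook youngest' output.")
    return int(output_lines[0])

print(parse_youngest(["1523\n"]))  # -> 1523; int() tolerates the trailing newline
```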
########################################################################
# Deprecated functionality
########################################################################

class BDBRepository(Repository):

    """
    Class representing Subversion BDB (Berkeley Database) repository configuration.
    This object is deprecated.  Use a simple L{Repository} instead.
    """

    def __init__(self, repositoryPath=None, collectMode=None, compressMode=None):
        """
        Constructor for the C{BDBRepository} class.
        """
        super(BDBRepository, self).__init__("BDB", repositoryPath, collectMode, compressMode)

    def __repr__(self):
        """
        Official string representation for class instance.
        """
        return "BDBRepository(%s, %s, %s)" % (self.repositoryPath, self.collectMode, self.compressMode)


class FSFSRepository(Repository):

    """
    Class representing Subversion FSFS repository configuration.
    This object is deprecated.  Use a simple L{Repository} instead.
    """

    def __init__(self, repositoryPath=None, collectMode=None, compressMode=None):
        """
        Constructor for the C{FSFSRepository} class.
        """
        super(FSFSRepository, self).__init__("FSFS", repositoryPath, collectMode, compressMode)

    def __repr__(self):
        """
        Official string representation for class instance.
        """
        return "FSFSRepository(%s, %s, %s)" % (self.repositoryPath, self.collectMode, self.compressMode)


def backupBDBRepository(repositoryPath, backupFile, startRevision=None, endRevision=None):
    """
    Backs up an individual Subversion BDB repository.
    This function is deprecated.  Use L{backupRepository} instead.
    """
    return backupRepository(repositoryPath, backupFile, startRevision, endRevision)


def backupFSFSRepository(repositoryPath, backupFile, startRevision=None, endRevision=None):
    """
    Backs up an individual Subversion FSFS repository.
    This function is deprecated.  Use L{backupRepository} instead.
    """
    return backupRepository(repositoryPath, backupFile, startRevision, endRevision)

CedarBackup3-3.1.6/doc/interface/CedarBackup3.cli-module.html

CedarBackup3.cli
    Package CedarBackup3 :: Module cli

    Module cli


    Provides command-line interface implementation for the cback3 script.

    Summary

The functionality in this module encapsulates the command-line interface for the cback3 script. The cback3 script itself is very short, basically just an invocation of one function implemented here. That, in turn, makes it simpler to validate the command-line interface (for instance, it's easier to run pychecker against a module, and unit tests are easier, too).

    The objects and functions implemented in this module are probably not useful to any code external to Cedar Backup. Anyone else implementing their own command-line interface would have to reimplement (or at least enhance) all of this anyway.

    Backwards Compatibility

    The command line interface has changed between Cedar Backup 1.x and Cedar Backup 2.x. Some new switches have been added, and the actions have become simple arguments rather than switches (which is a much more standard command line format). Old 1.x command lines are generally no longer valid.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Classes
      Options
    Class representing command-line options for the cback3 script.
      _ActionItem
    Class representing a single action to be executed.
      _ManagedActionItem
    Class representing a single action to be executed on a managed peer.
      _ActionSet
    Class representing a set of local actions to be executed.
Functions

cli()
    Implements the command-line interface for the cback3 script.
_usage(fd=sys.stdout)
    Prints usage information for the cback3 script.
_version(fd=sys.stdout)
    Prints version information for the cback3 script.
_diagnostics(fd=sys.stdout)
    Prints runtime diagnostics information.
setupLogging(options)
    Set up logging based on command-line options.
_setupLogfile(options)
    Sets up and creates logfile as needed.
_setupFlowLogging(logfile, options)
    Sets up flow logging.
_setupOutputLogging(logfile, options)
    Sets up command output logging.
_setupDiskFlowLogging(flowLogger, logfile, options)
    Sets up on-disk flow logging.
_setupScreenFlowLogging(flowLogger, options)
    Sets up on-screen flow logging.
_setupDiskOutputLogging(outputLogger, logfile, options)
    Sets up on-disk command output logging.
setupPathResolver(config)
    Set up the path resolver singleton based on configuration.
Variables
      DEFAULT_CONFIG = '/etc/cback3.conf'
    The default configuration file.
      DEFAULT_LOGFILE = '/var/log/cback3.log'
    The default log file path.
      DEFAULT_OWNERSHIP = ['root', 'adm']
    Default ownership for the logfile.
      DEFAULT_MODE = 416
    Default file permissions mode on the logfile.
      VALID_ACTIONS = ['collect', 'stage', 'store', 'purge', 'rebuil...
    List of valid actions.
      COMBINE_ACTIONS = ['collect', 'stage', 'store', 'purge']
    List of actions which can be combined with other actions.
      NONCOMBINE_ACTIONS = ['rebuild', 'validate', 'initialize', 'all']
    List of actions which cannot be combined with other actions.
      logger = logging.getLogger("CedarBackup3.log.cli")
      DISK_LOG_FORMAT = '%(asctime)s --> [%(levelname)-7s] %(message)s'
      DISK_OUTPUT_FORMAT = '%(message)s'
      SCREEN_LOG_FORMAT = '%(message)s'
      SCREEN_LOG_STREAM = sys.stdout
      DATE_FORMAT = '%Y-%m-%dT%H:%M:%S %Z'
      REBUILD_INDEX = 0
      VALIDATE_INDEX = 0
      INITIALIZE_INDEX = 0
      COLLECT_INDEX = 100
      STAGE_INDEX = 200
      STORE_INDEX = 300
      PURGE_INDEX = 400
      SHORT_SWITCHES = 'hVbqc:fMNl:o:m:OdsD'
      LONG_SWITCHES = ['help', 'version', 'verbose', 'quiet', 'confi...
      __package__ = 'CedarBackup3'
Function Details

    cli()


    Implements the command-line interface for the cback3 script.

    Essentially, this is the "main routine" for the cback3 script. It does all of the argument processing for the script, and then sets about executing the indicated actions.

As a general rule, only the actions indicated on the command line will be executed. We will accept any of the built-in actions and any of the configured extended actions (which makes action list verification a two-step process).

    The 'all' action has a special meaning: it means that the built-in set of actions (collect, stage, store, purge) will all be executed, in that order. Extended actions will be ignored as part of the 'all' action.

    Raised exceptions always result in an immediate return. Otherwise, we generally return when all specified actions have been completed. Actions are ignored if the help, version or validate flags are set.

    A different error code is returned for each type of failure:

    • 1: The Python interpreter version is < 3.4
    • 2: Error processing command-line arguments
    • 3: Error configuring logging
    • 4: Error parsing indicated configuration file
    • 5: Backup was interrupted with a CTRL-C or similar
    • 6: Error executing specified backup actions
    Returns:
    Error code as described above.
    Notes:
    • This function contains a good amount of logging at the INFO level, because this is the right place to document high-level flow of control (i.e. what the command-line options were, what config file was being used, etc.)
    • We assume that anything that must be seen on the screen is logged at the ERROR level. Errors that occur before logging can be configured are written to sys.stderr.
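The exit-code convention above can be mirrored in a thin wrapper. This sketch is hypothetical (not the real cli() body) and shows only the version check and the exception mapping for two of the documented codes:

```python
import sys

def run(main, argv, min_version=(3, 4)):
    """Map failures to the documented cback3 exit codes (subset shown)."""
    if sys.version_info[:2] < min_version:
        return 1  # interpreter too old
    try:
        main(argv)
    except KeyboardInterrupt:
        return 5  # backup interrupted with CTRL-C or similar
    except Exception:
        return 6  # error executing specified backup actions
    return 0

print(run(lambda argv: None, []))  # -> 0
```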

    _usage(fd=sys.stdout)


    Prints usage information for the cback3 script.

    Parameters:
    • fd - File descriptor used to print information.

    Note: The fd is used rather than print to facilitate unit testing.

    _version(fd=sys.stdout)


    Prints version information for the cback3 script.

    Parameters:
    • fd - File descriptor used to print information.

    Note: The fd is used rather than print to facilitate unit testing.

    _diagnostics(fd=sys.stdout)


    Prints runtime diagnostics information.

    Parameters:
    • fd - File descriptor used to print information.

    Note: The fd is used rather than print to facilitate unit testing.

    setupLogging(options)


    Set up logging based on command-line options.

There are two kinds of logging: flow logging and output logging. Output logging contains information about system commands executed by Cedar Backup, for instance the calls to mkisofs or mount, etc. Flow logging contains error and informational messages used to understand program flow. Flow log messages and output log messages are written to two different loggers (CedarBackup3.log and CedarBackup3.output). Flow log messages are written at the ERROR, INFO and DEBUG log levels, while output log messages are generally only written at the INFO log level.

    By default, output logging is disabled. When the options.output or options.debug flags are set, output logging will be written to the configured logfile. Output logging is never written to the screen.

    By default, flow logging is enabled at the ERROR level to the screen and at the INFO level to the configured logfile. If the options.quiet flag is set, flow logging is enabled at the INFO level to the configured logfile only (i.e. no output will be sent to the screen). If the options.verbose flag is set, flow logging is enabled at the INFO level to both the screen and the configured logfile. If the options.debug flag is set, flow logging is enabled at the DEBUG level to both the screen and the configured logfile.

    Parameters:
    • options (Options object) - Command-line options.
    Returns:
    Path to logfile on disk.
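The two-logger arrangement described above can be sketched with the standard logging module. This is a simplified sketch, not the actual setupLogging() implementation; the handler choices follow the defaults described in the text:

```python
import logging
import sys

# Flow logger: ERROR to the screen by default (the logfile handler,
# omitted here, would run at INFO).
flow = logging.getLogger("CedarBackup3.log")
flow.setLevel(logging.DEBUG)  # handlers apply the real thresholds
screen = logging.StreamHandler(sys.stdout)
screen.setLevel(logging.ERROR)
screen.setFormatter(logging.Formatter("%(message)s"))
flow.addHandler(screen)

# Output logger: disabled by default; when enabled it goes to disk only.
output = logging.getLogger("CedarBackup3.output")
output.setLevel(logging.INFO)

flow.error("visible on screen")
flow.info("below the screen handler's threshold, so not shown")
```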

    _setupLogfile(options)


    Sets up and creates logfile as needed.

    If the logfile already exists on disk, it will be left as-is, under the assumption that it was created with appropriate ownership and permissions. If the logfile does not exist on disk, it will be created as an empty file. Ownership and permissions will remain at their defaults unless user/group and/or mode are set in the options. We ignore errors setting the indicated user and group.

    Parameters:
    • options - Command-line options.
    Returns:
    Path to logfile on disk.

Note: This function is vulnerable to a race condition. If the log file does not exist when the function is run, it will attempt to create the file as safely as possible (using O_CREAT). If two processes attempt to create the file at the same time, then one of them will fail. In practice, this shouldn't really be a problem, but it might happen occasionally if two instances of cback3 run concurrently or if cback3 collides with logrotate or something.
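A safe creation along those lines might look like this sketch (an assumption about the approach, not the function's actual code); O_CREAT|O_EXCL makes the create-if-missing step atomic, so a concurrent creator fails cleanly instead of clobbering the file:

```python
import os
import tempfile

logfile = os.path.join(tempfile.mkdtemp(), "cback3.log")

try:
    # Atomically create the file only if it does not already exist;
    # a concurrent creator makes this raise FileExistsError instead.
    fd = os.open(logfile, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o640)
    os.close(fd)
except FileExistsError:
    pass  # someone else created it first; use it as-is

print(os.path.exists(logfile))  # -> True
```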

    _setupFlowLogging(logfile, options)


    Sets up flow logging.

    Parameters:
    • logfile - Path to logfile on disk.
    • options - Command-line options.

    _setupOutputLogging(logfile, options)


    Sets up command output logging.

    Parameters:
    • logfile - Path to logfile on disk.
    • options - Command-line options.

    _setupDiskFlowLogging(flowLogger, logfile, options)


    Sets up on-disk flow logging.

    Parameters:
    • flowLogger - Python flow logger object.
    • logfile - Path to logfile on disk.
    • options - Command-line options.

    _setupScreenFlowLogging(flowLogger, options)


    Sets up on-screen flow logging.

    Parameters:
    • flowLogger - Python flow logger object.
    • options - Command-line options.

    _setupDiskOutputLogging(outputLogger, logfile, options)


    Sets up on-disk command output logging.

    Parameters:
    • outputLogger - Python command output logger object.
    • logfile - Path to logfile on disk.
    • options - Command-line options.

    setupPathResolver(config)


    Set up the path resolver singleton based on configuration.

Cedar Backup's path resolver is implemented in terms of a singleton, the PathResolverSingleton class. This function takes the options configuration, converts it into the dictionary form needed by the singleton, and then initializes the singleton. After that, any function that needs to resolve the path of a command can use the singleton.

    Parameters:
    • config (Config object) - Configuration

Variable Details

    VALID_ACTIONS

    List of valid actions.
    Value:
    ['collect',
     'stage',
     'store',
     'purge',
     'rebuild',
     'validate',
     'initialize',
     'all']
    

    LONG_SWITCHES

    Value:
    ['help',
     'version',
     'verbose',
     'quiet',
     'config=',
     'full',
     'managed',
     'managed-only',
    ...
    

CedarBackup3-3.1.6/doc/interface/toc-CedarBackup3.cli-module.html

cli

    Module cli


    Classes

    Options

    Functions

    cli
    setupLogging
    setupPathResolver

    Variables

    COLLECT_INDEX
    COMBINE_ACTIONS
    DATE_FORMAT
    DEFAULT_CONFIG
    DEFAULT_LOGFILE
    DEFAULT_MODE
    DEFAULT_OWNERSHIP
    DISK_LOG_FORMAT
    DISK_OUTPUT_FORMAT
    INITIALIZE_INDEX
    LONG_SWITCHES
    NONCOMBINE_ACTIONS
    PURGE_INDEX
    REBUILD_INDEX
    SCREEN_LOG_FORMAT
    SCREEN_LOG_STREAM
    SHORT_SWITCHES
    STAGE_INDEX
    STORE_INDEX
    VALIDATE_INDEX
    VALID_ACTIONS
    __package__
    logger

CedarBackup3-3.1.6/doc/interface/CedarBackup3.peer.RemotePeer-class.html

CedarBackup3.peer.RemotePeer
    Package CedarBackup3 :: Module peer :: Class RemotePeer

    Class RemotePeer


    object --+
             |
            RemotePeer
    

    Backup peer representing a remote peer in a backup pool.

    This is a class representing a remote (networked) peer in a backup pool. Remote peers are backed up using an rcp-compatible copy command. A remote peer has associated with it a name (which must be a valid hostname), a collect directory, a working directory and a copy method (an rcp-compatible command).

    You can also set an optional local user value. This username will be used as the local user for any remote copies that are required. It can only be used if the root user is executing the backup. The root user will su to the local user and execute the remote copies as that user.

    The copy method is associated with the peer and not with the actual request to copy, because we can envision that each remote host might have a different connect method.

    The public methods other than the constructor are part of a "backup peer" interface shared with the LocalPeer class.
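The name/collect-dir/copy-method association can be sketched as a plain data holder plus an rcp-style command assembly. This is a simplified illustration, not the real RemotePeer class; the scp command and its -B flag are assumptions standing in for whatever rcp-compatible command is configured:

```python
from dataclasses import dataclass

@dataclass
class PeerSketch:
    """Minimal stand-in for a remote peer: hostname, collect dir, copy command."""
    name: str
    collect_dir: str
    rcp_command: list

    def stage_command(self, target_dir):
        # Copy everything in the remote collect directory to the local target.
        source = "%s:%s/*" % (self.name, self.collect_dir)
        return self.rcp_command + [source, target_dir]

peer = PeerSketch("backup1.example.com", "/var/backup/collect", ["/usr/bin/scp", "-B"])
print(peer.stage_command("/var/backup/staging"))
```

Attaching the copy method to the peer, as the description notes, lets each remote host use a different transport without the caller knowing.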

Instance Methods
     
__init__(self, name=None, collectDir=None, workingDir=None, remoteUser=None, rcpCommand=None, localUser=None, rshCommand=None, cbackCommand=None, ignoreFailureMode=None)
    Initializes a remote backup peer.
stagePeer(self, targetDir, ownership=None, permissions=None)
    Stages data from the peer into the indicated local target directory.
checkCollectIndicator(self, collectIndicator=None)
    Checks the collect indicator in the peer's staging directory.
writeStageIndicator(self, stageIndicator=None)
    Writes the stage indicator in the peer's staging directory.
executeRemoteCommand(self, command)
    Executes a command on the peer via remote shell.
executeManagedAction(self, action, fullBackup)
    Executes a managed action on this peer.
_setName(self, value)
    Property target used to set the peer name.
_getName(self)
    Property target used to get the peer name.
_setCollectDir(self, value)
    Property target used to set the collect directory.
_getCollectDir(self)
    Property target used to get the collect directory.
_setWorkingDir(self, value)
    Property target used to set the working directory.
_getWorkingDir(self)
    Property target used to get the working directory.
_setRemoteUser(self, value)
    Property target used to set the remote user.
_getRemoteUser(self)
    Property target used to get the remote user.
_setLocalUser(self, value)
    Property target used to set the local user.
_getLocalUser(self)
    Property target used to get the local user.
_setRcpCommand(self, value)
    Property target to set the rcp command.
_getRcpCommand(self)
    Property target used to get the rcp command.
_setRshCommand(self, value)
    Property target to set the rsh command.
    source code
     
    _getRshCommand(self)
    Property target used to get the rsh command.
    source code
     
    _setCbackCommand(self, value)
    Property target to set the cback command.
    source code
     
    _getCbackCommand(self)
    Property target used to get the cback command.
    source code
     
    _setIgnoreFailureMode(self, value)
    Property target used to set the ignoreFailure mode.
    source code
     
    _getIgnoreFailureMode(self)
    Property target used to get the ignoreFailure mode.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Static Methods
     
    _getDirContents(path)
    Returns the contents of a directory in terms of a Set.
    source code
     
    _copyRemoteDir(remoteUser, localUser, remoteHost, rcpCommand, rcpCommandList, sourceDir, targetDir, ownership=None, permissions=None)
    Copies files from the source directory to the target directory.
    source code
     
    _copyRemoteFile(remoteUser, localUser, remoteHost, rcpCommand, rcpCommandList, sourceFile, targetFile, ownership=None, permissions=None, overwrite=True)
    Copies a remote source file to a target file.
    source code
     
    _pushLocalFile(remoteUser, localUser, remoteHost, rcpCommand, rcpCommandList, sourceFile, targetFile, overwrite=True)
    Copies a local source file to a remote host.
    source code
     
    _executeRemoteCommand(remoteUser, localUser, remoteHost, rshCommand, rshCommandList, remoteCommand)
    Executes a command on the peer via remote shell.
    source code
     
    _buildCbackCommand(cbackCommand, action, fullBackup)
    Builds a Cedar Backup command line for the named action.
    source code
    Properties
      name
    Name of the peer (a valid DNS hostname).
      collectDir
    Path to the peer's collect directory (an absolute local path).
      remoteUser
    Name of the Cedar Backup user on the remote peer.
      rcpCommand
    An rcp-compatible copy command to use for copying files.
      rshCommand
    An rsh-compatible command to use for remote shells to the peer.
      cbackCommand
    A cback-compatible command to use for executing managed actions.
      workingDir
    Path to the peer's working directory (an absolute local path).
      localUser
    Name of the Cedar Backup user on the current host.
      ignoreFailureMode
    Ignore failure mode for peer.

    Inherited from object: __class__

    Method Details

    __init__(self, name=None, collectDir=None, workingDir=None, remoteUser=None, rcpCommand=None, localUser=None, rshCommand=None, cbackCommand=None, ignoreFailureMode=None)
    (Constructor)

    source code 

    Initializes a remote backup peer.

    Parameters:
    • name (String, must be a valid DNS hostname) - Name of the backup peer
    • collectDir (String representing an absolute path on the remote peer) - Path to the peer's collect directory
    • workingDir (String representing an absolute path on the current host.) - Working directory that can be used to create temporary files, etc.
    • remoteUser (String representing a username, valid via remote shell to the peer) - Name of the Cedar Backup user on the remote peer
    • localUser (String representing a username, valid on the current host) - Name of the Cedar Backup user on the current host
    • rcpCommand (String representing a system command including required arguments) - An rcp-compatible copy command to use for copying files from the peer
    • rshCommand (String representing a system command including required arguments) - An rsh-compatible command to use for remote shells to the peer
    • cbackCommand (String representing a system command including required arguments) - A cback-compatible command to use for executing managed actions
    • ignoreFailureMode (One of VALID_FAILURE_MODES) - Ignore failure mode for this peer
    Raises:
    • ValueError - If collect directory is not an absolute path
    Overrides: object.__init__

    Note: If provided, each command will eventually be parsed into a list of strings suitable for passing to util.executeCommand in order to avoid security holes related to shell interpolation. This parsing will be done by the util.splitCommandLine function. See the documentation for that function for some important notes about its limitations.
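The general idea behind that parsing step can be illustrated with the standard library's shlex module (a stand-in here, not the actual util.splitCommandLine implementation): a command string is split into an argument list, and the list form can then be handed to a process launcher without any shell interpolation.

```python
import shlex

# Parse a command string into an argument list.  Executing the list form
# (rather than a single string through a shell) avoids interpolation of
# characters like $, <, and ; embedded in paths or arguments.
command = "ssh -l backup"
argumentList = shlex.split(command)
```

Note that shlex.split has its own quoting rules, which may differ in detail from util.splitCommandLine's documented limitations.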

    stagePeer(self, targetDir, ownership=None, permissions=None)

    source code 

    Stages data from the peer into the indicated local target directory.

    The target directory must already exist before this method is called. If passed in, ownership and permissions will be applied to the files that are copied.

    Parameters:
    • targetDir (String representing a directory on disk) - Target directory to write data into
    • ownership (Tuple of numeric ids (uid, gid)) - Owner and group that the staged files should have
    • permissions (UNIX permissions mode, specified in octal (e.g. 0640).) - Permissions that the staged files should have
    Returns:
    Number of files copied from the source directory to the target directory.
    Raises:
    • ValueError - If target directory is not a directory, does not exist or is not absolute.
    • ValueError - If a path cannot be encoded properly.
    • IOError - If there were no files to stage (i.e. the directory was empty)
    • IOError - If there is an IO error copying a file.
    • OSError - If there is an OS error copying or changing permissions on a file
    Notes:
    • The returned count of copied files might be inaccurate if some of the copied files already existed in the staging directory prior to the copy taking place. We don't clear the staging directory first, because some extension might also be using it.
    • If you have user/group as strings, call the util.getUidGid function to get the associated uid/gid as an ownership tuple.
    • Unlike the local peer version of this method, an I/O error might or might not be raised if the directory is empty. Since we're using a remote copy method, we just don't have the fine-grained control over our exceptions that's available when we can look directly at the filesystem, and we can't control whether the remote copy method thinks an empty directory is an error.
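For the ownership tuple, util.getUidGid performs the name-to-id lookup; a minimal sketch of the same lookup using only the standard library (an illustration, not the Cedar Backup function itself) looks like this:

```python
import grp
import pwd

def lookup_ownership(user, group):
    """Map a (user, group) name pair to a numeric (uid, gid) tuple."""
    uid = pwd.getpwnam(user).pw_uid   # raises KeyError if the user is unknown
    gid = grp.getgrnam(group).gr_gid  # raises KeyError if the group is unknown
    return (uid, gid)

# Example: resolve root's ids (actual values come from the local passwd/group files).
ownership = lookup_ownership("root", "root")
```

The resulting tuple is in exactly the form that stagePeer's ownership parameter expects.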

    checkCollectIndicator(self, collectIndicator=None)

    source code 

    Checks the collect indicator in the peer's staging directory.

    When a peer has completed collecting its backup files, it will write an empty indicator file into its collect directory. This method checks to see whether that indicator has been written. If the remote copy command fails, we return False as if the file weren't there.

    If you need to, you can override the name of the collect indicator file by passing in a different name.

    Parameters:
    • collectIndicator (String representing name of a file in the collect directory) - Name of the collect indicator file to check
    Returns:
    Boolean true/false depending on whether the indicator exists.
    Raises:
    • ValueError - If a path cannot be encoded properly.

    Note: Apparently, we can't count on all rcp-compatible implementations to return sensible errors for some error conditions. As an example, the scp command in Debian 'woody' returns a zero (normal) status even when it can't find a host or if the login or path is invalid. Because of this, the implementation of this method is rather convoluted.
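Because the copy command's exit status cannot be trusted, the check effectively amounts to attempting the copy and then believing only what is on disk afterward. A local stand-in for that double-check logic (a hypothetical helper, not the real implementation) might look like this:

```python
import os
import tempfile

def indicator_present(copy_action, workDir, indicatorName="cback.collect"):
    """Attempt to fetch an indicator file, then trust only the filesystem.

    copy_action is any callable that tries to place the indicator into
    workDir.  Its return value and exceptions are deliberately ignored,
    because rcp-style commands may report success even when they fail.
    """
    target = os.path.join(workDir, indicatorName)
    try:
        copy_action(target)
    except Exception:
        pass                      # a failed copy just means "not present"
    return os.path.isfile(target)

with tempfile.TemporaryDirectory() as work:
    # A "copy" that succeeds by creating the file locally.
    ok = indicator_present(lambda t: open(t, "w").close(), work)
    # A "copy" that silently does nothing, as a broken scp might.
    missing = indicator_present(lambda t: None, work, "other.collect")
```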

    writeStageIndicator(self, stageIndicator=None)

    source code 

    Writes the stage indicator in the peer's staging directory.

    When the master has completed collecting its backup files, it will write an empty indicator file into the peer's collect directory. The presence of this file implies that the staging process is complete.

    If you need to, you can override the name of the stage indicator file by passing in a different name.

    Parameters:
    • stageIndicator (String representing name of a file in the collect directory) - Name of the indicator file to write
    Raises:
    • ValueError - If a path cannot be encoded properly.
    • IOError - If there is an IO error creating the file.
    • OSError - If there is an OS error creating or changing permissions on the file

    Note: If you have user/group as strings, call the util.getUidGid function to get the associated uid/gid as an ownership tuple.

    executeRemoteCommand(self, command)

    source code 

    Executes a command on the peer via remote shell.

    Parameters:
    • command (String command-line suitable for use with rsh.) - Command to execute
    Raises:
    • IOError - If there is an error executing the command on the remote peer.

    executeManagedAction(self, action, fullBackup)

    source code 

    Executes a managed action on this peer.

    Parameters:
    • action - Name of the action to execute.
    • fullBackup - Whether a full backup should be executed.
    Raises:
    • IOError - If there is an error executing the action on the remote peer.

    _getDirContents(path)
    Static Method

    source code 

    Returns the contents of a directory in terms of a Set.

    The directory's contents are read as a FilesystemList containing only files, and then the list is converted into a set object for later use.

    Parameters:
    • path (String representing a path on disk) - Directory path to get contents for
    Returns:
    Set of files in the directory
    Raises:
    • ValueError - If path is not a directory or does not exist.

    _copyRemoteDir(remoteUser, localUser, remoteHost, rcpCommand, rcpCommandList, sourceDir, targetDir, ownership=None, permissions=None)
    Static Method

    source code 

    Copies files from the source directory to the target directory.

    This function is not recursive. Only the files in the directory will be copied. Ownership and permissions will be left at their default values if new values are not specified. Behavior when copying soft links from the collect directory is dependent on the behavior of the specified rcp command.

    Parameters:
    • remoteUser (String representing a username, valid via the copy command) - Name of the Cedar Backup user on the remote peer
    • localUser (String representing a username, valid on the current host) - Name of the Cedar Backup user on the current host
    • remoteHost (String representing a hostname, accessible via the copy command) - Hostname of the remote peer
    • rcpCommand (String representing a system command including required arguments) - An rcp-compatible copy command to use for copying files from the peer
    • rcpCommandList (Command as a list to be passed to util.executeCommand) - An rcp-compatible copy command to use for copying files
    • sourceDir (String representing a directory on disk) - Source directory
    • targetDir (String representing a directory on disk) - Target directory
    • ownership (Tuple of numeric ids (uid, gid)) - Owner and group that the copied files should have
    • permissions (UNIX permissions mode, specified in octal (e.g. 0640).) - Permissions that the staged files should have
    Returns:
    Number of files copied from the source directory to the target directory.
    Raises:
    • ValueError - If source or target is not a directory or does not exist.
    • IOError - If there is an IO error copying the files.
    Notes:
    • The returned count of copied files might be inaccurate if some of the copied files already existed in the staging directory prior to the copy taking place. We don't clear the staging directory first, because some extension might also be using it.
    • If you have user/group as strings, call the util.getUidGid function to get the associated uid/gid as an ownership tuple.
    • We don't have a good way of knowing exactly what files we copied down from the remote peer, unless we want to parse the output of the rcp command (ugh). We could change permissions on everything in the target directory, but that's kind of ugly too. Instead, we use Python's set functionality to figure out what files were added while we executed the rcp command. This isn't perfect - for instance, it's not correct if someone else is messing with the directory at the same time we're doing the remote copy - but it's about as good as we're going to get.
    • Apparently, we can't count on all rcp-compatible implementations to return sensible errors for some error conditions. As an example, the scp command in Debian 'woody' returns a zero (normal) status even when it can't find a host or if the login or path is invalid. We try to work around this by issuing IOError if we don't copy any files from the remote host.
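The set-difference bookkeeping described above is easy to demonstrate locally. This sketch (a stand-in using a local directory in place of an rcp command) counts the files that appeared between two snapshots:

```python
import os
import tempfile

def snapshot(path):
    """Return the set of plain-file names currently in a directory."""
    return {entry for entry in os.listdir(path)
            if os.path.isfile(os.path.join(path, entry))}

with tempfile.TemporaryDirectory() as target:
    before = snapshot(target)            # contents before the "copy"
    for name in ["a.tar.gz", "b.tar.gz"]:
        with open(os.path.join(target, name), "w") as f:
            f.write("data")              # stands in for the rcp transfer
    after = snapshot(target)             # contents after the "copy"
    copied = after - before              # files added by the copy step
    count = len(copied)
```

As the note says, this is only as reliable as the assumption that nothing else modified the directory between the two snapshots.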

    _copyRemoteFile(remoteUser, localUser, remoteHost, rcpCommand, rcpCommandList, sourceFile, targetFile, ownership=None, permissions=None, overwrite=True)
    Static Method

    source code 

    Copies a remote source file to a target file.

    Parameters:
    • remoteUser (String representing a username, valid via the copy command) - Name of the Cedar Backup user on the remote peer
    • remoteHost (String representing a hostname, accessible via the copy command) - Hostname of the remote peer
    • localUser (String representing a username, valid on the current host) - Name of the Cedar Backup user on the current host
    • rcpCommand (String representing a system command including required arguments) - An rcp-compatible copy command to use for copying files from the peer
    • rcpCommandList (Command as a list to be passed to util.executeCommand) - An rcp-compatible copy command to use for copying files
    • sourceFile (String representing a file on disk, as an absolute path) - Source file to copy
    • targetFile (String representing a file on disk, as an absolute path) - Target file to create
    • ownership (Tuple of numeric ids (uid, gid)) - Owner and group that the copied file should have
    • permissions (UNIX permissions mode, specified in octal (e.g. 0640).) - Permissions that the staged files should have
    • overwrite (Boolean true/false.) - Indicates whether it's OK to overwrite the target file.
    Raises:
    • IOError - If the target file already exists.
    • IOError - If there is an IO error copying the file
    • OSError - If there is an OS error changing permissions on the file
    Notes:
    • Internally, we have to go through and escape any spaces in the source path with double-backslash, otherwise things get screwed up. It doesn't seem to be required in the target path. I hope this is portable to various different rcp methods, but I guess it might not be (all I have to test with is OpenSSH).
    • If you have user/group as strings, call the util.getUidGid function to get the associated uid/gid as an ownership tuple.
    • We will not overwrite a target file that exists when this method is invoked. If the target already exists, we'll raise an exception.
    • Apparently, we can't count on all rcp-compatible implementations to return sensible errors for some error conditions. As an example, the scp command in Debian 'woody' returns a zero (normal) status even when it can't find a host or if the login or path is invalid. We try to work around this by issuing IOError if the target file does not exist when we're done.
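The space-escaping mentioned in the first note can be sketched as a tiny hypothetical helper (the real code may differ, and as the note says, portability across rcp implementations is not guaranteed):

```python
def escape_spaces(path):
    """Escape spaces so an scp-style command parses the path as one argument."""
    return path.replace(" ", "\\ ")

escaped = escape_spaces("/remote/collect dir/backup file.tar.gz")
```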

    _pushLocalFile(remoteUser, localUser, remoteHost, rcpCommand, rcpCommandList, sourceFile, targetFile, overwrite=True)
    Static Method

    source code 

    Copies a local source file to a remote host.

    Parameters:
    • remoteUser (String representing a username, valid via the copy command) - Name of the Cedar Backup user on the remote peer
    • localUser (String representing a username, valid on the current host) - Name of the Cedar Backup user on the current host
    • remoteHost (String representing a hostname, accessible via the copy command) - Hostname of the remote peer
    • rcpCommand (String representing a system command including required arguments) - An rcp-compatible copy command to use for copying files from the peer
    • rcpCommandList (Command as a list to be passed to util.executeCommand) - An rcp-compatible copy command to use for copying files
    • sourceFile (String representing a file on disk, as an absolute path) - Source file to copy
    • targetFile (String representing a file on disk, as an absolute path) - Target file to create
    • overwrite (Boolean true/false.) - Indicates whether it's OK to overwrite the target file.
    Raises:
    • IOError - If there is an IO error copying the file
    • OSError - If there is an OS error changing permissions on the file
    Notes:
    • We will not overwrite a target file that exists when this method is invoked. If the target already exists, we'll raise an exception.
    • Internally, we have to go through and escape any spaces in the source and target paths with double-backslash, otherwise things get screwed up. I hope this is portable to various different rcp methods, but I guess it might not be (all I have to test with is OpenSSH).
    • If you have user/group as strings, call the util.getUidGid function to get the associated uid/gid as an ownership tuple.

    _setName(self, value)

    source code 

    Property target used to set the peer name. The value must be a non-empty string and cannot be None.

    Raises:
    • ValueError - If the value is an empty string or None.

    _setCollectDir(self, value)

    source code 

    Property target used to set the collect directory. The value must be an absolute path and cannot be None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is None or is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setWorkingDir(self, value)

    source code 

    Property target used to set the working directory. The value must be an absolute path and cannot be None.

    Raises:
    • ValueError - If the value is None or is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setRemoteUser(self, value)

    source code 

    Property target used to set the remote user. The value must be a non-empty string and cannot be None.

    Raises:
    • ValueError - If the value is an empty string or None.

    _setLocalUser(self, value)

    source code 

    Property target used to set the local user. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setRcpCommand(self, value)

    source code 

    Property target to set the rcp command.

    The value must be a non-empty string or None. Its value is stored in the two forms: "raw" as provided by the client, and "parsed" into a list suitable for being passed to util.executeCommand via util.splitCommandLine.

    However, all the caller will ever see via the property is the actual value they set (which includes seeing None, even if we translate that internally to DEF_RCP_COMMAND). Internally, we should always use self._rcpCommandList if we want the actual command list.

    Raises:
    • ValueError - If the value is an empty string.

    _setRshCommand(self, value)

    source code 

    Property target to set the rsh command.

    The value must be a non-empty string or None. Its value is stored in the two forms: "raw" as provided by the client, and "parsed" into a list suitable for being passed to util.executeCommand via util.splitCommandLine.

    However, all the caller will ever see via the property is the actual value they set (which includes seeing None, even if we translate that internally to DEF_RSH_COMMAND). Internally, we should always use self._rshCommandList if we want the actual command list.

    Raises:
    • ValueError - If the value is an empty string.

    _setCbackCommand(self, value)

    source code 

    Property target to set the cback command.

    The value must be a non-empty string or None. Unlike the other commands, this value is only stored in the "raw" form provided by the client.

    Raises:
    • ValueError - If the value is an empty string.

    _setIgnoreFailureMode(self, value)

    source code 

    Property target used to set the ignoreFailure mode. If not None, the mode must be one of the values in VALID_FAILURE_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _executeRemoteCommand(remoteUser, localUser, remoteHost, rshCommand, rshCommandList, remoteCommand)
    Static Method

    source code 

    Executes a command on the peer via remote shell.

    Parameters:
    • remoteUser (String representing a username, valid on the remote host) - Name of the Cedar Backup user on the remote peer
    • localUser (String representing a username, valid on the current host) - Name of the Cedar Backup user on the current host
    • remoteHost (String representing a hostname, accessible via the copy command) - Hostname of the remote peer
    • rshCommand (String representing a system command including required arguments) - An rsh-compatible command to use for remote shells to the peer
    • rshCommandList (Command as a list to be passed to util.executeCommand) - An rsh-compatible command to use for remote shells to the peer
    • remoteCommand (String command-line, with no special shell characters ($, <, etc.)) - The command to be executed on the remote host
    Raises:
    • IOError - If there is an error executing the remote command

    _buildCbackCommand(cbackCommand, action, fullBackup)
    Static Method

    source code 

    Builds a Cedar Backup command line for the named action.

    Parameters:
    • cbackCommand - cback command to execute, including required options
    • action - Name of the action to execute.
    • fullBackup - Whether a full backup should be executed.
    Returns:
    String suitable for passing to _executeRemoteCommand as remoteCommand.
    Raises:
    • ValueError - If action is None.

    Note: If the cback command is None, then DEF_CBACK_COMMAND is used.
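As an illustration of the assembly (the flag names and ordering here are an assumption for the sketch, not a copy of the real implementation):

```python
DEF_CBACK_COMMAND = "/usr/bin/cback3"   # assumed default, for illustration only

def build_cback_command(cbackCommand, action, fullBackup):
    """Build a remote command string for the named action."""
    if action is None:
        raise ValueError("Action cannot be None.")
    command = cbackCommand if cbackCommand is not None else DEF_CBACK_COMMAND
    if fullBackup:
        return "%s --full %s" % (command, action)
    return "%s %s" % (command, action)

line = build_cback_command(None, "collect", True)
```

The resulting string is in the form _executeRemoteCommand expects for its remoteCommand parameter.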


    Property Details

    name

    Name of the peer (a valid DNS hostname).

    Get Method:
    _getName(self) - Property target used to get the peer name.
    Set Method:
    _setName(self, value) - Property target used to set the peer name.

    collectDir

    Path to the peer's collect directory (an absolute local path).

    Get Method:
    _getCollectDir(self) - Property target used to get the collect directory.
    Set Method:
    _setCollectDir(self, value) - Property target used to set the collect directory.

    remoteUser

    Name of the Cedar Backup user on the remote peer.

    Get Method:
    _getRemoteUser(self) - Property target used to get the remote user.
    Set Method:
    _setRemoteUser(self, value) - Property target used to set the remote user.

    rcpCommand

    An rcp-compatible copy command to use for copying files.

    Get Method:
    _getRcpCommand(self) - Property target used to get the rcp command.
    Set Method:
    _setRcpCommand(self, value) - Property target to set the rcp command.

    rshCommand

    An rsh-compatible command to use for remote shells to the peer.

    Get Method:
    _getRshCommand(self) - Property target used to get the rsh command.
    Set Method:
    _setRshCommand(self, value) - Property target to set the rsh command.

    cbackCommand

    A cback-compatible command to use for executing managed actions.

    Get Method:
    _getCbackCommand(self) - Property target used to get the cback command.
    Set Method:
    _setCbackCommand(self, value) - Property target to set the cback command.

    workingDir

    Path to the peer's working directory (an absolute local path).

    Get Method:
    _getWorkingDir(self) - Property target used to get the working directory.
    Set Method:
    _setWorkingDir(self, value) - Property target used to set the working directory.

    localUser

    Name of the Cedar Backup user on the current host.

    Get Method:
    _getLocalUser(self) - Property target used to get the local user.
    Set Method:
    _setLocalUser(self, value) - Property target used to set the local user.

    ignoreFailureMode

    Ignore failure mode for peer.

    Get Method:
    _getIgnoreFailureMode(self) - Property target used to get the ignoreFailure mode.
    Set Method:
    _setIgnoreFailureMode(self, value) - Property target used to set the ignoreFailure mode.

    CedarBackup3.config.BlankBehavior
    Package CedarBackup3 :: Module config :: Class BlankBehavior

    Class BlankBehavior

    source code

    object --+
             |
            BlankBehavior
    

    Class representing optimized store-action media blanking behavior.

    The following restrictions exist on data in this class:

    • The blanking mode must be one of the values in VALID_BLANK_MODES
    • The blanking factor must be a positive floating point number
    Instance Methods
     
    __init__(self, blankMode=None, blankFactor=None)
    Constructor for the BlankBehavior class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    __eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    _setBlankMode(self, value)
    Property target used to set the blanking mode.
    source code
     
    _getBlankMode(self)
    Property target used to get the blanking mode.
    source code
     
    _setBlankFactor(self, value)
    Property target used to set the blanking factor.
    source code
     
    _getBlankFactor(self)
    Property target used to get the blanking factor.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      blankMode
    Blanking mode
      blankFactor
    Blanking factor

    Inherited from object: __class__

    Method Details

    __init__(self, blankMode=None, blankFactor=None)
    (Constructor)

    source code 

    Constructor for the BlankBehavior class.

    Parameters:
    • blankMode - Blanking mode
    • blankFactor - Blanking factor
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.
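Python 3 has no __cmp__ protocol, so classes like this one keep the single three-way comparison and define the rich comparison operators in terms of it. The pattern can be sketched with functools.total_ordering (an illustrative class, not the Cedar Backup source):

```python
from functools import total_ordering

@total_ordering
class Comparable:
    """Sketch: derive rich comparisons from one __cmp__-style helper."""

    def __init__(self, value):
        self.value = value

    def _cmp(self, other):
        """Return -1/0/1 depending on whether self is <, = or > other."""
        if self.value < other.value:
            return -1
        if self.value > other.value:
            return 1
        return 0

    def __eq__(self, other):
        return self._cmp(other) == 0

    def __lt__(self, other):
        return self._cmp(other) < 0
```

Given __eq__ and __lt__, total_ordering fills in __le__, __gt__, and __ge__ automatically.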

    _setBlankMode(self, value)

    source code 

    Property target used to set the blanking mode. The value must be one of VALID_BLANK_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setBlankFactor(self, value)

    source code 

    Property target used to set the blanking factor. The value, if not None, must be a non-empty string representing a non-negative floating point number.

    Raises:
    • ValueError - If the value is an empty string.
    • ValueError - If the value is not a valid floating point number
    • ValueError - If the value is less than zero
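A minimal sketch of a setter with these validation rules (an illustration, not the Cedar Backup source):

```python
class Behavior:
    """Sketch of a validating blankFactor property."""

    def __init__(self):
        self._blankFactor = None

    def _setBlankFactor(self, value):
        if value is not None:
            if len(str(value)) < 1:
                raise ValueError("Blanking factor must be a non-empty string.")
            if float(value) < 0.0:      # float() raises ValueError for non-numbers
                raise ValueError("Blanking factor must be non-negative.")
        self._blankFactor = value

    def _getBlankFactor(self):
        return self._blankFactor

    blankFactor = property(_getBlankFactor, _setBlankFactor)
```

Note that float() itself raises ValueError for strings that are not valid floating point numbers, which covers the middle case in the list above.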

    Property Details

    blankMode

    Blanking mode

    Get Method:
    _getBlankMode(self) - Property target used to get the blanking mode.
    Set Method:
    _setBlankMode(self, value) - Property target used to set the blanking mode.

    blankFactor

    Blanking factor

    Get Method:
    _getBlankFactor(self) - Property target used to get the blanking factor.
    Set Method:
    _setBlankFactor(self, value) - Property target used to set the blanking factor.

    CedarBackup3.extend.split
    Package CedarBackup3 :: Package extend :: Module split

    Source Code for Module CedarBackup3.extend.split

    # -*- coding: iso-8859-1 -*-
    # vim: set ft=python ts=3 sw=3 expandtab:
    # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
    #
    #              C E D A R
    #          S O L U T I O N S       "Software done right."
    #           S O F T W A R E
    #
    # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
    #
    # Copyright (c) 2007,2010,2013,2015 Kenneth J. Pronovici.
    # All rights reserved.
    #
    # This program is free software; you can redistribute it and/or
    # modify it under the terms of the GNU General Public License,
    # Version 2, as published by the Free Software Foundation.
    #
    # This program is distributed in the hope that it will be useful,
    # but WITHOUT ANY WARRANTY; without even the implied warranty of
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
    #
    # Copies of the GNU General Public License are available from
    # the Free Software Foundation website, http://www.gnu.org/.
    #
    # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
    #
    # Author   : Kenneth J. Pronovici <pronovic@ieee.org>
    # Language : Python 3 (>= 3.4)
    # Project  : Official Cedar Backup Extensions
    # Purpose  : Provides an extension to split up large files in staging directories.
    #
    # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

    ########################################################################
    # Module documentation
    ########################################################################

    """
    Provides an extension to split up large files in staging directories.

    When this extension is executed, it will look through the configured Cedar
    Backup staging directory for files exceeding a specified size limit, and split
    them down into smaller files using the 'split' utility.  Any directory which
    has already been split (as indicated by the C{cback.split} file) will be
    ignored.

    This extension requires a new configuration section <split> and is intended
    to be run immediately after the standard stage action or immediately before the
    standard store action.  Aside from its own configuration, it requires the
    options and staging configuration sections in the standard Cedar Backup
    configuration file.

    @author: Kenneth J. Pronovici <pronovic@ieee.org>
    """

    ########################################################################
    # Imported modules
    ########################################################################

    # System modules
    import os
    import re
    import logging
    from functools import total_ordering

    # Cedar Backup modules
    from CedarBackup3.util import resolveCommand, executeCommand, changeOwnership
    from CedarBackup3.xmlutil import createInputDom, addContainerNode
    from CedarBackup3.xmlutil import readFirstChild
    from CedarBackup3.actions.util import findDailyDirs, writeIndicatorFile, getBackupFiles
    from CedarBackup3.config import ByteQuantity, readByteQuantity, addByteQuantityNode


    ########################################################################
    # Module-wide constants and variables
    ########################################################################

    logger = logging.getLogger("CedarBackup3.log.extend.split")

    SPLIT_COMMAND = [ "split", ]
    SPLIT_INDICATOR = "cback.split"
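    The <split> configuration section described in the module documentation can be sketched as follows. This is an illustrative fragment (the element names match the fields this module documents; the quantity values are made up), parsed here with the standard library rather than Cedar Backup's own xmlutil helpers:

    ```python
    import xml.dom.minidom

    # Hypothetical configuration fragment; the quantity values are examples only
    XML = """<cb_config>
       <split>
          <size_limit>2.5 GB</size_limit>
          <split_size>600 MB</split_size>
       </split>
    </cb_config>"""

    def read_split_section(xml_data):
        """Pull the raw size_limit/split_size text out of a <split> section."""
        dom = xml.dom.minidom.parseString(xml_data)
        section = dom.getElementsByTagName("split")[0]
        def text(tag):
            node = section.getElementsByTagName(tag)[0]
            return node.firstChild.data.strip()
        return text("size_limit"), text("split_size")

    size_limit, split_size = read_split_section(XML)
    print(size_limit, split_size)  # 2.5 GB 600 MB
    ```

    In the real extension, quantities like "2.5 GB" are converted to ByteQuantity objects by readByteQuantity.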
    
    82 83 84 ######################################################################## 85 # SplitConfig class definition 86 ######################################################################## 87 88 @total_ordering 89 -class SplitConfig(object):
    90 91 """ 92 Class representing split configuration. 93 94 Split configuration is used for splitting staging directories. 95 96 The following restrictions exist on data in this class: 97 98 - The size limit must be a ByteQuantity 99 - The split size must be a ByteQuantity 100 101 @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, 102 sizeLimit, splitSize 103 """ 104
    105 - def __init__(self, sizeLimit=None, splitSize=None):
    106 """ 107 Constructor for the C{SplitCOnfig} class. 108 109 @param sizeLimit: Size limit of the files, in bytes 110 @param splitSize: Size that files exceeding the limit will be split into, in bytes 111 112 @raise ValueError: If one of the values is invalid. 113 """ 114 self._sizeLimit = None 115 self._splitSize = None 116 self.sizeLimit = sizeLimit 117 self.splitSize = splitSize
       def __repr__(self):
          """
          Official string representation for class instance.
          """
          return "SplitConfig(%s, %s)" % (self.sizeLimit, self.splitSize)

       def __str__(self):
          """
          Informal string representation for class instance.
          """
          return self.__repr__()

       def __eq__(self, other):
          """Equals operator, implemented in terms of original Python 2 compare operator."""
          return self.__cmp__(other) == 0

       def __lt__(self, other):
          """Less-than operator, implemented in terms of original Python 2 compare operator."""
          return self.__cmp__(other) < 0

       def __gt__(self, other):
          """Greater-than operator, implemented in terms of original Python 2 compare operator."""
          return self.__cmp__(other) > 0

       def __cmp__(self, other):
          """
          Original Python 2 comparison operator.
          Lists within this class are "unordered" for equality comparisons.
          @param other: Other object to compare to.
          @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
          """
          if other is None:
             return 1
          if self.sizeLimit != other.sizeLimit:
             if (self.sizeLimit or ByteQuantity()) < (other.sizeLimit or ByteQuantity()):
                return -1
             else:
                return 1
          if self.splitSize != other.splitSize:
             if (self.splitSize or ByteQuantity()) < (other.splitSize or ByteQuantity()):
                return -1
             else:
                return 1
          return 0
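    The pattern above (a Python 2 style three-way `__cmp__` wrapped by `@total_ordering`) can be sketched in isolation. This toy class is not part of Cedar Backup; it just illustrates how defining `__eq__` and `__lt__` lets `functools.total_ordering` derive the remaining rich comparisons:

    ```python
    from functools import total_ordering

    @total_ordering
    class Quantity(object):
        """Toy stand-in for ByteQuantity: ordered by a numeric value."""
        def __init__(self, value=0):
            self.value = value
        def __cmp__(self, other):
            # Python 2 style three-way compare: -1/0/1
            if other is None:
                return 1  # any instance sorts after None
            if self.value < other.value:
                return -1
            elif self.value > other.value:
                return 1
            return 0
        def __eq__(self, other):
            return self.__cmp__(other) == 0
        def __lt__(self, other):
            return self.__cmp__(other) < 0

    # total_ordering fills in <=, >, and >= from __eq__ and __lt__
    assert Quantity(1) < Quantity(2)
    assert Quantity(2) >= Quantity(2)
    assert Quantity(3) > Quantity(1)
    ```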
       def _setSizeLimit(self, value):
          """
          Property target used to set the size limit.
          If not C{None}, the value must be a C{ByteQuantity} object.
          @raise ValueError: If the value is not a C{ByteQuantity}
          """
          if value is None:
             self._sizeLimit = None
          else:
             if not isinstance(value, ByteQuantity):
                raise ValueError("Value must be a C{ByteQuantity} object.")
             self._sizeLimit = value

       def _getSizeLimit(self):
          """
          Property target used to get the size limit.
          """
          return self._sizeLimit

       def _setSplitSize(self, value):
          """
          Property target used to set the split size.
          If not C{None}, the value must be a C{ByteQuantity} object.
          @raise ValueError: If the value is not a C{ByteQuantity}
          """
          if value is None:
             self._splitSize = None
          else:
             if not isinstance(value, ByteQuantity):
                raise ValueError("Value must be a C{ByteQuantity} object.")
             self._splitSize = value

       def _getSplitSize(self):
          """
          Property target used to get the split size.
          """
          return self._splitSize

       sizeLimit = property(_getSizeLimit, _setSizeLimit, None, doc="Size limit, as a ByteQuantity")
       splitSize = property(_getSplitSize, _setSplitSize, None, doc="Split size, as a ByteQuantity")
    ########################################################################
    # LocalConfig class definition
    ########################################################################

    @total_ordering
    class LocalConfig(object):

       """
       Class representing this extension's configuration document.

       This is not a general-purpose configuration object like the main Cedar
       Backup configuration object.  Instead, it just knows how to parse and emit
       split-specific configuration values.  Third parties who need to read and
       write configuration related to this extension should access it through the
       constructor, C{validate} and C{addConfig} methods.

       @note: Lists within this class are "unordered" for equality comparisons.

       @sort: __init__, __repr__, __str__, __cmp__, __eq__, __lt__, __gt__, split,
              validate, addConfig
       """

       def __init__(self, xmlData=None, xmlPath=None, validate=True):
          """
          Initializes a configuration object.

          If you initialize the object without passing either C{xmlData} or
          C{xmlPath} then configuration will be empty and will be invalid until it
          is filled in properly.

          No reference to the original XML data or original path is saved off by
          this class.  Once the data has been parsed (successfully or not) this
          original information is discarded.

          Unless the C{validate} argument is C{False}, the L{LocalConfig.validate}
          method will be called (with its default arguments) against configuration
          after successfully parsing any passed-in XML.  Keep in mind that even if
          C{validate} is C{False}, it might not be possible to parse the passed-in
          XML document if lower-level validations fail.

          @note: It is strongly suggested that the C{validate} option always be set
          to C{True} (the default) unless there is a specific need to read in
          invalid configuration from disk.

          @param xmlData: XML data representing configuration.
          @type xmlData: String data.

          @param xmlPath: Path to an XML file on disk.
          @type xmlPath: Absolute path to a file on disk.

          @param validate: Validate the document after parsing it.
          @type validate: Boolean true/false.

          @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in.
          @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed.
          @raise ValueError: If the parsed configuration document is not valid.
          """
          self._split = None
          self.split = None
          if xmlData is not None and xmlPath is not None:
             raise ValueError("Use either xmlData or xmlPath, but not both.")
          if xmlData is not None:
             self._parseXmlData(xmlData)
             if validate:
                self.validate()
          elif xmlPath is not None:
             with open(xmlPath) as f:
                xmlData = f.read()
             self._parseXmlData(xmlData)
             if validate:
                self.validate()
       def __repr__(self):
          """
          Official string representation for class instance.
          """
          return "LocalConfig(%s)" % (self.split)

       def __str__(self):
          """
          Informal string representation for class instance.
          """
          return self.__repr__()

       def __eq__(self, other):
          """Equals operator, implemented in terms of original Python 2 compare operator."""
          return self.__cmp__(other) == 0

       def __lt__(self, other):
          """Less-than operator, implemented in terms of original Python 2 compare operator."""
          return self.__cmp__(other) < 0

       def __gt__(self, other):
          """Greater-than operator, implemented in terms of original Python 2 compare operator."""
          return self.__cmp__(other) > 0

       def __cmp__(self, other):
          """
          Original Python 2 comparison operator.
          Lists within this class are "unordered" for equality comparisons.
          @param other: Other object to compare to.
          @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
          """
          if other is None:
             return 1
          if self.split != other.split:
             if self.split < other.split:
                return -1
             else:
                return 1
          return 0
       def _setSplit(self, value):
          """
          Property target used to set the split configuration value.
          If not C{None}, the value must be a C{SplitConfig} object.
          @raise ValueError: If the value is not a C{SplitConfig}
          """
          if value is None:
             self._split = None
          else:
             if not isinstance(value, SplitConfig):
                raise ValueError("Value must be a C{SplitConfig} object.")
             self._split = value

       def _getSplit(self):
          """
          Property target used to get the split configuration value.
          """
          return self._split

       split = property(_getSplit, _setSplit, None, "Split configuration in terms of a C{SplitConfig} object.")
       def validate(self):
          """
          Validates configuration represented by the object.

          Split configuration must be filled in.  Within that, both the size limit
          and split size must be filled in.

          @raise ValueError: If one of the validations fails.
          """
          if self.split is None:
             raise ValueError("Split section is required.")
          if self.split.sizeLimit is None:
             raise ValueError("Size limit must be set.")
          if self.split.splitSize is None:
             raise ValueError("Split size must be set.")
       def addConfig(self, xmlDom, parentNode):
          """
          Adds a <split> configuration section as the next child of a parent.

          Third parties should use this function to write configuration related to
          this extension.

          We add the following fields to the document::

             sizeLimit      //cb_config/split/size_limit
             splitSize      //cb_config/split/split_size

          @param xmlDom: DOM tree as from C{impl.createDocument()}.
          @param parentNode: Parent that the section should be appended to.
          """
          if self.split is not None:
             sectionNode = addContainerNode(xmlDom, parentNode, "split")
             addByteQuantityNode(xmlDom, sectionNode, "size_limit", self.split.sizeLimit)
             addByteQuantityNode(xmlDom, sectionNode, "split_size", self.split.splitSize)
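    addConfig builds the section with Cedar Backup's xmlutil helpers; the same document shape can be produced with nothing but the standard library. A rough equivalent sketch (addContainerNode/addByteQuantityNode are approximated with plain minidom calls, and the quantity values are illustrative):

    ```python
    import xml.dom.minidom

    def add_text_node(doc, parent, name, text):
        """Append <name>text</name> under parent (stand-in for addByteQuantityNode)."""
        node = doc.createElement(name)
        node.appendChild(doc.createTextNode(text))
        parent.appendChild(node)
        return node

    impl = xml.dom.minidom.getDOMImplementation()
    doc = impl.createDocument(None, "cb_config", None)
    section = doc.createElement("split")            # container node
    doc.documentElement.appendChild(section)
    add_text_node(doc, section, "size_limit", "2.5 GB")
    add_text_node(doc, section, "split_size", "600 MB")

    xml_text = doc.documentElement.toxml()
    print(xml_text)
    # <cb_config><split><size_limit>2.5 GB</size_limit><split_size>600 MB</split_size></split></cb_config>
    ```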
       def _parseXmlData(self, xmlData):
          """
          Internal method to parse an XML string into the object.

          This method parses the XML document into a DOM tree (C{xmlDom}) and then
          calls a static method to parse the split configuration section.

          @param xmlData: XML data to be parsed
          @type xmlData: String data

          @raise ValueError: If the XML cannot be successfully parsed.
          """
          (xmlDom, parentNode) = createInputDom(xmlData)
          self._split = LocalConfig._parseSplit(parentNode)

       @staticmethod
       def _parseSplit(parent):
          """
          Parses a split configuration section.

          We read the following individual fields::

             sizeLimit      //cb_config/split/size_limit
             splitSize      //cb_config/split/split_size

          @param parent: Parent node to search beneath.

          @return: C{SplitConfig} object or C{None} if the section does not exist.
          @raise ValueError: If some filled-in value is invalid.
          """
          split = None
          section = readFirstChild(parent, "split")
          if section is not None:
             split = SplitConfig()
             split.sizeLimit = readByteQuantity(section, "size_limit")
             split.splitSize = readByteQuantity(section, "split_size")
          return split
    ########################################################################
    # Public functions
    ########################################################################

    ###########################
    # executeAction() function
    ###########################

    def executeAction(configPath, options, config):
       """
       Executes the split backup action.

       @param configPath: Path to configuration file on disk.
       @type configPath: String representing a path on disk.

       @param options: Program command-line options.
       @type options: Options object.

       @param config: Program configuration.
       @type config: Config object.

       @raise ValueError: Under many generic error conditions
       @raise IOError: If there are I/O problems reading or writing files
       """
       logger.debug("Executing split extended action.")
       if config.options is None or config.stage is None:
          raise ValueError("Cedar Backup configuration is not properly filled in.")
       local = LocalConfig(xmlPath=configPath)
       dailyDirs = findDailyDirs(config.stage.targetDir, SPLIT_INDICATOR)
       for dailyDir in dailyDirs:
          _splitDailyDir(dailyDir, local.split.sizeLimit, local.split.splitSize,
                         config.options.backupUser, config.options.backupGroup)
          writeIndicatorFile(dailyDir, SPLIT_INDICATOR, config.options.backupUser, config.options.backupGroup)
       logger.info("Executed the split extended action successfully.")
    ##############################
    # _splitDailyDir() function
    ##############################

    def _splitDailyDir(dailyDir, sizeLimit, splitSize, backupUser, backupGroup):
       """
       Splits large files in a daily staging directory.

       Files that match INDICATOR_PATTERNS (i.e. C{"cback.store"},
       C{"cback.stage"}, etc.) are assumed to be indicator files and are ignored.
       All other files are split.

       @param dailyDir: Daily directory to split
       @param sizeLimit: Size limit, in bytes
       @param splitSize: Split size, in bytes
       @param backupUser: User that target files should be owned by
       @param backupGroup: Group that target files should be owned by

       @raise ValueError: If the daily staging directory does not exist.
       """
       logger.debug("Begin splitting contents of [%s].", dailyDir)
       fileList = getBackupFiles(dailyDir) # ignores indicator files
       for path in fileList:
          size = float(os.stat(path).st_size)
          if size > sizeLimit:
             _splitFile(path, splitSize, backupUser, backupGroup, removeSource=True)
       logger.debug("Completed splitting contents of [%s].", dailyDir)
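    The selection step above is just a stat-and-compare over the staging directory. A self-contained sketch (plain integers stand in for ByteQuantity, and the file names are made up):

    ```python
    import os
    import tempfile

    def files_over_limit(directory, size_limit):
        """Return names of files in directory whose size exceeds size_limit bytes."""
        result = []
        for name in sorted(os.listdir(directory)):
            path = os.path.join(directory, name)
            if os.path.isfile(path) and os.stat(path).st_size > size_limit:
                result.append(name)
        return result

    with tempfile.TemporaryDirectory() as d:
        with open(os.path.join(d, "small.tar"), "wb") as f:
            f.write(b"x" * 100)
        with open(os.path.join(d, "large.tar"), "wb") as f:
            f.write(b"x" * 5000)
        print(files_over_limit(d, 1024))  # ['large.tar']
    ```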
    ########################
    # _splitFile() function
    ########################

    def _splitFile(sourcePath, splitSize, backupUser, backupGroup, removeSource=False):
       """
       Splits the source file into chunks of the indicated size.

       The split files will be owned by the indicated backup user and group.  If
       C{removeSource} is C{True}, then the source file will be removed after it is
       successfully split.

       @param sourcePath: Absolute path of the source file to split
       @param splitSize: Size that chunks of the source file will be split into, in bytes
       @param backupUser: User that target files should be owned by
       @param backupGroup: Group that target files should be owned by
       @param removeSource: Indicates whether to remove the source file

       @raise IOError: If there is a problem accessing, splitting or removing the source file.
       """
       cwd = os.getcwd()
       try:
          if not os.path.exists(sourcePath):
             raise ValueError("Source path [%s] does not exist." % sourcePath)
          dirname = os.path.dirname(sourcePath)
          filename = os.path.basename(sourcePath)
          prefix = "%s_" % filename
          bytes = int(splitSize.bytes) # pylint: disable=W0622
          os.chdir(dirname) # need to operate from directory that we want files written to
          command = resolveCommand(SPLIT_COMMAND)
          args = [ "--verbose", "--numeric-suffixes", "--suffix-length=5", "--bytes=%d" % bytes, filename, prefix, ]
          (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=False)
          if result != 0:
             raise IOError("Error [%d] calling split for [%s]." % (result, sourcePath))
          pattern = re.compile(r"(creating file [`'])(%s)(.*)(')" % prefix)
          match = pattern.search(output[-1:][0])
          if match is None:
             raise IOError("Unable to parse output from split command.")
          value = int(match.group(3).strip())
          for index in range(0, value):
             path = "%s%05d" % (prefix, index)
             if not os.path.exists(path):
                raise IOError("After call to split, expected file [%s] does not exist." % path)
             changeOwnership(path, backupUser, backupGroup)
          if removeSource:
             if os.path.exists(sourcePath):
                try:
                   os.remove(sourcePath)
                   logger.debug("Completed removing old file [%s].", sourcePath)
                except:
                   raise IOError("Failed to remove file [%s] after splitting it." % (sourcePath))
       finally:
          os.chdir(cwd)
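    The regular expression above has to cope with both quoting styles GNU split has used over the years (backtick in older coreutils, apostrophe in newer ones, per the 2.21.1 changelog note). A standalone sketch of the parsing step, using made-up output lines; unlike the code above, this sketch also escapes the prefix with re.escape before interpolating it:

    ```python
    import re

    def last_chunk_index(prefix, output_lines):
        """Extract the numeric suffix of the last chunk named by 'split --verbose'."""
        pattern = re.compile(r"(creating file [`'])(%s)(.*)(')" % re.escape(prefix))
        match = pattern.search(output_lines[-1])
        if match is None:
            raise IOError("Unable to parse output from split command.")
        return int(match.group(3).strip())

    # Hypothetical output, newer coreutils style (apostrophe quoting)
    output = [
        "creating file 'backup.tar_00000'",
        "creating file 'backup.tar_00001'",
        "creating file 'backup.tar_00002'",
    ]
    print(last_chunk_index("backup.tar_", output))  # 2
    ```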

    Package CedarBackup3 :: Module peer

    Source Code for Module CedarBackup3.peer

    # -*- coding: iso-8859-1 -*-
    # vim: set ft=python ts=3 sw=3 expandtab:
    # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
    #
    #              C E D A R
    #          S O L U T I O N S       "Software done right."
    #           S O F T W A R E
    #
    # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
    #
    # Copyright (c) 2004-2008,2010,2015 Kenneth J. Pronovici.
    # All rights reserved.
    #
    # This program is free software; you can redistribute it and/or
    # modify it under the terms of the GNU General Public License,
    # Version 2, as published by the Free Software Foundation.
    #
    # This program is distributed in the hope that it will be useful,
    # but WITHOUT ANY WARRANTY; without even the implied warranty of
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
    #
    # Copies of the GNU General Public License are available from
    # the Free Software Foundation website, http://www.gnu.org/.
    #
    # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
    #
    # Author   : Kenneth J. Pronovici <pronovic@ieee.org>
    # Language : Python 3 (>= 3.4)
    # Project  : Cedar Backup, release 3
    # Purpose  : Provides backup peer-related objects.
    #
    # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

    ########################################################################
    # Module documentation
    ########################################################################

    """
    Provides backup peer-related objects and utility functions.

    @sort: LocalPeer, RemotePeer

    @var DEF_COLLECT_INDICATOR: Name of the default collect indicator file.
    @var DEF_STAGE_INDICATOR: Name of the default stage indicator file.

    @author: Kenneth J. Pronovici <pronovic@ieee.org>
    """


    ########################################################################
    # Imported modules
    ########################################################################

    # System modules
    import os
    import logging
    import shutil

    # Cedar Backup modules
    from CedarBackup3.filesystem import FilesystemList
    from CedarBackup3.util import resolveCommand, executeCommand, isRunningAsRoot
    from CedarBackup3.util import splitCommandLine, encodePath
    from CedarBackup3.config import VALID_FAILURE_MODES


    ########################################################################
    # Module-wide constants and variables
    ########################################################################

    logger                  = logging.getLogger("CedarBackup3.log.peer")

    DEF_RCP_COMMAND         = [ "/usr/bin/scp", "-B", "-q", "-C" ]
    DEF_RSH_COMMAND         = [ "/usr/bin/ssh", ]
    DEF_CBACK_COMMAND       = "/usr/bin/cback3"

    DEF_COLLECT_INDICATOR   = "cback.collect"
    DEF_STAGE_INDICATOR     = "cback.stage"

    SU_COMMAND              = [ "su" ]
    
    ########################################################################
    # LocalPeer class definition
    ########################################################################

    class LocalPeer(object):

       ######################
       # Class documentation
       ######################

       """
       Backup peer representing a local peer in a backup pool.

       This is a class representing a local (non-network) peer in a backup pool.
       Local peers are backed up by simple filesystem copy operations.  A local
       peer has associated with it a name (typically, but not necessarily, a
       hostname) and a collect directory.

       The public methods other than the constructor are part of a "backup peer"
       interface shared with the C{RemotePeer} class.

       @sort: __init__, stagePeer, checkCollectIndicator, writeStageIndicator,
              _copyLocalDir, _copyLocalFile, name, collectDir
       """

       ##############
       # Constructor
       ##############

       def __init__(self, name, collectDir, ignoreFailureMode=None):
          """
          Initializes a local backup peer.

          Note that the collect directory must be an absolute path, but does not
          have to exist when the object is instantiated.  We do a lazy validation
          on this value since we could (potentially) be creating peer objects
          before an ongoing backup completed.

          @param name: Name of the backup peer
          @type name: String, typically a hostname

          @param collectDir: Path to the peer's collect directory
          @type collectDir: String representing an absolute local path on disk

          @param ignoreFailureMode: Ignore failure mode for this peer
          @type ignoreFailureMode: One of VALID_FAILURE_MODES

          @raise ValueError: If the name is empty.
          @raise ValueError: If collect directory is not an absolute path.
          """
          self._name = None
          self._collectDir = None
          self._ignoreFailureMode = None
          self.name = name
          self.collectDir = collectDir
          self.ignoreFailureMode = ignoreFailureMode

       #############
       # Properties
       #############
       def _setName(self, value):
          """
          Property target used to set the peer name.
          The value must be a non-empty string and cannot be C{None}.
          @raise ValueError: If the value is an empty string or C{None}.
          """
          if value is None or len(value) < 1:
             raise ValueError("Peer name must be a non-empty string.")
          self._name = value

       def _getName(self):
          """
          Property target used to get the peer name.
          """
          return self._name

       def _setCollectDir(self, value):
          """
          Property target used to set the collect directory.
          The value must be an absolute path and cannot be C{None}.
          It does not have to exist on disk at the time of assignment.
          @raise ValueError: If the value is C{None} or is not an absolute path.
          @raise ValueError: If a path cannot be encoded properly.
          """
          if value is None or not os.path.isabs(value):
             raise ValueError("Collect directory must be an absolute path.")
          self._collectDir = encodePath(value)

       def _getCollectDir(self):
          """
          Property target used to get the collect directory.
          """
          return self._collectDir

       def _setIgnoreFailureMode(self, value):
          """
          Property target used to set the ignoreFailure mode.
          If not C{None}, the mode must be one of the values in L{VALID_FAILURE_MODES}.
          @raise ValueError: If the value is not valid.
          """
          if value is not None:
             if value not in VALID_FAILURE_MODES:
                raise ValueError("Ignore failure mode must be one of %s." % VALID_FAILURE_MODES)
          self._ignoreFailureMode = value

       def _getIgnoreFailureMode(self):
          """
          Property target used to get the ignoreFailure mode.
          """
          return self._ignoreFailureMode

       name = property(_getName, _setName, None, "Name of the peer.")
       collectDir = property(_getCollectDir, _setCollectDir, None, "Path to the peer's collect directory (an absolute local path).")
       ignoreFailureMode = property(_getIgnoreFailureMode, _setIgnoreFailureMode, None, "Ignore failure mode for peer.")

       #################
       # Public methods
       #################
       def stagePeer(self, targetDir, ownership=None, permissions=None):
          """
          Stages data from the peer into the indicated local target directory.

          The collect and target directories must both already exist before this
          method is called.  If passed in, ownership and permissions will be
          applied to the files that are copied.

          @note: The caller is responsible for checking that the indicator exists,
          if they care.  This function only stages the files within the directory.

          @note: If you have user/group as strings, call the L{util.getUidGid} function
          to get the associated uid/gid as an ownership tuple.

          @param targetDir: Target directory to write data into
          @type targetDir: String representing a directory on disk

          @param ownership: Owner and group that the staged files should have
          @type ownership: Tuple of numeric ids C{(uid, gid)}

          @param permissions: Permissions that the staged files should have
          @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}).

          @return: Number of files copied from the source directory to the target directory.

          @raise ValueError: If collect directory is not a directory or does not exist
          @raise ValueError: If target directory is not a directory, does not exist or is not absolute.
          @raise ValueError: If a path cannot be encoded properly.
          @raise IOError: If there were no files to stage (i.e. the directory was empty)
          @raise IOError: If there is an IO error copying a file.
          @raise OSError: If there is an OS error copying or changing permissions on a file
          """
          targetDir = encodePath(targetDir)
          if not os.path.isabs(targetDir):
             logger.debug("Target directory [%s] not an absolute path.", targetDir)
             raise ValueError("Target directory must be an absolute path.")
          if not os.path.exists(self.collectDir) or not os.path.isdir(self.collectDir):
             logger.debug("Collect directory [%s] is not a directory or does not exist on disk.", self.collectDir)
             raise ValueError("Collect directory is not a directory or does not exist on disk.")
          if not os.path.exists(targetDir) or not os.path.isdir(targetDir):
             logger.debug("Target directory [%s] is not a directory or does not exist on disk.", targetDir)
             raise ValueError("Target directory is not a directory or does not exist on disk.")
          count = LocalPeer._copyLocalDir(self.collectDir, targetDir, ownership, permissions)
          if count == 0:
             raise IOError("Did not copy any files from local peer.")
          return count
       def checkCollectIndicator(self, collectIndicator=None):
          """
          Checks the collect indicator in the peer's staging directory.

          When a peer has completed collecting its backup files, it will write an
          empty indicator file into its collect directory.  This method checks to
          see whether that indicator has been written.  We're "stupid" here - if
          the collect directory doesn't exist, you'll naturally get back C{False}.

          If you need to, you can override the name of the collect indicator file
          by passing in a different name.

          @param collectIndicator: Name of the collect indicator file to check
          @type collectIndicator: String representing name of a file in the collect directory

          @return: Boolean true/false depending on whether the indicator exists.
          @raise ValueError: If a path cannot be encoded properly.
          """
          collectIndicator = encodePath(collectIndicator)
          if collectIndicator is None:
             return os.path.exists(os.path.join(self.collectDir, DEF_COLLECT_INDICATOR))
          else:
             return os.path.exists(os.path.join(self.collectDir, collectIndicator))
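    The indicator protocol above is nothing more than an empty sentinel file whose presence signals that a phase has completed. A minimal self-contained sketch (the helper names are illustrative; only the `cback.collect` file name comes from the module):

    ```python
    import os
    import tempfile

    DEF_COLLECT_INDICATOR = "cback.collect"

    def write_indicator(collect_dir, name=DEF_COLLECT_INDICATOR):
        """Write an empty sentinel file into the collect directory."""
        open(os.path.join(collect_dir, name), "w").close()

    def check_indicator(collect_dir, name=DEF_COLLECT_INDICATOR):
        """True only if the sentinel exists; a missing directory just yields False."""
        return os.path.exists(os.path.join(collect_dir, name))

    with tempfile.TemporaryDirectory() as d:
        print(check_indicator(d))   # False: collect not yet complete
        write_indicator(d)
        print(check_indicator(d))   # True: peer is ready to be staged
    ```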
       def writeStageIndicator(self, stageIndicator=None, ownership=None, permissions=None):
          """
          Writes the stage indicator in the peer's staging directory.

          When the master has completed collecting its backup files, it will write
          an empty indicator file into the peer's collect directory.  The presence
          of this file implies that the staging process is complete.

          If you need to, you can override the name of the stage indicator file by
          passing in a different name.

          @note: If you have user/group as strings, call the L{util.getUidGid}
          function to get the associated uid/gid as an ownership tuple.

          @param stageIndicator: Name of the indicator file to write
          @type stageIndicator: String representing name of a file in the collect directory

          @param ownership: Owner and group that the indicator file should have
          @type ownership: Tuple of numeric ids C{(uid, gid)}

          @param permissions: Permissions that the indicator file should have
          @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}).

          @raise ValueError: If collect directory is not a directory or does not exist
          @raise ValueError: If a path cannot be encoded properly.
          @raise IOError: If there is an IO error creating the file.
          @raise OSError: If there is an OS error creating or changing permissions on the file
          """
          stageIndicator = encodePath(stageIndicator)
          if not os.path.exists(self.collectDir) or not os.path.isdir(self.collectDir):
             logger.debug("Collect directory [%s] is not a directory or does not exist on disk.", self.collectDir)
             raise ValueError("Collect directory is not a directory or does not exist on disk.")
          if stageIndicator is None:
             fileName = os.path.join(self.collectDir, DEF_STAGE_INDICATOR)
          else:
             fileName = os.path.join(self.collectDir, stageIndicator)
          LocalPeer._copyLocalFile(None, fileName, ownership, permissions) # None for sourceFile results in an empty target

    ##################
    # Private methods
    ##################

    @staticmethod
    def _copyLocalDir(sourceDir, targetDir, ownership=None, permissions=None):
        """
        Copies files from the source directory to the target directory.

        This function is not recursive. Only the files in the directory will be
        copied. Ownership and permissions will be left at their default values
        if new values are not specified. The source and target directories are
        allowed to be soft links to a directory, but besides that soft links are
        ignored.

        @note: If you have user/group as strings, call the L{util.getUidGid}
        function to get the associated uid/gid as an ownership tuple.

        @param sourceDir: Source directory
        @type sourceDir: String representing a directory on disk

        @param targetDir: Target directory
        @type targetDir: String representing a directory on disk

        @param ownership: Owner and group that the copied files should have
        @type ownership: Tuple of numeric ids C{(uid, gid)}

        @param permissions: Permissions that the staged files should have
        @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}).

        @return: Number of files copied from the source directory to the target directory.

        @raise ValueError: If source or target is not a directory or does not exist.
        @raise ValueError: If a path cannot be encoded properly.
        @raise IOError: If there is an IO error copying the files.
        @raise OSError: If there is an OS error copying or changing permissions on a file
        """
        filesCopied = 0
        sourceDir = encodePath(sourceDir)
        targetDir = encodePath(targetDir)
        for fileName in os.listdir(sourceDir):
            sourceFile = os.path.join(sourceDir, fileName)
            targetFile = os.path.join(targetDir, fileName)
            LocalPeer._copyLocalFile(sourceFile, targetFile, ownership, permissions)
            filesCopied += 1
        return filesCopied

    @staticmethod
    def _copyLocalFile(sourceFile=None, targetFile=None, ownership=None, permissions=None, overwrite=True):
        """
        Copies a source file to a target file.

        If the source file is C{None} then the target file will be created or
        overwritten as an empty file. If the target file is C{None}, this method
        is a no-op. Attempting to copy a soft link or a directory will result in
        an exception.

        @note: If you have user/group as strings, call the L{util.getUidGid}
        function to get the associated uid/gid as an ownership tuple.

        @note: If C{overwrite} is false, we will not overwrite a target file
        that exists when this method is invoked. If the target already exists,
        we'll raise an exception.

        @param sourceFile: Source file to copy
        @type sourceFile: String representing a file on disk, as an absolute path

        @param targetFile: Target file to create
        @type targetFile: String representing a file on disk, as an absolute path

        @param ownership: Owner and group that the copied file should have
        @type ownership: Tuple of numeric ids C{(uid, gid)}

        @param permissions: Permissions that the staged files should have
        @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}).

        @param overwrite: Indicates whether it's OK to overwrite the target file.
        @type overwrite: Boolean true/false.

        @raise ValueError: If the passed-in source file is not a regular file.
        @raise ValueError: If a path cannot be encoded properly.
        @raise IOError: If the target file already exists.
        @raise IOError: If there is an IO error copying the file
        @raise OSError: If there is an OS error copying or changing permissions on a file
        """
        targetFile = encodePath(targetFile)
        sourceFile = encodePath(sourceFile)
        if targetFile is None:
            return
        if not overwrite:
            if os.path.exists(targetFile):
                raise IOError("Target file [%s] already exists." % targetFile)
        if sourceFile is None:
            with open(targetFile, "w") as f:
                f.write("")
        else:
            if os.path.isfile(sourceFile) and not os.path.islink(sourceFile):
                shutil.copy(sourceFile, targetFile)
            else:
                logger.debug("Source [%s] is not a regular file.", sourceFile)
                raise ValueError("Source is not a regular file.")
        if ownership is not None:
            os.chown(targetFile, ownership[0], ownership[1])
        if permissions is not None:
            os.chmod(targetFile, permissions)
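The "C{None} source yields an empty target" behavior above is what implements indicator files. A minimal sketch of just that piece (the helper name `write_indicator` is hypothetical):

```python
# Minimal sketch of the indicator-file behavior described above: no source
# file means "create an empty target", with permissions applied afterwards
# (hypothetical helper name, not part of the real module).
import os

def write_indicator(path, permissions=None):
    with open(path, "w") as f:
        f.write("")                  # an empty file marks the stage as complete
    if permissions is not None:
        os.chmod(path, permissions)  # e.g. 0o640, as in the docstrings above
```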

########################################################################
# RemotePeer class definition
########################################################################

class RemotePeer(object):

    ######################
    # Class documentation
    ######################

    """
    Backup peer representing a remote peer in a backup pool.

    This is a class representing a remote (networked) peer in a backup pool.
    Remote peers are backed up using an rcp-compatible copy command. A remote
    peer has associated with it a name (which must be a valid hostname), a
    collect directory, a working directory and a copy method (an rcp-compatible
    command).

    You can also set an optional local user value. This username will be used
    as the local user for any remote copies that are required. It can only be
    used if the root user is executing the backup. The root user will C{su} to
    the local user and execute the remote copies as that user.

    The copy method is associated with the peer and not with the actual request
    to copy, because we can envision that each remote host might have a
    different connect method.

    The public methods other than the constructor are part of a "backup peer"
    interface shared with the C{LocalPeer} class.

    @sort: __init__, stagePeer, checkCollectIndicator, writeStageIndicator,
           executeRemoteCommand, executeManagedAction, _getDirContents,
           _copyRemoteDir, _copyRemoteFile, _pushLocalFile, name, collectDir,
           remoteUser, rcpCommand, rshCommand, cbackCommand
    """

    ##############
    # Constructor
    ##############
    def __init__(self, name=None, collectDir=None, workingDir=None, remoteUser=None,
                 rcpCommand=None, localUser=None, rshCommand=None, cbackCommand=None,
                 ignoreFailureMode=None):
        """
        Initializes a remote backup peer.

        @note: If provided, each command will eventually be parsed into a list of
        strings suitable for passing to C{util.executeCommand} in order to avoid
        security holes related to shell interpolation. This parsing will be
        done by the L{util.splitCommandLine} function. See the documentation for
        that function for some important notes about its limitations.

        @param name: Name of the backup peer
        @type name: String, must be a valid DNS hostname

        @param collectDir: Path to the peer's collect directory
        @type collectDir: String representing an absolute path on the remote peer

        @param workingDir: Working directory that can be used to create temporary files, etc.
        @type workingDir: String representing an absolute path on the current host.

        @param remoteUser: Name of the Cedar Backup user on the remote peer
        @type remoteUser: String representing a username, valid via remote shell to the peer

        @param localUser: Name of the Cedar Backup user on the current host
        @type localUser: String representing a username, valid on the current host

        @param rcpCommand: An rcp-compatible copy command to use for copying files from the peer
        @type rcpCommand: String representing a system command including required arguments

        @param rshCommand: An rsh-compatible command to use for remote shells to the peer
        @type rshCommand: String representing a system command including required arguments

        @param cbackCommand: A cback-compatible command to use for executing managed actions
        @type cbackCommand: String representing a system command including required arguments

        @param ignoreFailureMode: Ignore failure mode for this peer
        @type ignoreFailureMode: One of VALID_FAILURE_MODES

        @raise ValueError: If collect directory is not an absolute path
        """
        self._name = None
        self._collectDir = None
        self._workingDir = None
        self._remoteUser = None
        self._localUser = None
        self._rcpCommand = None
        self._rcpCommandList = None
        self._rshCommand = None
        self._rshCommandList = None
        self._cbackCommand = None
        self._ignoreFailureMode = None
        self.name = name
        self.collectDir = collectDir
        self.workingDir = workingDir
        self.remoteUser = remoteUser
        self.localUser = localUser
        self.rcpCommand = rcpCommand
        self.rshCommand = rshCommand
        self.cbackCommand = cbackCommand
        self.ignoreFailureMode = ignoreFailureMode


    #############
    # Properties
    #############
    def _setName(self, value):
        """
        Property target used to set the peer name.
        The value must be a non-empty string and cannot be C{None}.
        @raise ValueError: If the value is an empty string or C{None}.
        """
        if value is None or len(value) < 1:
            raise ValueError("Peer name must be a non-empty string.")
        self._name = value

    def _getName(self):
        """
        Property target used to get the peer name.
        """
        return self._name

    def _setCollectDir(self, value):
        """
        Property target used to set the collect directory.
        The value must be an absolute path and cannot be C{None}.
        It does not have to exist on disk at the time of assignment.
        @raise ValueError: If the value is C{None} or is not an absolute path.
        @raise ValueError: If the value cannot be encoded properly.
        """
        if value is not None:
            if not os.path.isabs(value):
                raise ValueError("Collect directory must be an absolute path.")
        self._collectDir = encodePath(value)

    def _getCollectDir(self):
        """
        Property target used to get the collect directory.
        """
        return self._collectDir
    def _setWorkingDir(self, value):
        """
        Property target used to set the working directory.
        The value must be an absolute path and cannot be C{None}.
        @raise ValueError: If the value is C{None} or is not an absolute path.
        @raise ValueError: If the value cannot be encoded properly.
        """
        if value is not None:
            if not os.path.isabs(value):
                raise ValueError("Working directory must be an absolute path.")
        self._workingDir = encodePath(value)

    def _getWorkingDir(self):
        """
        Property target used to get the working directory.
        """
        return self._workingDir

    def _setRemoteUser(self, value):
        """
        Property target used to set the remote user.
        The value must be a non-empty string and cannot be C{None}.
        @raise ValueError: If the value is an empty string or C{None}.
        """
        if value is None or len(value) < 1:
            raise ValueError("Peer remote user must be a non-empty string.")
        self._remoteUser = value

    def _getRemoteUser(self):
        """
        Property target used to get the remote user.
        """
        return self._remoteUser
    def _setLocalUser(self, value):
        """
        Property target used to set the local user.
        The value must be a non-empty string if it is not C{None}.
        @raise ValueError: If the value is an empty string.
        """
        if value is not None:
            if len(value) < 1:
                raise ValueError("Peer local user must be a non-empty string.")
        self._localUser = value

    def _getLocalUser(self):
        """
        Property target used to get the local user.
        """
        return self._localUser

    def _setRcpCommand(self, value):
        """
        Property target to set the rcp command.

        The value must be a non-empty string or C{None}. Its value is stored in
        two forms: "raw" as provided by the client, and "parsed" into a list
        suitable for being passed to L{util.executeCommand} via
        L{util.splitCommandLine}.

        However, all the caller will ever see via the property is the actual
        value they set (which includes seeing C{None}, even if we translate that
        internally to C{DEF_RCP_COMMAND}). Internally, we should always use
        C{self._rcpCommandList} if we want the actual command list.

        @raise ValueError: If the value is an empty string.
        """
        if value is None:
            self._rcpCommand = None
            self._rcpCommandList = DEF_RCP_COMMAND
        else:
            if len(value) >= 1:
                self._rcpCommand = value
                self._rcpCommandList = splitCommandLine(self._rcpCommand)
            else:
                raise ValueError("The rcp command must be a non-empty string.")

    def _getRcpCommand(self):
        """
        Property target used to get the rcp command.
        """
        return self._rcpCommand
    def _setRshCommand(self, value):
        """
        Property target to set the rsh command.

        The value must be a non-empty string or C{None}. Its value is stored in
        two forms: "raw" as provided by the client, and "parsed" into a list
        suitable for being passed to L{util.executeCommand} via
        L{util.splitCommandLine}.

        However, all the caller will ever see via the property is the actual
        value they set (which includes seeing C{None}, even if we translate that
        internally to C{DEF_RSH_COMMAND}). Internally, we should always use
        C{self._rshCommandList} if we want the actual command list.

        @raise ValueError: If the value is an empty string.
        """
        if value is None:
            self._rshCommand = None
            self._rshCommandList = DEF_RSH_COMMAND
        else:
            if len(value) >= 1:
                self._rshCommand = value
                self._rshCommandList = splitCommandLine(self._rshCommand)
            else:
                raise ValueError("The rsh command must be a non-empty string.")

    def _getRshCommand(self):
        """
        Property target used to get the rsh command.
        """
        return self._rshCommand

    def _setCbackCommand(self, value):
        """
        Property target to set the cback command.

        The value must be a non-empty string or C{None}. Unlike the other
        commands, this value is only stored in the "raw" form provided by the
        client.

        @raise ValueError: If the value is an empty string.
        """
        if value is None:
            self._cbackCommand = None
        else:
            if len(value) >= 1:
                self._cbackCommand = value
            else:
                raise ValueError("The cback command must be a non-empty string.")

    def _getCbackCommand(self):
        """
        Property target used to get the cback command.
        """
        return self._cbackCommand
    def _setIgnoreFailureMode(self, value):
        """
        Property target used to set the ignoreFailure mode.
        If not C{None}, the mode must be one of the values in L{VALID_FAILURE_MODES}.
        @raise ValueError: If the value is not valid.
        """
        if value is not None:
            if value not in VALID_FAILURE_MODES:
                raise ValueError("Ignore failure mode must be one of %s." % VALID_FAILURE_MODES)
        self._ignoreFailureMode = value

    def _getIgnoreFailureMode(self):
        """
        Property target used to get the ignoreFailure mode.
        """
        return self._ignoreFailureMode

    name = property(_getName, _setName, None, "Name of the peer (a valid DNS hostname).")
    collectDir = property(_getCollectDir, _setCollectDir, None, "Path to the peer's collect directory (an absolute path on the remote peer).")
    workingDir = property(_getWorkingDir, _setWorkingDir, None, "Path to the peer's working directory (an absolute local path).")
    remoteUser = property(_getRemoteUser, _setRemoteUser, None, "Name of the Cedar Backup user on the remote peer.")
    localUser = property(_getLocalUser, _setLocalUser, None, "Name of the Cedar Backup user on the current host.")
    rcpCommand = property(_getRcpCommand, _setRcpCommand, None, "An rcp-compatible copy command to use for copying files.")
    rshCommand = property(_getRshCommand, _setRshCommand, None, "An rsh-compatible command to use for remote shells to the peer.")
    cbackCommand = property(_getCbackCommand, _setCbackCommand, None, "A cback-compatible command to use for executing managed actions.")
    ignoreFailureMode = property(_getIgnoreFailureMode, _setIgnoreFailureMode, None, "Ignore failure mode for peer.")


    #################
    # Public methods
    #################
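The getter/setter pairs above all follow the same validation pattern: assignments are routed through a validating setter, so invalid values are rejected at assignment time. Condensed into a standalone (hypothetical) illustration:

```python
# Condensed illustration of the property pattern used above; a hypothetical
# minimal class, not the real RemotePeer.
class Peer(object):

    def __init__(self, name=None):
        self._name = None
        self.name = name  # routed through the property, so validation runs

    def _setName(self, value):
        if value is None or len(value) < 1:
            raise ValueError("Peer name must be a non-empty string.")
        self._name = value

    def _getName(self):
        return self._name

    name = property(_getName, _setName, None, "Name of the peer (a valid DNS hostname).")
```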
    def stagePeer(self, targetDir, ownership=None, permissions=None):
        """
        Stages data from the peer into the indicated local target directory.

        The target directory must already exist before this method is called. If
        passed in, ownership and permissions will be applied to the files that
        are copied.

        @note: The returned count of copied files might be inaccurate if some of
        the copied files already existed in the staging directory prior to the
        copy taking place. We don't clear the staging directory first, because
        some extension might also be using it.

        @note: If you have user/group as strings, call the L{util.getUidGid} function
        to get the associated uid/gid as an ownership tuple.

        @note: Unlike the local peer version of this method, an I/O error might
        or might not be raised if the directory is empty. Since we're using a
        remote copy method, we just don't have the fine-grained control over our
        exceptions that's available when we can look directly at the filesystem,
        and we can't control whether the remote copy method thinks an empty
        directory is an error.

        @param targetDir: Target directory to write data into
        @type targetDir: String representing a directory on disk

        @param ownership: Owner and group that the staged files should have
        @type ownership: Tuple of numeric ids C{(uid, gid)}

        @param permissions: Permissions that the staged files should have
        @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}).

        @return: Number of files copied from the source directory to the target directory.

        @raise ValueError: If target directory is not a directory, does not exist or is not absolute.
        @raise ValueError: If a path cannot be encoded properly.
        @raise IOError: If there were no files to stage (i.e. the directory was empty)
        @raise IOError: If there is an IO error copying a file.
        @raise OSError: If there is an OS error copying or changing permissions on a file
        """
        targetDir = encodePath(targetDir)
        if not os.path.isabs(targetDir):
            logger.debug("Target directory [%s] not an absolute path.", targetDir)
            raise ValueError("Target directory must be an absolute path.")
        if not os.path.exists(targetDir) or not os.path.isdir(targetDir):
            logger.debug("Target directory [%s] is not a directory or does not exist on disk.", targetDir)
            raise ValueError("Target directory is not a directory or does not exist on disk.")
        count = RemotePeer._copyRemoteDir(self.remoteUser, self.localUser, self.name,
                                          self._rcpCommand, self._rcpCommandList,
                                          self.collectDir, targetDir,
                                          ownership, permissions)
        if count == 0:
            raise IOError("Did not copy any files from remote peer.")
        return count
    def checkCollectIndicator(self, collectIndicator=None):
        """
        Checks the collect indicator in the peer's collect directory.

        When a peer has completed collecting its backup files, it will write an
        empty indicator file into its collect directory. This method checks to
        see whether that indicator has been written. If the remote copy command
        fails, we return C{False} as if the file weren't there.

        If you need to, you can override the name of the collect indicator file
        by passing in a different name.

        @note: Apparently, we can't count on all rcp-compatible implementations
        to return sensible errors for some error conditions. As an example, the
        C{scp} command in Debian 'woody' returns a zero (normal) status even when
        it can't find a host or if the login or path is invalid. Because of
        this, the implementation of this method is rather convoluted.

        @param collectIndicator: Name of the collect indicator file to check
        @type collectIndicator: String representing name of a file in the collect directory

        @return: Boolean true/false depending on whether the indicator exists.
        @raise ValueError: If a path cannot be encoded properly.
        """
        try:
            if collectIndicator is None:
                sourceFile = os.path.join(self.collectDir, DEF_COLLECT_INDICATOR)
                targetFile = os.path.join(self.workingDir, DEF_COLLECT_INDICATOR)
            else:
                collectIndicator = encodePath(collectIndicator)
                sourceFile = os.path.join(self.collectDir, collectIndicator)
                targetFile = os.path.join(self.workingDir, collectIndicator)
            logger.debug("Fetch remote [%s] into [%s].", sourceFile, targetFile)
            if os.path.exists(targetFile):
                try:
                    os.remove(targetFile)
                except OSError:
                    raise Exception("Error: collect indicator [%s] already exists!" % targetFile)
            try:
                RemotePeer._copyRemoteFile(self.remoteUser, self.localUser, self.name,
                                           self._rcpCommand, self._rcpCommandList,
                                           sourceFile, targetFile,
                                           overwrite=False)
                if os.path.exists(targetFile):
                    return True
                else:
                    return False
            except Exception as e:
                logger.info("Failed looking for collect indicator: %s", e)
                return False
        finally:
            if os.path.exists(targetFile):
                try:
                    os.remove(targetFile)
                except OSError:
                    pass
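Because the copy command's exit status can't be trusted (per the note above), the check boils down to "fetch, then test for existence". A simplified sketch with the remote copy abstracted into a pluggable callable (all names here are hypothetical; the real method uses C{RemotePeer._copyRemoteFile}):

```python
# Simplified sketch of the "fetch, then test existence" approach above, with
# the remote copy abstracted into a callable (hypothetical helper names).
import os

def indicator_present(fetch, source_file, target_file):
    try:
        if os.path.exists(target_file):
            os.remove(target_file)       # a stale copy would give a false positive
        fetch(source_file, target_file)  # exit status alone proves nothing
        return os.path.exists(target_file)
    except Exception:
        return False                     # any failure means "not there yet"
    finally:
        if os.path.exists(target_file):
            try:
                os.remove(target_file)
            except OSError:
                pass
```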
    def writeStageIndicator(self, stageIndicator=None):
        """
        Writes the stage indicator into the peer's collect directory.

        When the master has completed collecting its backup files, it will write
        an empty indicator file into the peer's collect directory. The presence
        of this file implies that the staging process is complete.

        If you need to, you can override the name of the stage indicator file by
        passing in a different name.

        @param stageIndicator: Name of the indicator file to write
        @type stageIndicator: String representing name of a file in the collect directory

        @raise ValueError: If a path cannot be encoded properly.
        @raise IOError: If there is an IO error creating the file.
        @raise OSError: If there is an OS error creating or changing permissions on the file
        """
        stageIndicator = encodePath(stageIndicator)
        if stageIndicator is None:
            sourceFile = os.path.join(self.workingDir, DEF_STAGE_INDICATOR)
            targetFile = os.path.join(self.collectDir, DEF_STAGE_INDICATOR)
        else:
            sourceFile = os.path.join(self.workingDir, DEF_STAGE_INDICATOR)
            targetFile = os.path.join(self.collectDir, stageIndicator)
        try:
            if not os.path.exists(sourceFile):
                with open(sourceFile, "w") as f:
                    f.write("")
            RemotePeer._pushLocalFile(self.remoteUser, self.localUser, self.name,
                                      self._rcpCommand, self._rcpCommandList,
                                      sourceFile, targetFile)
        finally:
            if os.path.exists(sourceFile):
                try:
                    os.remove(sourceFile)
                except OSError:
                    pass
    def executeRemoteCommand(self, command):
        """
        Executes a command on the peer via remote shell.

        @param command: Command to execute
        @type command: String command-line suitable for use with rsh.

        @raise IOError: If there is an error executing the command on the remote peer.
        """
        RemotePeer._executeRemoteCommand(self.remoteUser, self.localUser,
                                         self.name, self._rshCommand,
                                         self._rshCommandList, command)
    def executeManagedAction(self, action, fullBackup):
        """
        Executes a managed action on this peer.

        @param action: Name of the action to execute.
        @param fullBackup: Whether a full backup should be executed.

        @raise IOError: If there is an error executing the action on the remote peer.
        """
        try:
            command = RemotePeer._buildCbackCommand(self.cbackCommand, action, fullBackup)
            self.executeRemoteCommand(command)
        except IOError as e:
            logger.info(e)
            raise IOError("Failed to execute action [%s] on managed client [%s]." % (action, self.name))


    ##################
    # Private methods
    ##################

    @staticmethod
    def _getDirContents(path):
        """
        Returns the contents of a directory in terms of a Set.

        The directory's contents are read as a L{FilesystemList} containing only
        files, and then the list is converted into a set object for later use.

        @param path: Directory path to get contents for
        @type path: String representing a path on disk

        @return: Set of files in the directory
        @raise ValueError: If path is not a directory or does not exist.
        """
        contents = FilesystemList()
        contents.excludeDirs = True
        contents.excludeLinks = True
        contents.addDirContents(path)
        return set(contents)

    @staticmethod
    def _copyRemoteDir(remoteUser, localUser, remoteHost, rcpCommand, rcpCommandList,
                       sourceDir, targetDir, ownership=None, permissions=None):
        """
        Copies files from the source directory to the target directory.

        This function is not recursive. Only the files in the directory will be
        copied. Ownership and permissions will be left at their default values
        if new values are not specified. Behavior when copying soft links from
        the collect directory is dependent on the behavior of the specified rcp
        command.

        @note: The returned count of copied files might be inaccurate if some of
        the copied files already existed in the staging directory prior to the
        copy taking place. We don't clear the staging directory first, because
        some extension might also be using it.

        @note: If you have user/group as strings, call the L{util.getUidGid} function
        to get the associated uid/gid as an ownership tuple.

        @note: We don't have a good way of knowing exactly what files we copied
        down from the remote peer, unless we want to parse the output of the rcp
        command (ugh). We could change permissions on everything in the target
        directory, but that's kind of ugly too. Instead, we use Python's set
        functionality to figure out what files were added while we executed the
        rcp command. This isn't perfect - for instance, it's not correct if
        someone else is messing with the directory at the same time we're doing
        the remote copy - but it's about as good as we're going to get.

        @note: Apparently, we can't count on all rcp-compatible implementations
        to return sensible errors for some error conditions. As an example, the
        C{scp} command in Debian 'woody' returns a zero (normal) status even
        when it can't find a host or if the login or path is invalid. We try
        to work around this by raising C{IOError} if we don't copy any files from
        the remote host.

        @param remoteUser: Name of the Cedar Backup user on the remote peer
        @type remoteUser: String representing a username, valid via the copy command

        @param localUser: Name of the Cedar Backup user on the current host
        @type localUser: String representing a username, valid on the current host

        @param remoteHost: Hostname of the remote peer
        @type remoteHost: String representing a hostname, accessible via the copy command

        @param rcpCommand: An rcp-compatible copy command to use for copying files from the peer
        @type rcpCommand: String representing a system command including required arguments

        @param rcpCommandList: An rcp-compatible copy command to use for copying files
        @type rcpCommandList: Command as a list to be passed to L{util.executeCommand}

        @param sourceDir: Source directory
        @type sourceDir: String representing a directory on disk

        @param targetDir: Target directory
        @type targetDir: String representing a directory on disk

        @param ownership: Owner and group that the copied files should have
        @type ownership: Tuple of numeric ids C{(uid, gid)}

        @param permissions: Permissions that the staged files should have
        @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}).

        @return: Number of files copied from the source directory to the target directory.

        @raise ValueError: If source or target is not a directory or does not exist.
        @raise IOError: If there is an IO error copying the files.
        """
        beforeSet = RemotePeer._getDirContents(targetDir)
        if localUser is not None:
            try:
                if not isRunningAsRoot():
                    raise IOError("Only root can remote copy as another user.")
            except AttributeError:
                pass
            actualCommand = "%s %s@%s:%s/* %s" % (rcpCommand, remoteUser, remoteHost, sourceDir, targetDir)
            command = resolveCommand(SU_COMMAND)
            result = executeCommand(command, [localUser, "-c", actualCommand])[0]
            if result != 0:
                raise IOError("Error (%d) copying files from remote host as local user [%s]." % (result, localUser))
        else:
            copySource = "%s@%s:%s/*" % (remoteUser, remoteHost, sourceDir)
            command = resolveCommand(rcpCommandList)
            result = executeCommand(command, [copySource, targetDir])[0]
            if result != 0:
                raise IOError("Error (%d) copying files from remote host." % result)
        afterSet = RemotePeer._getDirContents(targetDir)
        if len(afterSet) == 0:
            raise IOError("Did not copy any files from remote peer.")
        differenceSet = afterSet.difference(beforeSet)  # files we added as part of copy
        if len(differenceSet) == 0:
            raise IOError("Apparently did not copy any new files from remote peer.")
        for targetFile in differenceSet:
            if ownership is not None:
                os.chown(targetFile, ownership[0], ownership[1])
            if permissions is not None:
                os.chmod(targetFile, permissions)
        return len(differenceSet)

    @staticmethod
    def _copyRemoteFile(remoteUser, localUser, remoteHost,
                        rcpCommand, rcpCommandList,
                        sourceFile, targetFile, ownership=None,
                        permissions=None, overwrite=True):
        """
        Copies a remote source file to a target file.

        @note: Internally, we have to go through and escape any spaces in the
        source path with a backslash, otherwise things get screwed up. It
        doesn't seem to be required in the target path. I hope this is portable
        to various different rcp methods, but I guess it might not be (all I have
        to test with is OpenSSH).

        @note: If you have user/group as strings, call the L{util.getUidGid} function
        to get the associated uid/gid as an ownership tuple.

        @note: If C{overwrite} is false, we will not overwrite a target file
        that exists when this method is invoked. If the target already exists,
        we'll raise an exception.

        @note: Apparently, we can't count on all rcp-compatible implementations
        to return sensible errors for some error conditions. As an example, the
        C{scp} command in Debian 'woody' returns a zero (normal) status even when
        it can't find a host or if the login or path is invalid. We try to work
        around this by raising C{IOError} if the target file does not exist when
        we're done.

        @param remoteUser: Name of the Cedar Backup user on the remote peer
        @type remoteUser: String representing a username, valid via the copy command

        @param remoteHost: Hostname of the remote peer
        @type remoteHost: String representing a hostname, accessible via the copy command

        @param localUser: Name of the Cedar Backup user on the current host
        @type localUser: String representing a username, valid on the current host

        @param rcpCommand: An rcp-compatible copy command to use for copying files from the peer
        @type rcpCommand: String representing a system command including required arguments

        @param rcpCommandList: An rcp-compatible copy command to use for copying files
        @type rcpCommandList: Command as a list to be passed to L{util.executeCommand}

        @param sourceFile: Source file to copy
        @type sourceFile: String representing a file on disk, as an absolute path

        @param targetFile: Target file to create
        @type targetFile: String representing a file on disk, as an absolute path

        @param ownership: Owner and group that the copied file should have
        @type ownership: Tuple of numeric ids C{(uid, gid)}

        @param permissions: Permissions that the staged files should have
        @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}).

        @param overwrite: Indicates whether it's OK to overwrite the target file.
        @type overwrite: Boolean true/false.

        @raise IOError: If the target file already exists.
        @raise IOError: If there is an IO error copying the file
        @raise OSError: If there is an OS error changing permissions on the file
        """
        if not overwrite:
            if os.path.exists(targetFile):
                raise IOError("Target file [%s] already exists." % targetFile)
        if localUser is not None:
            try:
                if not isRunningAsRoot():
                    raise IOError("Only root can remote copy as another user.")
            except AttributeError:
                pass
            actualCommand = "%s %s@%s:%s %s" % (rcpCommand, remoteUser, remoteHost, sourceFile.replace(" ", "\\ "), targetFile)
            command = resolveCommand(SU_COMMAND)
            result = executeCommand(command, [localUser, "-c", actualCommand])[0]
            if result != 0:
                raise IOError("Error (%d) copying [%s] from remote host as local user [%s]." % (result, sourceFile, localUser))
        else:
            copySource = "%s@%s:%s" % (remoteUser, remoteHost, sourceFile.replace(" ", "\\ "))
            command = resolveCommand(rcpCommandList)
            result = executeCommand(command, [copySource, targetFile])[0]
            if result != 0:
                raise IOError("Error (%d) copying [%s] from remote host." % (result, sourceFile))
        if not os.path.exists(targetFile):
            raise IOError("Apparently unable to copy file from remote host.")
        if ownership is not None:
            os.chown(targetFile, ownership[0], ownership[1])
        if permissions is not None:
            os.chmod(targetFile, permissions)

@staticmethod
def _pushLocalFile(remoteUser, localUser, remoteHost,
                   rcpCommand, rcpCommandList,
                   sourceFile, targetFile, overwrite=True):
   """
   Copies a local source file to a remote host.

   @note: We will not overwrite a target file that exists when this method
   is invoked. If the target already exists, we'll raise an exception.

   @note: Internally, we have to go through and escape any spaces in the
   source and target paths with double-backslash, otherwise things get
   screwed up. I hope this is portable to various different rcp methods,
   but I guess it might not be (all I have to test with is OpenSSH).

   @note: If you have user/group as strings, call the L{util.getUidGid} function
   to get the associated uid/gid as an ownership tuple.

   @param remoteUser: Name of the Cedar Backup user on the remote peer
   @type remoteUser: String representing a username, valid via the copy command

   @param localUser: Name of the Cedar Backup user on the current host
   @type localUser: String representing a username, valid on the current host

   @param remoteHost: Hostname of the remote peer
   @type remoteHost: String representing a hostname, accessible via the copy command

   @param rcpCommand: An rcp-compatible copy command to use for copying files to the peer
   @type rcpCommand: String representing a system command including required arguments

   @param rcpCommandList: An rcp-compatible copy command to use for copying files
   @type rcpCommandList: Command as a list to be passed to L{util.executeCommand}

   @param sourceFile: Source file to copy
   @type sourceFile: String representing a file on disk, as an absolute path

   @param targetFile: Target file to create
   @type targetFile: String representing a file on disk, as an absolute path

   @param overwrite: Indicates whether it's OK to overwrite the target file.
   @type overwrite: Boolean true/false.

   @raise IOError: If there is an IO error copying the file
   @raise OSError: If there is an OS error changing permissions on the file
   """
   if not overwrite:
      if os.path.exists(targetFile):
         raise IOError("Target file [%s] already exists." % targetFile)
   if localUser is not None:
      try:
         if not isRunningAsRoot():
            raise IOError("Only root can remote copy as another user.")
      except AttributeError: pass
      actualCommand = '%s "%s" "%s@%s:%s"' % (rcpCommand, sourceFile, remoteUser, remoteHost, targetFile)
      command = resolveCommand(SU_COMMAND)
      result = executeCommand(command, [localUser, "-c", actualCommand])[0]
      if result != 0:
         raise IOError("Error (%d) copying [%s] to remote host as local user [%s]." % (result, sourceFile, localUser))
   else:
      copyTarget = "%s@%s:%s" % (remoteUser, remoteHost, targetFile.replace(" ", "\\ "))
      command = resolveCommand(rcpCommandList)
      result = executeCommand(command, [sourceFile.replace(" ", "\\ "), copyTarget])[0]
      if result != 0:
         raise IOError("Error (%d) copying [%s] to remote host." % (result, sourceFile))
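The space-escaping trick used in both copy methods is easy to check in isolation. The standalone sketch below (an illustration with made-up example values, not part of the peer class) shows what the C{replace} call produces when building an scp-style remote path:

```python
# Standalone sketch of the space-escaping used when building scp-style
# remote paths; the user/host/path values here are purely illustrative.
def escapeSpaces(path):
   """Escape spaces with a backslash, as done for rcp/scp source paths."""
   return path.replace(" ", "\\ ")

remote = "%s@%s:%s" % ("backup", "peer.example.com", escapeSpaces("/data/my file.tar"))
print(remote)  # backup@peer.example.com:/data/my\ file.tar
```

Without the escaping, a space in the source path would be interpreted by the remote shell as an argument separator, and the copy would fail or pick up the wrong file.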

@staticmethod
def _executeRemoteCommand(remoteUser, localUser, remoteHost, rshCommand, rshCommandList, remoteCommand):
   """
   Executes a command on the peer via remote shell.

   @param remoteUser: Name of the Cedar Backup user on the remote peer
   @type remoteUser: String representing a username, valid on the remote host

   @param localUser: Name of the Cedar Backup user on the current host
   @type localUser: String representing a username, valid on the current host

   @param remoteHost: Hostname of the remote peer
   @type remoteHost: String representing a hostname, accessible via the rsh command

   @param rshCommand: An rsh-compatible command to use for remote shells to the peer
   @type rshCommand: String representing a system command including required arguments

   @param rshCommandList: An rsh-compatible command to use for remote shells to the peer
   @type rshCommandList: Command as a list to be passed to L{util.executeCommand}

   @param remoteCommand: The command to be executed on the remote host
   @type remoteCommand: String command-line, with no special shell characters ($, <, etc.)

   @raise IOError: If there is an error executing the remote command
   """
   actualCommand = "%s %s@%s '%s'" % (rshCommand, remoteUser, remoteHost, remoteCommand)
   if localUser is not None:
      try:
         if not isRunningAsRoot():
            raise IOError("Only root can remote shell as another user.")
      except AttributeError: pass
      command = resolveCommand(SU_COMMAND)
      result = executeCommand(command, [localUser, "-c", actualCommand])[0]
      if result != 0:
         raise IOError("Command failed [su -c %s \"%s\"]" % (localUser, actualCommand))
   else:
      command = resolveCommand(rshCommandList)
      result = executeCommand(command, ["%s@%s" % (remoteUser, remoteHost), "%s" % remoteCommand])[0]
      if result != 0:
         raise IOError("Command failed [%s]" % (actualCommand))

@staticmethod
def _buildCbackCommand(cbackCommand, action, fullBackup):
   """
   Builds a Cedar Backup command line for the named action.

   @note: If the cback command is None, then DEF_CBACK_COMMAND is used.

   @param cbackCommand: cback command to execute, including required options
   @param action: Name of the action to execute.
   @param fullBackup: Whether a full backup should be executed.

   @return: String suitable for passing to L{_executeRemoteCommand} as remoteCommand.
   @raise ValueError: If action is None.
   """
   if action is None:
      raise ValueError("Action cannot be None.")
   if cbackCommand is None:
      cbackCommand = DEF_CBACK_COMMAND
   if fullBackup:
      return "%s --full %s" % (cbackCommand, action)
   else:
      return "%s %s" % (cbackCommand, action)
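Since the construction logic is fully shown above, its behavior can be demonstrated with a standalone copy of the function. Note that the real default lives in the module's DEF_CBACK_COMMAND constant; the "/usr/bin/cback3" value below is an assumed stand-in for illustration only:

```python
# Standalone sketch of the command-string logic shown above; the real
# method lives on the peer class.  The default value is an assumption.
DEF_CBACK_COMMAND = "/usr/bin/cback3"  # assumed stand-in for illustration

def buildCbackCommand(cbackCommand, action, fullBackup):
   """Build a Cedar Backup command line for the named action."""
   if action is None:
      raise ValueError("Action cannot be None.")
   if cbackCommand is None:
      cbackCommand = DEF_CBACK_COMMAND
   if fullBackup:
      return "%s --full %s" % (cbackCommand, action)
   return "%s %s" % (cbackCommand, action)

print(buildCbackCommand("cback3", "collect", True))   # cback3 --full collect
print(buildCbackCommand("cback3", "stage", False))    # cback3 stage
```

The returned string is handed to _executeRemoteCommand as remoteCommand, so it must remain free of special shell characters.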

CedarBackup3-3.1.6/doc/interface/CedarBackup3.config.CollectDir-class.html

    CedarBackup3.config.CollectDir

    Class CollectDir

    source code

    object --+
             |
            CollectDir
    

    Class representing a Cedar Backup collect directory.

    The following restrictions exist on data in this class:

• Absolute paths must, in fact, be absolute (relative paths are rejected)
    • The collect mode must be one of the values in VALID_COLLECT_MODES.
    • The archive mode must be one of the values in VALID_ARCHIVE_MODES.
    • The ignore file must be a non-empty string.

    For the absoluteExcludePaths list, validation is accomplished through the util.AbsolutePathList list implementation that overrides common list methods and transparently does the absolute path validation for us.
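The override pattern described above can be sketched as a small list subclass. This is an illustrative approximation of what util.AbsolutePathList does, not the shipped implementation (the real class also handles ordering and path encoding):

```python
import os

# Illustrative sketch of a list that validates absolute paths on insertion;
# an approximation of util.AbsolutePathList, which may differ in details.
class ValidatingPathList(list):
   def append(self, item):
      self._check(item)
      list.append(self, item)
   def insert(self, index, item):
      self._check(item)
      list.insert(self, index, item)
   def extend(self, seq):
      for item in seq:
         self._check(item)
      list.extend(self, seq)
   @staticmethod
   def _check(item):
      if not os.path.isabs(item):
         raise ValueError("Not an absolute path: [%s]" % item)

paths = ValidatingPathList()
paths.append("/etc/cback3.conf")   # accepted: absolute
try:
   paths.append("relative/path")   # rejected: relative
except ValueError as e:
   print(e)
```

Because validation happens inside the list itself, CollectDir can simply assign to absoluteExcludePaths and rely on the list to reject bad entries.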


    Note: Lists within this class are "unordered" for equality comparisons.

Instance Methods
     
    __init__(self, absolutePath=None, collectMode=None, archiveMode=None, ignoreFile=None, absoluteExcludePaths=None, relativeExcludePaths=None, excludePatterns=None, linkDepth=None, dereference=False, recursionLevel=None)
    Constructor for the CollectDir class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    __eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    _setAbsolutePath(self, value)
    Property target used to set the absolute path.
    source code
     
    _getAbsolutePath(self)
    Property target used to get the absolute path.
    source code
     
    _setCollectMode(self, value)
    Property target used to set the collect mode.
    source code
     
    _getCollectMode(self)
    Property target used to get the collect mode.
    source code
     
    _setArchiveMode(self, value)
    Property target used to set the archive mode.
    source code
     
    _getArchiveMode(self)
    Property target used to get the archive mode.
    source code
     
    _setIgnoreFile(self, value)
    Property target used to set the ignore file.
    source code
     
    _getIgnoreFile(self)
    Property target used to get the ignore file.
    source code
     
    _setLinkDepth(self, value)
    Property target used to set the link depth.
    source code
     
    _getLinkDepth(self)
    Property target used to get the action linkDepth.
    source code
     
    _setDereference(self, value)
    Property target used to set the dereference flag.
    source code
     
    _getDereference(self)
    Property target used to get the dereference flag.
    source code
     
    _setRecursionLevel(self, value)
    Property target used to set the recursionLevel.
    source code
     
    _getRecursionLevel(self)
    Property target used to get the action recursionLevel.
    source code
     
    _setAbsoluteExcludePaths(self, value)
    Property target used to set the absolute exclude paths list.
    source code
     
    _getAbsoluteExcludePaths(self)
    Property target used to get the absolute exclude paths list.
    source code
     
    _setRelativeExcludePaths(self, value)
    Property target used to set the relative exclude paths list.
    source code
     
    _getRelativeExcludePaths(self)
    Property target used to get the relative exclude paths list.
    source code
     
    _setExcludePatterns(self, value)
    Property target used to set the exclude patterns list.
    source code
     
    _getExcludePatterns(self)
    Property target used to get the exclude patterns list.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties
      absolutePath
    Absolute path of the directory to collect.
      collectMode
    Overridden collect mode for this directory.
      archiveMode
    Overridden archive mode for this directory.
      ignoreFile
    Overridden ignore file name for this directory.
      linkDepth
Maximum depth at which soft links should be followed.
      dereference
    Whether to dereference links that are followed.
      absoluteExcludePaths
    List of absolute paths to exclude.
      relativeExcludePaths
    List of relative paths to exclude.
      excludePatterns
    List of regular expression patterns to exclude.
      recursionLevel
    Recursion level to use for recursive directory collection

    Inherited from object: __class__

Method Details

    __init__(self, absolutePath=None, collectMode=None, archiveMode=None, ignoreFile=None, absoluteExcludePaths=None, relativeExcludePaths=None, excludePatterns=None, linkDepth=None, dereference=False, recursionLevel=None)
    (Constructor)

    source code 

    Constructor for the CollectDir class.

    Parameters:
    • absolutePath - Absolute path of the directory to collect.
    • collectMode - Overridden collect mode for this directory.
    • archiveMode - Overridden archive mode for this directory.
• ignoreFile - Overridden ignore file name for this directory.
• linkDepth - Maximum depth at which soft links should be followed.
    • dereference - Whether to dereference links that are followed.
    • absoluteExcludePaths - List of absolute paths to exclude.
    • relativeExcludePaths - List of relative paths to exclude.
• excludePatterns - List of regular expression patterns to exclude.
• recursionLevel - Recursion level to use for recursive directory collection.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setAbsolutePath(self, value)

    source code 

    Property target used to set the absolute path. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setCollectMode(self, value)

    source code 

    Property target used to set the collect mode. If not None, the mode must be one of the values in VALID_COLLECT_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setArchiveMode(self, value)

    source code 

    Property target used to set the archive mode. If not None, the mode must be one of the values in VALID_ARCHIVE_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setIgnoreFile(self, value)

    source code 

    Property target used to set the ignore file. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setLinkDepth(self, value)

    source code 

    Property target used to set the link depth. The value must be an integer >= 0.

    Raises:
    • ValueError - If the value is not valid.

    _setDereference(self, value)

    source code 

    Property target used to set the dereference flag. No validations, but we normalize the value to True or False.

    _setRecursionLevel(self, value)

    source code 

    Property target used to set the recursionLevel. The value must be an integer.

    Raises:
    • ValueError - If the value is not valid.

    _setAbsoluteExcludePaths(self, value)

    source code 

    Property target used to set the absolute exclude paths list. Either the value must be None or each element must be an absolute path. Elements do not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.

    _setRelativeExcludePaths(self, value)

    source code 

    Property target used to set the relative exclude paths list. Elements do not have to exist on disk at the time of assignment.


Property Details

    absolutePath

    Absolute path of the directory to collect.

    Get Method:
    _getAbsolutePath(self) - Property target used to get the absolute path.
    Set Method:
    _setAbsolutePath(self, value) - Property target used to set the absolute path.

    collectMode

    Overridden collect mode for this directory.

    Get Method:
    _getCollectMode(self) - Property target used to get the collect mode.
    Set Method:
    _setCollectMode(self, value) - Property target used to set the collect mode.

    archiveMode

    Overridden archive mode for this directory.

    Get Method:
    _getArchiveMode(self) - Property target used to get the archive mode.
    Set Method:
    _setArchiveMode(self, value) - Property target used to set the archive mode.

    ignoreFile

    Overridden ignore file name for this directory.

    Get Method:
    _getIgnoreFile(self) - Property target used to get the ignore file.
    Set Method:
    _setIgnoreFile(self, value) - Property target used to set the ignore file.

    linkDepth

Maximum depth at which soft links should be followed.

    Get Method:
    _getLinkDepth(self) - Property target used to get the action linkDepth.
    Set Method:
    _setLinkDepth(self, value) - Property target used to set the link depth.

    dereference

    Whether to dereference links that are followed.

    Get Method:
    _getDereference(self) - Property target used to get the dereference flag.
    Set Method:
    _setDereference(self, value) - Property target used to set the dereference flag.

    absoluteExcludePaths

    List of absolute paths to exclude.

    Get Method:
    _getAbsoluteExcludePaths(self) - Property target used to get the absolute exclude paths list.
    Set Method:
    _setAbsoluteExcludePaths(self, value) - Property target used to set the absolute exclude paths list.

    relativeExcludePaths

    List of relative paths to exclude.

    Get Method:
    _getRelativeExcludePaths(self) - Property target used to get the relative exclude paths list.
    Set Method:
    _setRelativeExcludePaths(self, value) - Property target used to set the relative exclude paths list.

    excludePatterns

    List of regular expression patterns to exclude.

    Get Method:
    _getExcludePatterns(self) - Property target used to get the exclude patterns list.
    Set Method:
    _setExcludePatterns(self, value) - Property target used to set the exclude patterns list.

    recursionLevel

    Recursion level to use for recursive directory collection

    Get Method:
    _getRecursionLevel(self) - Property target used to get the action recursionLevel.
    Set Method:
    _setRecursionLevel(self, value) - Property target used to set the recursionLevel.

CedarBackup3-3.1.6/doc/interface/epydoc.js
    [epydoc viewer JavaScript: private/public toggling, block expand/collapse,
    and doclink popups for the generated HTML pages; garbled script body omitted.]
CedarBackup3-3.1.6/doc/interface/CedarBackup3.util.RegexList-class.html

    CedarBackup3.util.RegexList

    Class RegexList

    source code

    object --+        
             |        
          list --+    
                 |    
     UnorderedList --+
                     |
                    RegexList
    

    Class representing a list of valid regular expression strings.

    This is an unordered list.

    We override the append, insert and extend methods to ensure that any item added to the list is a valid regular expression.
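The override pattern documented above can be sketched like this. This is an approximation of the documented behavior, not the shipped RegexList class (which also inherits UnorderedList's equality semantics):

```python
import re

# Sketch of an append/insert/extend-validating list of regex strings,
# approximating the behavior documented for RegexList.
class CheckedRegexList(list):
   def append(self, item):
      self._validate(item)
      list.append(self, item)
   def insert(self, index, item):
      self._validate(item)
      list.insert(self, index, item)
   def extend(self, seq):
      for item in seq:
         self._validate(item)
      list.extend(self, seq)
   @staticmethod
   def _validate(item):
      try:
         re.compile(item)
      except re.error:
         raise ValueError("Not a valid regular expression: [%s]" % item)

regexes = CheckedRegexList()
regexes.append(r".*\.tmp$")        # valid pattern, accepted
try:
   regexes.append("*(unbalanced")  # invalid pattern, rejected
except ValueError as e:
   print(e)
```

Validating at insertion time means consumers of the list never need to re-check that each entry compiles.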

Instance Methods
     
    append(self, item)
    Overrides the standard append method.
    source code
     
    insert(self, index, item)
    Overrides the standard insert method.
    source code
     
    extend(self, seq)
Overrides the standard extend method.
    source code

    Inherited from UnorderedList: __eq__, __ge__, __gt__, __le__, __lt__, __ne__

    Inherited from list: __add__, __contains__, __delitem__, __delslice__, __getattribute__, __getitem__, __getslice__, __iadd__, __imul__, __init__, __iter__, __len__, __mul__, __new__, __repr__, __reversed__, __rmul__, __setitem__, __setslice__, __sizeof__, count, index, pop, remove, reverse, sort

    Inherited from object: __delattr__, __format__, __reduce__, __reduce_ex__, __setattr__, __str__, __subclasshook__

Static Methods

    Inherited from UnorderedList: mixedkey, mixedsort

Class Variables

    Inherited from list: __hash__

Properties

    Inherited from object: __class__

Method Details

    append(self, item)

    source code 

    Overrides the standard append method.

    Raises:
• ValueError - If item is not a valid regular expression.
    Overrides: list.append

    insert(self, index, item)

    source code 

    Overrides the standard insert method.

    Raises:
• ValueError - If item is not a valid regular expression.
    Overrides: list.insert

    extend(self, seq)

    source code 

Overrides the standard extend method.

    Raises:
• ValueError - If any item is not a valid regular expression.
    Overrides: list.extend

CedarBackup3-3.1.6/doc/interface/CedarBackup3.testutil-pysrc.html

    CedarBackup3.testutil

    Source Code for Module CedarBackup3.testutil

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2004-2006,2008,2010,2015 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python 3 (>= 3.4) 
     29  # Project  : Cedar Backup, release 3 
     30  # Purpose  : Provides unit-testing utilities. 
     31  # 
     32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     33   
     34  ######################################################################## 
     35  # Module documentation 
     36  ######################################################################## 
     37   
     38  """ 
     39  Provides unit-testing utilities. 
     40   
     41  These utilities are kept here, separate from util.py, because they provide 
     42  common functionality that I do not want exported "publicly" once Cedar Backup 
     43  is installed on a system.  They are only used for unit testing, and are only 
     44  useful within the source tree. 
     45   
     46  Many of these functions are in here because they are "good enough" for unit 
     47  test work but are not robust enough to be real public functions.  Others (like 
     48  L{removedir}) do what they are supposed to, but I don't want responsibility for 
     49  making them available to others. 
     50   
     51  @sort: findResources, commandAvailable, 
     52         buildPath, removedir, extractTar, changeFileAge, 
     53         getMaskAsMode, getLogin, failUnlessAssignRaises, runningAsRoot, 
     54         platformDebian, platformMacOsX 
     55   
     56  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     57  """ 
     58   
     59   
     60  ######################################################################## 
     61  # Imported modules 
     62  ######################################################################## 
     63   
     64  import sys 
     65  import os 
     66  import tarfile 
     67  import time 
     68  import getpass 
     69  import random 
     70  import string # pylint: disable=W0402 
     71  import platform 
     72  import logging 
     73  from io import StringIO 
     74   
     75  from CedarBackup3.util import encodePath, executeCommand 
     76  from CedarBackup3.config import Config, OptionsConfig 
     77  from CedarBackup3.customize import customizeOverrides 
     78  from CedarBackup3.cli import setupPathResolver 
     79   
     80   
     81  ######################################################################## 
     82  # Public functions 
     83  ######################################################################## 
     84   
     85  ############################## 
     86  # setupDebugLogger() function 
     87  ############################## 
     88   
    
 89 -def setupDebugLogger():
 90    """
 91    Sets up a screen logger for debugging purposes.
 92 
 93    Normally, the CLI functionality configures the logger so that
 94    things get written to the right place.  However, for debugging
 95    it's sometimes nice to just get everything -- debug information
 96    and output -- dumped to the screen.  This function takes care
 97    of that.
 98    """
 99    logger = logging.getLogger("CedarBackup3")
100    logger.setLevel(logging.DEBUG)  # let the logger see all messages
101    formatter = logging.Formatter(fmt="%(message)s")
102    handler = logging.StreamHandler(stream=sys.stdout)
103    handler.setFormatter(formatter)
104    handler.setLevel(logging.DEBUG)
105    logger.addHandler(handler)
106 
107 
108 #################
109 # setupOverrides
110 #################
111 
112 -def setupOverrides():
113    """
114    Set up any platform-specific overrides that might be required.
115 
116    When packages are built, this is done manually (hardcoded) in customize.py
117    and the overrides are set up in cli.cli().  This way, no runtime checks need
118    to be done.  This is safe, because the package maintainer knows exactly
119    which platform (Debian or not) the package is being built for.
120 
121    Unit tests are different, because they might be run anywhere.  So, we
122    attempt to make a guess about platform using platformDebian(), and use that
123    to set up the custom overrides so that platform-specific unit tests continue
124    to work.
125    """
126    config = Config()
127    config.options = OptionsConfig()
128    if platformDebian():
129       customizeOverrides(config, platform="debian")
130    else:
131       customizeOverrides(config, platform="standard")
132    setupPathResolver(config)
133 
134 
135 ###########################
136 # findResources() function
137 ###########################
138 
139 -def findResources(resources, dataDirs):
140    """
141    Returns a dictionary of locations for various resources.
142    @param resources: List of required resources.
143    @param dataDirs: List of data directories to search within for resources.
144    @return: Dictionary mapping resource name to resource path.
145    @raise Exception: If some resource cannot be found.
146    """
147    mapping = { }
148    for resource in resources:
149       for resourceDir in dataDirs:
150          path = os.path.join(resourceDir, resource)
151          if os.path.exists(path):
152             mapping[resource] = path
153             break
154       else:
155          raise Exception("Unable to find resource [%s]." % resource)
156    return mapping
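findResources relies on Python's for...else construct: the else clause runs only when the inner loop finishes without hitting break, i.e. when no data directory contained the resource. A minimal standalone demonstration of that control flow, using an in-memory set of paths (the names are made up) instead of the real filesystem:

```python
# Demonstrates the for...else idiom used by findResources, with an
# in-memory "filesystem" (a set of paths) standing in for os.path.exists.
def locate(resources, dataDirs, existing):
   mapping = {}
   for resource in resources:
      for resourceDir in dataDirs:
         path = resourceDir + "/" + resource
         if path in existing:
            mapping[resource] = path
            break
      else:  # no break occurred: resource was not found in any directory
         raise Exception("Unable to find resource [%s]." % resource)
   return mapping

existing = {"testdata/sample.tar.gz"}
print(locate(["sample.tar.gz"], ["testdata", "other"], existing))
```

The break/else pairing avoids a separate "found" flag variable, which is why the real function reads so compactly.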

##############################
# commandAvailable() function
##############################

def commandAvailable(command):
    """
    Indicates whether a command is available on $PATH somewhere.
    This should work on both Windows and UNIX platforms.
    @param command: Command to search for
    @return: Boolean true/false depending on whether command is available.
    """
    if "PATH" in os.environ:
        for path in os.environ["PATH"].split(os.pathsep):
            if os.path.exists(os.path.join(path, command)):
                return True
    return False
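A standalone sketch of the same PATH search. One portability note: entries in $PATH are separated by C{os.pathsep} (":" on UNIX, ";" on Windows), which is distinct from the directory separator C{os.sep}.

```python
import os

def command_available(command):
    # Entries in $PATH are separated by os.pathsep (":" on UNIX,
    # ";" on Windows); os.sep is the directory separator instead.
    for directory in os.environ.get("PATH", "").split(os.pathsep):
        if directory and os.path.exists(os.path.join(directory, command)):
            return True
    return False
```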

#######################
# buildPath() function
#######################

def buildPath(components):
    """
    Builds a complete path from a list of components.
    For instance, constructs C{"/a/b/c"} from C{["/a", "b", "c",]}.
    @param components: List of components.
    @return: String path constructed from components.
    @raise ValueError: If a path cannot be encoded properly.
    """
    path = components[0]
    for component in components[1:]:
        path = os.path.join(path, component)
    return encodePath(path)

#######################
# removedir() function
#######################

def removedir(tree):
    """
    Recursively removes an entire directory.
    This is basically taken from an example on python.org.
    @param tree: Directory tree to remove.
    @raise ValueError: If a path cannot be encoded properly.
    """
    tree = encodePath(tree)
    for root, dirs, files in os.walk(tree, topdown=False):
        for name in files:
            path = os.path.join(root, name)
            if os.path.islink(path):
                os.remove(path)
            elif os.path.isfile(path):
                os.remove(path)
        for name in dirs:
            path = os.path.join(root, name)
            if os.path.islink(path):
                os.remove(path)
            elif os.path.isdir(path):
                os.rmdir(path)
    os.rmdir(tree)

########################
# extractTar() function
########################

def extractTar(tmpdir, filepath):
    """
    Extracts the indicated tar file to the indicated tmpdir.
    @param tmpdir: Temp directory to extract to.
    @param filepath: Path to tarfile to extract.
    @raise ValueError: If a path cannot be encoded properly.
    """
    # pylint: disable=E1101
    tmpdir = encodePath(tmpdir)
    filepath = encodePath(filepath)
    with tarfile.open(filepath) as tar:
        try:
            tar.format = tarfile.GNU_FORMAT
        except AttributeError:
            tar.posix = False
        for tarinfo in tar:
            tar.extract(tarinfo, tmpdir)
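A minimal standalone sketch of the same extraction loop using only the standard library; the archive name and contents below are made up for illustration, and the round trip builds the archive in memory first:

```python
import io
import os
import tarfile
import tempfile

def extract_tar(tmpdir, fileobj):
    # Walk the archive member by member and extract each beneath tmpdir.
    # On Python 3.12+, consider passing filter="data" to extract() to
    # reject path-traversal member names.
    with tarfile.open(fileobj=fileobj) as tar:
        for member in tar:
            tar.extract(member, tmpdir)

# Build a tiny archive in memory, then round-trip it through extraction
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    data = b"hello"
    info = tarfile.TarInfo("greeting.txt")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))
buf.seek(0)

target = tempfile.mkdtemp()
extract_tar(target, buf)
```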

###########################
# changeFileAge() function
###########################

def changeFileAge(filename, subtract=None):
    """
    Changes a file age using the C{os.utime} function.

    @note: Some platforms don't seem to be able to set an age precisely. As a
    result, whereas we might have intended to set an age of 86400 seconds, we
    actually get an age of 86399.375 seconds. When util.calculateFileAge()
    looks at that file, it calculates an age of 0.999992766204 days, which
    then gets truncated down to zero whole days. The tests get very confused.
    To work around this, I always subtract off one additional second as a fudge
    factor. That way, the file age will be I{at least} as old as requested
    later on.

    @param filename: File to operate on.
    @param subtract: Number of seconds to subtract from the current time.
    @raise ValueError: If a path cannot be encoded properly.
    """
    filename = encodePath(filename)
    newTime = time.time() - 1
    if subtract is not None:
        newTime -= subtract
    os.utime(filename, (newTime, newTime))
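The back-dating trick described in the note can be demonstrated standalone; the helper name here is made up:

```python
import os
import tempfile
import time

def set_file_age(filename, subtract):
    # Subtract one extra second as a fudge factor, so the file ends up
    # at least as old as requested even if the platform rounds the time.
    stamp = time.time() - subtract - 1
    os.utime(filename, (stamp, stamp))  # (atime, mtime)

handle, path = tempfile.mkstemp()
os.close(handle)
set_file_age(path, 86400)  # pretend the file is one day old
age = time.time() - os.path.getmtime(path)
```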

###########################
# getMaskAsMode() function
###########################

def getMaskAsMode():
    """
    Returns the user's current umask inverted to a mode.
    A mode is mostly a bitwise inversion of a mask, i.e. mask 002 is mode 775.
    @return: Umask converted to a mode, as an integer.
    """
    umask = os.umask(0o777)
    os.umask(umask)                 # restore the original umask
    return int(~umask & 0o777)      # invert, then keep only the lower nine bits
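The inversion is simple bit arithmetic and can be checked directly; for example, a umask of 0o022 corresponds to mode 0o755:

```python
import os

def mask_as_mode():
    umask = os.umask(0o777)   # os.umask() both sets and returns, so...
    os.umask(umask)           # ...immediately restore the original value
    return ~umask & 0o777     # invert, keeping only the lower nine bits

original = os.umask(0o022)    # remember whatever umask was set before
mode = mask_as_mode()         # 0o022 inverts to 0o755
os.umask(original)            # put the process umask back
```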

######################
# getLogin() function
######################

def getLogin():
    """
    Returns the name of the currently-logged in user. This might fail under
    some circumstances - but if it does, our tests would fail anyway.
    """
    return getpass.getuser()

############################
# randomFilename() function
############################

def randomFilename(length, prefix=None, suffix=None):
    """
    Generates a random filename with the given length.
    @param length: Length of filename.
    @param prefix: Optional prefix to prepend to the generated name.
    @param suffix: Optional suffix to append to the generated name.
    @return: Random filename.
    """
    characters = [None] * length
    for i in range(length):
        characters[i] = random.choice(string.ascii_uppercase)
    if prefix is None:
        prefix = ""
    if suffix is None:
        suffix = ""
    return "%s%s%s" % (prefix, "".join(characters), suffix)

####################################
# failUnlessAssignRaises() function
####################################

def failUnlessAssignRaises(testCase, exception, obj, prop, value):
    """
    Equivalent of C{failUnlessRaises}, but used for property assignments instead.

    It's nice to be able to use C{failUnlessRaises} to check that a method call
    raises the exception that you expect. Unfortunately, this method can't be
    used to check Python property assignments, even though these property
    assignments are actually implemented underneath as methods.

    This function (which can be easily called by unit test classes) provides an
    easy way to wrap the assignment checks. It's not pretty, or as intuitive as
    the original check it's modeled on, but it does work.

    Let's assume you make this method call::

        testCase.failUnlessAssignRaises(ValueError, collectDir, "absolutePath", absolutePath)

    If you do this, a test case failure will be raised unless the assignment::

        collectDir.absolutePath = absolutePath

    fails with a C{ValueError} exception. The failure message differentiates
    between the case where no exception was raised and the case where the wrong
    exception was raised.

    @note: Internally, the C{missed} and C{instead} variables are used rather
    than directly calling C{testCase.fail} upon noticing a problem because the
    act of "failure" itself generates an exception that would be caught by the
    general C{except} clause.

    @param testCase: PyUnit test case object (i.e. self).
    @param exception: Exception that is expected to be raised.
    @param obj: Object whose property is to be assigned to.
    @param prop: Name of the property, as a string.
    @param value: Value that is to be assigned to the property.

    @see: C{unittest.TestCase.failUnlessRaises}
    """
    missed = False
    instead = None
    try:
        exec("obj.%s = value" % prop)  # pylint: disable=W0122
        missed = True
    except exception: pass
    except Exception as e:
        instead = e
    if missed:
        testCase.fail("Expected assignment to raise %s, but got no exception." % (exception.__name__))
    if instead is not None:
        testCase.fail("Expected assignment to raise %s, but got %s instead." % (exception.__name__, instead.__class__.__name__))
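For comparison, modern C{unittest} can check a property assignment directly by using C{assertRaises} as a context manager, with no C{exec} trick; the C{Widget} class here is made up for illustration:

```python
import unittest

class Widget:
    """Toy class with a validating property setter."""
    def __init__(self):
        self._size = 0

    @property
    def size(self):
        return self._size

    @size.setter
    def size(self, value):
        if value < 0:
            raise ValueError("size must be non-negative")
        self._size = value

class WidgetTest(unittest.TestCase):
    def testAssignment(self):
        widget = Widget()
        # The context-manager form wraps the assignment statement itself,
        # so the setter's exception is caught without building code strings.
        with self.assertRaises(ValueError):
            widget.size = -1
```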

###########################
# captureOutput() function
###########################

def captureOutput(c):
    """
    Captures the output (stdout, stderr) of a function or a method.

    Some of our functions don't do anything other than just print output. We
    need a way to test these functions (at least nominally) but we don't want
    any of the output spoiling the test suite output.

    This function just creates a dummy file descriptor that can be used as a
    target by the callable function, rather than C{stdout} or C{stderr}.

    @note: This method assumes that C{c} doesn't take any arguments
    besides keyword argument C{fd} to specify the file descriptor.

    @param c: Callable function or method.

    @return: Output of function, as one big string.
    """
    fd = StringIO()
    c(fd=fd)
    result = fd.getvalue()
    fd.close()
    return result

#########################
# _isPlatform() function
#########################

def _isPlatform(name):
    """
    Returns boolean indicating whether we're running on the indicated platform.
    @param name: Platform name to check, one of "windows", "macosx", "debian" or "cygwin"
    """
    if name == "windows":
        return platform.platform(True, True).startswith("Windows")
    elif name == "macosx":
        return sys.platform == "darwin"
    elif name == "debian":
        return platform.platform(False, False).find("debian") > 0
    elif name == "cygwin":
        return platform.platform(True, True).startswith("CYGWIN")
    else:
        raise ValueError("Unknown platform [%s]." % name)

############################
# platformDebian() function
############################

def platformDebian():
    """
    Returns boolean indicating whether this is the Debian platform.
    """
    return _isPlatform("debian")

############################
# platformMacOsX() function
############################

def platformMacOsX():
    """
    Returns boolean indicating whether this is the Mac OS X platform.
    """
    return _isPlatform("macosx")

###########################
# runningAsRoot() function
###########################

def runningAsRoot():
    """
    Returns boolean indicating whether the effective user id is root.
    """
    return os.geteuid() == 0

##############################
# availableLocales() function
##############################

def availableLocales():
    """
    Returns a list of available locales on the system
    @return: List of string locale names
    """
    locales = []
    output = executeCommand(["locale"], [ "-a", ], returnOutput=True, ignoreStderr=True)[1]
    for line in output:
        locales.append(line.rstrip())
    return locales

CedarBackup3-3.1.6/doc/interface/CedarBackup3.extend.split.LocalConfig-class.html

CedarBackup3.extend.split.LocalConfig
    Package CedarBackup3 :: Package extend :: Module split :: Class LocalConfig

    Class LocalConfig

    source code

    object --+
             |
            LocalConfig
    

    Class representing this extension's configuration document.

    This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit split-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, validate and addConfig methods.


    Note: Lists within this class are "unordered" for equality comparisons.

Instance Methods
     
    __init__(self, xmlData=None, xmlPath=None, validate=True)
    Initializes a configuration object.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    __eq__(self, other)
Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    validate(self)
    Validates configuration represented by the object.
    source code
     
    addConfig(self, xmlDom, parentNode)
    Adds a <split> configuration section as the next child of a parent.
    source code
     
    _setSplit(self, value)
    Property target used to set the split configuration value.
    source code
     
    _getSplit(self)
    Property target used to get the split configuration value.
    source code
     
    _parseXmlData(self, xmlData)
    Internal method to parse an XML string into the object.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Static Methods
     
    _parseSplit(parent)
Parses a split configuration section.
    source code
Properties
      split
    Split configuration in terms of a SplitConfig object.

    Inherited from object: __class__

Method Details

    __init__(self, xmlData=None, xmlPath=None, validate=True)
    (Constructor)

    source code 

    Initializes a configuration object.

    If you initialize the object without passing either xmlData or xmlPath then configuration will be empty and will be invalid until it is filled in properly.

    No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded.

    Unless the validate argument is False, the LocalConfig.validate method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if validate is False, it might not be possible to parse the passed-in XML document if lower-level validations fail.

    Parameters:
    • xmlData (String data.) - XML data representing configuration.
    • xmlPath (Absolute path to a file on disk.) - Path to an XML file on disk.
    • validate (Boolean true/false.) - Validate the document after parsing it.
    Raises:
    • ValueError - If both xmlData and xmlPath are passed-in.
    • ValueError - If the XML data in xmlData or xmlPath cannot be parsed.
    • ValueError - If the parsed configuration document is not valid.
    Overrides: object.__init__

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to read in invalid configuration from disk.

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.
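The pattern described above (rich comparison operators implemented in terms of a Python 2 style C{__cmp__}) can be sketched with a toy class; the C{Quantity} name is made up for illustration, and C{functools.total_ordering} derives the remaining operators:

```python
import functools

@functools.total_ordering
class Quantity:
    def __init__(self, value):
        self.value = value

    def __cmp__(self, other):
        # Classic Python 2 semantics: -1/0/1 for less/equal/greater
        return (self.value > other.value) - (self.value < other.value)

    # Python 3 only consults the rich operators, so define __eq__ and
    # __lt__ in terms of __cmp__; total_ordering fills in the rest.
    def __eq__(self, other):
        return self.__cmp__(other) == 0

    def __lt__(self, other):
        return self.__cmp__(other) < 0
```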

    validate(self)

    source code 

    Validates configuration represented by the object.

    Split configuration must be filled in. Within that, both the size limit and split size must be filled in.

    Raises:
    • ValueError - If one of the validations fails.

    addConfig(self, xmlDom, parentNode)

    source code 

    Adds a <split> configuration section as the next child of a parent.

    Third parties should use this function to write configuration related to this extension.

    We add the following fields to the document:

      sizeLimit      //cb_config/split/size_limit
      splitSize      //cb_config/split/split_size
    
    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent that the section should be appended to.

    _setSplit(self, value)

    source code 

    Property target used to set the split configuration value. If not None, the value must be a SplitConfig object.

    Raises:
    • ValueError - If the value is not a SplitConfig

    _parseXmlData(self, xmlData)

    source code 

    Internal method to parse an XML string into the object.

    This method parses the XML document into a DOM tree (xmlDom) and then calls a static method to parse the split configuration section.

    Parameters:
    • xmlData (String data) - XML data to be parsed
    Raises:
    • ValueError - If the XML cannot be successfully parsed.

    _parseSplit(parent)
    Static Method

    source code 

Parses a split configuration section.

    We read the following individual fields:

      sizeLimit      //cb_config/split/size_limit
      splitSize      //cb_config/split/split_size
    
    Parameters:
    • parent - Parent node to search beneath.
    Returns:
SplitConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.
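A minimal sketch of reading those two fields with C{xml.dom.minidom}; this only illustrates the documented field paths, not the project's actual parsing code, and the example values are made up:

```python
from xml.dom import minidom

def parse_split(xml_data):
    # Find //cb_config/split and pull out size_limit and split_size,
    # returning None when the section itself is absent.
    dom = minidom.parseString(xml_data)
    sections = dom.getElementsByTagName("split")
    if not sections:
        return None
    def field(tag):
        nodes = sections[0].getElementsByTagName(tag)
        return nodes[0].firstChild.data.strip() if nodes else None
    return {"sizeLimit": field("size_limit"), "splitSize": field("split_size")}

config = parse_split(
    "<cb_config><split>"
    "<size_limit>2.5 GB</size_limit>"
    "<split_size>100 MB</split_size>"
    "</split></cb_config>"
)
```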

Property Details

    split

    Split configuration in terms of a SplitConfig object.

    Get Method:
    _getSplit(self) - Property target used to get the split configuration value.
    Set Method:
    _setSplit(self, value) - Property target used to set the split configuration value.

CedarBackup3-3.1.6/doc/interface/CedarBackup3.cli.Options-class.html

CedarBackup3.cli.Options
    Package CedarBackup3 :: Module cli :: Class Options

    Class Options

    source code

    object --+
             |
            Options
    
    Known Subclasses:

    Class representing command-line options for the cback3 script.

    The Options class is a Python object representation of the command-line options of the cback3 script.

The object representation is two-way: a command line string or a list of command line arguments can be used to create an Options object, and then changes to the object can be propagated back to a list of command-line arguments or to a command-line string. An Options object can even be created from scratch programmatically (if you have a need for that).

    There are two main levels of validation in the Options class. The first is field-level validation. Field-level validation comes into play when a given field in an object is assigned to or updated. We use Python's property functionality to enforce specific validations on field values, and in some places we even use customized list classes to enforce validations on list members. You should expect to catch a ValueError exception when making assignments to fields if you are programmatically filling an object.

    The second level of validation is post-completion validation. Certain validations don't make sense until an object representation of options is fully "complete". We don't want these validations to apply all of the time, because it would make building up a valid object from scratch a real pain. For instance, we might have to do things in the right order to keep from throwing exceptions, etc.

    All of these post-completion validations are encapsulated in the Options.validate method. This method can be called at any time by a client, and will always be called immediately after creating a Options object from a command line and before exporting a Options object back to a command line. This way, we get acceptable ease-of-use but we also don't accept or emit invalid command lines.


    Note: Lists within this class are "unordered" for equality comparisons.

Instance Methods
     
    __init__(self, argumentList=None, argumentString=None, validate=True)
    Initializes an options object.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    __eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y
     
    _getActions(self)
    Property target used to get the actions list.
    source code
     
    _getConfig(self)
    Property target used to get the config parameter.
    source code
     
    _getDebug(self)
    Property target used to get the debug flag.
    source code
     
    _getDiagnostics(self)
    Property target used to get the diagnostics flag.
    source code
     
    _getFull(self)
    Property target used to get the full flag.
    source code
     
    _getHelp(self)
    Property target used to get the help flag.
    source code
     
    _getLogfile(self)
    Property target used to get the logfile parameter.
    source code
     
    _getManaged(self)
    Property target used to get the managed flag.
    source code
     
    _getManagedOnly(self)
    Property target used to get the managedOnly flag.
    source code
     
    _getMode(self)
    Property target used to get the mode parameter.
    source code
     
    _getOutput(self)
    Property target used to get the output flag.
    source code
     
    _getOwner(self)
    Property target used to get the owner parameter.
    source code
     
    _getQuiet(self)
    Property target used to get the quiet flag.
    source code
     
    _getStacktrace(self)
    Property target used to get the stacktrace flag.
    source code
     
    _getVerbose(self)
    Property target used to get the verbose flag.
    source code
     
    _getVersion(self)
    Property target used to get the version flag.
    source code
     
    _parseArgumentList(self, argumentList)
    Internal method to parse a list of command-line arguments.
    source code
     
    _setActions(self, value)
    Property target used to set the actions list.
    source code
     
    _setConfig(self, value)
    Property target used to set the config parameter.
    source code
     
    _setDebug(self, value)
    Property target used to set the debug flag.
    source code
     
    _setDiagnostics(self, value)
    Property target used to set the diagnostics flag.
    source code
     
    _setFull(self, value)
    Property target used to set the full flag.
    source code
     
    _setHelp(self, value)
    Property target used to set the help flag.
    source code
     
    _setLogfile(self, value)
    Property target used to set the logfile parameter.
    source code
     
    _setManaged(self, value)
    Property target used to set the managed flag.
    source code
     
    _setManagedOnly(self, value)
    Property target used to set the managedOnly flag.
    source code
     
    _setMode(self, value)
    Property target used to set the mode parameter.
    source code
     
    _setOutput(self, value)
    Property target used to set the output flag.
    source code
     
    _setOwner(self, value)
    Property target used to set the owner parameter.
    source code
     
    _setQuiet(self, value)
    Property target used to set the quiet flag.
    source code
     
    _setStacktrace(self, value)
    Property target used to set the stacktrace flag.
    source code
     
    _setVerbose(self, value)
    Property target used to set the verbose flag.
    source code
     
    _setVersion(self, value)
    Property target used to set the version flag.
    source code
     
    buildArgumentList(self, validate=True)
    Extracts options into a list of command line arguments.
    source code
     
    buildArgumentString(self, validate=True)
    Extracts options into a string of command-line arguments.
    source code
     
    validate(self)
    Validates command-line options represented by the object.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties
      actions
    Command-line actions list.
      config
    Command-line configuration file (-c,--config) parameter.
      debug
    Command-line debug (-d,--debug) flag.
      diagnostics
    Command-line diagnostics (-D,--diagnostics) flag.
      full
    Command-line full-backup (-f,--full) flag.
      help
    Command-line help (-h,--help) flag.
      logfile
    Command-line logfile (-l,--logfile) parameter.
      managed
    Command-line managed (-M,--managed) flag.
      managedOnly
    Command-line managed-only (-N,--managed-only) flag.
      mode
    Command-line mode (-m,--mode) parameter.
      output
    Command-line output (-O,--output) flag.
      owner
    Command-line owner (-o,--owner) parameter, as tuple (user,group).
      quiet
    Command-line quiet (-q,--quiet) flag.
      stacktrace
    Command-line stacktrace (-s,--stack) flag.
      verbose
    Command-line verbose (-b,--verbose) flag.
      version
    Command-line version (-V,--version) flag.

    Inherited from object: __class__

Method Details

    __init__(self, argumentList=None, argumentString=None, validate=True)
    (Constructor)

    source code 

    Initializes an options object.

    If you initialize the object without passing either argumentList or argumentString, the object will be empty and will be invalid until it is filled in properly.

    No reference to the original arguments is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded.

    The argument list is assumed to be a list of arguments, not including the name of the command, something like sys.argv[1:]. If you pass sys.argv instead, things are not going to work.

    The argument string will be parsed into an argument list by the util.splitCommandLine function (see the documentation for that function for some important notes about its limitations). There is an assumption that the resulting list will be equivalent to sys.argv[1:], just like argumentList.

    Unless the validate argument is False, the Options.validate method will be called (with its default arguments) after successfully parsing any passed-in command line. This validation ensures that appropriate actions, etc. have been specified. Keep in mind that even if validate is False, it might not be possible to parse the passed-in command line, so an exception might still be raised.

    Parameters:
• argumentList (List of arguments, i.e. sys.argv[1:]) - Command line for a program.
    • argumentString (String, i.e. "cback3 --verbose stage store") - Command line for a program.
    • validate (Boolean true/false.) - Validate the command line after parsing it.
    Raises:
    • getopt.GetoptError - If the command-line arguments could not be parsed.
    • ValueError - If the command-line arguments are invalid.
    Overrides: object.__init__
    Notes:
    • The command line format is specified by the _usage function. Call _usage to see a usage statement for the cback3 script.
    • It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to read in invalid command line arguments.

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _getOwner(self)

    source code 

    Property target used to get the owner parameter. The parameter is a tuple of (user, group).

    _parseArgumentList(self, argumentList)

    source code 

    Internal method to parse a list of command-line arguments.

    Most of the validation we do here has to do with whether the arguments can be parsed and whether any values which exist are valid. We don't do any validation as to whether required elements exist or whether elements exist in the proper combination (instead, that's the job of the validate method).

    For any of the options which supply parameters, if the option is duplicated with long and short switches (i.e. -l and a --logfile) then the long switch is used. If the same option is duplicated with the same switch (long or short), then the last entry on the command line is used.

    Parameters:
    • argumentList (List of arguments to a command, i.e. sys.argv[1:]) - List of arguments to a command.
    Raises:
    • ValueError - If the argument list cannot be successfully parsed.
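The duplicate-switch rules described above can be sketched with the standard library C{getopt} module; the single C{--logfile}/C{-l} option here stands in for the real option set:

```python
import getopt

def parse_logfile(argument_list):
    # getopt returns switches in command-line order, so overwriting on
    # each iteration means the last occurrence of a switch wins; the
    # long_seen flag lets "--logfile" take precedence over any "-l".
    switches, _ = getopt.getopt(argument_list, "l:", ["logfile="])
    logfile = None
    long_seen = False
    for switch, value in switches:
        if switch == "--logfile":
            logfile = value        # long form always takes precedence
            long_seen = True
        elif switch == "-l" and not long_seen:
            logfile = value        # last short form wins otherwise
    return logfile
```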

    _setActions(self, value)

    source code 

    Property target used to set the actions list. We don't restrict the contents of actions. They're validated somewhere else.

    Raises:
    • ValueError - If the value is not valid.

    _setDebug(self, value)

    source code 

    Property target used to set the debug flag. No validations, but we normalize the value to True or False.

    _setDiagnostics(self, value)

    source code 

    Property target used to set the diagnostics flag. No validations, but we normalize the value to True or False.

    _setFull(self, value)

    source code 

    Property target used to set the full flag. No validations, but we normalize the value to True or False.

    _setHelp(self, value)

    source code 

    Property target used to set the help flag. No validations, but we normalize the value to True or False.

    _setLogfile(self, value)

    source code 

    Property target used to set the logfile parameter.

    Raises:
    • ValueError - If the value cannot be encoded properly.

    _setManaged(self, value)

    source code 

    Property target used to set the managed flag. No validations, but we normalize the value to True or False.

    _setManagedOnly(self, value)

    source code 

    Property target used to set the managedOnly flag. No validations, but we normalize the value to True or False.

    _setOutput(self, value)

    source code 

    Property target used to set the output flag. No validations, but we normalize the value to True or False.

    _setOwner(self, value)

    source code 

    Property target used to set the owner parameter. If not None, the owner must be a (user,group) tuple or list. Strings (and inherited children of strings) are explicitly disallowed. The value will be normalized to a tuple.

    Raises:
    • ValueError - If the value is not valid.

    _setQuiet(self, value)

    source code 

    Property target used to set the quiet flag. No validations, but we normalize the value to True or False.

    _setStacktrace(self, value)

    source code 

    Property target used to set the stacktrace flag. No validations, but we normalize the value to True or False.

    _setVerbose(self, value)

    source code 

    Property target used to set the verbose flag. No validations, but we normalize the value to True or False.

    _setVersion(self, value)

    source code 

    Property target used to set the version flag. No validations, but we normalize the value to True or False.

    buildArgumentList(self, validate=True)

    source code 

    Extracts options into a list of command line arguments.

    The original order of the various arguments (if, indeed, the object was initialized with a command-line) is not preserved in this generated argument list. Besides that, the argument list is normalized to use the long option names (i.e. --version rather than -V). The resulting list will be suitable for passing back to the constructor in the argumentList parameter. Unlike buildArgumentString, string arguments are not quoted here, because there is no need for it.

    Unless the validate parameter is False, the Options.validate method will be called (with its default arguments) against the options before extracting the command line. If the options are not valid, then an argument list will not be extracted.

    Parameters:
    • validate (Boolean true/false.) - Validate the options before extracting the command line.
    Returns:
    List representation of command-line arguments.
    Raises:
    • ValueError - If options within the object are invalid.

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to extract an invalid command line.

    buildArgumentString(self, validate=True)

    source code 

    Extracts options into a string of command-line arguments.

    The original order of the various arguments (if, indeed, the object was initialized with a command-line) is not preserved in this generated argument string. Besides that, the argument string is normalized to use the long option names (i.e. --version rather than -V) and to quote all string arguments with double quotes ("). The resulting string will be suitable for passing back to the constructor in the argumentString parameter.

    Unless the validate parameter is False, the Options.validate method will be called (with its default arguments) against the options before extracting the command line. If the options are not valid, then an argument string will not be extracted.

    Parameters:
    • validate (Boolean true/false.) - Validate the options before extracting the command line.
    Returns:
    String representation of command-line arguments.
    Raises:
    • ValueError - If options within the object are invalid.

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to extract an invalid command line.
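
    The normalization behavior documented for buildArgumentList and buildArgumentString can be illustrated with a stand-alone sketch. The real methods operate on an Options instance; the helper names and the dict-based input below are hypothetical:

    ```python
    def build_argument_list(options):
        """Build a long-option argument list from an {option: value} mapping.

        Order is not preserved from any original command line; long option
        names are always used, and values are not quoted in list form.
        """
        args = []
        for name, value in sorted(options.items()):
            if value is True:                  # boolean flag
                args.append("--%s" % name)
            elif value not in (None, False):   # option that takes a value
                args.extend(["--%s" % name, str(value)])
        return args

    def build_argument_string(options):
        """Same content as the list form, but string values are double-quoted."""
        parts = []
        for name, value in sorted(options.items()):
            if value is True:
                parts.append("--%s" % name)
            elif value not in (None, False):
                parts.append('--%s "%s"' % (name, value))
        return " ".join(parts)
    ```

    The only difference between the two forms is the quoting: the string form wraps values in double quotes because it must survive re-parsing, while the list form does not need quoting at all.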

    validate(self)

    source code 

    Validates command-line options represented by the object.

    Unless --help or --version are supplied, at least one action must be specified. Other validations (such as allowed values for particular options) are handled at assignment time by the properties functionality.

    Raises:
    • ValueError - If one of the validations fails.

    Note: The command line format is specified by the _usage function. Call _usage to see a usage statement for the cback3 script.


    Property Details

    actions

    Command-line actions list.

    Get Method:
    _getActions(self) - Property target used to get the actions list.
    Set Method:
    _setActions(self, value) - Property target used to set the actions list.

    config

    Command-line configuration file (-c,--config) parameter.

    Get Method:
    _getConfig(self) - Property target used to get the config parameter.
    Set Method:
    _setConfig(self, value) - Property target used to set the config parameter.

    debug

    Command-line debug (-d,--debug) flag.

    Get Method:
    _getDebug(self) - Property target used to get the debug flag.
    Set Method:
    _setDebug(self, value) - Property target used to set the debug flag.

    diagnostics

    Command-line diagnostics (-D,--diagnostics) flag.

    Get Method:
    _getDiagnostics(self) - Property target used to get the diagnostics flag.
    Set Method:
    _setDiagnostics(self, value) - Property target used to set the diagnostics flag.

    full

    Command-line full-backup (-f,--full) flag.

    Get Method:
    _getFull(self) - Property target used to get the full flag.
    Set Method:
    _setFull(self, value) - Property target used to set the full flag.

    help

    Command-line help (-h,--help) flag.

    Get Method:
    _getHelp(self) - Property target used to get the help flag.
    Set Method:
    _setHelp(self, value) - Property target used to set the help flag.

    logfile

    Command-line logfile (-l,--logfile) parameter.

    Get Method:
    _getLogfile(self) - Property target used to get the logfile parameter.
    Set Method:
    _setLogfile(self, value) - Property target used to set the logfile parameter.

    managed

    Command-line managed (-M,--managed) flag.

    Get Method:
    _getManaged(self) - Property target used to get the managed flag.
    Set Method:
    _setManaged(self, value) - Property target used to set the managed flag.

    managedOnly

    Command-line managed-only (-N,--managed-only) flag.

    Get Method:
    _getManagedOnly(self) - Property target used to get the managedOnly flag.
    Set Method:
    _setManagedOnly(self, value) - Property target used to set the managedOnly flag.

    mode

    Command-line mode (-m,--mode) parameter.

    Get Method:
    _getMode(self) - Property target used to get the mode parameter.
    Set Method:
    _setMode(self, value) - Property target used to set the mode parameter.

    output

    Command-line output (-O,--output) flag.

    Get Method:
    _getOutput(self) - Property target used to get the output flag.
    Set Method:
    _setOutput(self, value) - Property target used to set the output flag.

    owner

    Command-line owner (-o,--owner) parameter, as tuple (user,group).

    Get Method:
    _getOwner(self) - Property target used to get the owner parameter.
    Set Method:
    _setOwner(self, value) - Property target used to set the owner parameter.

    quiet

    Command-line quiet (-q,--quiet) flag.

    Get Method:
    _getQuiet(self) - Property target used to get the quiet flag.
    Set Method:
    _setQuiet(self, value) - Property target used to set the quiet flag.

    stacktrace

    Command-line stacktrace (-s,--stack) flag.

    Get Method:
    _getStacktrace(self) - Property target used to get the stacktrace flag.
    Set Method:
    _setStacktrace(self, value) - Property target used to set the stacktrace flag.

    verbose

    Command-line verbose (-b,--verbose) flag.

    Get Method:
    _getVerbose(self) - Property target used to get the verbose flag.
    Set Method:
    _setVerbose(self, value) - Property target used to set the verbose flag.

    version

    Command-line version (-V,--version) flag.

    Get Method:
    _getVersion(self) - Property target used to get the version flag.
    Set Method:
    _setVersion(self, value) - Property target used to set the version flag.

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.writers.dvdwriter.MediaDefinition-class.html

    CedarBackup3.writers.dvdwriter.MediaDefinition

    Class MediaDefinition

    source code

    object --+
             |
            MediaDefinition
    

    Class encapsulating information about DVD media definitions.

    The following media types are accepted:

    • MEDIA_DVDPLUSR: DVD+R media (4.4 GB capacity)
    • MEDIA_DVDPLUSRW: DVD+RW media (4.4 GB capacity)

    Note that the capacity attribute returns capacity in terms of ISO sectors (util.ISO_SECTOR_SIZE). This is for compatibility with the CD writer functionality.

    The capacities are 4.4 GB because Cedar Backup deals in "true" gigabytes of 1024*1024*1024 bytes per gigabyte.
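
    As a quick sanity check, the sector-based capacity follows directly from those definitions. ISO_SECTOR_SIZE is assumed here to be the standard 2048 bytes:

    ```python
    ISO_SECTOR_SIZE = 2048           # bytes per ISO sector (util.ISO_SECTOR_SIZE)
    GIGABYTE = 1024 * 1024 * 1024    # "true" gigabytes, as Cedar Backup defines them

    capacity_bytes = 4.4 * GIGABYTE  # DVD+R / DVD+RW capacity
    capacity_sectors = capacity_bytes / ISO_SECTOR_SIZE

    print(int(capacity_sectors))     # roughly 2.3 million ISO sectors
    ```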

    Instance Methods
     
    __init__(self, mediaType)
    Creates a media definition for the indicated media type.
    source code
     
    _setValues(self, mediaType)
    Sets values based on media type.
    source code
     
    _getMediaType(self)
    Property target used to get the media type value.
    source code
     
    _getRewritable(self)
    Property target used to get the rewritable flag value.
    source code
     
    _getCapacity(self)
    Property target used to get the capacity value.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Properties
      mediaType
    Configured media type.
      rewritable
    Boolean indicating whether the media is rewritable.
      capacity
    Total capacity of media in 2048-byte sectors.

    Inherited from object: __class__

    Method Details

    __init__(self, mediaType)
    (Constructor)

    source code 

    Creates a media definition for the indicated media type.

    Parameters:
    • mediaType - Type of the media, as discussed above.
    Raises:
    • ValueError - If the media type is unknown or unsupported.
    Overrides: object.__init__

    _setValues(self, mediaType)

    source code 

    Sets values based on media type.

    Parameters:
    • mediaType - Type of the media, as discussed above.
    Raises:
    • ValueError - If the media type is unknown or unsupported.

    Property Details

    mediaType

    Configured media type.

    Get Method:
    _getMediaType(self) - Property target used to get the media type value.

    rewritable

    Boolean indicating whether the media is rewritable.

    Get Method:
    _getRewritable(self) - Property target used to get the rewritable flag value.

    capacity

    Total capacity of media in 2048-byte sectors.

    Get Method:
    _getCapacity(self) - Property target used to get the capacity value.

    CedarBackup3-3.1.6/doc/interface/CedarBackup3.filesystem.FilesystemList-class.html

    CedarBackup3.filesystem.FilesystemList

    Class FilesystemList

    source code

    object --+    
             |    
          list --+
                 |
                FilesystemList
    
    Known Subclasses:

    Represents a list of filesystem items.

    This is a generic class that represents a list of filesystem items. Callers can add individual files or directories to the list, or can recursively add the contents of a directory. The class also allows for up-front exclusions in several forms (all files, all directories, all items matching a pattern, all items whose basename matches a pattern, or all directories containing a specific "ignore file"). Symbolic links are typically backed up non-recursively, i.e. the link to a directory is backed up, but not the contents of that link (we don't want to deal with recursive loops, etc.).

    The custom methods such as addFile will only add items if they exist on the filesystem and do not match any exclusions that are already in place. However, since a FilesystemList is a subclass of Python's standard list class, callers can also add items to the list in the usual way, using methods like append() or insert(). No validations apply to items added to the list in this way; however, many list-manipulation methods deal "gracefully" with items that don't exist in the filesystem, often by ignoring them.

    Once a list has been created, callers can remove individual items from the list using standard methods like pop() or remove() or they can use custom methods to remove specific types of entries or entries which match a particular pattern.


    Note: Regular expression patterns that apply to paths are assumed to be bounded at front and back by the beginning and end of the string, i.e. they are treated as if they begin with ^ and end with $. This is true whether we are matching a complete path or a basename.
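
    That bounding behavior corresponds to Python's re.fullmatch; a small stand-alone illustration (the helper name is hypothetical):

    ```python
    import re

    def matches_exclusion(pattern, path):
        """Match the way FilesystemList exclusion patterns do: bounded at both
        ends, as if the pattern began with ^ and ended with $."""
        return re.fullmatch(pattern, path) is not None

    # A pattern must cover the whole path (or basename) to match:
    print(matches_exclusion(r".*\.log", "/var/log/app.log"))  # covers the whole path
    print(matches_exclusion(r"tmp", "/tmp/file"))             # substring only, no match
    ```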

    Instance Methods
    new empty list
    __init__(self)
    Initializes a list with no configured exclusions.
    source code
     
    addFile(self, path)
    Adds a file to the list.
    source code
     
    addDir(self, path)
    Adds a directory to the list.
    source code
     
    addDirContents(self, path, recursive=True, addSelf=True, linkDepth=0, dereference=False)
    Adds the contents of a directory to the list.
    source code
     
    removeFiles(self, pattern=None)
    Removes file entries from the list.
    source code
     
    removeDirs(self, pattern=None)
    Removes directory entries from the list.
    source code
     
    removeLinks(self, pattern=None)
    Removes soft link entries from the list.
    source code
     
    removeMatch(self, pattern)
    Removes from the list all entries matching a pattern.
    source code
     
    removeInvalid(self)
    Removes from the list all entries that do not exist on disk.
    source code
     
    normalize(self)
    Normalizes the list, ensuring that each entry is unique.
    source code
     
    _setExcludeFiles(self, value)
    Property target used to set the exclude files flag.
    source code
     
    _getExcludeFiles(self)
    Property target used to get the exclude files flag.
    source code
     
    _setExcludeDirs(self, value)
    Property target used to set the exclude directories flag.
    source code
     
    _getExcludeDirs(self)
    Property target used to get the exclude directories flag.
    source code
     
    _setExcludeLinks(self, value)
    Property target used to set the exclude soft links flag.
    source code
     
    _getExcludeLinks(self)
    Property target used to get the exclude soft links flag.
    source code
     
    _setExcludePaths(self, value)
    Property target used to set the exclude paths list.
    source code
     
    _getExcludePaths(self)
    Property target used to get the absolute exclude paths list.
    source code
     
    _setExcludePatterns(self, value)
    Property target used to set the exclude patterns list.
    source code
     
    _getExcludePatterns(self)
    Property target used to get the exclude patterns list.
    source code
     
    _setExcludeBasenamePatterns(self, value)
    Property target used to set the exclude basename patterns list.
    source code
     
    _getExcludeBasenamePatterns(self)
    Property target used to get the exclude basename patterns list.
    source code
     
    _setIgnoreFile(self, value)
    Property target used to set the ignore file.
    source code
     
    _getIgnoreFile(self)
    Property target used to get the ignore file.
    source code
     
    _addDirContentsInternal(self, path, includePath=True, recursive=True, linkDepth=0, dereference=False)
    Internal implementation of addDirContents.
    source code
     
    verify(self)
    Verifies that all entries in the list exist on disk.
    source code

    Inherited from list: __add__, __contains__, __delitem__, __delslice__, __eq__, __ge__, __getattribute__, __getitem__, __getslice__, __gt__, __iadd__, __imul__, __iter__, __le__, __len__, __lt__, __mul__, __ne__, __new__, __repr__, __reversed__, __rmul__, __setitem__, __setslice__, __sizeof__, append, count, extend, index, insert, pop, remove, reverse, sort

    Inherited from object: __delattr__, __format__, __reduce__, __reduce_ex__, __setattr__, __str__, __subclasshook__

    Class Variables

    Inherited from list: __hash__

    Properties
      excludeFiles
    Boolean indicating whether files should be excluded.
      excludeDirs
    Boolean indicating whether directories should be excluded.
      excludeLinks
    Boolean indicating whether soft links should be excluded.
      excludePaths
    List of absolute paths to be excluded.
      excludePatterns
    List of regular expression patterns (matching complete path) to be excluded.
      excludeBasenamePatterns
    List of regular expression patterns (matching basename) to be excluded.
      ignoreFile
    Name of file which will cause directory contents to be ignored.

    Inherited from object: __class__

    Method Details

    __init__(self)
    (Constructor)

    source code 

    Initializes a list with no configured exclusions.

    Returns: new empty list
    Overrides: object.__init__

    addFile(self, path)

    source code 

    Adds a file to the list.

    The path must exist and must be a file or a link to an existing file. It will be added to the list subject to any exclusions that are in place.

    Parameters:
    • path (String representing a path on disk) - File path to be added to the list
    Returns:
    Number of items added to the list.
    Raises:
    • ValueError - If path is not a file or does not exist.
    • ValueError - If the path could not be encoded properly.

    addDir(self, path)

    source code 

    Adds a directory to the list.

    The path must exist and must be a directory or a link to an existing directory. It will be added to the list subject to any exclusions that are in place. The ignoreFile does not apply to this method, only to addDirContents.

    Parameters:
    • path (String representing a path on disk) - Directory path to be added to the list
    Returns:
    Number of items added to the list.
    Raises:
    • ValueError - If path is not a directory or does not exist.
    • ValueError - If the path could not be encoded properly.

    addDirContents(self, path, recursive=True, addSelf=True, linkDepth=0, dereference=False)

    source code 

    Adds the contents of a directory to the list.

    The path must exist and must be a directory or a link to a directory. The contents of the directory (as well as the directory path itself) will be recursively added to the list, subject to any exclusions that are in place. If you only want the directory and its immediate contents to be added, then pass in recursive=False.

    Parameters:
    • path (String representing a path on disk) - Directory path whose contents should be added to the list
    • recursive (Boolean value) - Indicates whether directory contents should be added recursively.
    • addSelf (Boolean value) - Indicates whether the directory itself should be added to the list.
    • linkDepth (Integer value, where zero means not to follow any soft links) - Maximum depth of the tree at which soft links should be followed
    • dereference (Boolean value) - Indicates whether soft links, if followed, should be dereferenced
    Returns:
    Number of items recursively added to the list
    Raises:
    • ValueError - If path is not a directory or does not exist.
    • ValueError - If the path could not be encoded properly.
    Notes:
    • If a directory's absolute path matches an exclude pattern or path, or if the directory contains the configured ignore file, then the directory and all of its contents will be recursively excluded from the list.
    • If the passed-in directory happens to be a soft link, it will be recursed. However, the linkDepth parameter controls whether any soft links within the directory will be recursed. The link depth is the maximum depth of the tree at which soft links should be followed. So, a depth of 0 does not follow any soft links, a depth of 1 follows only links within the passed-in directory, a depth of 2 follows the links at the next level down, etc.
    • Any invalid soft links (i.e. soft links that point to non-existent items) will be silently ignored.
    • The excludeDirs flag only controls whether any given directory path itself is added to the list once it has been discovered. It does not modify any behavior related to directory recursion.
    • If you call this method on a link to a directory, that link will never be dereferenced (it may, however, be followed).
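
    The linkDepth semantics in the notes above can be sketched with a toy walker over an in-memory tree. The real method operates on the filesystem; everything here, including the '@' naming convention for links, is illustrative only:

    ```python
    def walk(tree, link_depth=0):
        """Collect entry names from a nested dict; keys ending in '@' stand in
        for soft links to directories.  Each recursive call reduces link_depth
        by one, and a link is followed only while link_depth is positive,
        mirroring the documented addDirContents behavior."""
        found = []
        for name, child in tree.items():
            found.append(name)                      # the entry itself is always added
            if isinstance(child, dict):
                if name.endswith("@") and link_depth <= 0:
                    continue                        # link added, but never followed
                found.extend(walk(child, link_depth - 1))
        return found

    tree = {"docs": {"archive@": {"old.txt": None}}}
    print(walk(tree, 0))   # the link is added but not followed
    print(walk(tree, 2))   # depth 2 reaches through the link
    ```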

    removeFiles(self, pattern=None)

    source code 

    Removes file entries from the list.

    If pattern is not passed in or is None, then all file entries will be removed from the list. Otherwise, only those file entries matching the pattern will be removed. Any entry which does not exist on disk will be ignored (use removeInvalid to purge those entries).

    This method might be fairly slow for large lists, since it must check the type of each item in the list. If you know ahead of time that you want to exclude all files, then you will be better off setting excludeFiles to True before adding items to the list.

    Parameters:
    • pattern - Regular expression pattern representing entries to remove
    Returns:
    Number of entries removed
    Raises:
    • ValueError - If the passed-in pattern is not a valid regular expression.

    removeDirs(self, pattern=None)

    source code 

    Removes directory entries from the list.

    If pattern is not passed in or is None, then all directory entries will be removed from the list. Otherwise, only those directory entries matching the pattern will be removed. Any entry which does not exist on disk will be ignored (use removeInvalid to purge those entries).

    This method might be fairly slow for large lists, since it must check the type of each item in the list. If you know ahead of time that you want to exclude all directories, then you will be better off setting excludeDirs to True before adding items to the list (note that this will not prevent you from recursively adding the contents of directories).

    Parameters:
    • pattern - Regular expression pattern representing entries to remove
    Returns:
    Number of entries removed
    Raises:
    • ValueError - If the passed-in pattern is not a valid regular expression.

    removeLinks(self, pattern=None)

    source code 

    Removes soft link entries from the list.

    If pattern is not passed in or is None, then all soft link entries will be removed from the list. Otherwise, only those soft link entries matching the pattern will be removed. Any entry which does not exist on disk will be ignored (use removeInvalid to purge those entries).

    This method might be fairly slow for large lists, since it must check the type of each item in the list. If you know ahead of time that you want to exclude all soft links, then you will be better off setting excludeLinks to True before adding items to the list.

    Parameters:
    • pattern - Regular expression pattern representing entries to remove
    Returns:
    Number of entries removed
    Raises:
    • ValueError - If the passed-in pattern is not a valid regular expression.

    removeMatch(self, pattern)

    source code 

    Removes from the list all entries matching a pattern.

    This method removes from the list all entries which match the passed in pattern. Since there is no need to check the type of each entry, it is faster to call this method than to call the removeFiles, removeDirs or removeLinks methods individually. If you know which patterns you will want to remove ahead of time, you may be better off setting excludePatterns or excludeBasenamePatterns before adding items to the list.

    Parameters:
    • pattern - Regular expression pattern representing entries to remove
    Returns:
    Number of entries removed.
    Raises:
    • ValueError - If the passed-in pattern is not a valid regular expression.

    Note: Unlike when using the exclude lists, the pattern here is not bounded at the front and the back of the string. You can use any pattern you want.
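
    The difference between the bounded exclusion-list matching and the unbounded removeMatch behavior can be shown with a small stand-alone example:

    ```python
    import re

    entries = ["/home/user/notes.txt", "/home/user/cache/file.tmp"]

    # Exclusion-list style: bounded, as if the pattern were wrapped in ^...$
    bounded = [e for e in entries if re.fullmatch(r"cache", e)]

    # removeMatch style: unbounded, so the pattern may match anywhere in the path
    unbounded = [e for e in entries if re.search(r"cache", e)]
    ```

    With the bounded semantics, "cache" matches nothing here (no entry is exactly the string "cache"); with the unbounded semantics of removeMatch, it matches any path containing "cache".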

    removeInvalid(self)

    source code 

    Removes from the list all entries that do not exist on disk.

    This method removes from the list all entries which do not currently exist on disk in some form. No attention is paid to whether the entries are files or directories.

    Returns:
    Number of entries removed.

    _setExcludeFiles(self, value)

    source code 

    Property target used to set the exclude files flag. No validations, but we normalize the value to True or False.

    _setExcludeDirs(self, value)

    source code 

    Property target used to set the exclude directories flag. No validations, but we normalize the value to True or False.

    _setExcludeLinks(self, value)

    source code 

    Property target used to set the exclude soft links flag. No validations, but we normalize the value to True or False.

    _setExcludePaths(self, value)

    source code 

    Property target used to set the exclude paths list. A None value is converted to an empty list. Elements do not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If any list element is not an absolute path.

    _setExcludePatterns(self, value)

    source code 

    Property target used to set the exclude patterns list. A None value is converted to an empty list.

    _setExcludeBasenamePatterns(self, value)

    source code 

    Property target used to set the exclude basename patterns list. A None value is converted to an empty list.

    _setIgnoreFile(self, value)

    source code 

    Property target used to set the ignore file. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _addDirContentsInternal(self, path, includePath=True, recursive=True, linkDepth=0, dereference=False)

    source code 

    Internal implementation of addDirContents.

    This internal implementation exists due to some refactoring. Basically, some subclasses have a need to add the contents of a directory, but not the directory itself. This is different than the standard FilesystemList behavior and actually ends up making a special case out of the first call in the recursive chain. Since I don't want to expose the modified interface, addDirContents ends up being wholly implemented in terms of this method.

    The linkDepth parameter controls whether soft links are followed when we are adding the contents recursively. Any recursive calls reduce the value by one. If the value is zero or less, then soft links will just be added as directories, but will not be followed. This means that links are followed to a constant depth starting from the top-most directory.

    There is one difference between soft links and directories: soft links that are added recursively are not placed into the list explicitly. This is because if we do add the links recursively, the resulting tar file gets a little confused (it has a link and a directory with the same name).

    Parameters:
    • path - Directory path whose contents should be added to the list.
    • includePath - Indicates whether to include the path as well as contents.
    • recursive - Indicates whether directory contents should be added recursively.
    • linkDepth - Depth of soft links that should be followed
    • dereference - Indicates whether soft links, if followed, should be dereferenced
    Returns:
    Number of items recursively added to the list
    Raises:
    • ValueError - If path is not a directory or does not exist.

    Note: If you call this method on a link to a directory, that link will never be dereferenced (it may, however, be followed).

    verify(self)

    source code 

    Verifies that all entries in the list exist on disk.

    Returns:
    True if all entries exist, False otherwise.

    Property Details

    excludeFiles

    Boolean indicating whether files should be excluded.

    Get Method:
    _getExcludeFiles(self) - Property target used to get the exclude files flag.
    Set Method:
    _setExcludeFiles(self, value) - Property target used to set the exclude files flag.

    excludeDirs

    Boolean indicating whether directories should be excluded.

    Get Method:
    _getExcludeDirs(self) - Property target used to get the exclude directories flag.
    Set Method:
    _setExcludeDirs(self, value) - Property target used to set the exclude directories flag.

    excludeLinks

    Boolean indicating whether soft links should be excluded.

    Get Method:
    _getExcludeLinks(self) - Property target used to get the exclude soft links flag.
    Set Method:
    _setExcludeLinks(self, value) - Property target used to set the exclude soft links flag.

    excludePaths

    List of absolute paths to be excluded.

    Get Method:
    _getExcludePaths(self) - Property target used to get the absolute exclude paths list.
    Set Method:
    _setExcludePaths(self, value) - Property target used to set the exclude paths list.

    excludePatterns

    List of regular expression patterns (matching complete path) to be excluded.

    Get Method:
    _getExcludePatterns(self) - Property target used to get the exclude patterns list.
    Set Method:
    _setExcludePatterns(self, value) - Property target used to set the exclude patterns list.

    excludeBasenamePatterns

    List of regular expression patterns (matching basename) to be excluded.

    Get Method:
    _getExcludeBasenamePatterns(self) - Property target used to get the exclude basename patterns list.
    Set Method:
    _setExcludeBasenamePatterns(self, value) - Property target used to set the exclude basename patterns list.

    ignoreFile

    Name of file which will cause directory contents to be ignored.

    Get Method:
    _getIgnoreFile(self) - Property target used to get the ignore file.
    Set Method:
    _setIgnoreFile(self, value) - Property target used to set the ignore file.

    CedarBackup3-3.1.6/doc/interface/api-objects.txt
CedarBackup3.actions.store.isStartOfWeek CedarBackup3.util-module.html#isStartOfWeek CedarBackup3.actions.store.unmount CedarBackup3.util-module.html#unmount CedarBackup3.actions.store.__package__ CedarBackup3.actions.store-module.html#__package__ CedarBackup3.actions.store.writeStoreIndicator CedarBackup3.actions.store-module.html#writeStoreIndicator CedarBackup3.actions.store.logger CedarBackup3.actions.store-module.html#logger CedarBackup3.actions.store.displayBytes CedarBackup3.util-module.html#displayBytes CedarBackup3.actions.store.checkMediaState CedarBackup3.actions.util-module.html#checkMediaState CedarBackup3.actions.store._findCorrectDailyDir CedarBackup3.actions.store-module.html#_findCorrectDailyDir CedarBackup3.actions.store.writeImageBlankSafe CedarBackup3.actions.store-module.html#writeImageBlankSafe CedarBackup3.actions.store.buildMediaLabel CedarBackup3.actions.util-module.html#buildMediaLabel CedarBackup3.actions.store.compareContents CedarBackup3.filesystem-module.html#compareContents CedarBackup3.actions.store.consistencyCheck CedarBackup3.actions.store-module.html#consistencyCheck CedarBackup3.actions.store.mount CedarBackup3.util-module.html#mount CedarBackup3.actions.util CedarBackup3.actions.util-module.html CedarBackup3.actions.util.findDailyDirs CedarBackup3.actions.util-module.html#findDailyDirs CedarBackup3.actions.util.writeIndicatorFile CedarBackup3.actions.util-module.html#writeIndicatorFile CedarBackup3.actions.util.createWriter CedarBackup3.actions.util-module.html#createWriter CedarBackup3.actions.util.__package__ CedarBackup3.actions.util-module.html#__package__ CedarBackup3.actions.util.readMediaLabel CedarBackup3.writers.util-module.html#readMediaLabel CedarBackup3.actions.util.logger CedarBackup3.actions.util-module.html#logger CedarBackup3.actions.util._getMediaType CedarBackup3.actions.util-module.html#_getMediaType CedarBackup3.actions.util._getDeviceType CedarBackup3.actions.util-module.html#_getDeviceType 
CedarBackup3.actions.util.changeOwnership CedarBackup3.util-module.html#changeOwnership CedarBackup3.actions.util.getBackupFiles CedarBackup3.actions.util-module.html#getBackupFiles CedarBackup3.actions.util.MEDIA_LABEL_PREFIX CedarBackup3.actions.util-module.html#MEDIA_LABEL_PREFIX CedarBackup3.actions.util.deviceMounted CedarBackup3.util-module.html#deviceMounted CedarBackup3.actions.util.checkMediaState CedarBackup3.actions.util-module.html#checkMediaState CedarBackup3.actions.util.buildMediaLabel CedarBackup3.actions.util-module.html#buildMediaLabel CedarBackup3.actions.util.initializeMediaState CedarBackup3.actions.util-module.html#initializeMediaState CedarBackup3.actions.validate CedarBackup3.actions.validate-module.html CedarBackup3.actions.validate._checkDir CedarBackup3.actions.validate-module.html#_checkDir CedarBackup3.actions.validate._validatePurge CedarBackup3.actions.validate-module.html#_validatePurge CedarBackup3.actions.validate._validateReference CedarBackup3.actions.validate-module.html#_validateReference CedarBackup3.actions.validate._validateStage CedarBackup3.actions.validate-module.html#_validateStage CedarBackup3.actions.validate._validateOptions CedarBackup3.actions.validate-module.html#_validateOptions CedarBackup3.actions.validate.__package__ CedarBackup3.actions.validate-module.html#__package__ CedarBackup3.actions.validate.getUidGid CedarBackup3.util-module.html#getUidGid CedarBackup3.actions.validate._validateExtensions CedarBackup3.actions.validate-module.html#_validateExtensions CedarBackup3.actions.validate._validateCollect CedarBackup3.actions.validate-module.html#_validateCollect CedarBackup3.actions.validate.getFunctionReference CedarBackup3.util-module.html#getFunctionReference CedarBackup3.actions.validate.executeValidate CedarBackup3.actions.validate-module.html#executeValidate CedarBackup3.actions.validate._validateStore CedarBackup3.actions.validate-module.html#_validateStore CedarBackup3.actions.validate.createWriter 
CedarBackup3.actions.util-module.html#createWriter CedarBackup3.actions.validate.logger CedarBackup3.actions.validate-module.html#logger CedarBackup3.cli CedarBackup3.cli-module.html CedarBackup3.cli.SHORT_SWITCHES CedarBackup3.cli-module.html#SHORT_SWITCHES CedarBackup3.cli.executeRebuild CedarBackup3.actions.rebuild-module.html#executeRebuild CedarBackup3.cli.LONG_SWITCHES CedarBackup3.cli-module.html#LONG_SWITCHES CedarBackup3.cli.DISK_LOG_FORMAT CedarBackup3.cli-module.html#DISK_LOG_FORMAT CedarBackup3.cli.DEFAULT_LOGFILE CedarBackup3.cli-module.html#DEFAULT_LOGFILE CedarBackup3.cli.DEFAULT_MODE CedarBackup3.cli-module.html#DEFAULT_MODE CedarBackup3.cli.executeStore CedarBackup3.actions.store-module.html#executeStore CedarBackup3.cli._usage CedarBackup3.cli-module.html#_usage CedarBackup3.cli.getFunctionReference CedarBackup3.util-module.html#getFunctionReference CedarBackup3.cli._setupDiskOutputLogging CedarBackup3.cli-module.html#_setupDiskOutputLogging CedarBackup3.cli.cli CedarBackup3.cli-module.html#cli CedarBackup3.cli.customizeOverrides CedarBackup3.customize-module.html#customizeOverrides CedarBackup3.cli.sortDict CedarBackup3.util-module.html#sortDict CedarBackup3.cli.__package__ CedarBackup3.cli-module.html#__package__ CedarBackup3.cli.DISK_OUTPUT_FORMAT CedarBackup3.cli-module.html#DISK_OUTPUT_FORMAT CedarBackup3.cli.executeValidate CedarBackup3.actions.validate-module.html#executeValidate CedarBackup3.cli.VALIDATE_INDEX CedarBackup3.cli-module.html#VALIDATE_INDEX CedarBackup3.cli.executeInitialize CedarBackup3.actions.initialize-module.html#executeInitialize CedarBackup3.cli.getUidGid CedarBackup3.util-module.html#getUidGid CedarBackup3.cli._setupScreenFlowLogging CedarBackup3.cli-module.html#_setupScreenFlowLogging CedarBackup3.cli.executeCommand CedarBackup3.util-module.html#executeCommand CedarBackup3.cli.executeCollect CedarBackup3.actions.collect-module.html#executeCollect CedarBackup3.cli.logger CedarBackup3.cli-module.html#logger 
CedarBackup3.cli.splitCommandLine CedarBackup3.util-module.html#splitCommandLine CedarBackup3.cli.NONCOMBINE_ACTIONS CedarBackup3.cli-module.html#NONCOMBINE_ACTIONS CedarBackup3.cli._setupLogfile CedarBackup3.cli-module.html#_setupLogfile CedarBackup3.cli.STAGE_INDEX CedarBackup3.cli-module.html#STAGE_INDEX CedarBackup3.cli._setupOutputLogging CedarBackup3.cli-module.html#_setupOutputLogging CedarBackup3.cli.executePurge CedarBackup3.actions.purge-module.html#executePurge CedarBackup3.cli.STORE_INDEX CedarBackup3.cli-module.html#STORE_INDEX CedarBackup3.cli.COLLECT_INDEX CedarBackup3.cli-module.html#COLLECT_INDEX CedarBackup3.cli.SCREEN_LOG_STREAM CedarBackup3.cli-module.html#SCREEN_LOG_STREAM CedarBackup3.cli.encodePath CedarBackup3.util-module.html#encodePath CedarBackup3.cli.COMBINE_ACTIONS CedarBackup3.cli-module.html#COMBINE_ACTIONS CedarBackup3.cli.DEFAULT_CONFIG CedarBackup3.cli-module.html#DEFAULT_CONFIG CedarBackup3.cli.executeStage CedarBackup3.actions.stage-module.html#executeStage CedarBackup3.cli.DEFAULT_OWNERSHIP CedarBackup3.cli-module.html#DEFAULT_OWNERSHIP CedarBackup3.cli.DATE_FORMAT CedarBackup3.cli-module.html#DATE_FORMAT CedarBackup3.cli.setupPathResolver CedarBackup3.cli-module.html#setupPathResolver CedarBackup3.cli.SCREEN_LOG_FORMAT CedarBackup3.cli-module.html#SCREEN_LOG_FORMAT CedarBackup3.cli.setupLogging CedarBackup3.cli-module.html#setupLogging CedarBackup3.cli._diagnostics CedarBackup3.cli-module.html#_diagnostics CedarBackup3.cli.INITIALIZE_INDEX CedarBackup3.cli-module.html#INITIALIZE_INDEX CedarBackup3.cli._version CedarBackup3.cli-module.html#_version CedarBackup3.cli.PURGE_INDEX CedarBackup3.cli-module.html#PURGE_INDEX CedarBackup3.cli.REBUILD_INDEX CedarBackup3.cli-module.html#REBUILD_INDEX CedarBackup3.cli.VALID_ACTIONS CedarBackup3.cli-module.html#VALID_ACTIONS CedarBackup3.cli._setupFlowLogging CedarBackup3.cli-module.html#_setupFlowLogging CedarBackup3.cli._setupDiskFlowLogging 
CedarBackup3.cli-module.html#_setupDiskFlowLogging CedarBackup3.config CedarBackup3.config-module.html CedarBackup3.config.VALID_MEDIA_TYPES CedarBackup3.config-module.html#VALID_MEDIA_TYPES CedarBackup3.config.VALID_ORDER_MODES CedarBackup3.config-module.html#VALID_ORDER_MODES CedarBackup3.config.VALID_COLLECT_MODES CedarBackup3.config-module.html#VALID_COLLECT_MODES CedarBackup3.config.readBoolean CedarBackup3.xmlutil-module.html#readBoolean CedarBackup3.config.addByteQuantityNode CedarBackup3.config-module.html#addByteQuantityNode CedarBackup3.config.validateScsiId CedarBackup3.writers.util-module.html#validateScsiId CedarBackup3.config.REWRITABLE_MEDIA_TYPES CedarBackup3.config-module.html#REWRITABLE_MEDIA_TYPES CedarBackup3.config.readString CedarBackup3.xmlutil-module.html#readString CedarBackup3.config.addContainerNode CedarBackup3.xmlutil-module.html#addContainerNode CedarBackup3.config.VALID_ARCHIVE_MODES CedarBackup3.config-module.html#VALID_ARCHIVE_MODES CedarBackup3.config.serializeDom CedarBackup3.xmlutil-module.html#serializeDom CedarBackup3.config.DEFAULT_MEDIA_TYPE CedarBackup3.config-module.html#DEFAULT_MEDIA_TYPE CedarBackup3.config.convertSize CedarBackup3.util-module.html#convertSize CedarBackup3.config.VALID_CD_MEDIA_TYPES CedarBackup3.config-module.html#VALID_CD_MEDIA_TYPES CedarBackup3.config.__package__ CedarBackup3.config-module.html#__package__ CedarBackup3.config.createInputDom CedarBackup3.xmlutil-module.html#createInputDom CedarBackup3.config.validateDriveSpeed CedarBackup3.writers.util-module.html#validateDriveSpeed CedarBackup3.config.checkUnique CedarBackup3.util-module.html#checkUnique CedarBackup3.config.readInteger CedarBackup3.xmlutil-module.html#readInteger CedarBackup3.config.parseCommaSeparatedString CedarBackup3.util-module.html#parseCommaSeparatedString CedarBackup3.config.isElement CedarBackup3.xmlutil-module.html#isElement CedarBackup3.config.logger CedarBackup3.config-module.html#logger CedarBackup3.config.displayBytes 
CedarBackup3.util-module.html#displayBytes CedarBackup3.config.addStringNode CedarBackup3.xmlutil-module.html#addStringNode CedarBackup3.config.VALID_DEVICE_TYPES CedarBackup3.config-module.html#VALID_DEVICE_TYPES CedarBackup3.config.DEFAULT_DEVICE_TYPE CedarBackup3.config-module.html#DEFAULT_DEVICE_TYPE CedarBackup3.config.addBooleanNode CedarBackup3.xmlutil-module.html#addBooleanNode CedarBackup3.config.readChildren CedarBackup3.xmlutil-module.html#readChildren CedarBackup3.config.VALID_FAILURE_MODES CedarBackup3.config-module.html#VALID_FAILURE_MODES CedarBackup3.config.VALID_BYTE_UNITS CedarBackup3.config-module.html#VALID_BYTE_UNITS CedarBackup3.config.readFirstChild CedarBackup3.xmlutil-module.html#readFirstChild CedarBackup3.config.encodePath CedarBackup3.util-module.html#encodePath CedarBackup3.config.VALID_BLANK_MODES CedarBackup3.config-module.html#VALID_BLANK_MODES CedarBackup3.config.readStringList CedarBackup3.xmlutil-module.html#readStringList CedarBackup3.config.VALID_COMPRESS_MODES CedarBackup3.config-module.html#VALID_COMPRESS_MODES CedarBackup3.config.ACTION_NAME_REGEX CedarBackup3.config-module.html#ACTION_NAME_REGEX CedarBackup3.config.createOutputDom CedarBackup3.xmlutil-module.html#createOutputDom CedarBackup3.config.VALID_DVD_MEDIA_TYPES CedarBackup3.config-module.html#VALID_DVD_MEDIA_TYPES CedarBackup3.config.readByteQuantity CedarBackup3.config-module.html#readByteQuantity CedarBackup3.config.addIntegerNode CedarBackup3.xmlutil-module.html#addIntegerNode CedarBackup3.customize CedarBackup3.customize-module.html CedarBackup3.customize.DEBIAN_MKISOFS CedarBackup3.customize-module.html#DEBIAN_MKISOFS CedarBackup3.customize.customizeOverrides CedarBackup3.customize-module.html#customizeOverrides CedarBackup3.customize.__package__ CedarBackup3.customize-module.html#__package__ CedarBackup3.customize.PLATFORM CedarBackup3.customize-module.html#PLATFORM CedarBackup3.customize.DEBIAN_CDRECORD CedarBackup3.customize-module.html#DEBIAN_CDRECORD 
CedarBackup3.customize.logger CedarBackup3.customize-module.html#logger CedarBackup3.extend CedarBackup3.extend-module.html CedarBackup3.extend.__package__ CedarBackup3.extend-module.html#__package__ CedarBackup3.extend.amazons3 CedarBackup3.extend.amazons3-module.html CedarBackup3.extend.amazons3.executeAction CedarBackup3.extend.amazons3-module.html#executeAction CedarBackup3.extend.amazons3.readFirstChild CedarBackup3.xmlutil-module.html#readFirstChild CedarBackup3.extend.amazons3.addContainerNode CedarBackup3.xmlutil-module.html#addContainerNode CedarBackup3.extend.amazons3.addStringNode CedarBackup3.xmlutil-module.html#addStringNode CedarBackup3.extend.amazons3.writeIndicatorFile CedarBackup3.actions.util-module.html#writeIndicatorFile CedarBackup3.extend.amazons3.SU_COMMAND CedarBackup3.extend.amazons3-module.html#SU_COMMAND CedarBackup3.extend.amazons3.isStartOfWeek CedarBackup3.util-module.html#isStartOfWeek CedarBackup3.extend.amazons3.readBoolean CedarBackup3.xmlutil-module.html#readBoolean CedarBackup3.extend.amazons3._verifyUpload CedarBackup3.extend.amazons3-module.html#_verifyUpload CedarBackup3.extend.amazons3.__package__ CedarBackup3.extend.amazons3-module.html#__package__ CedarBackup3.extend.amazons3.createInputDom CedarBackup3.xmlutil-module.html#createInputDom CedarBackup3.extend.amazons3.logger CedarBackup3.extend.amazons3-module.html#logger CedarBackup3.extend.amazons3.displayBytes CedarBackup3.util-module.html#displayBytes CedarBackup3.extend.amazons3._applySizeLimits CedarBackup3.extend.amazons3-module.html#_applySizeLimits CedarBackup3.extend.amazons3.AWS_COMMAND CedarBackup3.extend.amazons3-module.html#AWS_COMMAND CedarBackup3.extend.amazons3._clearExistingBackup CedarBackup3.extend.amazons3-module.html#_clearExistingBackup CedarBackup3.extend.amazons3.changeOwnership CedarBackup3.util-module.html#changeOwnership CedarBackup3.extend.amazons3.resolveCommand CedarBackup3.util-module.html#resolveCommand 
CedarBackup3.extend.amazons3.executeCommand CedarBackup3.util-module.html#executeCommand CedarBackup3.extend.amazons3._findCorrectDailyDir CedarBackup3.extend.amazons3-module.html#_findCorrectDailyDir CedarBackup3.extend.amazons3.readString CedarBackup3.xmlutil-module.html#readString CedarBackup3.extend.amazons3._writeToAmazonS3 CedarBackup3.extend.amazons3-module.html#_writeToAmazonS3 CedarBackup3.extend.amazons3.STORE_INDICATOR CedarBackup3.extend.amazons3-module.html#STORE_INDICATOR CedarBackup3.extend.amazons3.addBooleanNode CedarBackup3.xmlutil-module.html#addBooleanNode CedarBackup3.extend.amazons3.readByteQuantity CedarBackup3.config-module.html#readByteQuantity CedarBackup3.extend.amazons3._encryptStagingDir CedarBackup3.extend.amazons3-module.html#_encryptStagingDir CedarBackup3.extend.amazons3.addByteQuantityNode CedarBackup3.config-module.html#addByteQuantityNode CedarBackup3.extend.amazons3._uploadStagingDir CedarBackup3.extend.amazons3-module.html#_uploadStagingDir CedarBackup3.extend.amazons3.isRunningAsRoot CedarBackup3.util-module.html#isRunningAsRoot CedarBackup3.extend.amazons3._writeStoreIndicator CedarBackup3.extend.amazons3-module.html#_writeStoreIndicator CedarBackup3.extend.capacity CedarBackup3.extend.capacity-module.html CedarBackup3.extend.capacity.readByteQuantity CedarBackup3.config-module.html#readByteQuantity CedarBackup3.extend.capacity.displayBytes CedarBackup3.util-module.html#displayBytes CedarBackup3.extend.capacity.readFirstChild CedarBackup3.xmlutil-module.html#readFirstChild CedarBackup3.extend.capacity.executeAction CedarBackup3.extend.capacity-module.html#executeAction CedarBackup3.extend.capacity.addContainerNode CedarBackup3.xmlutil-module.html#addContainerNode CedarBackup3.extend.capacity.__package__ CedarBackup3.extend.capacity-module.html#__package__ CedarBackup3.extend.capacity.createInputDom CedarBackup3.xmlutil-module.html#createInputDom CedarBackup3.extend.capacity.addByteQuantityNode 
CedarBackup3.config-module.html#addByteQuantityNode CedarBackup3.extend.capacity.checkMediaState CedarBackup3.actions.util-module.html#checkMediaState CedarBackup3.extend.capacity.readString CedarBackup3.xmlutil-module.html#readString CedarBackup3.extend.capacity.logger CedarBackup3.extend.capacity-module.html#logger CedarBackup3.extend.capacity.addStringNode CedarBackup3.xmlutil-module.html#addStringNode CedarBackup3.extend.capacity.createWriter CedarBackup3.actions.util-module.html#createWriter CedarBackup3.extend.encrypt CedarBackup3.extend.encrypt-module.html CedarBackup3.extend.encrypt.executeAction CedarBackup3.extend.encrypt-module.html#executeAction CedarBackup3.extend.encrypt.readFirstChild CedarBackup3.xmlutil-module.html#readFirstChild CedarBackup3.extend.encrypt.addStringNode CedarBackup3.xmlutil-module.html#addStringNode CedarBackup3.extend.encrypt.writeIndicatorFile CedarBackup3.actions.util-module.html#writeIndicatorFile CedarBackup3.extend.encrypt._encryptFile CedarBackup3.extend.encrypt-module.html#_encryptFile CedarBackup3.extend.encrypt.addContainerNode CedarBackup3.xmlutil-module.html#addContainerNode CedarBackup3.extend.encrypt.__package__ CedarBackup3.extend.encrypt-module.html#__package__ CedarBackup3.extend.encrypt.createInputDom CedarBackup3.xmlutil-module.html#createInputDom CedarBackup3.extend.encrypt.resolveCommand CedarBackup3.util-module.html#resolveCommand CedarBackup3.extend.encrypt._encryptDailyDir CedarBackup3.extend.encrypt-module.html#_encryptDailyDir CedarBackup3.extend.encrypt.findDailyDirs CedarBackup3.actions.util-module.html#findDailyDirs CedarBackup3.extend.encrypt.logger CedarBackup3.extend.encrypt-module.html#logger CedarBackup3.extend.encrypt.changeOwnership CedarBackup3.util-module.html#changeOwnership CedarBackup3.extend.encrypt.getBackupFiles CedarBackup3.actions.util-module.html#getBackupFiles CedarBackup3.extend.encrypt.executeCommand CedarBackup3.util-module.html#executeCommand 
CedarBackup3.extend.encrypt.readString CedarBackup3.xmlutil-module.html#readString CedarBackup3.extend.encrypt.VALID_ENCRYPT_MODES CedarBackup3.extend.encrypt-module.html#VALID_ENCRYPT_MODES CedarBackup3.extend.encrypt._confirmGpgRecipient CedarBackup3.extend.encrypt-module.html#_confirmGpgRecipient CedarBackup3.extend.encrypt.GPG_COMMAND CedarBackup3.extend.encrypt-module.html#GPG_COMMAND CedarBackup3.extend.encrypt.ENCRYPT_INDICATOR CedarBackup3.extend.encrypt-module.html#ENCRYPT_INDICATOR CedarBackup3.extend.encrypt._encryptFileWithGpg CedarBackup3.extend.encrypt-module.html#_encryptFileWithGpg CedarBackup3.extend.mbox CedarBackup3.extend.mbox-module.html CedarBackup3.extend.mbox._getTarfilePath CedarBackup3.extend.mbox-module.html#_getTarfilePath CedarBackup3.extend.mbox._getCollectMode CedarBackup3.extend.mbox-module.html#_getCollectMode CedarBackup3.extend.mbox._getExclusions CedarBackup3.extend.mbox-module.html#_getExclusions CedarBackup3.extend.mbox.executeAction CedarBackup3.extend.mbox-module.html#executeAction CedarBackup3.extend.mbox._getOutputFile CedarBackup3.extend.mbox-module.html#_getOutputFile CedarBackup3.extend.mbox.readFirstChild CedarBackup3.xmlutil-module.html#readFirstChild CedarBackup3.extend.mbox.addContainerNode CedarBackup3.xmlutil-module.html#addContainerNode CedarBackup3.extend.mbox.GREPMAIL_COMMAND CedarBackup3.extend.mbox-module.html#GREPMAIL_COMMAND CedarBackup3.extend.mbox.isStartOfWeek CedarBackup3.util-module.html#isStartOfWeek CedarBackup3.extend.mbox.addStringNode CedarBackup3.xmlutil-module.html#addStringNode CedarBackup3.extend.mbox._getRevisionPath CedarBackup3.extend.mbox-module.html#_getRevisionPath CedarBackup3.extend.mbox.__package__ CedarBackup3.extend.mbox-module.html#__package__ CedarBackup3.extend.mbox.changeOwnership CedarBackup3.util-module.html#changeOwnership CedarBackup3.extend.mbox.createInputDom CedarBackup3.xmlutil-module.html#createInputDom CedarBackup3.extend.mbox.isElement 
CedarBackup3.xmlutil-module.html#isElement CedarBackup3.extend.mbox.logger CedarBackup3.extend.mbox-module.html#logger CedarBackup3.extend.mbox._backupMboxDir CedarBackup3.extend.mbox-module.html#_backupMboxDir CedarBackup3.extend.mbox._backupMboxFile CedarBackup3.extend.mbox-module.html#_backupMboxFile CedarBackup3.extend.mbox.resolveCommand CedarBackup3.util-module.html#resolveCommand CedarBackup3.extend.mbox.executeCommand CedarBackup3.util-module.html#executeCommand CedarBackup3.extend.mbox._getBackupPath CedarBackup3.extend.mbox-module.html#_getBackupPath CedarBackup3.extend.mbox.readChildren CedarBackup3.xmlutil-module.html#readChildren CedarBackup3.extend.mbox.buildNormalizedPath CedarBackup3.util-module.html#buildNormalizedPath CedarBackup3.extend.mbox.encodePath CedarBackup3.util-module.html#encodePath CedarBackup3.extend.mbox.readString CedarBackup3.xmlutil-module.html#readString CedarBackup3.extend.mbox._getCompressMode CedarBackup3.extend.mbox-module.html#_getCompressMode CedarBackup3.extend.mbox._writeNewRevision CedarBackup3.extend.mbox-module.html#_writeNewRevision CedarBackup3.extend.mbox._loadLastRevision CedarBackup3.extend.mbox-module.html#_loadLastRevision CedarBackup3.extend.mbox.REVISION_PATH_EXTENSION CedarBackup3.extend.mbox-module.html#REVISION_PATH_EXTENSION CedarBackup3.extend.mbox.readStringList CedarBackup3.xmlutil-module.html#readStringList CedarBackup3.extend.mysql CedarBackup3.extend.mysql-module.html CedarBackup3.extend.mysql.executeAction CedarBackup3.extend.mysql-module.html#executeAction CedarBackup3.extend.mysql.MYSQLDUMP_COMMAND CedarBackup3.extend.mysql-module.html#MYSQLDUMP_COMMAND CedarBackup3.extend.mysql.readFirstChild CedarBackup3.xmlutil-module.html#readFirstChild CedarBackup3.extend.mysql.readStringList CedarBackup3.xmlutil-module.html#readStringList CedarBackup3.extend.mysql.addStringNode CedarBackup3.xmlutil-module.html#addStringNode CedarBackup3.extend.mysql.readBoolean CedarBackup3.xmlutil-module.html#readBoolean 
CedarBackup3.extend.mysql.addContainerNode CedarBackup3.xmlutil-module.html#addContainerNode CedarBackup3.extend.mysql.__package__ CedarBackup3.extend.mysql-module.html#__package__ CedarBackup3.extend.mysql.createInputDom CedarBackup3.xmlutil-module.html#createInputDom CedarBackup3.extend.mysql.logger CedarBackup3.extend.mysql-module.html#logger CedarBackup3.extend.mysql.backupDatabase CedarBackup3.extend.mysql-module.html#backupDatabase CedarBackup3.extend.mysql._getOutputFile CedarBackup3.extend.mysql-module.html#_getOutputFile CedarBackup3.extend.mysql.changeOwnership CedarBackup3.util-module.html#changeOwnership CedarBackup3.extend.mysql.resolveCommand CedarBackup3.util-module.html#resolveCommand CedarBackup3.extend.mysql.executeCommand CedarBackup3.util-module.html#executeCommand CedarBackup3.extend.mysql.readString CedarBackup3.xmlutil-module.html#readString CedarBackup3.extend.mysql._backupDatabase CedarBackup3.extend.mysql-module.html#_backupDatabase CedarBackup3.extend.mysql.addBooleanNode CedarBackup3.xmlutil-module.html#addBooleanNode CedarBackup3.extend.postgresql CedarBackup3.extend.postgresql-module.html CedarBackup3.extend.postgresql.executeAction CedarBackup3.extend.postgresql-module.html#executeAction CedarBackup3.extend.postgresql.readFirstChild CedarBackup3.xmlutil-module.html#readFirstChild CedarBackup3.extend.postgresql.readStringList CedarBackup3.xmlutil-module.html#readStringList CedarBackup3.extend.postgresql.addStringNode CedarBackup3.xmlutil-module.html#addStringNode CedarBackup3.extend.postgresql.readBoolean CedarBackup3.xmlutil-module.html#readBoolean CedarBackup3.extend.postgresql.addContainerNode CedarBackup3.xmlutil-module.html#addContainerNode CedarBackup3.extend.postgresql.__package__ CedarBackup3.extend.postgresql-module.html#__package__ CedarBackup3.extend.postgresql.createInputDom CedarBackup3.xmlutil-module.html#createInputDom CedarBackup3.extend.postgresql.logger CedarBackup3.extend.postgresql-module.html#logger 
CedarBackup3.extend.postgresql.backupDatabase CedarBackup3.extend.postgresql-module.html#backupDatabase CedarBackup3.extend.postgresql._getOutputFile CedarBackup3.extend.postgresql-module.html#_getOutputFile CedarBackup3.extend.postgresql.changeOwnership CedarBackup3.util-module.html#changeOwnership CedarBackup3.extend.postgresql.resolveCommand CedarBackup3.util-module.html#resolveCommand CedarBackup3.extend.postgresql.executeCommand CedarBackup3.util-module.html#executeCommand CedarBackup3.extend.postgresql.readString CedarBackup3.xmlutil-module.html#readString CedarBackup3.extend.postgresql.POSTGRESQLDUMP_COMMAND CedarBackup3.extend.postgresql-module.html#POSTGRESQLDUMP_COMMAND CedarBackup3.extend.postgresql._backupDatabase CedarBackup3.extend.postgresql-module.html#_backupDatabase CedarBackup3.extend.postgresql.addBooleanNode CedarBackup3.xmlutil-module.html#addBooleanNode CedarBackup3.extend.postgresql.POSTGRESQLDUMPALL_COMMAND CedarBackup3.extend.postgresql-module.html#POSTGRESQLDUMPALL_COMMAND CedarBackup3.extend.split CedarBackup3.extend.split-module.html CedarBackup3.extend.split.executeAction CedarBackup3.extend.split-module.html#executeAction CedarBackup3.extend.split.readFirstChild CedarBackup3.xmlutil-module.html#readFirstChild CedarBackup3.extend.split.writeIndicatorFile CedarBackup3.actions.util-module.html#writeIndicatorFile CedarBackup3.extend.split._splitFile CedarBackup3.extend.split-module.html#_splitFile CedarBackup3.extend.split.SPLIT_COMMAND CedarBackup3.extend.split-module.html#SPLIT_COMMAND CedarBackup3.extend.split.addContainerNode CedarBackup3.xmlutil-module.html#addContainerNode CedarBackup3.extend.split.__package__ CedarBackup3.extend.split-module.html#__package__ CedarBackup3.extend.split.createInputDom CedarBackup3.xmlutil-module.html#createInputDom CedarBackup3.extend.split.resolveCommand CedarBackup3.util-module.html#resolveCommand CedarBackup3.extend.split.logger CedarBackup3.extend.split-module.html#logger 
CedarBackup3.extend.split._splitDailyDir CedarBackup3.extend.split-module.html#_splitDailyDir CedarBackup3.extend.split.findDailyDirs CedarBackup3.actions.util-module.html#findDailyDirs CedarBackup3.extend.split.changeOwnership CedarBackup3.util-module.html#changeOwnership CedarBackup3.extend.split.getBackupFiles CedarBackup3.actions.util-module.html#getBackupFiles CedarBackup3.extend.split.executeCommand CedarBackup3.util-module.html#executeCommand CedarBackup3.extend.split.SPLIT_INDICATOR CedarBackup3.extend.split-module.html#SPLIT_INDICATOR CedarBackup3.extend.split.addByteQuantityNode CedarBackup3.config-module.html#addByteQuantityNode CedarBackup3.extend.split.readByteQuantity CedarBackup3.config-module.html#readByteQuantity CedarBackup3.extend.subversion CedarBackup3.extend.subversion-module.html CedarBackup3.extend.subversion._getCollectMode CedarBackup3.extend.subversion-module.html#_getCollectMode CedarBackup3.extend.subversion.SVNADMIN_COMMAND CedarBackup3.extend.subversion-module.html#SVNADMIN_COMMAND CedarBackup3.extend.subversion._getExclusions CedarBackup3.extend.subversion-module.html#_getExclusions CedarBackup3.extend.subversion.executeAction CedarBackup3.extend.subversion-module.html#executeAction CedarBackup3.extend.subversion.readFirstChild CedarBackup3.xmlutil-module.html#readFirstChild CedarBackup3.extend.subversion.SVNLOOK_COMMAND CedarBackup3.extend.subversion-module.html#SVNLOOK_COMMAND CedarBackup3.extend.subversion.readStringList CedarBackup3.xmlutil-module.html#readStringList CedarBackup3.extend.subversion.addStringNode CedarBackup3.xmlutil-module.html#addStringNode CedarBackup3.extend.subversion.isStartOfWeek CedarBackup3.util-module.html#isStartOfWeek CedarBackup3.extend.subversion._getRepositoryPaths CedarBackup3.extend.subversion-module.html#_getRepositoryPaths CedarBackup3.extend.subversion._getRevisionPath CedarBackup3.extend.subversion-module.html#_getRevisionPath CedarBackup3.extend.subversion.addContainerNode 
CedarBackup3.xmlutil-module.html#addContainerNode CedarBackup3.extend.subversion.__package__ CedarBackup3.extend.subversion-module.html#__package__ CedarBackup3.extend.subversion.createInputDom CedarBackup3.xmlutil-module.html#createInputDom CedarBackup3.extend.subversion.isElement CedarBackup3.xmlutil-module.html#isElement CedarBackup3.extend.subversion.logger CedarBackup3.extend.subversion-module.html#logger CedarBackup3.extend.subversion.executeCommand CedarBackup3.util-module.html#executeCommand CedarBackup3.extend.subversion._getOutputFile CedarBackup3.extend.subversion-module.html#_getOutputFile CedarBackup3.extend.subversion.changeOwnership CedarBackup3.util-module.html#changeOwnership CedarBackup3.extend.subversion.resolveCommand CedarBackup3.util-module.html#resolveCommand CedarBackup3.extend.subversion.backupRepository CedarBackup3.extend.subversion-module.html#backupRepository CedarBackup3.extend.subversion._getBackupPath CedarBackup3.extend.subversion-module.html#_getBackupPath CedarBackup3.extend.subversion.readChildren CedarBackup3.xmlutil-module.html#readChildren CedarBackup3.extend.subversion.buildNormalizedPath CedarBackup3.util-module.html#buildNormalizedPath CedarBackup3.extend.subversion.encodePath CedarBackup3.util-module.html#encodePath CedarBackup3.extend.subversion.readString CedarBackup3.xmlutil-module.html#readString CedarBackup3.extend.subversion.getYoungestRevision CedarBackup3.extend.subversion-module.html#getYoungestRevision CedarBackup3.extend.subversion._writeLastRevision CedarBackup3.extend.subversion-module.html#_writeLastRevision CedarBackup3.extend.subversion._getCompressMode CedarBackup3.extend.subversion-module.html#_getCompressMode CedarBackup3.extend.subversion.backupBDBRepository CedarBackup3.extend.subversion-module.html#backupBDBRepository CedarBackup3.extend.subversion._backupRepository CedarBackup3.extend.subversion-module.html#_backupRepository CedarBackup3.extend.subversion._loadLastRevision 
CedarBackup3.extend.subversion-module.html#_loadLastRevision CedarBackup3.extend.subversion.REVISION_PATH_EXTENSION CedarBackup3.extend.subversion-module.html#REVISION_PATH_EXTENSION CedarBackup3.extend.subversion.backupFSFSRepository CedarBackup3.extend.subversion-module.html#backupFSFSRepository CedarBackup3.extend.sysinfo CedarBackup3.extend.sysinfo-module.html CedarBackup3.extend.sysinfo._getOutputFile CedarBackup3.extend.sysinfo-module.html#_getOutputFile CedarBackup3.extend.sysinfo.logger CedarBackup3.extend.sysinfo-module.html#logger CedarBackup3.extend.sysinfo.DPKG_PATH CedarBackup3.extend.sysinfo-module.html#DPKG_PATH CedarBackup3.extend.sysinfo.FDISK_COMMAND CedarBackup3.extend.sysinfo-module.html#FDISK_COMMAND CedarBackup3.extend.sysinfo.changeOwnership CedarBackup3.util-module.html#changeOwnership CedarBackup3.extend.sysinfo._dumpPartitionTable CedarBackup3.extend.sysinfo-module.html#_dumpPartitionTable CedarBackup3.extend.sysinfo.executeAction CedarBackup3.extend.sysinfo-module.html#executeAction CedarBackup3.extend.sysinfo.DPKG_COMMAND CedarBackup3.extend.sysinfo-module.html#DPKG_COMMAND CedarBackup3.extend.sysinfo.LS_COMMAND CedarBackup3.extend.sysinfo-module.html#LS_COMMAND CedarBackup3.extend.sysinfo._dumpFilesystemContents CedarBackup3.extend.sysinfo-module.html#_dumpFilesystemContents CedarBackup3.extend.sysinfo.__package__ CedarBackup3.extend.sysinfo-module.html#__package__ CedarBackup3.extend.sysinfo.resolveCommand CedarBackup3.util-module.html#resolveCommand CedarBackup3.extend.sysinfo._dumpDebianPackages CedarBackup3.extend.sysinfo-module.html#_dumpDebianPackages CedarBackup3.extend.sysinfo.executeCommand CedarBackup3.util-module.html#executeCommand CedarBackup3.extend.sysinfo.FDISK_PATH CedarBackup3.extend.sysinfo-module.html#FDISK_PATH CedarBackup3.filesystem CedarBackup3.filesystem-module.html CedarBackup3.filesystem.normalizeDir CedarBackup3.filesystem-module.html#normalizeDir CedarBackup3.filesystem.firstFit 
CedarBackup3.knapsack-module.html#firstFit CedarBackup3.filesystem.calculateFileAge CedarBackup3.util-module.html#calculateFileAge CedarBackup3.filesystem.removeKeys CedarBackup3.util-module.html#removeKeys CedarBackup3.filesystem.alternateFit CedarBackup3.knapsack-module.html#alternateFit CedarBackup3.filesystem.__package__ CedarBackup3.filesystem-module.html#__package__ CedarBackup3.filesystem.worstFit CedarBackup3.knapsack-module.html#worstFit CedarBackup3.filesystem.logger CedarBackup3.filesystem-module.html#logger CedarBackup3.filesystem.displayBytes CedarBackup3.util-module.html#displayBytes CedarBackup3.filesystem.encodePath CedarBackup3.util-module.html#encodePath CedarBackup3.filesystem.bestFit CedarBackup3.knapsack-module.html#bestFit CedarBackup3.filesystem.compareDigestMaps CedarBackup3.filesystem-module.html#compareDigestMaps CedarBackup3.filesystem.compareContents CedarBackup3.filesystem-module.html#compareContents CedarBackup3.filesystem.dereferenceLink CedarBackup3.util-module.html#dereferenceLink CedarBackup3.image CedarBackup3.image-module.html CedarBackup3.image.__package__ CedarBackup3.image-module.html#__package__ CedarBackup3.knapsack CedarBackup3.knapsack-module.html CedarBackup3.knapsack.bestFit CedarBackup3.knapsack-module.html#bestFit CedarBackup3.knapsack.firstFit CedarBackup3.knapsack-module.html#firstFit CedarBackup3.knapsack.alternateFit CedarBackup3.knapsack-module.html#alternateFit CedarBackup3.knapsack.worstFit CedarBackup3.knapsack-module.html#worstFit CedarBackup3.knapsack.__package__ CedarBackup3.knapsack-module.html#__package__ CedarBackup3.peer CedarBackup3.peer-module.html CedarBackup3.peer.SU_COMMAND CedarBackup3.peer-module.html#SU_COMMAND CedarBackup3.peer.DEF_CBACK_COMMAND CedarBackup3.peer-module.html#DEF_CBACK_COMMAND CedarBackup3.peer.DEF_RSH_COMMAND CedarBackup3.peer-module.html#DEF_RSH_COMMAND CedarBackup3.peer.DEF_STAGE_INDICATOR CedarBackup3.peer-module.html#DEF_STAGE_INDICATOR CedarBackup3.peer.splitCommandLine 
CedarBackup3.util-module.html#splitCommandLine CedarBackup3.peer.resolveCommand CedarBackup3.util-module.html#resolveCommand CedarBackup3.peer.executeCommand CedarBackup3.util-module.html#executeCommand CedarBackup3.peer.__package__ CedarBackup3.peer-module.html#__package__ CedarBackup3.peer.DEF_COLLECT_INDICATOR CedarBackup3.peer-module.html#DEF_COLLECT_INDICATOR CedarBackup3.peer.encodePath CedarBackup3.util-module.html#encodePath CedarBackup3.peer.DEF_RCP_COMMAND CedarBackup3.peer-module.html#DEF_RCP_COMMAND CedarBackup3.peer.isRunningAsRoot CedarBackup3.util-module.html#isRunningAsRoot CedarBackup3.peer.logger CedarBackup3.peer-module.html#logger CedarBackup3.release CedarBackup3.release-module.html CedarBackup3.release.COPYRIGHT CedarBackup3.release-module.html#COPYRIGHT CedarBackup3.release.AUTHOR CedarBackup3.release-module.html#AUTHOR CedarBackup3.release.URL CedarBackup3.release-module.html#URL CedarBackup3.release.__package__ CedarBackup3.release-module.html#__package__ CedarBackup3.release.VERSION CedarBackup3.release-module.html#VERSION CedarBackup3.release.DATE CedarBackup3.release-module.html#DATE CedarBackup3.release.EMAIL CedarBackup3.release-module.html#EMAIL CedarBackup3.testutil CedarBackup3.testutil-module.html CedarBackup3.testutil.changeFileAge CedarBackup3.testutil-module.html#changeFileAge CedarBackup3.testutil.setupDebugLogger CedarBackup3.testutil-module.html#setupDebugLogger CedarBackup3.testutil.getLogin CedarBackup3.testutil-module.html#getLogin CedarBackup3.testutil.buildPath CedarBackup3.testutil-module.html#buildPath CedarBackup3.testutil._isPlatform CedarBackup3.testutil-module.html#_isPlatform CedarBackup3.testutil.platformDebian CedarBackup3.testutil-module.html#platformDebian CedarBackup3.testutil.findResources CedarBackup3.testutil-module.html#findResources CedarBackup3.testutil.customizeOverrides CedarBackup3.customize-module.html#customizeOverrides CedarBackup3.testutil.captureOutput 
CedarBackup3.testutil-module.html#captureOutput CedarBackup3.testutil.__package__ CedarBackup3.testutil-module.html#__package__ CedarBackup3.testutil.extractTar CedarBackup3.testutil-module.html#extractTar CedarBackup3.testutil.randomFilename CedarBackup3.testutil-module.html#randomFilename CedarBackup3.testutil.commandAvailable CedarBackup3.testutil-module.html#commandAvailable CedarBackup3.testutil.platformMacOsX CedarBackup3.testutil-module.html#platformMacOsX CedarBackup3.testutil.getMaskAsMode CedarBackup3.testutil-module.html#getMaskAsMode CedarBackup3.testutil.executeCommand CedarBackup3.util-module.html#executeCommand CedarBackup3.testutil.removedir CedarBackup3.testutil-module.html#removedir CedarBackup3.testutil.availableLocales CedarBackup3.testutil-module.html#availableLocales CedarBackup3.testutil.setupOverrides CedarBackup3.testutil-module.html#setupOverrides CedarBackup3.testutil.encodePath CedarBackup3.util-module.html#encodePath CedarBackup3.testutil.runningAsRoot CedarBackup3.testutil-module.html#runningAsRoot CedarBackup3.testutil.failUnlessAssignRaises CedarBackup3.testutil-module.html#failUnlessAssignRaises CedarBackup3.testutil.setupPathResolver CedarBackup3.cli-module.html#setupPathResolver CedarBackup3.tools CedarBackup3.tools-module.html CedarBackup3.tools.__package__ CedarBackup3.tools-module.html#__package__ CedarBackup3.tools.amazons3 CedarBackup3.tools.amazons3-module.html CedarBackup3.tools.amazons3.SHORT_SWITCHES CedarBackup3.tools.amazons3-module.html#SHORT_SWITCHES CedarBackup3.tools.amazons3._buildSourceFiles CedarBackup3.tools.amazons3-module.html#_buildSourceFiles CedarBackup3.tools.amazons3.LONG_SWITCHES CedarBackup3.tools.amazons3-module.html#LONG_SWITCHES CedarBackup3.tools.amazons3._usage CedarBackup3.tools.amazons3-module.html#_usage CedarBackup3.tools.amazons3.cli CedarBackup3.tools.amazons3-module.html#cli CedarBackup3.tools.amazons3._synchronizeBucket CedarBackup3.tools.amazons3-module.html#_synchronizeBucket 
CedarBackup3.tools.amazons3._executeAction CedarBackup3.tools.amazons3-module.html#_executeAction CedarBackup3.tools.amazons3.logger CedarBackup3.tools.amazons3-module.html#logger CedarBackup3.tools.amazons3.AWS_COMMAND CedarBackup3.tools.amazons3-module.html#AWS_COMMAND CedarBackup3.tools.amazons3._checkSourceFiles CedarBackup3.tools.amazons3-module.html#_checkSourceFiles CedarBackup3.tools.amazons3._diagnostics CedarBackup3.tools.amazons3-module.html#_diagnostics CedarBackup3.tools.amazons3._version CedarBackup3.tools.amazons3-module.html#_version CedarBackup3.tools.amazons3._verifyBucketContents CedarBackup3.tools.amazons3-module.html#_verifyBucketContents CedarBackup3.tools.span CedarBackup3.tools.span-module.html CedarBackup3.tools.span._writeDisc CedarBackup3.tools.span-module.html#_writeDisc CedarBackup3.tools.span.normalizeDir CedarBackup3.filesystem-module.html#normalizeDir CedarBackup3.tools.span._discWriteImage CedarBackup3.tools.span-module.html#_discWriteImage CedarBackup3.tools.span.compareDigestMaps CedarBackup3.filesystem-module.html#compareDigestMaps CedarBackup3.tools.span._getFloat CedarBackup3.tools.span-module.html#_getFloat CedarBackup3.tools.span._getReturn CedarBackup3.tools.span-module.html#_getReturn CedarBackup3.tools.span._usage CedarBackup3.tools.span-module.html#_usage CedarBackup3.tools.span._getChoiceAnswer CedarBackup3.tools.span-module.html#_getChoiceAnswer CedarBackup3.tools.span.unmount CedarBackup3.util-module.html#unmount CedarBackup3.tools.span._discConsistencyCheck CedarBackup3.tools.span-module.html#_discConsistencyCheck CedarBackup3.tools.span.convertSize CedarBackup3.util-module.html#convertSize CedarBackup3.tools.span._findDailyDirs CedarBackup3.tools.span-module.html#_findDailyDirs CedarBackup3.tools.span.__package__ CedarBackup3.tools.span-module.html#__package__ CedarBackup3.tools.span._executeAction CedarBackup3.tools.span-module.html#_executeAction CedarBackup3.tools.span._discInitializeImage 
CedarBackup3.tools.span-module.html#_discInitializeImage CedarBackup3.tools.span.setupLogging CedarBackup3.cli-module.html#setupLogging CedarBackup3.tools.span._getWriter CedarBackup3.tools.span-module.html#_getWriter CedarBackup3.tools.span.displayBytes CedarBackup3.util-module.html#displayBytes CedarBackup3.tools.span.findDailyDirs CedarBackup3.actions.util-module.html#findDailyDirs CedarBackup3.tools.span.logger CedarBackup3.tools.span-module.html#logger CedarBackup3.tools.span._consistencyCheck CedarBackup3.tools.span-module.html#_consistencyCheck CedarBackup3.tools.span._getYesNoAnswer CedarBackup3.tools.span-module.html#_getYesNoAnswer CedarBackup3.tools.span.cli CedarBackup3.tools.span-module.html#cli CedarBackup3.tools.span.createWriter CedarBackup3.actions.util-module.html#createWriter CedarBackup3.tools.span._diagnostics CedarBackup3.tools.span-module.html#_diagnostics CedarBackup3.tools.span._version CedarBackup3.tools.span-module.html#_version CedarBackup3.tools.span.setupPathResolver CedarBackup3.cli-module.html#setupPathResolver CedarBackup3.tools.span.writeIndicatorFile CedarBackup3.actions.util-module.html#writeIndicatorFile CedarBackup3.tools.span.mount CedarBackup3.util-module.html#mount CedarBackup3.tools.span._writeStoreIndicator CedarBackup3.tools.span-module.html#_writeStoreIndicator CedarBackup3.util CedarBackup3.util-module.html CedarBackup3.util.SECONDS_PER_DAY CedarBackup3.util-module.html#SECONDS_PER_DAY CedarBackup3.util.unmount CedarBackup3.util-module.html#unmount CedarBackup3.util.UNIT_BYTES CedarBackup3.util-module.html#UNIT_BYTES CedarBackup3.util.parseCommaSeparatedString CedarBackup3.util-module.html#parseCommaSeparatedString CedarBackup3.util.UNIT_SECTORS CedarBackup3.util-module.html#UNIT_SECTORS CedarBackup3.util.getUidGid CedarBackup3.util-module.html#getUidGid CedarBackup3.util._UID_GID_AVAILABLE CedarBackup3.util-module.html#_UID_GID_AVAILABLE CedarBackup3.util.getFunctionReference 
CedarBackup3.util-module.html#getFunctionReference CedarBackup3.util.deriveDayOfWeek CedarBackup3.util-module.html#deriveDayOfWeek CedarBackup3.util.HOURS_PER_DAY CedarBackup3.util-module.html#HOURS_PER_DAY CedarBackup3.util.BYTES_PER_MBYTE CedarBackup3.util-module.html#BYTES_PER_MBYTE CedarBackup3.util.removeKeys CedarBackup3.util-module.html#removeKeys CedarBackup3.util.deviceMounted CedarBackup3.util-module.html#deviceMounted CedarBackup3.util.isStartOfWeek CedarBackup3.util-module.html#isStartOfWeek CedarBackup3.util.buildNormalizedPath CedarBackup3.util-module.html#buildNormalizedPath CedarBackup3.util.sanitizeEnvironment CedarBackup3.util-module.html#sanitizeEnvironment CedarBackup3.util.UNIT_MBYTES CedarBackup3.util-module.html#UNIT_MBYTES CedarBackup3.util.convertSize CedarBackup3.util-module.html#convertSize CedarBackup3.util.UNIT_KBYTES CedarBackup3.util-module.html#UNIT_KBYTES CedarBackup3.util.DEFAULT_LANGUAGE CedarBackup3.util-module.html#DEFAULT_LANGUAGE CedarBackup3.util.UNIT_GBYTES CedarBackup3.util-module.html#UNIT_GBYTES CedarBackup3.util.__package__ CedarBackup3.util-module.html#__package__ CedarBackup3.util.nullDevice CedarBackup3.util-module.html#nullDevice CedarBackup3.util.resolveCommand CedarBackup3.util-module.html#resolveCommand CedarBackup3.util.UMOUNT_COMMAND CedarBackup3.util-module.html#UMOUNT_COMMAND CedarBackup3.util.MBYTES_PER_GBYTE CedarBackup3.util-module.html#MBYTES_PER_GBYTE CedarBackup3.util.displayBytes CedarBackup3.util-module.html#displayBytes CedarBackup3.util.executeCommand CedarBackup3.util-module.html#executeCommand CedarBackup3.util.MOUNT_COMMAND CedarBackup3.util-module.html#MOUNT_COMMAND CedarBackup3.util.logger CedarBackup3.util-module.html#logger CedarBackup3.util.changeOwnership CedarBackup3.util-module.html#changeOwnership CedarBackup3.util.SECONDS_PER_MINUTE CedarBackup3.util-module.html#SECONDS_PER_MINUTE CedarBackup3.util.LOCALE_VARS CedarBackup3.util-module.html#LOCALE_VARS CedarBackup3.util.MTAB_FILE 
CedarBackup3.util-module.html#MTAB_FILE CedarBackup3.util.encodePath CedarBackup3.util-module.html#encodePath CedarBackup3.util.BYTES_PER_SECTOR CedarBackup3.util-module.html#BYTES_PER_SECTOR CedarBackup3.util.KBYTES_PER_MBYTE CedarBackup3.util-module.html#KBYTES_PER_MBYTE CedarBackup3.util.LANG_VAR CedarBackup3.util-module.html#LANG_VAR CedarBackup3.util.MINUTES_PER_HOUR CedarBackup3.util-module.html#MINUTES_PER_HOUR CedarBackup3.util.BYTES_PER_KBYTE CedarBackup3.util-module.html#BYTES_PER_KBYTE CedarBackup3.util.sortDict CedarBackup3.util-module.html#sortDict CedarBackup3.util.isRunningAsRoot CedarBackup3.util-module.html#isRunningAsRoot CedarBackup3.util.splitCommandLine CedarBackup3.util-module.html#splitCommandLine CedarBackup3.util.outputLogger CedarBackup3.util-module.html#outputLogger CedarBackup3.util.BYTES_PER_GBYTE CedarBackup3.util-module.html#BYTES_PER_GBYTE CedarBackup3.util.calculateFileAge CedarBackup3.util-module.html#calculateFileAge CedarBackup3.util.checkUnique CedarBackup3.util-module.html#checkUnique CedarBackup3.util.ISO_SECTOR_SIZE CedarBackup3.util-module.html#ISO_SECTOR_SIZE CedarBackup3.util.mount CedarBackup3.util-module.html#mount CedarBackup3.util.dereferenceLink CedarBackup3.util-module.html#dereferenceLink CedarBackup3.writer CedarBackup3.writer-module.html CedarBackup3.writer.validateScsiId CedarBackup3.writers.util-module.html#validateScsiId CedarBackup3.writer.__package__ CedarBackup3.writer-module.html#__package__ CedarBackup3.writer.validateDriveSpeed CedarBackup3.writers.util-module.html#validateDriveSpeed CedarBackup3.writers CedarBackup3.writers-module.html CedarBackup3.writers.__package__ CedarBackup3.writers-module.html#__package__ CedarBackup3.writers.cdwriter CedarBackup3.writers.cdwriter-module.html CedarBackup3.writers.cdwriter.validateScsiId CedarBackup3.writers.util-module.html#validateScsiId CedarBackup3.writers.cdwriter.validateDriveSpeed CedarBackup3.writers.util-module.html#validateDriveSpeed 
CedarBackup3.writers.cdwriter.convertSize CedarBackup3.util-module.html#convertSize CedarBackup3.writers.cdwriter.MEDIA_CDRW_80 CedarBackup3.writers.cdwriter-module.html#MEDIA_CDRW_80 CedarBackup3.writers.cdwriter.__package__ CedarBackup3.writers.cdwriter-module.html#__package__ CedarBackup3.writers.cdwriter.CDRECORD_COMMAND CedarBackup3.writers.cdwriter-module.html#CDRECORD_COMMAND CedarBackup3.writers.cdwriter.logger CedarBackup3.writers.cdwriter-module.html#logger CedarBackup3.writers.cdwriter.displayBytes CedarBackup3.util-module.html#displayBytes CedarBackup3.writers.cdwriter.EJECT_COMMAND CedarBackup3.writers.cdwriter-module.html#EJECT_COMMAND CedarBackup3.writers.cdwriter.validateDevice CedarBackup3.writers.util-module.html#validateDevice CedarBackup3.writers.cdwriter.resolveCommand CedarBackup3.util-module.html#resolveCommand CedarBackup3.writers.cdwriter.executeCommand CedarBackup3.util-module.html#executeCommand CedarBackup3.writers.cdwriter.MEDIA_CDRW_74 CedarBackup3.writers.cdwriter-module.html#MEDIA_CDRW_74 CedarBackup3.writers.cdwriter.encodePath CedarBackup3.util-module.html#encodePath CedarBackup3.writers.cdwriter.MKISOFS_COMMAND CedarBackup3.writers.cdwriter-module.html#MKISOFS_COMMAND CedarBackup3.writers.cdwriter.MEDIA_CDR_80 CedarBackup3.writers.cdwriter-module.html#MEDIA_CDR_80 CedarBackup3.writers.cdwriter.MEDIA_CDR_74 CedarBackup3.writers.cdwriter-module.html#MEDIA_CDR_74 CedarBackup3.writers.dvdwriter CedarBackup3.writers.dvdwriter-module.html CedarBackup3.writers.dvdwriter.MEDIA_DVDPLUSR CedarBackup3.writers.dvdwriter-module.html#MEDIA_DVDPLUSR CedarBackup3.writers.dvdwriter.validateDriveSpeed CedarBackup3.writers.util-module.html#validateDriveSpeed CedarBackup3.writers.dvdwriter.convertSize CedarBackup3.util-module.html#convertSize CedarBackup3.writers.dvdwriter.__package__ CedarBackup3.writers.dvdwriter-module.html#__package__ CedarBackup3.writers.dvdwriter.logger CedarBackup3.writers.dvdwriter-module.html#logger 
CedarBackup3.writers.dvdwriter.displayBytes CedarBackup3.util-module.html#displayBytes CedarBackup3.writers.dvdwriter.EJECT_COMMAND CedarBackup3.writers.dvdwriter-module.html#EJECT_COMMAND CedarBackup3.writers.dvdwriter.MEDIA_DVDPLUSRW CedarBackup3.writers.dvdwriter-module.html#MEDIA_DVDPLUSRW CedarBackup3.writers.dvdwriter.validateDevice CedarBackup3.writers.util-module.html#validateDevice CedarBackup3.writers.dvdwriter.resolveCommand CedarBackup3.util-module.html#resolveCommand CedarBackup3.writers.dvdwriter.executeCommand CedarBackup3.util-module.html#executeCommand CedarBackup3.writers.dvdwriter.encodePath CedarBackup3.util-module.html#encodePath CedarBackup3.writers.dvdwriter.GROWISOFS_COMMAND CedarBackup3.writers.dvdwriter-module.html#GROWISOFS_COMMAND CedarBackup3.writers.util CedarBackup3.writers.util-module.html CedarBackup3.writers.util.validateDevice CedarBackup3.writers.util-module.html#validateDevice CedarBackup3.writers.util.convertSize CedarBackup3.util-module.html#convertSize CedarBackup3.writers.util.executeCommand CedarBackup3.util-module.html#executeCommand CedarBackup3.writers.util.VOLNAME_COMMAND CedarBackup3.writers.util-module.html#VOLNAME_COMMAND CedarBackup3.writers.util.validateScsiId CedarBackup3.writers.util-module.html#validateScsiId CedarBackup3.writers.util.__package__ CedarBackup3.writers.util-module.html#__package__ CedarBackup3.writers.util.readMediaLabel CedarBackup3.writers.util-module.html#readMediaLabel CedarBackup3.writers.util.resolveCommand CedarBackup3.util-module.html#resolveCommand CedarBackup3.writers.util.encodePath CedarBackup3.util-module.html#encodePath CedarBackup3.writers.util.logger CedarBackup3.writers.util-module.html#logger CedarBackup3.writers.util.MKISOFS_COMMAND CedarBackup3.writers.util-module.html#MKISOFS_COMMAND CedarBackup3.writers.util.validateDriveSpeed CedarBackup3.writers.util-module.html#validateDriveSpeed CedarBackup3.xmlutil CedarBackup3.xmlutil-module.html CedarBackup3.xmlutil.readFloat 
CedarBackup3.xmlutil-module.html#readFloat CedarBackup3.xmlutil.addLongNode CedarBackup3.xmlutil-module.html#addLongNode CedarBackup3.xmlutil.readFirstChild CedarBackup3.xmlutil-module.html#readFirstChild CedarBackup3.xmlutil._translateCDATAAttr CedarBackup3.xmlutil-module.html#_translateCDATAAttr CedarBackup3.xmlutil.TRUE_BOOLEAN_VALUES CedarBackup3.xmlutil-module.html#TRUE_BOOLEAN_VALUES CedarBackup3.xmlutil.readStringList CedarBackup3.xmlutil-module.html#readStringList CedarBackup3.xmlutil.addStringNode CedarBackup3.xmlutil-module.html#addStringNode CedarBackup3.xmlutil.serializeDom CedarBackup3.xmlutil-module.html#serializeDom CedarBackup3.xmlutil.readInteger CedarBackup3.xmlutil-module.html#readInteger CedarBackup3.xmlutil.VALID_BOOLEAN_VALUES CedarBackup3.xmlutil-module.html#VALID_BOOLEAN_VALUES CedarBackup3.xmlutil.readBoolean CedarBackup3.xmlutil-module.html#readBoolean CedarBackup3.xmlutil.addContainerNode CedarBackup3.xmlutil-module.html#addContainerNode CedarBackup3.xmlutil.__package__ CedarBackup3.xmlutil-module.html#__package__ CedarBackup3.xmlutil.createInputDom CedarBackup3.xmlutil-module.html#createInputDom CedarBackup3.xmlutil.isElement CedarBackup3.xmlutil-module.html#isElement CedarBackup3.xmlutil.logger CedarBackup3.xmlutil-module.html#logger CedarBackup3.xmlutil._encodeText CedarBackup3.xmlutil-module.html#_encodeText CedarBackup3.xmlutil.readChildren CedarBackup3.xmlutil-module.html#readChildren CedarBackup3.xmlutil.FALSE_BOOLEAN_VALUES CedarBackup3.xmlutil-module.html#FALSE_BOOLEAN_VALUES CedarBackup3.xmlutil.readString CedarBackup3.xmlutil-module.html#readString CedarBackup3.xmlutil.createOutputDom CedarBackup3.xmlutil-module.html#createOutputDom CedarBackup3.xmlutil.addBooleanNode CedarBackup3.xmlutil-module.html#addBooleanNode CedarBackup3.xmlutil.readLong CedarBackup3.xmlutil-module.html#readLong CedarBackup3.xmlutil.addIntegerNode CedarBackup3.xmlutil-module.html#addIntegerNode CedarBackup3.xmlutil._translateCDATA 
CedarBackup3.xmlutil-module.html#_translateCDATA CedarBackup3.cli.Options CedarBackup3.cli.Options-class.html CedarBackup3.cli.Options._getMode CedarBackup3.cli.Options-class.html#_getMode CedarBackup3.cli.Options.stacktrace CedarBackup3.cli.Options-class.html#stacktrace CedarBackup3.cli.Options.managed CedarBackup3.cli.Options-class.html#managed CedarBackup3.cli.Options.help CedarBackup3.cli.Options-class.html#help CedarBackup3.cli.Options._getFull CedarBackup3.cli.Options-class.html#_getFull CedarBackup3.cli.Options.__str__ CedarBackup3.cli.Options-class.html#__str__ CedarBackup3.cli.Options._setStacktrace CedarBackup3.cli.Options-class.html#_setStacktrace CedarBackup3.cli.Options.actions CedarBackup3.cli.Options-class.html#actions CedarBackup3.cli.Options.owner CedarBackup3.cli.Options-class.html#owner CedarBackup3.cli.Options._setQuiet CedarBackup3.cli.Options-class.html#_setQuiet CedarBackup3.cli.Options._setVersion CedarBackup3.cli.Options-class.html#_setVersion CedarBackup3.cli.Options.__lt__ CedarBackup3.cli.Options-class.html#__lt__ CedarBackup3.cli.Options._getVerbose CedarBackup3.cli.Options-class.html#_getVerbose CedarBackup3.cli.Options.verbose CedarBackup3.cli.Options-class.html#verbose CedarBackup3.cli.Options._setHelp CedarBackup3.cli.Options-class.html#_setHelp CedarBackup3.cli.Options._getDebug CedarBackup3.cli.Options-class.html#_getDebug CedarBackup3.cli.Options.output CedarBackup3.cli.Options-class.html#output CedarBackup3.cli.Options.debug CedarBackup3.cli.Options-class.html#debug CedarBackup3.cli.Options._parseArgumentList CedarBackup3.cli.Options-class.html#_parseArgumentList CedarBackup3.cli.Options.buildArgumentList CedarBackup3.cli.Options-class.html#buildArgumentList CedarBackup3.cli.Options._getManagedOnly CedarBackup3.cli.Options-class.html#_getManagedOnly CedarBackup3.cli.Options.__cmp__ CedarBackup3.cli.Options-class.html#__cmp__ CedarBackup3.cli.Options._getStacktrace CedarBackup3.cli.Options-class.html#_getStacktrace 
CedarBackup3.cli.Options._setOwner CedarBackup3.cli.Options-class.html#_setOwner CedarBackup3.cli.Options._setMode CedarBackup3.cli.Options-class.html#_setMode CedarBackup3.cli.Options.__init__ CedarBackup3.cli.Options-class.html#__init__ CedarBackup3.cli.Options._getQuiet CedarBackup3.cli.Options-class.html#_getQuiet CedarBackup3.cli.Options.managedOnly CedarBackup3.cli.Options-class.html#managedOnly CedarBackup3.cli.Options._setDebug CedarBackup3.cli.Options-class.html#_setDebug CedarBackup3.cli.Options.config CedarBackup3.cli.Options-class.html#config CedarBackup3.cli.Options.mode CedarBackup3.cli.Options-class.html#mode CedarBackup3.cli.Options._getVersion CedarBackup3.cli.Options-class.html#_getVersion CedarBackup3.cli.Options._getLogfile CedarBackup3.cli.Options-class.html#_getLogfile CedarBackup3.cli.Options.full CedarBackup3.cli.Options-class.html#full CedarBackup3.cli.Options._getConfig CedarBackup3.cli.Options-class.html#_getConfig CedarBackup3.cli.Options._setOutput CedarBackup3.cli.Options-class.html#_setOutput CedarBackup3.cli.Options._setFull CedarBackup3.cli.Options-class.html#_setFull CedarBackup3.cli.Options.version CedarBackup3.cli.Options-class.html#version CedarBackup3.cli.Options._setManagedOnly CedarBackup3.cli.Options-class.html#_setManagedOnly CedarBackup3.cli.Options._setDiagnostics CedarBackup3.cli.Options-class.html#_setDiagnostics CedarBackup3.cli.Options.__gt__ CedarBackup3.cli.Options-class.html#__gt__ CedarBackup3.cli.Options.validate CedarBackup3.cli.Options-class.html#validate CedarBackup3.cli.Options.logfile CedarBackup3.cli.Options-class.html#logfile CedarBackup3.cli.Options.__eq__ CedarBackup3.cli.Options-class.html#__eq__ CedarBackup3.cli.Options.buildArgumentString CedarBackup3.cli.Options-class.html#buildArgumentString CedarBackup3.cli.Options._getManaged CedarBackup3.cli.Options-class.html#_getManaged CedarBackup3.cli.Options._setManaged CedarBackup3.cli.Options-class.html#_setManaged CedarBackup3.cli.Options._setActions 
CedarBackup3.cli.Options-class.html#_setActions CedarBackup3.cli.Options._getOutput CedarBackup3.cli.Options-class.html#_getOutput CedarBackup3.cli.Options._getOwner CedarBackup3.cli.Options-class.html#_getOwner CedarBackup3.cli.Options._setLogfile CedarBackup3.cli.Options-class.html#_setLogfile CedarBackup3.cli.Options.quiet CedarBackup3.cli.Options-class.html#quiet CedarBackup3.cli.Options.__le__ CedarBackup3.cli.Options-class.html#__le__ CedarBackup3.cli.Options.__repr__ CedarBackup3.cli.Options-class.html#__repr__ CedarBackup3.cli.Options.diagnostics CedarBackup3.cli.Options-class.html#diagnostics CedarBackup3.cli.Options._getDiagnostics CedarBackup3.cli.Options-class.html#_getDiagnostics CedarBackup3.cli.Options._setConfig CedarBackup3.cli.Options-class.html#_setConfig CedarBackup3.cli.Options._setVerbose CedarBackup3.cli.Options-class.html#_setVerbose CedarBackup3.cli.Options._getHelp CedarBackup3.cli.Options-class.html#_getHelp CedarBackup3.cli.Options._getActions CedarBackup3.cli.Options-class.html#_getActions CedarBackup3.cli.Options.__ge__ CedarBackup3.cli.Options-class.html#__ge__ CedarBackup3.cli._ActionItem CedarBackup3.cli._ActionItem-class.html CedarBackup3.cli._ActionItem.executeAction CedarBackup3.cli._ActionItem-class.html#executeAction CedarBackup3.cli._ActionItem.__lt__ CedarBackup3.cli._ActionItem-class.html#__lt__ CedarBackup3.cli._ActionItem.__init__ CedarBackup3.cli._ActionItem-class.html#__init__ CedarBackup3.cli._ActionItem.__cmp__ CedarBackup3.cli._ActionItem-class.html#__cmp__ CedarBackup3.cli._ActionItem._executeAction CedarBackup3.cli._ActionItem-class.html#_executeAction CedarBackup3.cli._ActionItem.__le__ CedarBackup3.cli._ActionItem-class.html#__le__ CedarBackup3.cli._ActionItem.__gt__ CedarBackup3.cli._ActionItem-class.html#__gt__ CedarBackup3.cli._ActionItem._executeHook CedarBackup3.cli._ActionItem-class.html#_executeHook CedarBackup3.cli._ActionItem.__eq__ CedarBackup3.cli._ActionItem-class.html#__eq__ 
CedarBackup3.cli._ActionItem.SORT_ORDER CedarBackup3.cli._ActionItem-class.html#SORT_ORDER CedarBackup3.cli._ActionItem.__ge__ CedarBackup3.cli._ActionItem-class.html#__ge__ CedarBackup3.cli._ActionSet CedarBackup3.cli._ActionSet-class.html CedarBackup3.cli._ActionSet._validateActions CedarBackup3.cli._ActionSet-class.html#_validateActions CedarBackup3.cli._ActionSet._deriveHooks CedarBackup3.cli._ActionSet-class.html#_deriveHooks CedarBackup3.cli._ActionSet.__init__ CedarBackup3.cli._ActionSet-class.html#__init__ CedarBackup3.cli._ActionSet._getCbackCommand CedarBackup3.cli._ActionSet-class.html#_getCbackCommand CedarBackup3.cli._ActionSet.executeActions CedarBackup3.cli._ActionSet-class.html#executeActions CedarBackup3.cli._ActionSet._buildIndexMap CedarBackup3.cli._ActionSet-class.html#_buildIndexMap CedarBackup3.cli._ActionSet._buildHookMaps CedarBackup3.cli._ActionSet-class.html#_buildHookMaps CedarBackup3.cli._ActionSet._buildActionMap CedarBackup3.cli._ActionSet-class.html#_buildActionMap CedarBackup3.cli._ActionSet._buildFunctionMap CedarBackup3.cli._ActionSet-class.html#_buildFunctionMap CedarBackup3.cli._ActionSet._buildPeerMap CedarBackup3.cli._ActionSet-class.html#_buildPeerMap CedarBackup3.cli._ActionSet._getManagedActions CedarBackup3.cli._ActionSet-class.html#_getManagedActions CedarBackup3.cli._ActionSet._getRemoteUser CedarBackup3.cli._ActionSet-class.html#_getRemoteUser CedarBackup3.cli._ActionSet._deriveExtensionNames CedarBackup3.cli._ActionSet-class.html#_deriveExtensionNames CedarBackup3.cli._ActionSet._getRshCommand CedarBackup3.cli._ActionSet-class.html#_getRshCommand CedarBackup3.cli._ActionSet._buildActionSet CedarBackup3.cli._ActionSet-class.html#_buildActionSet CedarBackup3.cli._ManagedActionItem CedarBackup3.cli._ManagedActionItem-class.html CedarBackup3.cli._ManagedActionItem.executeAction CedarBackup3.cli._ManagedActionItem-class.html#executeAction CedarBackup3.cli._ManagedActionItem.__lt__ 
CedarBackup3.cli._ManagedActionItem-class.html#__lt__ CedarBackup3.cli._ManagedActionItem.__init__ CedarBackup3.cli._ManagedActionItem-class.html#__init__ CedarBackup3.cli._ManagedActionItem.__cmp__ CedarBackup3.cli._ManagedActionItem-class.html#__cmp__ CedarBackup3.cli._ManagedActionItem.__le__ CedarBackup3.cli._ManagedActionItem-class.html#__le__ CedarBackup3.cli._ManagedActionItem.__gt__ CedarBackup3.cli._ManagedActionItem-class.html#__gt__ CedarBackup3.cli._ManagedActionItem.__eq__ CedarBackup3.cli._ManagedActionItem-class.html#__eq__ CedarBackup3.cli._ManagedActionItem.SORT_ORDER CedarBackup3.cli._ManagedActionItem-class.html#SORT_ORDER CedarBackup3.cli._ManagedActionItem.__ge__ CedarBackup3.cli._ManagedActionItem-class.html#__ge__ CedarBackup3.config.ActionDependencies CedarBackup3.config.ActionDependencies-class.html CedarBackup3.config.ActionDependencies._getAfterList CedarBackup3.config.ActionDependencies-class.html#_getAfterList CedarBackup3.config.ActionDependencies.__str__ CedarBackup3.config.ActionDependencies-class.html#__str__ CedarBackup3.config.ActionDependencies.__lt__ CedarBackup3.config.ActionDependencies-class.html#__lt__ CedarBackup3.config.ActionDependencies.__init__ CedarBackup3.config.ActionDependencies-class.html#__init__ CedarBackup3.config.ActionDependencies.beforeList CedarBackup3.config.ActionDependencies-class.html#beforeList CedarBackup3.config.ActionDependencies.__cmp__ CedarBackup3.config.ActionDependencies-class.html#__cmp__ CedarBackup3.config.ActionDependencies._getBeforeList CedarBackup3.config.ActionDependencies-class.html#_getBeforeList CedarBackup3.config.ActionDependencies._setAfterList CedarBackup3.config.ActionDependencies-class.html#_setAfterList CedarBackup3.config.ActionDependencies.__gt__ CedarBackup3.config.ActionDependencies-class.html#__gt__ CedarBackup3.config.ActionDependencies.afterList CedarBackup3.config.ActionDependencies-class.html#afterList CedarBackup3.config.ActionDependencies.__eq__ 
CedarBackup3.config.ActionDependencies-class.html#__eq__ CedarBackup3.config.ActionDependencies.__le__ CedarBackup3.config.ActionDependencies-class.html#__le__ CedarBackup3.config.ActionDependencies.__repr__ CedarBackup3.config.ActionDependencies-class.html#__repr__ CedarBackup3.config.ActionDependencies._setBeforeList CedarBackup3.config.ActionDependencies-class.html#_setBeforeList CedarBackup3.config.ActionDependencies.__ge__ CedarBackup3.config.ActionDependencies-class.html#__ge__ CedarBackup3.config.ActionHook CedarBackup3.config.ActionHook-class.html CedarBackup3.config.ActionHook.__str__ CedarBackup3.config.ActionHook-class.html#__str__ CedarBackup3.config.ActionHook.__lt__ CedarBackup3.config.ActionHook-class.html#__lt__ CedarBackup3.config.ActionHook._getAction CedarBackup3.config.ActionHook-class.html#_getAction CedarBackup3.config.ActionHook.__init__ CedarBackup3.config.ActionHook-class.html#__init__ CedarBackup3.config.ActionHook._getCommand CedarBackup3.config.ActionHook-class.html#_getCommand CedarBackup3.config.ActionHook._getBefore CedarBackup3.config.ActionHook-class.html#_getBefore CedarBackup3.config.ActionHook._setAction CedarBackup3.config.ActionHook-class.html#_setAction CedarBackup3.config.ActionHook.__cmp__ CedarBackup3.config.ActionHook-class.html#__cmp__ CedarBackup3.config.ActionHook._setCommand CedarBackup3.config.ActionHook-class.html#_setCommand CedarBackup3.config.ActionHook._getAfter CedarBackup3.config.ActionHook-class.html#_getAfter CedarBackup3.config.ActionHook.before CedarBackup3.config.ActionHook-class.html#before CedarBackup3.config.ActionHook.after CedarBackup3.config.ActionHook-class.html#after CedarBackup3.config.ActionHook.__gt__ CedarBackup3.config.ActionHook-class.html#__gt__ CedarBackup3.config.ActionHook.__eq__ CedarBackup3.config.ActionHook-class.html#__eq__ CedarBackup3.config.ActionHook.__le__ CedarBackup3.config.ActionHook-class.html#__le__ CedarBackup3.config.ActionHook.command 
CedarBackup3.config.ActionHook-class.html#command CedarBackup3.config.ActionHook.__repr__ CedarBackup3.config.ActionHook-class.html#__repr__ CedarBackup3.config.ActionHook.action CedarBackup3.config.ActionHook-class.html#action CedarBackup3.config.ActionHook.__ge__ CedarBackup3.config.ActionHook-class.html#__ge__ CedarBackup3.config.BlankBehavior CedarBackup3.config.BlankBehavior-class.html CedarBackup3.config.BlankBehavior.__str__ CedarBackup3.config.BlankBehavior-class.html#__str__ CedarBackup3.config.BlankBehavior.blankMode CedarBackup3.config.BlankBehavior-class.html#blankMode CedarBackup3.config.BlankBehavior.__lt__ CedarBackup3.config.BlankBehavior-class.html#__lt__ CedarBackup3.config.BlankBehavior.__init__ CedarBackup3.config.BlankBehavior-class.html#__init__ CedarBackup3.config.BlankBehavior._setBlankFactor CedarBackup3.config.BlankBehavior-class.html#_setBlankFactor CedarBackup3.config.BlankBehavior.__cmp__ CedarBackup3.config.BlankBehavior-class.html#__cmp__ CedarBackup3.config.BlankBehavior._getBlankMode CedarBackup3.config.BlankBehavior-class.html#_getBlankMode CedarBackup3.config.BlankBehavior._getBlankFactor CedarBackup3.config.BlankBehavior-class.html#_getBlankFactor CedarBackup3.config.BlankBehavior._setBlankMode CedarBackup3.config.BlankBehavior-class.html#_setBlankMode CedarBackup3.config.BlankBehavior.__gt__ CedarBackup3.config.BlankBehavior-class.html#__gt__ CedarBackup3.config.BlankBehavior.__eq__ CedarBackup3.config.BlankBehavior-class.html#__eq__ CedarBackup3.config.BlankBehavior.blankFactor CedarBackup3.config.BlankBehavior-class.html#blankFactor CedarBackup3.config.BlankBehavior.__le__ CedarBackup3.config.BlankBehavior-class.html#__le__ CedarBackup3.config.BlankBehavior.__repr__ CedarBackup3.config.BlankBehavior-class.html#__repr__ CedarBackup3.config.BlankBehavior.__ge__ CedarBackup3.config.BlankBehavior-class.html#__ge__ CedarBackup3.config.ByteQuantity CedarBackup3.config.ByteQuantity-class.html 
CedarBackup3.config.ByteQuantity._setQuantity CedarBackup3.config.ByteQuantity-class.html#_setQuantity CedarBackup3.config.ByteQuantity._getBytes CedarBackup3.config.ByteQuantity-class.html#_getBytes CedarBackup3.config.ByteQuantity.__str__ CedarBackup3.config.ByteQuantity-class.html#__str__ CedarBackup3.config.ByteQuantity.__lt__ CedarBackup3.config.ByteQuantity-class.html#__lt__ CedarBackup3.config.ByteQuantity.__init__ CedarBackup3.config.ByteQuantity-class.html#__init__ CedarBackup3.config.ByteQuantity.__cmp__ CedarBackup3.config.ByteQuantity-class.html#__cmp__ CedarBackup3.config.ByteQuantity._getQuantity CedarBackup3.config.ByteQuantity-class.html#_getQuantity CedarBackup3.config.ByteQuantity.units CedarBackup3.config.ByteQuantity-class.html#units CedarBackup3.config.ByteQuantity._getUnits CedarBackup3.config.ByteQuantity-class.html#_getUnits CedarBackup3.config.ByteQuantity.__gt__ CedarBackup3.config.ByteQuantity-class.html#__gt__ CedarBackup3.config.ByteQuantity._setUnits CedarBackup3.config.ByteQuantity-class.html#_setUnits CedarBackup3.config.ByteQuantity.__eq__ CedarBackup3.config.ByteQuantity-class.html#__eq__ CedarBackup3.config.ByteQuantity.bytes CedarBackup3.config.ByteQuantity-class.html#bytes CedarBackup3.config.ByteQuantity.__le__ CedarBackup3.config.ByteQuantity-class.html#__le__ CedarBackup3.config.ByteQuantity.__repr__ CedarBackup3.config.ByteQuantity-class.html#__repr__ CedarBackup3.config.ByteQuantity.__ge__ CedarBackup3.config.ByteQuantity-class.html#__ge__ CedarBackup3.config.ByteQuantity.quantity CedarBackup3.config.ByteQuantity-class.html#quantity CedarBackup3.config.CollectConfig CedarBackup3.config.CollectConfig-class.html CedarBackup3.config.CollectConfig._getCollectMode CedarBackup3.config.CollectConfig-class.html#_getCollectMode CedarBackup3.config.CollectConfig._getArchiveMode CedarBackup3.config.CollectConfig-class.html#_getArchiveMode CedarBackup3.config.CollectConfig.__str__ CedarBackup3.config.CollectConfig-class.html#__str__ 
CedarBackup3.config.CollectConfig._setArchiveMode CedarBackup3.config.CollectConfig-class.html#_setArchiveMode CedarBackup3.config.CollectConfig._setExcludePatterns CedarBackup3.config.CollectConfig-class.html#_setExcludePatterns CedarBackup3.config.CollectConfig.collectDirs CedarBackup3.config.CollectConfig-class.html#collectDirs CedarBackup3.config.CollectConfig._getCollectFiles CedarBackup3.config.CollectConfig-class.html#_getCollectFiles CedarBackup3.config.CollectConfig.__lt__ CedarBackup3.config.CollectConfig-class.html#__lt__ CedarBackup3.config.CollectConfig.collectFiles CedarBackup3.config.CollectConfig-class.html#collectFiles CedarBackup3.config.CollectConfig.__init__ CedarBackup3.config.CollectConfig-class.html#__init__ CedarBackup3.config.CollectConfig._setCollectMode CedarBackup3.config.CollectConfig-class.html#_setCollectMode CedarBackup3.config.CollectConfig.archiveMode CedarBackup3.config.CollectConfig-class.html#archiveMode CedarBackup3.config.CollectConfig._getTargetDir CedarBackup3.config.CollectConfig-class.html#_getTargetDir CedarBackup3.config.CollectConfig.__cmp__ CedarBackup3.config.CollectConfig-class.html#__cmp__ CedarBackup3.config.CollectConfig._setIgnoreFile CedarBackup3.config.CollectConfig-class.html#_setIgnoreFile CedarBackup3.config.CollectConfig.absoluteExcludePaths CedarBackup3.config.CollectConfig-class.html#absoluteExcludePaths CedarBackup3.config.CollectConfig._getCollectDirs CedarBackup3.config.CollectConfig-class.html#_getCollectDirs CedarBackup3.config.CollectConfig.ignoreFile CedarBackup3.config.CollectConfig-class.html#ignoreFile CedarBackup3.config.CollectConfig._setCollectFiles CedarBackup3.config.CollectConfig-class.html#_setCollectFiles CedarBackup3.config.CollectConfig._setAbsoluteExcludePaths CedarBackup3.config.CollectConfig-class.html#_setAbsoluteExcludePaths CedarBackup3.config.CollectConfig.__gt__ CedarBackup3.config.CollectConfig-class.html#__gt__ CedarBackup3.config.CollectConfig._setCollectDirs 
CedarBackup3.config.CollectConfig-class.html#_setCollectDirs CedarBackup3.config.CollectConfig.__eq__ CedarBackup3.config.CollectConfig-class.html#__eq__ CedarBackup3.config.CollectConfig._getIgnoreFile CedarBackup3.config.CollectConfig-class.html#_getIgnoreFile CedarBackup3.config.CollectConfig._getAbsoluteExcludePaths CedarBackup3.config.CollectConfig-class.html#_getAbsoluteExcludePaths CedarBackup3.config.CollectConfig.collectMode CedarBackup3.config.CollectConfig-class.html#collectMode CedarBackup3.config.CollectConfig._getExcludePatterns CedarBackup3.config.CollectConfig-class.html#_getExcludePatterns CedarBackup3.config.CollectConfig.excludePatterns CedarBackup3.config.CollectConfig-class.html#excludePatterns CedarBackup3.config.CollectConfig.targetDir CedarBackup3.config.CollectConfig-class.html#targetDir CedarBackup3.config.CollectConfig.__le__ CedarBackup3.config.CollectConfig-class.html#__le__ CedarBackup3.config.CollectConfig.__repr__ CedarBackup3.config.CollectConfig-class.html#__repr__ CedarBackup3.config.CollectConfig._setTargetDir CedarBackup3.config.CollectConfig-class.html#_setTargetDir CedarBackup3.config.CollectConfig.__ge__ CedarBackup3.config.CollectConfig-class.html#__ge__ CedarBackup3.config.CollectDir CedarBackup3.config.CollectDir-class.html CedarBackup3.config.CollectDir._getCollectMode CedarBackup3.config.CollectDir-class.html#_getCollectMode CedarBackup3.config.CollectDir._getArchiveMode CedarBackup3.config.CollectDir-class.html#_getArchiveMode CedarBackup3.config.CollectDir.archiveMode CedarBackup3.config.CollectDir-class.html#archiveMode CedarBackup3.config.CollectDir.__str__ CedarBackup3.config.CollectDir-class.html#__str__ CedarBackup3.config.CollectDir._getAbsolutePath CedarBackup3.config.CollectDir-class.html#_getAbsolutePath CedarBackup3.config.CollectDir._setExcludePatterns CedarBackup3.config.CollectDir-class.html#_setExcludePatterns CedarBackup3.config.CollectDir.__lt__ CedarBackup3.config.CollectDir-class.html#__lt__ 
CedarBackup3.config.CollectDir.__init__ CedarBackup3.config.CollectDir-class.html#__init__ CedarBackup3.config.CollectDir._setCollectMode CedarBackup3.config.CollectDir-class.html#_setCollectMode CedarBackup3.config.CollectDir._setLinkDepth CedarBackup3.config.CollectDir-class.html#_setLinkDepth CedarBackup3.config.CollectDir.recursionLevel CedarBackup3.config.CollectDir-class.html#recursionLevel CedarBackup3.config.CollectDir.absolutePath CedarBackup3.config.CollectDir-class.html#absolutePath CedarBackup3.config.CollectDir.__cmp__ CedarBackup3.config.CollectDir-class.html#__cmp__ CedarBackup3.config.CollectDir._setIgnoreFile CedarBackup3.config.CollectDir-class.html#_setIgnoreFile CedarBackup3.config.CollectDir.absoluteExcludePaths CedarBackup3.config.CollectDir-class.html#absoluteExcludePaths CedarBackup3.config.CollectDir.relativeExcludePaths CedarBackup3.config.CollectDir-class.html#relativeExcludePaths CedarBackup3.config.CollectDir._setArchiveMode CedarBackup3.config.CollectDir-class.html#_setArchiveMode CedarBackup3.config.CollectDir._getDereference CedarBackup3.config.CollectDir-class.html#_getDereference CedarBackup3.config.CollectDir.ignoreFile CedarBackup3.config.CollectDir-class.html#ignoreFile CedarBackup3.config.CollectDir._getLinkDepth CedarBackup3.config.CollectDir-class.html#_getLinkDepth CedarBackup3.config.CollectDir.dereference CedarBackup3.config.CollectDir-class.html#dereference CedarBackup3.config.CollectDir._setAbsoluteExcludePaths CedarBackup3.config.CollectDir-class.html#_setAbsoluteExcludePaths CedarBackup3.config.CollectDir.linkDepth CedarBackup3.config.CollectDir-class.html#linkDepth CedarBackup3.config.CollectDir._getRelativeExcludePaths CedarBackup3.config.CollectDir-class.html#_getRelativeExcludePaths CedarBackup3.config.CollectDir._setRecursionLevel CedarBackup3.config.CollectDir-class.html#_setRecursionLevel CedarBackup3.config.CollectDir._getRecursionLevel CedarBackup3.config.CollectDir-class.html#_getRecursionLevel 
CedarBackup3.config.CollectDir.__gt__ CedarBackup3.config.CollectDir-class.html#__gt__ CedarBackup3.config.CollectDir._setDereference CedarBackup3.config.CollectDir-class.html#_setDereference CedarBackup3.config.CollectDir.__eq__ CedarBackup3.config.CollectDir-class.html#__eq__ CedarBackup3.config.CollectDir._getIgnoreFile CedarBackup3.config.CollectDir-class.html#_getIgnoreFile CedarBackup3.config.CollectDir._getAbsoluteExcludePaths CedarBackup3.config.CollectDir-class.html#_getAbsoluteExcludePaths CedarBackup3.config.CollectDir.collectMode CedarBackup3.config.CollectDir-class.html#collectMode CedarBackup3.config.CollectDir._setRelativeExcludePaths CedarBackup3.config.CollectDir-class.html#_setRelativeExcludePaths CedarBackup3.config.CollectDir.excludePatterns CedarBackup3.config.CollectDir-class.html#excludePatterns CedarBackup3.config.CollectDir._setAbsolutePath CedarBackup3.config.CollectDir-class.html#_setAbsolutePath CedarBackup3.config.CollectDir._getExcludePatterns CedarBackup3.config.CollectDir-class.html#_getExcludePatterns CedarBackup3.config.CollectDir.__repr__ CedarBackup3.config.CollectDir-class.html#__repr__ CedarBackup3.config.CollectDir.__le__ CedarBackup3.config.CollectDir-class.html#__le__ CedarBackup3.config.CollectDir.__ge__ CedarBackup3.config.CollectDir-class.html#__ge__ CedarBackup3.config.CollectFile CedarBackup3.config.CollectFile-class.html CedarBackup3.config.CollectFile._getCollectMode CedarBackup3.config.CollectFile-class.html#_getCollectMode CedarBackup3.config.CollectFile._getArchiveMode CedarBackup3.config.CollectFile-class.html#_getArchiveMode CedarBackup3.config.CollectFile.__str__ CedarBackup3.config.CollectFile-class.html#__str__ CedarBackup3.config.CollectFile._getAbsolutePath CedarBackup3.config.CollectFile-class.html#_getAbsolutePath CedarBackup3.config.CollectFile.__lt__ CedarBackup3.config.CollectFile-class.html#__lt__ CedarBackup3.config.CollectFile.__init__ CedarBackup3.config.CollectFile-class.html#__init__ 
CedarBackup3.config.CollectFile._setCollectMode CedarBackup3.config.CollectFile-class.html#_setCollectMode CedarBackup3.config.CollectFile.archiveMode CedarBackup3.config.CollectFile-class.html#archiveMode CedarBackup3.config.CollectFile.absolutePath CedarBackup3.config.CollectFile-class.html#absolutePath CedarBackup3.config.CollectFile.__cmp__ CedarBackup3.config.CollectFile-class.html#__cmp__ CedarBackup3.config.CollectFile._setArchiveMode CedarBackup3.config.CollectFile-class.html#_setArchiveMode CedarBackup3.config.CollectFile.__gt__ CedarBackup3.config.CollectFile-class.html#__gt__ CedarBackup3.config.CollectFile.__eq__ CedarBackup3.config.CollectFile-class.html#__eq__ CedarBackup3.config.CollectFile.collectMode CedarBackup3.config.CollectFile-class.html#collectMode CedarBackup3.config.CollectFile._setAbsolutePath CedarBackup3.config.CollectFile-class.html#_setAbsolutePath CedarBackup3.config.CollectFile.__le__ CedarBackup3.config.CollectFile-class.html#__le__ CedarBackup3.config.CollectFile.__repr__ CedarBackup3.config.CollectFile-class.html#__repr__ CedarBackup3.config.CollectFile.__ge__ CedarBackup3.config.CollectFile-class.html#__ge__ CedarBackup3.config.CommandOverride CedarBackup3.config.CommandOverride-class.html CedarBackup3.config.CommandOverride.__str__ CedarBackup3.config.CommandOverride-class.html#__str__ CedarBackup3.config.CommandOverride._getAbsolutePath CedarBackup3.config.CommandOverride-class.html#_getAbsolutePath CedarBackup3.config.CommandOverride.__lt__ CedarBackup3.config.CommandOverride-class.html#__lt__ CedarBackup3.config.CommandOverride.__init__ CedarBackup3.config.CommandOverride-class.html#__init__ CedarBackup3.config.CommandOverride._getCommand CedarBackup3.config.CommandOverride-class.html#_getCommand CedarBackup3.config.CommandOverride.absolutePath CedarBackup3.config.CommandOverride-class.html#absolutePath CedarBackup3.config.CommandOverride.__cmp__ CedarBackup3.config.CommandOverride-class.html#__cmp__ 
CedarBackup3.config.CommandOverride._setCommand CedarBackup3.config.CommandOverride-class.html#_setCommand CedarBackup3.config.CommandOverride.__gt__ CedarBackup3.config.CommandOverride-class.html#__gt__ CedarBackup3.config.CommandOverride.__eq__ CedarBackup3.config.CommandOverride-class.html#__eq__ CedarBackup3.config.CommandOverride._setAbsolutePath CedarBackup3.config.CommandOverride-class.html#_setAbsolutePath CedarBackup3.config.CommandOverride.__le__ CedarBackup3.config.CommandOverride-class.html#__le__ CedarBackup3.config.CommandOverride.command CedarBackup3.config.CommandOverride-class.html#command CedarBackup3.config.CommandOverride.__repr__ CedarBackup3.config.CommandOverride-class.html#__repr__ CedarBackup3.config.CommandOverride.__ge__ CedarBackup3.config.CommandOverride-class.html#__ge__ CedarBackup3.config.Config CedarBackup3.config.Config-class.html CedarBackup3.config.Config._addCollect CedarBackup3.config.Config-class.html#_addCollect CedarBackup3.config.Config.extractXml CedarBackup3.config.Config-class.html#extractXml CedarBackup3.config.Config._addStage CedarBackup3.config.Config-class.html#_addStage CedarBackup3.config.Config._getReference CedarBackup3.config.Config-class.html#_getReference CedarBackup3.config.Config.__str__ CedarBackup3.config.Config-class.html#__str__ CedarBackup3.config.Config._validateStage CedarBackup3.config.Config-class.html#_validateStage CedarBackup3.config.Config._addOptions CedarBackup3.config.Config-class.html#_addOptions CedarBackup3.config.Config._validatePurge CedarBackup3.config.Config-class.html#_validatePurge CedarBackup3.config.Config.__lt__ CedarBackup3.config.Config-class.html#__lt__ CedarBackup3.config.Config._parseOverrides CedarBackup3.config.Config-class.html#_parseOverrides CedarBackup3.config.Config._setStore CedarBackup3.config.Config-class.html#_setStore CedarBackup3.config.Config._addReference CedarBackup3.config.Config-class.html#_addReference CedarBackup3.config.Config.__eq__ 
CedarBackup3.config.Config-class.html#__eq__ CedarBackup3.config.Config.__cmp__ CedarBackup3.config.Config-class.html#__cmp__ CedarBackup3.config.Config._validateStore CedarBackup3.config.Config-class.html#_validateStore CedarBackup3.config.Config._setPurge CedarBackup3.config.Config-class.html#_setPurge CedarBackup3.config.Config._validateExtensions CedarBackup3.config.Config-class.html#_validateExtensions CedarBackup3.config.Config.__gt__ CedarBackup3.config.Config-class.html#__gt__ CedarBackup3.config.Config._addExtendedAction CedarBackup3.config.Config-class.html#_addExtendedAction CedarBackup3.config.Config.collect CedarBackup3.config.Config-class.html#collect CedarBackup3.config.Config._validateContents CedarBackup3.config.Config-class.html#_validateContents CedarBackup3.config.Config.reference CedarBackup3.config.Config-class.html#reference CedarBackup3.config.Config._validateReference CedarBackup3.config.Config-class.html#_validateReference CedarBackup3.config.Config._addPeers CedarBackup3.config.Config-class.html#_addPeers CedarBackup3.config.Config._parseXmlData CedarBackup3.config.Config-class.html#_parseXmlData CedarBackup3.config.Config._getOptions CedarBackup3.config.Config-class.html#_getOptions CedarBackup3.config.Config._validateOptions CedarBackup3.config.Config-class.html#_validateOptions CedarBackup3.config.Config._parseBlankBehavior CedarBackup3.config.Config-class.html#_parseBlankBehavior CedarBackup3.config.Config._getStage CedarBackup3.config.Config-class.html#_getStage CedarBackup3.config.Config._setCollect CedarBackup3.config.Config-class.html#_setCollect CedarBackup3.config.Config._parseReference CedarBackup3.config.Config-class.html#_parseReference CedarBackup3.config.Config._addLocalPeer CedarBackup3.config.Config-class.html#_addLocalPeer CedarBackup3.config.Config._parseExtensions CedarBackup3.config.Config-class.html#_parseExtensions CedarBackup3.config.Config._validatePeers CedarBackup3.config.Config-class.html#_validatePeers 
CedarBackup3.config.Config.stage CedarBackup3.config.Config-class.html#stage CedarBackup3.config.Config._getExtensions CedarBackup3.config.Config-class.html#_getExtensions CedarBackup3.config.Config._parseExclusions CedarBackup3.config.Config-class.html#_parseExclusions CedarBackup3.config.Config._parseStage CedarBackup3.config.Config-class.html#_parseStage CedarBackup3.config.Config._parseCollectDirs CedarBackup3.config.Config-class.html#_parseCollectDirs CedarBackup3.config.Config.extensions CedarBackup3.config.Config-class.html#extensions CedarBackup3.config.Config._addBlankBehavior CedarBackup3.config.Config-class.html#_addBlankBehavior CedarBackup3.config.Config._parseDependencies CedarBackup3.config.Config-class.html#_parseDependencies CedarBackup3.config.Config.options CedarBackup3.config.Config-class.html#options CedarBackup3.config.Config.__repr__ CedarBackup3.config.Config-class.html#__repr__ CedarBackup3.config.Config._parsePeers CedarBackup3.config.Config-class.html#_parsePeers CedarBackup3.config.Config._addCollectFile CedarBackup3.config.Config-class.html#_addCollectFile CedarBackup3.config.Config._parsePeerList CedarBackup3.config.Config-class.html#_parsePeerList CedarBackup3.config.Config._extractXml CedarBackup3.config.Config-class.html#_extractXml CedarBackup3.config.Config._validatePeerList CedarBackup3.config.Config-class.html#_validatePeerList CedarBackup3.config.Config._buildCommaSeparatedString CedarBackup3.config.Config-class.html#_buildCommaSeparatedString CedarBackup3.config.Config._addHook CedarBackup3.config.Config-class.html#_addHook CedarBackup3.config.Config._getCollect CedarBackup3.config.Config-class.html#_getCollect CedarBackup3.config.Config._parseHooks CedarBackup3.config.Config-class.html#_parseHooks CedarBackup3.config.Config._parseStore CedarBackup3.config.Config-class.html#_parseStore CedarBackup3.config.Config._setPeers CedarBackup3.config.Config-class.html#_setPeers CedarBackup3.config.Config._parseOptions 
CedarBackup3.config.Config-class.html#_parseOptions CedarBackup3.config.Config._getPeers CedarBackup3.config.Config-class.html#_getPeers CedarBackup3.config.Config._addStore CedarBackup3.config.Config-class.html#_addStore CedarBackup3.config.Config._addExtensions CedarBackup3.config.Config-class.html#_addExtensions CedarBackup3.config.Config.purge CedarBackup3.config.Config-class.html#purge CedarBackup3.config.Config.__le__ CedarBackup3.config.Config-class.html#__le__ CedarBackup3.config.Config.store CedarBackup3.config.Config-class.html#store CedarBackup3.config.Config.__ge__ CedarBackup3.config.Config-class.html#__ge__ CedarBackup3.config.Config._addOverride CedarBackup3.config.Config-class.html#_addOverride CedarBackup3.config.Config._addPurgeDir CedarBackup3.config.Config-class.html#_addPurgeDir CedarBackup3.config.Config._addDependencies CedarBackup3.config.Config-class.html#_addDependencies CedarBackup3.config.Config._addCollectDir CedarBackup3.config.Config-class.html#_addCollectDir CedarBackup3.config.Config._parsePurge CedarBackup3.config.Config-class.html#_parsePurge CedarBackup3.config.Config._addRemotePeer CedarBackup3.config.Config-class.html#_addRemotePeer CedarBackup3.config.Config.__init__ CedarBackup3.config.Config-class.html#__init__ CedarBackup3.config.Config._addPurge CedarBackup3.config.Config-class.html#_addPurge CedarBackup3.config.Config._setExtensions CedarBackup3.config.Config-class.html#_setExtensions CedarBackup3.config.Config._parsePurgeDirs CedarBackup3.config.Config-class.html#_parsePurgeDirs CedarBackup3.config.Config._parseCollect CedarBackup3.config.Config-class.html#_parseCollect CedarBackup3.config.Config._getStore CedarBackup3.config.Config-class.html#_getStore CedarBackup3.config.Config._setStage CedarBackup3.config.Config-class.html#_setStage CedarBackup3.config.Config._validateCollect CedarBackup3.config.Config-class.html#_validateCollect CedarBackup3.config.Config._getPurge CedarBackup3.config.Config-class.html#_getPurge 
CedarBackup3.config.Config.validate CedarBackup3.config.Config-class.html#validate CedarBackup3.config.Config._parseExtendedActions CedarBackup3.config.Config-class.html#_parseExtendedActions CedarBackup3.config.Config.peers CedarBackup3.config.Config-class.html#peers CedarBackup3.config.Config._parseCollectFiles CedarBackup3.config.Config-class.html#_parseCollectFiles CedarBackup3.config.Config._setOptions CedarBackup3.config.Config-class.html#_setOptions CedarBackup3.config.Config._setReference CedarBackup3.config.Config-class.html#_setReference CedarBackup3.config.ExtendedAction CedarBackup3.config.ExtendedAction-class.html CedarBackup3.config.ExtendedAction._getModule CedarBackup3.config.ExtendedAction-class.html#_getModule CedarBackup3.config.ExtendedAction.__str__ CedarBackup3.config.ExtendedAction-class.html#__str__ CedarBackup3.config.ExtendedAction._setModule CedarBackup3.config.ExtendedAction-class.html#_setModule CedarBackup3.config.ExtendedAction.module CedarBackup3.config.ExtendedAction-class.html#module CedarBackup3.config.ExtendedAction.__lt__ CedarBackup3.config.ExtendedAction-class.html#__lt__ CedarBackup3.config.ExtendedAction._getName CedarBackup3.config.ExtendedAction-class.html#_getName CedarBackup3.config.ExtendedAction._getDependencies CedarBackup3.config.ExtendedAction-class.html#_getDependencies CedarBackup3.config.ExtendedAction.index CedarBackup3.config.ExtendedAction-class.html#index CedarBackup3.config.ExtendedAction.__cmp__ CedarBackup3.config.ExtendedAction-class.html#__cmp__ CedarBackup3.config.ExtendedAction.__init__ CedarBackup3.config.ExtendedAction-class.html#__init__ CedarBackup3.config.ExtendedAction.function CedarBackup3.config.ExtendedAction-class.html#function CedarBackup3.config.ExtendedAction._setIndex CedarBackup3.config.ExtendedAction-class.html#_setIndex CedarBackup3.config.ExtendedAction._getFunction CedarBackup3.config.ExtendedAction-class.html#_getFunction CedarBackup3.config.ExtendedAction._setDependencies 
CedarBackup3.config.ExtendedAction-class.html#_setDependencies CedarBackup3.config.ExtendedAction.dependencies CedarBackup3.config.ExtendedAction-class.html#dependencies CedarBackup3.config.ExtendedAction.__gt__ CedarBackup3.config.ExtendedAction-class.html#__gt__ CedarBackup3.config.ExtendedAction.__eq__ CedarBackup3.config.ExtendedAction-class.html#__eq__ CedarBackup3.config.ExtendedAction._getIndex CedarBackup3.config.ExtendedAction-class.html#_getIndex CedarBackup3.config.ExtendedAction.name CedarBackup3.config.ExtendedAction-class.html#name CedarBackup3.config.ExtendedAction._setFunction CedarBackup3.config.ExtendedAction-class.html#_setFunction CedarBackup3.config.ExtendedAction.__le__ CedarBackup3.config.ExtendedAction-class.html#__le__ CedarBackup3.config.ExtendedAction.__repr__ CedarBackup3.config.ExtendedAction-class.html#__repr__ CedarBackup3.config.ExtendedAction._setName CedarBackup3.config.ExtendedAction-class.html#_setName CedarBackup3.config.ExtendedAction.__ge__ CedarBackup3.config.ExtendedAction-class.html#__ge__ CedarBackup3.config.ExtensionsConfig CedarBackup3.config.ExtensionsConfig-class.html CedarBackup3.config.ExtensionsConfig.orderMode CedarBackup3.config.ExtensionsConfig-class.html#orderMode CedarBackup3.config.ExtensionsConfig.__str__ CedarBackup3.config.ExtensionsConfig-class.html#__str__ CedarBackup3.config.ExtensionsConfig.__gt__ CedarBackup3.config.ExtensionsConfig-class.html#__gt__ CedarBackup3.config.ExtensionsConfig.actions CedarBackup3.config.ExtensionsConfig-class.html#actions CedarBackup3.config.ExtensionsConfig.__lt__ CedarBackup3.config.ExtensionsConfig-class.html#__lt__ CedarBackup3.config.ExtensionsConfig.__init__ CedarBackup3.config.ExtensionsConfig-class.html#__init__ CedarBackup3.config.ExtensionsConfig.__cmp__ CedarBackup3.config.ExtensionsConfig-class.html#__cmp__ CedarBackup3.config.ExtensionsConfig._setActions CedarBackup3.config.ExtensionsConfig-class.html#_setActions 
CedarBackup3.config.ExtensionsConfig._setOrderMode CedarBackup3.config.ExtensionsConfig-class.html#_setOrderMode CedarBackup3.config.ExtensionsConfig._getOrderMode CedarBackup3.config.ExtensionsConfig-class.html#_getOrderMode CedarBackup3.config.ExtensionsConfig.__eq__ CedarBackup3.config.ExtensionsConfig-class.html#__eq__ CedarBackup3.config.ExtensionsConfig.__le__ CedarBackup3.config.ExtensionsConfig-class.html#__le__ CedarBackup3.config.ExtensionsConfig.__repr__ CedarBackup3.config.ExtensionsConfig-class.html#__repr__ CedarBackup3.config.ExtensionsConfig._getActions CedarBackup3.config.ExtensionsConfig-class.html#_getActions CedarBackup3.config.ExtensionsConfig.__ge__ CedarBackup3.config.ExtensionsConfig-class.html#__ge__ CedarBackup3.config.LocalPeer CedarBackup3.config.LocalPeer-class.html CedarBackup3.config.LocalPeer.__str__ CedarBackup3.config.LocalPeer-class.html#__str__ CedarBackup3.config.LocalPeer._setIgnoreFailureMode CedarBackup3.config.LocalPeer-class.html#_setIgnoreFailureMode CedarBackup3.config.LocalPeer.__lt__ CedarBackup3.config.LocalPeer-class.html#__lt__ CedarBackup3.config.LocalPeer._getName CedarBackup3.config.LocalPeer-class.html#_getName CedarBackup3.config.LocalPeer.__init__ CedarBackup3.config.LocalPeer-class.html#__init__ CedarBackup3.config.LocalPeer.__cmp__ CedarBackup3.config.LocalPeer-class.html#__cmp__ CedarBackup3.config.LocalPeer._getIgnoreFailureMode CedarBackup3.config.LocalPeer-class.html#_getIgnoreFailureMode CedarBackup3.config.LocalPeer.ignoreFailureMode CedarBackup3.config.LocalPeer-class.html#ignoreFailureMode CedarBackup3.config.LocalPeer.__gt__ CedarBackup3.config.LocalPeer-class.html#__gt__ CedarBackup3.config.LocalPeer.__eq__ CedarBackup3.config.LocalPeer-class.html#__eq__ CedarBackup3.config.LocalPeer._getCollectDir CedarBackup3.config.LocalPeer-class.html#_getCollectDir CedarBackup3.config.LocalPeer.name CedarBackup3.config.LocalPeer-class.html#name CedarBackup3.config.LocalPeer.collectDir 
CedarBackup3.config.LocalPeer-class.html#collectDir CedarBackup3.config.LocalPeer._setCollectDir CedarBackup3.config.LocalPeer-class.html#_setCollectDir CedarBackup3.config.LocalPeer.__le__ CedarBackup3.config.LocalPeer-class.html#__le__ CedarBackup3.config.LocalPeer.__repr__ CedarBackup3.config.LocalPeer-class.html#__repr__ CedarBackup3.config.LocalPeer._setName CedarBackup3.config.LocalPeer-class.html#_setName CedarBackup3.config.LocalPeer.__ge__ CedarBackup3.config.LocalPeer-class.html#__ge__ CedarBackup3.config.OptionsConfig CedarBackup3.config.OptionsConfig-class.html CedarBackup3.config.OptionsConfig._getRcpCommand CedarBackup3.config.OptionsConfig-class.html#_getRcpCommand CedarBackup3.config.OptionsConfig._getWorkingDir CedarBackup3.config.OptionsConfig-class.html#_getWorkingDir CedarBackup3.config.OptionsConfig._setBackupUser CedarBackup3.config.OptionsConfig-class.html#_setBackupUser CedarBackup3.config.OptionsConfig.__str__ CedarBackup3.config.OptionsConfig-class.html#__str__ CedarBackup3.config.OptionsConfig.backupUser CedarBackup3.config.OptionsConfig-class.html#backupUser CedarBackup3.config.OptionsConfig._getStartingDay CedarBackup3.config.OptionsConfig-class.html#_getStartingDay CedarBackup3.config.OptionsConfig.managedActions CedarBackup3.config.OptionsConfig-class.html#managedActions CedarBackup3.config.OptionsConfig.replaceOverride CedarBackup3.config.OptionsConfig-class.html#replaceOverride CedarBackup3.config.OptionsConfig._getBackupUser CedarBackup3.config.OptionsConfig-class.html#_getBackupUser CedarBackup3.config.OptionsConfig.__lt__ CedarBackup3.config.OptionsConfig-class.html#__lt__ CedarBackup3.config.OptionsConfig.__init__ CedarBackup3.config.OptionsConfig-class.html#__init__ CedarBackup3.config.OptionsConfig._setBackupGroup CedarBackup3.config.OptionsConfig-class.html#_setBackupGroup CedarBackup3.config.OptionsConfig._setCbackCommand CedarBackup3.config.OptionsConfig-class.html#_setCbackCommand 
CedarBackup3.config.OptionsConfig._getCbackCommand CedarBackup3.config.OptionsConfig-class.html#_getCbackCommand CedarBackup3.config.OptionsConfig.rshCommand CedarBackup3.config.OptionsConfig-class.html#rshCommand CedarBackup3.config.OptionsConfig.workingDir CedarBackup3.config.OptionsConfig-class.html#workingDir CedarBackup3.config.OptionsConfig.__cmp__ CedarBackup3.config.OptionsConfig-class.html#__cmp__ CedarBackup3.config.OptionsConfig.hooks CedarBackup3.config.OptionsConfig-class.html#hooks CedarBackup3.config.OptionsConfig.backupGroup CedarBackup3.config.OptionsConfig-class.html#backupGroup CedarBackup3.config.OptionsConfig.startingDay CedarBackup3.config.OptionsConfig-class.html#startingDay CedarBackup3.config.OptionsConfig._getHooks CedarBackup3.config.OptionsConfig-class.html#_getHooks CedarBackup3.config.OptionsConfig._setWorkingDir CedarBackup3.config.OptionsConfig-class.html#_setWorkingDir CedarBackup3.config.OptionsConfig.__gt__ CedarBackup3.config.OptionsConfig-class.html#__gt__ CedarBackup3.config.OptionsConfig._getBackupGroup CedarBackup3.config.OptionsConfig-class.html#_getBackupGroup CedarBackup3.config.OptionsConfig.__eq__ CedarBackup3.config.OptionsConfig-class.html#__eq__ CedarBackup3.config.OptionsConfig._setStartingDay CedarBackup3.config.OptionsConfig-class.html#_setStartingDay CedarBackup3.config.OptionsConfig.addOverride CedarBackup3.config.OptionsConfig-class.html#addOverride CedarBackup3.config.OptionsConfig._setManagedActions CedarBackup3.config.OptionsConfig-class.html#_setManagedActions CedarBackup3.config.OptionsConfig.rcpCommand CedarBackup3.config.OptionsConfig-class.html#rcpCommand CedarBackup3.config.OptionsConfig._setRcpCommand CedarBackup3.config.OptionsConfig-class.html#_setRcpCommand CedarBackup3.config.OptionsConfig.cbackCommand CedarBackup3.config.OptionsConfig-class.html#cbackCommand CedarBackup3.config.OptionsConfig.overrides CedarBackup3.config.OptionsConfig-class.html#overrides 
CedarBackup3.config.OptionsConfig._setOverrides CedarBackup3.config.OptionsConfig-class.html#_setOverrides CedarBackup3.config.OptionsConfig._setHooks CedarBackup3.config.OptionsConfig-class.html#_setHooks CedarBackup3.config.OptionsConfig._getManagedActions CedarBackup3.config.OptionsConfig-class.html#_getManagedActions CedarBackup3.config.OptionsConfig._getOverrides CedarBackup3.config.OptionsConfig-class.html#_getOverrides CedarBackup3.config.OptionsConfig.__le__ CedarBackup3.config.OptionsConfig-class.html#__le__ CedarBackup3.config.OptionsConfig.__repr__ CedarBackup3.config.OptionsConfig-class.html#__repr__ CedarBackup3.config.OptionsConfig._getRshCommand CedarBackup3.config.OptionsConfig-class.html#_getRshCommand CedarBackup3.config.OptionsConfig._setRshCommand CedarBackup3.config.OptionsConfig-class.html#_setRshCommand CedarBackup3.config.OptionsConfig.__ge__ CedarBackup3.config.OptionsConfig-class.html#__ge__ CedarBackup3.config.PeersConfig CedarBackup3.config.PeersConfig-class.html CedarBackup3.config.PeersConfig.__str__ CedarBackup3.config.PeersConfig-class.html#__str__ CedarBackup3.config.PeersConfig._setLocalPeers CedarBackup3.config.PeersConfig-class.html#_setLocalPeers CedarBackup3.config.PeersConfig._getRemotePeers CedarBackup3.config.PeersConfig-class.html#_getRemotePeers CedarBackup3.config.PeersConfig.localPeers CedarBackup3.config.PeersConfig-class.html#localPeers CedarBackup3.config.PeersConfig.__lt__ CedarBackup3.config.PeersConfig-class.html#__lt__ CedarBackup3.config.PeersConfig.__init__ CedarBackup3.config.PeersConfig-class.html#__init__ CedarBackup3.config.PeersConfig.hasPeers CedarBackup3.config.PeersConfig-class.html#hasPeers CedarBackup3.config.PeersConfig._setRemotePeers CedarBackup3.config.PeersConfig-class.html#_setRemotePeers CedarBackup3.config.PeersConfig.__cmp__ CedarBackup3.config.PeersConfig-class.html#__cmp__ CedarBackup3.config.PeersConfig._getLocalPeers CedarBackup3.config.PeersConfig-class.html#_getLocalPeers 
CedarBackup3.config.PeersConfig.__gt__ CedarBackup3.config.PeersConfig-class.html#__gt__ CedarBackup3.config.PeersConfig.__eq__ CedarBackup3.config.PeersConfig-class.html#__eq__ CedarBackup3.config.PeersConfig.remotePeers CedarBackup3.config.PeersConfig-class.html#remotePeers CedarBackup3.config.PeersConfig.__le__ CedarBackup3.config.PeersConfig-class.html#__le__ CedarBackup3.config.PeersConfig.__repr__ CedarBackup3.config.PeersConfig-class.html#__repr__ CedarBackup3.config.PeersConfig.__ge__ CedarBackup3.config.PeersConfig-class.html#__ge__ CedarBackup3.config.PostActionHook CedarBackup3.config.PostActionHook-class.html CedarBackup3.config.ActionHook.__str__ CedarBackup3.config.ActionHook-class.html#__str__ CedarBackup3.config.ActionHook.__lt__ CedarBackup3.config.ActionHook-class.html#__lt__ CedarBackup3.config.ActionHook._getAction CedarBackup3.config.ActionHook-class.html#_getAction CedarBackup3.config.PostActionHook.__init__ CedarBackup3.config.PostActionHook-class.html#__init__ CedarBackup3.config.ActionHook.before CedarBackup3.config.ActionHook-class.html#before CedarBackup3.config.ActionHook._getBefore CedarBackup3.config.ActionHook-class.html#_getBefore CedarBackup3.config.ActionHook._setAction CedarBackup3.config.ActionHook-class.html#_setAction CedarBackup3.config.ActionHook.__cmp__ CedarBackup3.config.ActionHook-class.html#__cmp__ CedarBackup3.config.ActionHook._setCommand CedarBackup3.config.ActionHook-class.html#_setCommand CedarBackup3.config.ActionHook._getAfter CedarBackup3.config.ActionHook-class.html#_getAfter CedarBackup3.config.ActionHook._getCommand CedarBackup3.config.ActionHook-class.html#_getCommand CedarBackup3.config.ActionHook.after CedarBackup3.config.ActionHook-class.html#after CedarBackup3.config.ActionHook.__gt__ CedarBackup3.config.ActionHook-class.html#__gt__ CedarBackup3.config.ActionHook.__eq__ CedarBackup3.config.ActionHook-class.html#__eq__ CedarBackup3.config.ActionHook.__le__ CedarBackup3.config.ActionHook-class.html#__le__ 
CedarBackup3.config.ActionHook.command CedarBackup3.config.ActionHook-class.html#command CedarBackup3.config.PostActionHook.__repr__ CedarBackup3.config.PostActionHook-class.html#__repr__ CedarBackup3.config.ActionHook.action CedarBackup3.config.ActionHook-class.html#action CedarBackup3.config.ActionHook.__ge__ CedarBackup3.config.ActionHook-class.html#__ge__ CedarBackup3.config.PreActionHook CedarBackup3.config.PreActionHook-class.html CedarBackup3.config.ActionHook.__str__ CedarBackup3.config.ActionHook-class.html#__str__ CedarBackup3.config.ActionHook.__lt__ CedarBackup3.config.ActionHook-class.html#__lt__ CedarBackup3.config.ActionHook._getAction CedarBackup3.config.ActionHook-class.html#_getAction CedarBackup3.config.PreActionHook.__init__ CedarBackup3.config.PreActionHook-class.html#__init__ CedarBackup3.config.ActionHook.before CedarBackup3.config.ActionHook-class.html#before CedarBackup3.config.ActionHook._getBefore CedarBackup3.config.ActionHook-class.html#_getBefore CedarBackup3.config.ActionHook._setAction CedarBackup3.config.ActionHook-class.html#_setAction CedarBackup3.config.ActionHook.__cmp__ CedarBackup3.config.ActionHook-class.html#__cmp__ CedarBackup3.config.ActionHook._setCommand CedarBackup3.config.ActionHook-class.html#_setCommand CedarBackup3.config.ActionHook._getAfter CedarBackup3.config.ActionHook-class.html#_getAfter CedarBackup3.config.ActionHook._getCommand CedarBackup3.config.ActionHook-class.html#_getCommand CedarBackup3.config.ActionHook.after CedarBackup3.config.ActionHook-class.html#after CedarBackup3.config.ActionHook.__gt__ CedarBackup3.config.ActionHook-class.html#__gt__ CedarBackup3.config.ActionHook.__eq__ CedarBackup3.config.ActionHook-class.html#__eq__ CedarBackup3.config.ActionHook.__le__ CedarBackup3.config.ActionHook-class.html#__le__ CedarBackup3.config.ActionHook.command CedarBackup3.config.ActionHook-class.html#command CedarBackup3.config.PreActionHook.__repr__ CedarBackup3.config.PreActionHook-class.html#__repr__ 
CedarBackup3.config.ActionHook.action CedarBackup3.config.ActionHook-class.html#action CedarBackup3.config.ActionHook.__ge__ CedarBackup3.config.ActionHook-class.html#__ge__ CedarBackup3.config.PurgeConfig CedarBackup3.config.PurgeConfig-class.html CedarBackup3.config.PurgeConfig.__str__ CedarBackup3.config.PurgeConfig-class.html#__str__ CedarBackup3.config.PurgeConfig.__lt__ CedarBackup3.config.PurgeConfig-class.html#__lt__ CedarBackup3.config.PurgeConfig.__init__ CedarBackup3.config.PurgeConfig-class.html#__init__ CedarBackup3.config.PurgeConfig.__cmp__ CedarBackup3.config.PurgeConfig-class.html#__cmp__ CedarBackup3.config.PurgeConfig.__le__ CedarBackup3.config.PurgeConfig-class.html#__le__ CedarBackup3.config.PurgeConfig.__gt__ CedarBackup3.config.PurgeConfig-class.html#__gt__ CedarBackup3.config.PurgeConfig.__eq__ CedarBackup3.config.PurgeConfig-class.html#__eq__ CedarBackup3.config.PurgeConfig._setPurgeDirs CedarBackup3.config.PurgeConfig-class.html#_setPurgeDirs CedarBackup3.config.PurgeConfig.purgeDirs CedarBackup3.config.PurgeConfig-class.html#purgeDirs CedarBackup3.config.PurgeConfig.__repr__ CedarBackup3.config.PurgeConfig-class.html#__repr__ CedarBackup3.config.PurgeConfig.__ge__ CedarBackup3.config.PurgeConfig-class.html#__ge__ CedarBackup3.config.PurgeConfig._getPurgeDirs CedarBackup3.config.PurgeConfig-class.html#_getPurgeDirs CedarBackup3.config.PurgeDir CedarBackup3.config.PurgeDir-class.html CedarBackup3.config.PurgeDir._getRetainDays CedarBackup3.config.PurgeDir-class.html#_getRetainDays CedarBackup3.config.PurgeDir.__str__ CedarBackup3.config.PurgeDir-class.html#__str__ CedarBackup3.config.PurgeDir._getAbsolutePath CedarBackup3.config.PurgeDir-class.html#_getAbsolutePath CedarBackup3.config.PurgeDir.__lt__ CedarBackup3.config.PurgeDir-class.html#__lt__ CedarBackup3.config.PurgeDir.__init__ CedarBackup3.config.PurgeDir-class.html#__init__ CedarBackup3.config.PurgeDir._setRetainDays CedarBackup3.config.PurgeDir-class.html#_setRetainDays 
CedarBackup3.config.PurgeDir.absolutePath CedarBackup3.config.PurgeDir-class.html#absolutePath CedarBackup3.config.PurgeDir.__cmp__ CedarBackup3.config.PurgeDir-class.html#__cmp__ CedarBackup3.config.PurgeDir.retainDays CedarBackup3.config.PurgeDir-class.html#retainDays CedarBackup3.config.PurgeDir.__gt__ CedarBackup3.config.PurgeDir-class.html#__gt__ CedarBackup3.config.PurgeDir.__eq__ CedarBackup3.config.PurgeDir-class.html#__eq__ CedarBackup3.config.PurgeDir._setAbsolutePath CedarBackup3.config.PurgeDir-class.html#_setAbsolutePath CedarBackup3.config.PurgeDir.__le__ CedarBackup3.config.PurgeDir-class.html#__le__ CedarBackup3.config.PurgeDir.__repr__ CedarBackup3.config.PurgeDir-class.html#__repr__ CedarBackup3.config.PurgeDir.__ge__ CedarBackup3.config.PurgeDir-class.html#__ge__ CedarBackup3.config.ReferenceConfig CedarBackup3.config.ReferenceConfig-class.html CedarBackup3.config.ReferenceConfig._setAuthor CedarBackup3.config.ReferenceConfig-class.html#_setAuthor CedarBackup3.config.ReferenceConfig.__str__ CedarBackup3.config.ReferenceConfig-class.html#__str__ CedarBackup3.config.ReferenceConfig.__lt__ CedarBackup3.config.ReferenceConfig-class.html#__lt__ CedarBackup3.config.ReferenceConfig.__init__ CedarBackup3.config.ReferenceConfig-class.html#__init__ CedarBackup3.config.ReferenceConfig.generator CedarBackup3.config.ReferenceConfig-class.html#generator CedarBackup3.config.ReferenceConfig.author CedarBackup3.config.ReferenceConfig-class.html#author CedarBackup3.config.ReferenceConfig._getGenerator CedarBackup3.config.ReferenceConfig-class.html#_getGenerator CedarBackup3.config.ReferenceConfig.__cmp__ CedarBackup3.config.ReferenceConfig-class.html#__cmp__ CedarBackup3.config.ReferenceConfig.revision CedarBackup3.config.ReferenceConfig-class.html#revision CedarBackup3.config.ReferenceConfig.description CedarBackup3.config.ReferenceConfig-class.html#description CedarBackup3.config.ReferenceConfig._setGenerator 
CedarBackup3.config.ReferenceConfig-class.html#_setGenerator CedarBackup3.config.ReferenceConfig.__gt__ CedarBackup3.config.ReferenceConfig-class.html#__gt__ CedarBackup3.config.ReferenceConfig._setDescription CedarBackup3.config.ReferenceConfig-class.html#_setDescription CedarBackup3.config.ReferenceConfig.__eq__ CedarBackup3.config.ReferenceConfig-class.html#__eq__ CedarBackup3.config.ReferenceConfig._setRevision CedarBackup3.config.ReferenceConfig-class.html#_setRevision CedarBackup3.config.ReferenceConfig._getRevision CedarBackup3.config.ReferenceConfig-class.html#_getRevision CedarBackup3.config.ReferenceConfig._getAuthor CedarBackup3.config.ReferenceConfig-class.html#_getAuthor CedarBackup3.config.ReferenceConfig._getDescription CedarBackup3.config.ReferenceConfig-class.html#_getDescription CedarBackup3.config.ReferenceConfig.__le__ CedarBackup3.config.ReferenceConfig-class.html#__le__ CedarBackup3.config.ReferenceConfig.__repr__ CedarBackup3.config.ReferenceConfig-class.html#__repr__ CedarBackup3.config.ReferenceConfig.__ge__ CedarBackup3.config.ReferenceConfig-class.html#__ge__ CedarBackup3.config.RemotePeer CedarBackup3.config.RemotePeer-class.html CedarBackup3.config.RemotePeer._getRcpCommand CedarBackup3.config.RemotePeer-class.html#_getRcpCommand CedarBackup3.config.RemotePeer.managed CedarBackup3.config.RemotePeer-class.html#managed CedarBackup3.config.RemotePeer.__str__ CedarBackup3.config.RemotePeer-class.html#__str__ CedarBackup3.config.RemotePeer.cbackCommand CedarBackup3.config.RemotePeer-class.html#cbackCommand CedarBackup3.config.RemotePeer._setIgnoreFailureMode CedarBackup3.config.RemotePeer-class.html#_setIgnoreFailureMode CedarBackup3.config.RemotePeer.managedActions CedarBackup3.config.RemotePeer-class.html#managedActions CedarBackup3.config.RemotePeer.__lt__ CedarBackup3.config.RemotePeer-class.html#__lt__ CedarBackup3.config.RemotePeer._getName CedarBackup3.config.RemotePeer-class.html#_getName CedarBackup3.config.RemotePeer.__init__ 
CedarBackup3.config.RemotePeer-class.html#__init__ CedarBackup3.config.RemotePeer._setCbackCommand CedarBackup3.config.RemotePeer-class.html#_setCbackCommand CedarBackup3.config.RemotePeer._getCbackCommand CedarBackup3.config.RemotePeer-class.html#_getCbackCommand CedarBackup3.config.RemotePeer.remoteUser CedarBackup3.config.RemotePeer-class.html#remoteUser CedarBackup3.config.RemotePeer.__cmp__ CedarBackup3.config.RemotePeer-class.html#__cmp__ CedarBackup3.config.RemotePeer.__eq__ CedarBackup3.config.RemotePeer-class.html#__eq__ CedarBackup3.config.RemotePeer._getIgnoreFailureMode CedarBackup3.config.RemotePeer-class.html#_getIgnoreFailureMode CedarBackup3.config.RemotePeer.name CedarBackup3.config.RemotePeer-class.html#name CedarBackup3.config.RemotePeer.ignoreFailureMode CedarBackup3.config.RemotePeer-class.html#ignoreFailureMode CedarBackup3.config.RemotePeer._setManagedActions CedarBackup3.config.RemotePeer-class.html#_setManagedActions CedarBackup3.config.RemotePeer.__gt__ CedarBackup3.config.RemotePeer-class.html#__gt__ CedarBackup3.config.RemotePeer.rcpCommand CedarBackup3.config.RemotePeer-class.html#rcpCommand CedarBackup3.config.RemotePeer.rshCommand CedarBackup3.config.RemotePeer-class.html#rshCommand CedarBackup3.config.RemotePeer._getManaged CedarBackup3.config.RemotePeer-class.html#_getManaged CedarBackup3.config.RemotePeer._getCollectDir CedarBackup3.config.RemotePeer-class.html#_getCollectDir CedarBackup3.config.RemotePeer._setManaged CedarBackup3.config.RemotePeer-class.html#_setManaged CedarBackup3.config.RemotePeer._setRemoteUser CedarBackup3.config.RemotePeer-class.html#_setRemoteUser CedarBackup3.config.RemotePeer._setRcpCommand CedarBackup3.config.RemotePeer-class.html#_setRcpCommand CedarBackup3.config.RemotePeer.collectDir CedarBackup3.config.RemotePeer-class.html#collectDir CedarBackup3.config.RemotePeer._setCollectDir CedarBackup3.config.RemotePeer-class.html#_setCollectDir CedarBackup3.config.RemotePeer._getManagedActions 
CedarBackup3.config.RemotePeer-class.html#_getManagedActions CedarBackup3.config.RemotePeer._getRemoteUser CedarBackup3.config.RemotePeer-class.html#_getRemoteUser CedarBackup3.config.RemotePeer.__le__ CedarBackup3.config.RemotePeer-class.html#__le__ CedarBackup3.config.RemotePeer.__repr__ CedarBackup3.config.RemotePeer-class.html#__repr__ CedarBackup3.config.RemotePeer._setName CedarBackup3.config.RemotePeer-class.html#_setName CedarBackup3.config.RemotePeer._getRshCommand CedarBackup3.config.RemotePeer-class.html#_getRshCommand CedarBackup3.config.RemotePeer._setRshCommand CedarBackup3.config.RemotePeer-class.html#_setRshCommand CedarBackup3.config.RemotePeer.__ge__ CedarBackup3.config.RemotePeer-class.html#__ge__ CedarBackup3.config.StageConfig CedarBackup3.config.StageConfig-class.html CedarBackup3.config.StageConfig.__str__ CedarBackup3.config.StageConfig-class.html#__str__ CedarBackup3.config.StageConfig._setLocalPeers CedarBackup3.config.StageConfig-class.html#_setLocalPeers CedarBackup3.config.StageConfig._getRemotePeers CedarBackup3.config.StageConfig-class.html#_getRemotePeers CedarBackup3.config.StageConfig.localPeers CedarBackup3.config.StageConfig-class.html#localPeers CedarBackup3.config.StageConfig.__lt__ CedarBackup3.config.StageConfig-class.html#__lt__ CedarBackup3.config.StageConfig.__init__ CedarBackup3.config.StageConfig-class.html#__init__ CedarBackup3.config.StageConfig.hasPeers CedarBackup3.config.StageConfig-class.html#hasPeers CedarBackup3.config.StageConfig._setRemotePeers CedarBackup3.config.StageConfig-class.html#_setRemotePeers CedarBackup3.config.StageConfig._getTargetDir CedarBackup3.config.StageConfig-class.html#_getTargetDir CedarBackup3.config.StageConfig.__cmp__ CedarBackup3.config.StageConfig-class.html#__cmp__ CedarBackup3.config.StageConfig._getLocalPeers CedarBackup3.config.StageConfig-class.html#_getLocalPeers CedarBackup3.config.StageConfig.__gt__ CedarBackup3.config.StageConfig-class.html#__gt__ 
CedarBackup3.config.StageConfig.__eq__ CedarBackup3.config.StageConfig-class.html#__eq__ CedarBackup3.config.StageConfig.remotePeers CedarBackup3.config.StageConfig-class.html#remotePeers CedarBackup3.config.StageConfig.targetDir CedarBackup3.config.StageConfig-class.html#targetDir CedarBackup3.config.StageConfig.__le__ CedarBackup3.config.StageConfig-class.html#__le__ CedarBackup3.config.StageConfig.__repr__ CedarBackup3.config.StageConfig-class.html#__repr__ CedarBackup3.config.StageConfig._setTargetDir CedarBackup3.config.StageConfig-class.html#_setTargetDir CedarBackup3.config.StageConfig.__ge__ CedarBackup3.config.StageConfig-class.html#__ge__ CedarBackup3.config.StoreConfig CedarBackup3.config.StoreConfig-class.html CedarBackup3.config.StoreConfig.__str__ CedarBackup3.config.StoreConfig-class.html#__str__ CedarBackup3.config.StoreConfig._setEjectDelay CedarBackup3.config.StoreConfig-class.html#_setEjectDelay CedarBackup3.config.StoreConfig._getDevicePath CedarBackup3.config.StoreConfig-class.html#_getDevicePath CedarBackup3.config.StoreConfig._setDeviceScsiId CedarBackup3.config.StoreConfig-class.html#_setDeviceScsiId CedarBackup3.config.StoreConfig._setDevicePath CedarBackup3.config.StoreConfig-class.html#_setDevicePath CedarBackup3.config.StoreConfig._getDeviceScsiId CedarBackup3.config.StoreConfig-class.html#_getDeviceScsiId CedarBackup3.config.StoreConfig._setSourceDir CedarBackup3.config.StoreConfig-class.html#_setSourceDir CedarBackup3.config.StoreConfig.__lt__ CedarBackup3.config.StoreConfig-class.html#__lt__ CedarBackup3.config.StoreConfig.__init__ CedarBackup3.config.StoreConfig-class.html#__init__ CedarBackup3.config.StoreConfig.refreshMediaDelay CedarBackup3.config.StoreConfig-class.html#refreshMediaDelay CedarBackup3.config.StoreConfig.sourceDir CedarBackup3.config.StoreConfig-class.html#sourceDir CedarBackup3.config.StoreConfig._getCheckMedia CedarBackup3.config.StoreConfig-class.html#_getCheckMedia CedarBackup3.config.StoreConfig.mediaType 
CedarBackup3.config.StoreConfig-class.html#mediaType CedarBackup3.config.StoreConfig.__cmp__ CedarBackup3.config.StoreConfig-class.html#__cmp__ CedarBackup3.config.StoreConfig._setNoEject CedarBackup3.config.StoreConfig-class.html#_setNoEject CedarBackup3.config.StoreConfig.warnMidnite CedarBackup3.config.StoreConfig-class.html#warnMidnite CedarBackup3.config.StoreConfig.deviceType CedarBackup3.config.StoreConfig-class.html#deviceType CedarBackup3.config.StoreConfig.devicePath CedarBackup3.config.StoreConfig-class.html#devicePath CedarBackup3.config.StoreConfig.driveSpeed CedarBackup3.config.StoreConfig-class.html#driveSpeed CedarBackup3.config.StoreConfig._getMediaType CedarBackup3.config.StoreConfig-class.html#_getMediaType CedarBackup3.config.StoreConfig._getDeviceType CedarBackup3.config.StoreConfig-class.html#_getDeviceType CedarBackup3.config.StoreConfig.noEject CedarBackup3.config.StoreConfig-class.html#noEject CedarBackup3.config.StoreConfig._getBlankBehavior CedarBackup3.config.StoreConfig-class.html#_getBlankBehavior CedarBackup3.config.StoreConfig._getWarnMidnite CedarBackup3.config.StoreConfig-class.html#_getWarnMidnite CedarBackup3.config.StoreConfig._setMediaType CedarBackup3.config.StoreConfig-class.html#_setMediaType CedarBackup3.config.StoreConfig.deviceScsiId CedarBackup3.config.StoreConfig-class.html#deviceScsiId CedarBackup3.config.StoreConfig.blankBehavior CedarBackup3.config.StoreConfig-class.html#blankBehavior CedarBackup3.config.StoreConfig._getDriveSpeed CedarBackup3.config.StoreConfig-class.html#_getDriveSpeed CedarBackup3.config.StoreConfig._setCheckData CedarBackup3.config.StoreConfig-class.html#_setCheckData CedarBackup3.config.StoreConfig._setRefreshMediaDelay CedarBackup3.config.StoreConfig-class.html#_setRefreshMediaDelay CedarBackup3.config.StoreConfig.__gt__ CedarBackup3.config.StoreConfig-class.html#__gt__ CedarBackup3.config.StoreConfig.checkData CedarBackup3.config.StoreConfig-class.html#checkData 
CedarBackup3.config.StoreConfig._setDriveSpeed CedarBackup3.config.StoreConfig-class.html#_setDriveSpeed CedarBackup3.config.StoreConfig.__eq__ CedarBackup3.config.StoreConfig-class.html#__eq__ CedarBackup3.config.StoreConfig._setDeviceType CedarBackup3.config.StoreConfig-class.html#_setDeviceType CedarBackup3.config.StoreConfig.checkMedia CedarBackup3.config.StoreConfig-class.html#checkMedia CedarBackup3.config.StoreConfig._getEjectDelay CedarBackup3.config.StoreConfig-class.html#_getEjectDelay CedarBackup3.config.StoreConfig._getRefreshMediaDelay CedarBackup3.config.StoreConfig-class.html#_getRefreshMediaDelay CedarBackup3.config.StoreConfig._getNoEject CedarBackup3.config.StoreConfig-class.html#_getNoEject CedarBackup3.config.StoreConfig._getSourceDir CedarBackup3.config.StoreConfig-class.html#_getSourceDir CedarBackup3.config.StoreConfig._setCheckMedia CedarBackup3.config.StoreConfig-class.html#_setCheckMedia CedarBackup3.config.StoreConfig.__le__ CedarBackup3.config.StoreConfig-class.html#__le__ CedarBackup3.config.StoreConfig.__repr__ CedarBackup3.config.StoreConfig-class.html#__repr__ CedarBackup3.config.StoreConfig._setWarnMidnite CedarBackup3.config.StoreConfig-class.html#_setWarnMidnite CedarBackup3.config.StoreConfig.ejectDelay CedarBackup3.config.StoreConfig-class.html#ejectDelay CedarBackup3.config.StoreConfig._setBlankBehavior CedarBackup3.config.StoreConfig-class.html#_setBlankBehavior CedarBackup3.config.StoreConfig._getCheckData CedarBackup3.config.StoreConfig-class.html#_getCheckData CedarBackup3.config.StoreConfig.__ge__ CedarBackup3.config.StoreConfig-class.html#__ge__ CedarBackup3.extend.amazons3.AmazonS3Config CedarBackup3.extend.amazons3.AmazonS3Config-class.html CedarBackup3.extend.amazons3.AmazonS3Config.__str__ CedarBackup3.extend.amazons3.AmazonS3Config-class.html#__str__ CedarBackup3.extend.amazons3.AmazonS3Config.encryptCommand CedarBackup3.extend.amazons3.AmazonS3Config-class.html#encryptCommand 
CedarBackup3.extend.amazons3.AmazonS3Config._getS3Bucket CedarBackup3.extend.amazons3.AmazonS3Config-class.html#_getS3Bucket CedarBackup3.extend.amazons3.AmazonS3Config._setIncrementalBackupSizeLimit CedarBackup3.extend.amazons3.AmazonS3Config-class.html#_setIncrementalBackupSizeLimit CedarBackup3.extend.amazons3.AmazonS3Config.__lt__ CedarBackup3.extend.amazons3.AmazonS3Config-class.html#__lt__ CedarBackup3.extend.amazons3.AmazonS3Config._getFullBackupSizeLimit CedarBackup3.extend.amazons3.AmazonS3Config-class.html#_getFullBackupSizeLimit CedarBackup3.extend.amazons3.AmazonS3Config.__init__ CedarBackup3.extend.amazons3.AmazonS3Config-class.html#__init__ CedarBackup3.extend.amazons3.AmazonS3Config._getEncryptCommand CedarBackup3.extend.amazons3.AmazonS3Config-class.html#_getEncryptCommand CedarBackup3.extend.amazons3.AmazonS3Config.__cmp__ CedarBackup3.extend.amazons3.AmazonS3Config-class.html#__cmp__ CedarBackup3.extend.amazons3.AmazonS3Config.s3Bucket CedarBackup3.extend.amazons3.AmazonS3Config-class.html#s3Bucket CedarBackup3.extend.amazons3.AmazonS3Config.warnMidnite CedarBackup3.extend.amazons3.AmazonS3Config-class.html#warnMidnite CedarBackup3.extend.amazons3.AmazonS3Config._setWarnMidnite CedarBackup3.extend.amazons3.AmazonS3Config-class.html#_setWarnMidnite CedarBackup3.extend.amazons3.AmazonS3Config._getWarnMidnite CedarBackup3.extend.amazons3.AmazonS3Config-class.html#_getWarnMidnite CedarBackup3.extend.amazons3.AmazonS3Config.__gt__ CedarBackup3.extend.amazons3.AmazonS3Config-class.html#__gt__ CedarBackup3.extend.amazons3.AmazonS3Config.__eq__ CedarBackup3.extend.amazons3.AmazonS3Config-class.html#__eq__ CedarBackup3.extend.amazons3.AmazonS3Config._getIncrementalBackupSizeLimit CedarBackup3.extend.amazons3.AmazonS3Config-class.html#_getIncrementalBackupSizeLimit CedarBackup3.extend.amazons3.AmazonS3Config._setEncryptCommand CedarBackup3.extend.amazons3.AmazonS3Config-class.html#_setEncryptCommand CedarBackup3.extend.amazons3.AmazonS3Config._setS3Bucket 
CedarBackup3.extend.amazons3.AmazonS3Config-class.html#_setS3Bucket CedarBackup3.extend.amazons3.AmazonS3Config.incrementalBackupSizeLimit CedarBackup3.extend.amazons3.AmazonS3Config-class.html#incrementalBackupSizeLimit CedarBackup3.extend.amazons3.AmazonS3Config.__le__ CedarBackup3.extend.amazons3.AmazonS3Config-class.html#__le__ CedarBackup3.extend.amazons3.AmazonS3Config.fullBackupSizeLimit CedarBackup3.extend.amazons3.AmazonS3Config-class.html#fullBackupSizeLimit CedarBackup3.extend.amazons3.AmazonS3Config.__repr__ CedarBackup3.extend.amazons3.AmazonS3Config-class.html#__repr__ CedarBackup3.extend.amazons3.AmazonS3Config._setFullBackupSizeLimit CedarBackup3.extend.amazons3.AmazonS3Config-class.html#_setFullBackupSizeLimit CedarBackup3.extend.amazons3.AmazonS3Config.__ge__ CedarBackup3.extend.amazons3.AmazonS3Config-class.html#__ge__ CedarBackup3.extend.amazons3.LocalConfig CedarBackup3.extend.amazons3.LocalConfig-class.html CedarBackup3.extend.amazons3.LocalConfig.__str__ CedarBackup3.extend.amazons3.LocalConfig-class.html#__str__ CedarBackup3.extend.amazons3.LocalConfig._parseXmlData CedarBackup3.extend.amazons3.LocalConfig-class.html#_parseXmlData CedarBackup3.extend.amazons3.LocalConfig.__lt__ CedarBackup3.extend.amazons3.LocalConfig-class.html#__lt__ CedarBackup3.extend.amazons3.LocalConfig.__init__ CedarBackup3.extend.amazons3.LocalConfig-class.html#__init__ CedarBackup3.extend.amazons3.LocalConfig.__cmp__ CedarBackup3.extend.amazons3.LocalConfig-class.html#__cmp__ CedarBackup3.extend.amazons3.LocalConfig._getAmazonS3 CedarBackup3.extend.amazons3.LocalConfig-class.html#_getAmazonS3 CedarBackup3.extend.amazons3.LocalConfig._parseAmazonS3 CedarBackup3.extend.amazons3.LocalConfig-class.html#_parseAmazonS3 CedarBackup3.extend.amazons3.LocalConfig.addConfig CedarBackup3.extend.amazons3.LocalConfig-class.html#addConfig CedarBackup3.extend.amazons3.LocalConfig.amazons3 CedarBackup3.extend.amazons3.LocalConfig-class.html#amazons3 
CedarBackup3.extend.amazons3.LocalConfig.__gt__ CedarBackup3.extend.amazons3.LocalConfig-class.html#__gt__ CedarBackup3.extend.amazons3.LocalConfig.validate CedarBackup3.extend.amazons3.LocalConfig-class.html#validate CedarBackup3.extend.amazons3.LocalConfig.__eq__ CedarBackup3.extend.amazons3.LocalConfig-class.html#__eq__ CedarBackup3.extend.amazons3.LocalConfig._setAmazonS3 CedarBackup3.extend.amazons3.LocalConfig-class.html#_setAmazonS3 CedarBackup3.extend.amazons3.LocalConfig.__le__ CedarBackup3.extend.amazons3.LocalConfig-class.html#__le__ CedarBackup3.extend.amazons3.LocalConfig.__repr__ CedarBackup3.extend.amazons3.LocalConfig-class.html#__repr__ CedarBackup3.extend.amazons3.LocalConfig.__ge__ CedarBackup3.extend.amazons3.LocalConfig-class.html#__ge__ CedarBackup3.extend.capacity.CapacityConfig CedarBackup3.extend.capacity.CapacityConfig-class.html CedarBackup3.extend.capacity.CapacityConfig.__str__ CedarBackup3.extend.capacity.CapacityConfig-class.html#__str__ CedarBackup3.extend.capacity.CapacityConfig._getMaxPercentage CedarBackup3.extend.capacity.CapacityConfig-class.html#_getMaxPercentage CedarBackup3.extend.capacity.CapacityConfig.maxPercentage CedarBackup3.extend.capacity.CapacityConfig-class.html#maxPercentage CedarBackup3.extend.capacity.CapacityConfig._setMinBytes CedarBackup3.extend.capacity.CapacityConfig-class.html#_setMinBytes CedarBackup3.extend.capacity.CapacityConfig.__lt__ CedarBackup3.extend.capacity.CapacityConfig-class.html#__lt__ CedarBackup3.extend.capacity.CapacityConfig.__init__ CedarBackup3.extend.capacity.CapacityConfig-class.html#__init__ CedarBackup3.extend.capacity.CapacityConfig._setMaxPercentage CedarBackup3.extend.capacity.CapacityConfig-class.html#_setMaxPercentage CedarBackup3.extend.capacity.CapacityConfig.__eq__ CedarBackup3.extend.capacity.CapacityConfig-class.html#__eq__ CedarBackup3.extend.capacity.CapacityConfig.__cmp__ CedarBackup3.extend.capacity.CapacityConfig-class.html#__cmp__ 
CedarBackup3.extend.capacity.CapacityConfig.__gt__ CedarBackup3.extend.capacity.CapacityConfig-class.html#__gt__ CedarBackup3.extend.capacity.CapacityConfig._getMinBytes CedarBackup3.extend.capacity.CapacityConfig-class.html#_getMinBytes CedarBackup3.extend.capacity.CapacityConfig.minBytes CedarBackup3.extend.capacity.CapacityConfig-class.html#minBytes CedarBackup3.extend.capacity.CapacityConfig.__le__ CedarBackup3.extend.capacity.CapacityConfig-class.html#__le__ CedarBackup3.extend.capacity.CapacityConfig.__repr__ CedarBackup3.extend.capacity.CapacityConfig-class.html#__repr__ CedarBackup3.extend.capacity.CapacityConfig.__ge__ CedarBackup3.extend.capacity.CapacityConfig-class.html#__ge__ CedarBackup3.extend.capacity.LocalConfig CedarBackup3.extend.capacity.LocalConfig-class.html CedarBackup3.extend.capacity.LocalConfig.__str__ CedarBackup3.extend.capacity.LocalConfig-class.html#__str__ CedarBackup3.extend.capacity.LocalConfig._addPercentageQuantity CedarBackup3.extend.capacity.LocalConfig-class.html#_addPercentageQuantity CedarBackup3.extend.capacity.LocalConfig._parseXmlData CedarBackup3.extend.capacity.LocalConfig-class.html#_parseXmlData CedarBackup3.extend.capacity.LocalConfig.__lt__ CedarBackup3.extend.capacity.LocalConfig-class.html#__lt__ CedarBackup3.extend.capacity.LocalConfig.__init__ CedarBackup3.extend.capacity.LocalConfig-class.html#__init__ CedarBackup3.extend.capacity.LocalConfig.capacity CedarBackup3.extend.capacity.LocalConfig-class.html#capacity CedarBackup3.extend.capacity.LocalConfig.__cmp__ CedarBackup3.extend.capacity.LocalConfig-class.html#__cmp__ CedarBackup3.extend.capacity.LocalConfig._readPercentageQuantity CedarBackup3.extend.capacity.LocalConfig-class.html#_readPercentageQuantity CedarBackup3.extend.capacity.LocalConfig._parseCapacity CedarBackup3.extend.capacity.LocalConfig-class.html#_parseCapacity CedarBackup3.extend.capacity.LocalConfig.__repr__ CedarBackup3.extend.capacity.LocalConfig-class.html#__repr__ 
CedarBackup3.extend.capacity.LocalConfig.addConfig CedarBackup3.extend.capacity.LocalConfig-class.html#addConfig CedarBackup3.extend.capacity.LocalConfig.__gt__ CedarBackup3.extend.capacity.LocalConfig-class.html#__gt__ CedarBackup3.extend.capacity.LocalConfig.validate CedarBackup3.extend.capacity.LocalConfig-class.html#validate CedarBackup3.extend.capacity.LocalConfig.__eq__ CedarBackup3.extend.capacity.LocalConfig-class.html#__eq__ CedarBackup3.extend.capacity.LocalConfig.__le__ CedarBackup3.extend.capacity.LocalConfig-class.html#__le__ CedarBackup3.extend.capacity.LocalConfig._getCapacity CedarBackup3.extend.capacity.LocalConfig-class.html#_getCapacity CedarBackup3.extend.capacity.LocalConfig._setCapacity CedarBackup3.extend.capacity.LocalConfig-class.html#_setCapacity CedarBackup3.extend.capacity.LocalConfig.__ge__ CedarBackup3.extend.capacity.LocalConfig-class.html#__ge__ CedarBackup3.extend.capacity.PercentageQuantity CedarBackup3.extend.capacity.PercentageQuantity-class.html CedarBackup3.extend.capacity.PercentageQuantity._setQuantity CedarBackup3.extend.capacity.PercentageQuantity-class.html#_setQuantity CedarBackup3.extend.capacity.PercentageQuantity.__str__ CedarBackup3.extend.capacity.PercentageQuantity-class.html#__str__ CedarBackup3.extend.capacity.PercentageQuantity.__lt__ CedarBackup3.extend.capacity.PercentageQuantity-class.html#__lt__ CedarBackup3.extend.capacity.PercentageQuantity.__init__ CedarBackup3.extend.capacity.PercentageQuantity-class.html#__init__ CedarBackup3.extend.capacity.PercentageQuantity._getPercentage CedarBackup3.extend.capacity.PercentageQuantity-class.html#_getPercentage CedarBackup3.extend.capacity.PercentageQuantity.__cmp__ CedarBackup3.extend.capacity.PercentageQuantity-class.html#__cmp__ CedarBackup3.extend.capacity.PercentageQuantity._getQuantity CedarBackup3.extend.capacity.PercentageQuantity-class.html#_getQuantity CedarBackup3.extend.capacity.PercentageQuantity.percentage 
CedarBackup3.extend.capacity.PercentageQuantity-class.html#percentage CedarBackup3.extend.capacity.PercentageQuantity.__gt__ CedarBackup3.extend.capacity.PercentageQuantity-class.html#__gt__ CedarBackup3.extend.capacity.PercentageQuantity.__eq__ CedarBackup3.extend.capacity.PercentageQuantity-class.html#__eq__ CedarBackup3.extend.capacity.PercentageQuantity.__le__ CedarBackup3.extend.capacity.PercentageQuantity-class.html#__le__ CedarBackup3.extend.capacity.PercentageQuantity.__repr__ CedarBackup3.extend.capacity.PercentageQuantity-class.html#__repr__ CedarBackup3.extend.capacity.PercentageQuantity.__ge__ CedarBackup3.extend.capacity.PercentageQuantity-class.html#__ge__ CedarBackup3.extend.capacity.PercentageQuantity.quantity CedarBackup3.extend.capacity.PercentageQuantity-class.html#quantity CedarBackup3.extend.encrypt.EncryptConfig CedarBackup3.extend.encrypt.EncryptConfig-class.html CedarBackup3.extend.encrypt.EncryptConfig.__str__ CedarBackup3.extend.encrypt.EncryptConfig-class.html#__str__ CedarBackup3.extend.encrypt.EncryptConfig.__lt__ CedarBackup3.extend.encrypt.EncryptConfig-class.html#__lt__ CedarBackup3.extend.encrypt.EncryptConfig.__init__ CedarBackup3.extend.encrypt.EncryptConfig-class.html#__init__ CedarBackup3.extend.encrypt.EncryptConfig.__cmp__ CedarBackup3.extend.encrypt.EncryptConfig-class.html#__cmp__ CedarBackup3.extend.encrypt.EncryptConfig._getEncryptTarget CedarBackup3.extend.encrypt.EncryptConfig-class.html#_getEncryptTarget CedarBackup3.extend.encrypt.EncryptConfig.__repr__ CedarBackup3.extend.encrypt.EncryptConfig-class.html#__repr__ CedarBackup3.extend.encrypt.EncryptConfig._getEncryptMode CedarBackup3.extend.encrypt.EncryptConfig-class.html#_getEncryptMode CedarBackup3.extend.encrypt.EncryptConfig.__gt__ CedarBackup3.extend.encrypt.EncryptConfig-class.html#__gt__ CedarBackup3.extend.encrypt.EncryptConfig.__eq__ CedarBackup3.extend.encrypt.EncryptConfig-class.html#__eq__ CedarBackup3.extend.encrypt.EncryptConfig.encryptMode 
CedarBackup3.extend.encrypt.EncryptConfig-class.html#encryptMode CedarBackup3.extend.encrypt.EncryptConfig._setEncryptTarget CedarBackup3.extend.encrypt.EncryptConfig-class.html#_setEncryptTarget CedarBackup3.extend.encrypt.EncryptConfig.__le__ CedarBackup3.extend.encrypt.EncryptConfig-class.html#__le__ CedarBackup3.extend.encrypt.EncryptConfig.__ge__ CedarBackup3.extend.encrypt.EncryptConfig-class.html#__ge__ CedarBackup3.extend.encrypt.EncryptConfig.encryptTarget CedarBackup3.extend.encrypt.EncryptConfig-class.html#encryptTarget CedarBackup3.extend.encrypt.EncryptConfig._setEncryptMode CedarBackup3.extend.encrypt.EncryptConfig-class.html#_setEncryptMode CedarBackup3.extend.encrypt.LocalConfig CedarBackup3.extend.encrypt.LocalConfig-class.html CedarBackup3.extend.encrypt.LocalConfig.__str__ CedarBackup3.extend.encrypt.LocalConfig-class.html#__str__ CedarBackup3.extend.encrypt.LocalConfig._parseXmlData CedarBackup3.extend.encrypt.LocalConfig-class.html#_parseXmlData CedarBackup3.extend.encrypt.LocalConfig.__lt__ CedarBackup3.extend.encrypt.LocalConfig-class.html#__lt__ CedarBackup3.extend.encrypt.LocalConfig.__init__ CedarBackup3.extend.encrypt.LocalConfig-class.html#__init__ CedarBackup3.extend.encrypt.LocalConfig._parseEncrypt CedarBackup3.extend.encrypt.LocalConfig-class.html#_parseEncrypt CedarBackup3.extend.encrypt.LocalConfig.encrypt CedarBackup3.extend.encrypt.LocalConfig-class.html#encrypt CedarBackup3.extend.encrypt.LocalConfig._getEncrypt CedarBackup3.extend.encrypt.LocalConfig-class.html#_getEncrypt CedarBackup3.extend.encrypt.LocalConfig.__cmp__ CedarBackup3.extend.encrypt.LocalConfig-class.html#__cmp__ CedarBackup3.extend.encrypt.LocalConfig.addConfig CedarBackup3.extend.encrypt.LocalConfig-class.html#addConfig CedarBackup3.extend.encrypt.LocalConfig.__gt__ CedarBackup3.extend.encrypt.LocalConfig-class.html#__gt__ CedarBackup3.extend.encrypt.LocalConfig.validate CedarBackup3.extend.encrypt.LocalConfig-class.html#validate 
CedarBackup3.extend.encrypt.LocalConfig.__eq__ CedarBackup3.extend.encrypt.LocalConfig-class.html#__eq__ CedarBackup3.extend.encrypt.LocalConfig._setEncrypt CedarBackup3.extend.encrypt.LocalConfig-class.html#_setEncrypt CedarBackup3.extend.encrypt.LocalConfig.__le__ CedarBackup3.extend.encrypt.LocalConfig-class.html#__le__ CedarBackup3.extend.encrypt.LocalConfig.__repr__ CedarBackup3.extend.encrypt.LocalConfig-class.html#__repr__ CedarBackup3.extend.encrypt.LocalConfig.__ge__ CedarBackup3.extend.encrypt.LocalConfig-class.html#__ge__ CedarBackup3.extend.mbox.LocalConfig CedarBackup3.extend.mbox.LocalConfig-class.html CedarBackup3.extend.mbox.LocalConfig.__str__ CedarBackup3.extend.mbox.LocalConfig-class.html#__str__ CedarBackup3.extend.mbox.LocalConfig._parseXmlData CedarBackup3.extend.mbox.LocalConfig-class.html#_parseXmlData CedarBackup3.extend.mbox.LocalConfig.__lt__ CedarBackup3.extend.mbox.LocalConfig-class.html#__lt__ CedarBackup3.extend.mbox.LocalConfig.__init__ CedarBackup3.extend.mbox.LocalConfig-class.html#__init__ CedarBackup3.extend.mbox.LocalConfig.__cmp__ CedarBackup3.extend.mbox.LocalConfig-class.html#__cmp__ CedarBackup3.extend.mbox.LocalConfig.__repr__ CedarBackup3.extend.mbox.LocalConfig-class.html#__repr__ CedarBackup3.extend.mbox.LocalConfig.addConfig CedarBackup3.extend.mbox.LocalConfig-class.html#addConfig CedarBackup3.extend.mbox.LocalConfig.__gt__ CedarBackup3.extend.mbox.LocalConfig-class.html#__gt__ CedarBackup3.extend.mbox.LocalConfig.validate CedarBackup3.extend.mbox.LocalConfig-class.html#validate CedarBackup3.extend.mbox.LocalConfig.__eq__ CedarBackup3.extend.mbox.LocalConfig-class.html#__eq__ CedarBackup3.extend.mbox.LocalConfig._addMboxDir CedarBackup3.extend.mbox.LocalConfig-class.html#_addMboxDir CedarBackup3.extend.mbox.LocalConfig._parseMboxFiles CedarBackup3.extend.mbox.LocalConfig-class.html#_parseMboxFiles CedarBackup3.extend.mbox.LocalConfig._getMbox CedarBackup3.extend.mbox.LocalConfig-class.html#_getMbox 
CedarBackup3.extend.mbox.LocalConfig._addMboxFile CedarBackup3.extend.mbox.LocalConfig-class.html#_addMboxFile CedarBackup3.extend.mbox.LocalConfig._parseExclusions CedarBackup3.extend.mbox.LocalConfig-class.html#_parseExclusions CedarBackup3.extend.mbox.LocalConfig._setMbox CedarBackup3.extend.mbox.LocalConfig-class.html#_setMbox CedarBackup3.extend.mbox.LocalConfig._parseMbox CedarBackup3.extend.mbox.LocalConfig-class.html#_parseMbox CedarBackup3.extend.mbox.LocalConfig.__le__ CedarBackup3.extend.mbox.LocalConfig-class.html#__le__ CedarBackup3.extend.mbox.LocalConfig.__ge__ CedarBackup3.extend.mbox.LocalConfig-class.html#__ge__ CedarBackup3.extend.mbox.LocalConfig.mbox CedarBackup3.extend.mbox.LocalConfig-class.html#mbox CedarBackup3.extend.mbox.LocalConfig._parseMboxDirs CedarBackup3.extend.mbox.LocalConfig-class.html#_parseMboxDirs CedarBackup3.extend.mbox.MboxConfig CedarBackup3.extend.mbox.MboxConfig-class.html CedarBackup3.extend.mbox.MboxConfig._getCollectMode CedarBackup3.extend.mbox.MboxConfig-class.html#_getCollectMode CedarBackup3.extend.mbox.MboxConfig.mboxFiles CedarBackup3.extend.mbox.MboxConfig-class.html#mboxFiles CedarBackup3.extend.mbox.MboxConfig.__str__ CedarBackup3.extend.mbox.MboxConfig-class.html#__str__ CedarBackup3.extend.mbox.MboxConfig.__lt__ CedarBackup3.extend.mbox.MboxConfig-class.html#__lt__ CedarBackup3.extend.mbox.MboxConfig.__init__ CedarBackup3.extend.mbox.MboxConfig-class.html#__init__ CedarBackup3.extend.mbox.MboxConfig._setCollectMode CedarBackup3.extend.mbox.MboxConfig-class.html#_setCollectMode CedarBackup3.extend.mbox.MboxConfig._getMboxFiles CedarBackup3.extend.mbox.MboxConfig-class.html#_getMboxFiles CedarBackup3.extend.mbox.MboxConfig.__cmp__ CedarBackup3.extend.mbox.MboxConfig-class.html#__cmp__ CedarBackup3.extend.mbox.MboxConfig._setMboxFiles CedarBackup3.extend.mbox.MboxConfig-class.html#_setMboxFiles CedarBackup3.extend.mbox.MboxConfig.compressMode CedarBackup3.extend.mbox.MboxConfig-class.html#compressMode 
CedarBackup3.extend.mbox.MboxConfig._getMboxDirs CedarBackup3.extend.mbox.MboxConfig-class.html#_getMboxDirs CedarBackup3.extend.mbox.MboxConfig.__gt__ CedarBackup3.extend.mbox.MboxConfig-class.html#__gt__ CedarBackup3.extend.mbox.MboxConfig._setCompressMode CedarBackup3.extend.mbox.MboxConfig-class.html#_setCompressMode CedarBackup3.extend.mbox.MboxConfig.__eq__ CedarBackup3.extend.mbox.MboxConfig-class.html#__eq__ CedarBackup3.extend.mbox.MboxConfig._setMboxDirs CedarBackup3.extend.mbox.MboxConfig-class.html#_setMboxDirs CedarBackup3.extend.mbox.MboxConfig.mboxDirs CedarBackup3.extend.mbox.MboxConfig-class.html#mboxDirs CedarBackup3.extend.mbox.MboxConfig.collectMode CedarBackup3.extend.mbox.MboxConfig-class.html#collectMode CedarBackup3.extend.mbox.MboxConfig._getCompressMode CedarBackup3.extend.mbox.MboxConfig-class.html#_getCompressMode CedarBackup3.extend.mbox.MboxConfig.__le__ CedarBackup3.extend.mbox.MboxConfig-class.html#__le__ CedarBackup3.extend.mbox.MboxConfig.__repr__ CedarBackup3.extend.mbox.MboxConfig-class.html#__repr__ CedarBackup3.extend.mbox.MboxConfig.__ge__ CedarBackup3.extend.mbox.MboxConfig-class.html#__ge__ CedarBackup3.extend.mbox.MboxDir CedarBackup3.extend.mbox.MboxDir-class.html CedarBackup3.extend.mbox.MboxDir._getCollectMode CedarBackup3.extend.mbox.MboxDir-class.html#_getCollectMode CedarBackup3.extend.mbox.MboxDir.excludePatterns CedarBackup3.extend.mbox.MboxDir-class.html#excludePatterns CedarBackup3.extend.mbox.MboxDir.__str__ CedarBackup3.extend.mbox.MboxDir-class.html#__str__ CedarBackup3.extend.mbox.MboxDir._getAbsolutePath CedarBackup3.extend.mbox.MboxDir-class.html#_getAbsolutePath CedarBackup3.extend.mbox.MboxDir._setExcludePatterns CedarBackup3.extend.mbox.MboxDir-class.html#_setExcludePatterns CedarBackup3.extend.mbox.MboxDir.__lt__ CedarBackup3.extend.mbox.MboxDir-class.html#__lt__ CedarBackup3.extend.mbox.MboxDir.__init__ CedarBackup3.extend.mbox.MboxDir-class.html#__init__ CedarBackup3.extend.mbox.MboxDir._setCollectMode 
CedarBackup3.extend.mbox.MboxDir-class.html#_setCollectMode CedarBackup3.extend.mbox.MboxDir.absolutePath CedarBackup3.extend.mbox.MboxDir-class.html#absolutePath CedarBackup3.extend.mbox.MboxDir.__cmp__ CedarBackup3.extend.mbox.MboxDir-class.html#__cmp__ CedarBackup3.extend.mbox.MboxDir.relativeExcludePaths CedarBackup3.extend.mbox.MboxDir-class.html#relativeExcludePaths CedarBackup3.extend.mbox.MboxDir.compressMode CedarBackup3.extend.mbox.MboxDir-class.html#compressMode CedarBackup3.extend.mbox.MboxDir._getRelativeExcludePaths CedarBackup3.extend.mbox.MboxDir-class.html#_getRelativeExcludePaths CedarBackup3.extend.mbox.MboxDir.__gt__ CedarBackup3.extend.mbox.MboxDir-class.html#__gt__ CedarBackup3.extend.mbox.MboxDir._setCompressMode CedarBackup3.extend.mbox.MboxDir-class.html#_setCompressMode CedarBackup3.extend.mbox.MboxDir._setRelativeExcludePaths CedarBackup3.extend.mbox.MboxDir-class.html#_setRelativeExcludePaths CedarBackup3.extend.mbox.MboxDir.__eq__ CedarBackup3.extend.mbox.MboxDir-class.html#__eq__ CedarBackup3.extend.mbox.MboxDir.collectMode CedarBackup3.extend.mbox.MboxDir-class.html#collectMode CedarBackup3.extend.mbox.MboxDir._getExcludePatterns CedarBackup3.extend.mbox.MboxDir-class.html#_getExcludePatterns CedarBackup3.extend.mbox.MboxDir._getCompressMode CedarBackup3.extend.mbox.MboxDir-class.html#_getCompressMode CedarBackup3.extend.mbox.MboxDir._setAbsolutePath CedarBackup3.extend.mbox.MboxDir-class.html#_setAbsolutePath CedarBackup3.extend.mbox.MboxDir.__le__ CedarBackup3.extend.mbox.MboxDir-class.html#__le__ CedarBackup3.extend.mbox.MboxDir.__repr__ CedarBackup3.extend.mbox.MboxDir-class.html#__repr__ CedarBackup3.extend.mbox.MboxDir.__ge__ CedarBackup3.extend.mbox.MboxDir-class.html#__ge__ CedarBackup3.extend.mbox.MboxFile CedarBackup3.extend.mbox.MboxFile-class.html CedarBackup3.extend.mbox.MboxFile._getCollectMode CedarBackup3.extend.mbox.MboxFile-class.html#_getCollectMode CedarBackup3.extend.mbox.MboxFile.__str__ 
CedarBackup3.extend.mbox.MboxFile-class.html#__str__ CedarBackup3.extend.mbox.MboxFile._getAbsolutePath CedarBackup3.extend.mbox.MboxFile-class.html#_getAbsolutePath CedarBackup3.extend.mbox.MboxFile.__lt__ CedarBackup3.extend.mbox.MboxFile-class.html#__lt__ CedarBackup3.extend.mbox.MboxFile.__init__ CedarBackup3.extend.mbox.MboxFile-class.html#__init__ CedarBackup3.extend.mbox.MboxFile._setCollectMode CedarBackup3.extend.mbox.MboxFile-class.html#_setCollectMode CedarBackup3.extend.mbox.MboxFile.absolutePath CedarBackup3.extend.mbox.MboxFile-class.html#absolutePath CedarBackup3.extend.mbox.MboxFile.__cmp__ CedarBackup3.extend.mbox.MboxFile-class.html#__cmp__ CedarBackup3.extend.mbox.MboxFile.compressMode CedarBackup3.extend.mbox.MboxFile-class.html#compressMode CedarBackup3.extend.mbox.MboxFile.__gt__ CedarBackup3.extend.mbox.MboxFile-class.html#__gt__ CedarBackup3.extend.mbox.MboxFile._setCompressMode CedarBackup3.extend.mbox.MboxFile-class.html#_setCompressMode CedarBackup3.extend.mbox.MboxFile.__eq__ CedarBackup3.extend.mbox.MboxFile-class.html#__eq__ CedarBackup3.extend.mbox.MboxFile.collectMode CedarBackup3.extend.mbox.MboxFile-class.html#collectMode CedarBackup3.extend.mbox.MboxFile._getCompressMode CedarBackup3.extend.mbox.MboxFile-class.html#_getCompressMode CedarBackup3.extend.mbox.MboxFile._setAbsolutePath CedarBackup3.extend.mbox.MboxFile-class.html#_setAbsolutePath CedarBackup3.extend.mbox.MboxFile.__le__ CedarBackup3.extend.mbox.MboxFile-class.html#__le__ CedarBackup3.extend.mbox.MboxFile.__repr__ CedarBackup3.extend.mbox.MboxFile-class.html#__repr__ CedarBackup3.extend.mbox.MboxFile.__ge__ CedarBackup3.extend.mbox.MboxFile-class.html#__ge__ CedarBackup3.extend.mysql.LocalConfig CedarBackup3.extend.mysql.LocalConfig-class.html CedarBackup3.extend.mysql.LocalConfig.__str__ CedarBackup3.extend.mysql.LocalConfig-class.html#__str__ CedarBackup3.extend.mysql.LocalConfig.mysql CedarBackup3.extend.mysql.LocalConfig-class.html#mysql 
CedarBackup3.extend.mysql.LocalConfig.__lt__ CedarBackup3.extend.mysql.LocalConfig-class.html#__lt__ CedarBackup3.extend.mysql.LocalConfig._parseMysql CedarBackup3.extend.mysql.LocalConfig-class.html#_parseMysql CedarBackup3.extend.mysql.LocalConfig.__init__ CedarBackup3.extend.mysql.LocalConfig-class.html#__init__ CedarBackup3.extend.mysql.LocalConfig.__cmp__ CedarBackup3.extend.mysql.LocalConfig-class.html#__cmp__ CedarBackup3.extend.mysql.LocalConfig._setMysql CedarBackup3.extend.mysql.LocalConfig-class.html#_setMysql CedarBackup3.extend.mysql.LocalConfig._parseXmlData CedarBackup3.extend.mysql.LocalConfig-class.html#_parseXmlData CedarBackup3.extend.mysql.LocalConfig._getMysql CedarBackup3.extend.mysql.LocalConfig-class.html#_getMysql CedarBackup3.extend.mysql.LocalConfig.addConfig CedarBackup3.extend.mysql.LocalConfig-class.html#addConfig CedarBackup3.extend.mysql.LocalConfig.__gt__ CedarBackup3.extend.mysql.LocalConfig-class.html#__gt__ CedarBackup3.extend.mysql.LocalConfig.validate CedarBackup3.extend.mysql.LocalConfig-class.html#validate CedarBackup3.extend.mysql.LocalConfig.__eq__ CedarBackup3.extend.mysql.LocalConfig-class.html#__eq__ CedarBackup3.extend.mysql.LocalConfig.__le__ CedarBackup3.extend.mysql.LocalConfig-class.html#__le__ CedarBackup3.extend.mysql.LocalConfig.__repr__ CedarBackup3.extend.mysql.LocalConfig-class.html#__repr__ CedarBackup3.extend.mysql.LocalConfig.__ge__ CedarBackup3.extend.mysql.LocalConfig-class.html#__ge__ CedarBackup3.extend.mysql.MysqlConfig CedarBackup3.extend.mysql.MysqlConfig-class.html CedarBackup3.extend.mysql.MysqlConfig.all CedarBackup3.extend.mysql.MysqlConfig-class.html#all CedarBackup3.extend.mysql.MysqlConfig.__str__ CedarBackup3.extend.mysql.MysqlConfig-class.html#__str__ CedarBackup3.extend.mysql.MysqlConfig.__lt__ CedarBackup3.extend.mysql.MysqlConfig-class.html#__lt__ CedarBackup3.extend.mysql.MysqlConfig._setAll CedarBackup3.extend.mysql.MysqlConfig-class.html#_setAll 
CedarBackup3.extend.mysql.MysqlConfig.__init__ CedarBackup3.extend.mysql.MysqlConfig-class.html#__init__ CedarBackup3.extend.mysql.MysqlConfig._setDatabases CedarBackup3.extend.mysql.MysqlConfig-class.html#_setDatabases CedarBackup3.extend.mysql.MysqlConfig._getAll CedarBackup3.extend.mysql.MysqlConfig-class.html#_getAll CedarBackup3.extend.mysql.MysqlConfig.__cmp__ CedarBackup3.extend.mysql.MysqlConfig-class.html#__cmp__ CedarBackup3.extend.mysql.MysqlConfig._setPassword CedarBackup3.extend.mysql.MysqlConfig-class.html#_setPassword CedarBackup3.extend.mysql.MysqlConfig._getUser CedarBackup3.extend.mysql.MysqlConfig-class.html#_getUser CedarBackup3.extend.mysql.MysqlConfig._setUser CedarBackup3.extend.mysql.MysqlConfig-class.html#_setUser CedarBackup3.extend.mysql.MysqlConfig.compressMode CedarBackup3.extend.mysql.MysqlConfig-class.html#compressMode CedarBackup3.extend.mysql.MysqlConfig._getPassword CedarBackup3.extend.mysql.MysqlConfig-class.html#_getPassword CedarBackup3.extend.mysql.MysqlConfig.user CedarBackup3.extend.mysql.MysqlConfig-class.html#user CedarBackup3.extend.mysql.MysqlConfig.__gt__ CedarBackup3.extend.mysql.MysqlConfig-class.html#__gt__ CedarBackup3.extend.mysql.MysqlConfig._setCompressMode CedarBackup3.extend.mysql.MysqlConfig-class.html#_setCompressMode CedarBackup3.extend.mysql.MysqlConfig.password CedarBackup3.extend.mysql.MysqlConfig-class.html#password CedarBackup3.extend.mysql.MysqlConfig.__eq__ CedarBackup3.extend.mysql.MysqlConfig-class.html#__eq__ CedarBackup3.extend.mysql.MysqlConfig._getCompressMode CedarBackup3.extend.mysql.MysqlConfig-class.html#_getCompressMode CedarBackup3.extend.mysql.MysqlConfig.__le__ CedarBackup3.extend.mysql.MysqlConfig-class.html#__le__ CedarBackup3.extend.mysql.MysqlConfig._getDatabases CedarBackup3.extend.mysql.MysqlConfig-class.html#_getDatabases CedarBackup3.extend.mysql.MysqlConfig.__repr__ CedarBackup3.extend.mysql.MysqlConfig-class.html#__repr__ CedarBackup3.extend.mysql.MysqlConfig.databases 
CedarBackup3.extend.mysql.MysqlConfig-class.html#databases CedarBackup3.extend.mysql.MysqlConfig.__ge__ CedarBackup3.extend.mysql.MysqlConfig-class.html#__ge__ CedarBackup3.extend.postgresql.LocalConfig CedarBackup3.extend.postgresql.LocalConfig-class.html CedarBackup3.extend.postgresql.LocalConfig.__str__ CedarBackup3.extend.postgresql.LocalConfig-class.html#__str__ CedarBackup3.extend.postgresql.LocalConfig._parseXmlData CedarBackup3.extend.postgresql.LocalConfig-class.html#_parseXmlData CedarBackup3.extend.postgresql.LocalConfig.__lt__ CedarBackup3.extend.postgresql.LocalConfig-class.html#__lt__ CedarBackup3.extend.postgresql.LocalConfig.__init__ CedarBackup3.extend.postgresql.LocalConfig-class.html#__init__ CedarBackup3.extend.postgresql.LocalConfig._setPostgresql CedarBackup3.extend.postgresql.LocalConfig-class.html#_setPostgresql CedarBackup3.extend.postgresql.LocalConfig.__cmp__ CedarBackup3.extend.postgresql.LocalConfig-class.html#__cmp__ CedarBackup3.extend.postgresql.LocalConfig._parsePostgresql CedarBackup3.extend.postgresql.LocalConfig-class.html#_parsePostgresql CedarBackup3.extend.postgresql.LocalConfig.addConfig CedarBackup3.extend.postgresql.LocalConfig-class.html#addConfig CedarBackup3.extend.postgresql.LocalConfig.__gt__ CedarBackup3.extend.postgresql.LocalConfig-class.html#__gt__ CedarBackup3.extend.postgresql.LocalConfig.validate CedarBackup3.extend.postgresql.LocalConfig-class.html#validate CedarBackup3.extend.postgresql.LocalConfig.__eq__ CedarBackup3.extend.postgresql.LocalConfig-class.html#__eq__ CedarBackup3.extend.postgresql.LocalConfig.postgresql CedarBackup3.extend.postgresql.LocalConfig-class.html#postgresql CedarBackup3.extend.postgresql.LocalConfig._getPostgresql CedarBackup3.extend.postgresql.LocalConfig-class.html#_getPostgresql CedarBackup3.extend.postgresql.LocalConfig.__le__ CedarBackup3.extend.postgresql.LocalConfig-class.html#__le__ CedarBackup3.extend.postgresql.LocalConfig.__repr__ 
CedarBackup3.extend.postgresql.LocalConfig-class.html#__repr__ CedarBackup3.extend.postgresql.LocalConfig.__ge__ CedarBackup3.extend.postgresql.LocalConfig-class.html#__ge__ CedarBackup3.extend.postgresql.PostgresqlConfig CedarBackup3.extend.postgresql.PostgresqlConfig-class.html CedarBackup3.extend.postgresql.PostgresqlConfig.all CedarBackup3.extend.postgresql.PostgresqlConfig-class.html#all CedarBackup3.extend.postgresql.PostgresqlConfig.__str__ CedarBackup3.extend.postgresql.PostgresqlConfig-class.html#__str__ CedarBackup3.extend.postgresql.PostgresqlConfig.__lt__ CedarBackup3.extend.postgresql.PostgresqlConfig-class.html#__lt__ CedarBackup3.extend.postgresql.PostgresqlConfig._setAll CedarBackup3.extend.postgresql.PostgresqlConfig-class.html#_setAll CedarBackup3.extend.postgresql.PostgresqlConfig.__init__ CedarBackup3.extend.postgresql.PostgresqlConfig-class.html#__init__ CedarBackup3.extend.postgresql.PostgresqlConfig._setDatabases CedarBackup3.extend.postgresql.PostgresqlConfig-class.html#_setDatabases CedarBackup3.extend.postgresql.PostgresqlConfig._getAll CedarBackup3.extend.postgresql.PostgresqlConfig-class.html#_getAll CedarBackup3.extend.postgresql.PostgresqlConfig.__cmp__ CedarBackup3.extend.postgresql.PostgresqlConfig-class.html#__cmp__ CedarBackup3.extend.postgresql.PostgresqlConfig._getUser CedarBackup3.extend.postgresql.PostgresqlConfig-class.html#_getUser CedarBackup3.extend.postgresql.PostgresqlConfig._setUser CedarBackup3.extend.postgresql.PostgresqlConfig-class.html#_setUser CedarBackup3.extend.postgresql.PostgresqlConfig.compressMode CedarBackup3.extend.postgresql.PostgresqlConfig-class.html#compressMode CedarBackup3.extend.postgresql.PostgresqlConfig.user CedarBackup3.extend.postgresql.PostgresqlConfig-class.html#user CedarBackup3.extend.postgresql.PostgresqlConfig.__gt__ CedarBackup3.extend.postgresql.PostgresqlConfig-class.html#__gt__ CedarBackup3.extend.postgresql.PostgresqlConfig._setCompressMode 
CedarBackup3.extend.postgresql.PostgresqlConfig-class.html#_setCompressMode CedarBackup3.extend.postgresql.PostgresqlConfig.__eq__ CedarBackup3.extend.postgresql.PostgresqlConfig-class.html#__eq__ CedarBackup3.extend.postgresql.PostgresqlConfig._getCompressMode CedarBackup3.extend.postgresql.PostgresqlConfig-class.html#_getCompressMode CedarBackup3.extend.postgresql.PostgresqlConfig.__le__ CedarBackup3.extend.postgresql.PostgresqlConfig-class.html#__le__ CedarBackup3.extend.postgresql.PostgresqlConfig._getDatabases CedarBackup3.extend.postgresql.PostgresqlConfig-class.html#_getDatabases CedarBackup3.extend.postgresql.PostgresqlConfig.__repr__ CedarBackup3.extend.postgresql.PostgresqlConfig-class.html#__repr__ CedarBackup3.extend.postgresql.PostgresqlConfig.databases CedarBackup3.extend.postgresql.PostgresqlConfig-class.html#databases CedarBackup3.extend.postgresql.PostgresqlConfig.__ge__ CedarBackup3.extend.postgresql.PostgresqlConfig-class.html#__ge__ CedarBackup3.extend.split.LocalConfig CedarBackup3.extend.split.LocalConfig-class.html CedarBackup3.extend.split.LocalConfig.__str__ CedarBackup3.extend.split.LocalConfig-class.html#__str__ CedarBackup3.extend.split.LocalConfig._getSplit CedarBackup3.extend.split.LocalConfig-class.html#_getSplit CedarBackup3.extend.split.LocalConfig._parseXmlData CedarBackup3.extend.split.LocalConfig-class.html#_parseXmlData CedarBackup3.extend.split.LocalConfig.__lt__ CedarBackup3.extend.split.LocalConfig-class.html#__lt__ CedarBackup3.extend.split.LocalConfig.__init__ CedarBackup3.extend.split.LocalConfig-class.html#__init__ CedarBackup3.extend.split.LocalConfig.__cmp__ CedarBackup3.extend.split.LocalConfig-class.html#__cmp__ CedarBackup3.extend.split.LocalConfig._setSplit CedarBackup3.extend.split.LocalConfig-class.html#_setSplit CedarBackup3.extend.split.LocalConfig.split CedarBackup3.extend.split.LocalConfig-class.html#split CedarBackup3.extend.split.LocalConfig.addConfig 
CedarBackup3.extend.split.LocalConfig-class.html#addConfig CedarBackup3.extend.split.LocalConfig.__gt__ CedarBackup3.extend.split.LocalConfig-class.html#__gt__ CedarBackup3.extend.split.LocalConfig.validate CedarBackup3.extend.split.LocalConfig-class.html#validate CedarBackup3.extend.split.LocalConfig.__eq__ CedarBackup3.extend.split.LocalConfig-class.html#__eq__ CedarBackup3.extend.split.LocalConfig.__le__ CedarBackup3.extend.split.LocalConfig-class.html#__le__ CedarBackup3.extend.split.LocalConfig.__repr__ CedarBackup3.extend.split.LocalConfig-class.html#__repr__ CedarBackup3.extend.split.LocalConfig.__ge__ CedarBackup3.extend.split.LocalConfig-class.html#__ge__ CedarBackup3.extend.split.LocalConfig._parseSplit CedarBackup3.extend.split.LocalConfig-class.html#_parseSplit CedarBackup3.extend.split.SplitConfig CedarBackup3.extend.split.SplitConfig-class.html CedarBackup3.extend.split.SplitConfig.__str__ CedarBackup3.extend.split.SplitConfig-class.html#__str__ CedarBackup3.extend.split.SplitConfig.__lt__ CedarBackup3.extend.split.SplitConfig-class.html#__lt__ CedarBackup3.extend.split.SplitConfig.__init__ CedarBackup3.extend.split.SplitConfig-class.html#__init__ CedarBackup3.extend.split.SplitConfig._setSizeLimit CedarBackup3.extend.split.SplitConfig-class.html#_setSizeLimit CedarBackup3.extend.split.SplitConfig.__cmp__ CedarBackup3.extend.split.SplitConfig-class.html#__cmp__ CedarBackup3.extend.split.SplitConfig._getSplitSize CedarBackup3.extend.split.SplitConfig-class.html#_getSplitSize CedarBackup3.extend.split.SplitConfig.__gt__ CedarBackup3.extend.split.SplitConfig-class.html#__gt__ CedarBackup3.extend.split.SplitConfig.__eq__ CedarBackup3.extend.split.SplitConfig-class.html#__eq__ CedarBackup3.extend.split.SplitConfig.splitSize CedarBackup3.extend.split.SplitConfig-class.html#splitSize CedarBackup3.extend.split.SplitConfig._setSplitSize CedarBackup3.extend.split.SplitConfig-class.html#_setSplitSize CedarBackup3.extend.split.SplitConfig.__le__ 
CedarBackup3.extend.split.SplitConfig-class.html#__le__ CedarBackup3.extend.split.SplitConfig.__repr__ CedarBackup3.extend.split.SplitConfig-class.html#__repr__ CedarBackup3.extend.split.SplitConfig.sizeLimit CedarBackup3.extend.split.SplitConfig-class.html#sizeLimit CedarBackup3.extend.split.SplitConfig._getSizeLimit CedarBackup3.extend.split.SplitConfig-class.html#_getSizeLimit CedarBackup3.extend.split.SplitConfig.__ge__ CedarBackup3.extend.split.SplitConfig-class.html#__ge__ CedarBackup3.extend.subversion.BDBRepository CedarBackup3.extend.subversion.BDBRepository-class.html CedarBackup3.extend.subversion.Repository._getCollectMode CedarBackup3.extend.subversion.Repository-class.html#_getCollectMode CedarBackup3.extend.subversion.Repository.repositoryType CedarBackup3.extend.subversion.Repository-class.html#repositoryType CedarBackup3.extend.subversion.Repository.__lt__ CedarBackup3.extend.subversion.Repository-class.html#__lt__ CedarBackup3.extend.subversion.BDBRepository.__init__ CedarBackup3.extend.subversion.BDBRepository-class.html#__init__ CedarBackup3.extend.subversion.Repository._setCollectMode CedarBackup3.extend.subversion.Repository-class.html#_setCollectMode CedarBackup3.extend.subversion.Repository.__cmp__ CedarBackup3.extend.subversion.Repository-class.html#__cmp__ CedarBackup3.extend.subversion.Repository._setRepositoryType CedarBackup3.extend.subversion.Repository-class.html#_setRepositoryType CedarBackup3.extend.subversion.Repository.__str__ CedarBackup3.extend.subversion.Repository-class.html#__str__ CedarBackup3.extend.subversion.Repository.compressMode CedarBackup3.extend.subversion.Repository-class.html#compressMode CedarBackup3.extend.subversion.Repository._setRepositoryPath CedarBackup3.extend.subversion.Repository-class.html#_setRepositoryPath CedarBackup3.extend.subversion.Repository._getRepositoryType CedarBackup3.extend.subversion.Repository-class.html#_getRepositoryType CedarBackup3.extend.subversion.Repository.__gt__ 
CedarBackup3.extend.subversion.Repository-class.html#__gt__ CedarBackup3.extend.subversion.Repository._setCompressMode CedarBackup3.extend.subversion.Repository-class.html#_setCompressMode CedarBackup3.extend.subversion.Repository.__eq__ CedarBackup3.extend.subversion.Repository-class.html#__eq__ CedarBackup3.extend.subversion.Repository._getRepositoryPath CedarBackup3.extend.subversion.Repository-class.html#_getRepositoryPath CedarBackup3.extend.subversion.Repository.collectMode CedarBackup3.extend.subversion.Repository-class.html#collectMode CedarBackup3.extend.subversion.Repository._getCompressMode CedarBackup3.extend.subversion.Repository-class.html#_getCompressMode CedarBackup3.extend.subversion.Repository.repositoryPath CedarBackup3.extend.subversion.Repository-class.html#repositoryPath CedarBackup3.extend.subversion.Repository.__le__ CedarBackup3.extend.subversion.Repository-class.html#__le__ CedarBackup3.extend.subversion.BDBRepository.__repr__ CedarBackup3.extend.subversion.BDBRepository-class.html#__repr__ CedarBackup3.extend.subversion.Repository.__ge__ CedarBackup3.extend.subversion.Repository-class.html#__ge__ CedarBackup3.extend.subversion.FSFSRepository CedarBackup3.extend.subversion.FSFSRepository-class.html CedarBackup3.extend.subversion.Repository._getCollectMode CedarBackup3.extend.subversion.Repository-class.html#_getCollectMode CedarBackup3.extend.subversion.Repository.repositoryType CedarBackup3.extend.subversion.Repository-class.html#repositoryType CedarBackup3.extend.subversion.Repository.__lt__ CedarBackup3.extend.subversion.Repository-class.html#__lt__ CedarBackup3.extend.subversion.FSFSRepository.__init__ CedarBackup3.extend.subversion.FSFSRepository-class.html#__init__ CedarBackup3.extend.subversion.Repository._setCollectMode CedarBackup3.extend.subversion.Repository-class.html#_setCollectMode CedarBackup3.extend.subversion.Repository.__cmp__ CedarBackup3.extend.subversion.Repository-class.html#__cmp__ 
CedarBackup3.extend.subversion.Repository._setRepositoryType CedarBackup3.extend.subversion.Repository-class.html#_setRepositoryType CedarBackup3.extend.subversion.Repository.__str__ CedarBackup3.extend.subversion.Repository-class.html#__str__ CedarBackup3.extend.subversion.Repository.compressMode CedarBackup3.extend.subversion.Repository-class.html#compressMode CedarBackup3.extend.subversion.Repository._setRepositoryPath CedarBackup3.extend.subversion.Repository-class.html#_setRepositoryPath CedarBackup3.extend.subversion.Repository._getRepositoryType CedarBackup3.extend.subversion.Repository-class.html#_getRepositoryType CedarBackup3.extend.subversion.Repository.__gt__ CedarBackup3.extend.subversion.Repository-class.html#__gt__ CedarBackup3.extend.subversion.Repository._setCompressMode CedarBackup3.extend.subversion.Repository-class.html#_setCompressMode CedarBackup3.extend.subversion.Repository.__eq__ CedarBackup3.extend.subversion.Repository-class.html#__eq__ CedarBackup3.extend.subversion.Repository._getRepositoryPath CedarBackup3.extend.subversion.Repository-class.html#_getRepositoryPath CedarBackup3.extend.subversion.Repository.collectMode CedarBackup3.extend.subversion.Repository-class.html#collectMode CedarBackup3.extend.subversion.Repository._getCompressMode CedarBackup3.extend.subversion.Repository-class.html#_getCompressMode CedarBackup3.extend.subversion.Repository.repositoryPath CedarBackup3.extend.subversion.Repository-class.html#repositoryPath CedarBackup3.extend.subversion.Repository.__le__ CedarBackup3.extend.subversion.Repository-class.html#__le__ CedarBackup3.extend.subversion.FSFSRepository.__repr__ CedarBackup3.extend.subversion.FSFSRepository-class.html#__repr__ CedarBackup3.extend.subversion.Repository.__ge__ CedarBackup3.extend.subversion.Repository-class.html#__ge__ CedarBackup3.extend.subversion.LocalConfig CedarBackup3.extend.subversion.LocalConfig-class.html CedarBackup3.extend.subversion.LocalConfig._getSubversion 
CedarBackup3.extend.subversion.LocalConfig-class.html#_getSubversion CedarBackup3.extend.subversion.LocalConfig.__str__ CedarBackup3.extend.subversion.LocalConfig-class.html#__str__ CedarBackup3.extend.subversion.LocalConfig.__lt__ CedarBackup3.extend.subversion.LocalConfig-class.html#__lt__ CedarBackup3.extend.subversion.LocalConfig._parseXmlData CedarBackup3.extend.subversion.LocalConfig-class.html#_parseXmlData CedarBackup3.extend.subversion.LocalConfig.__init__ CedarBackup3.extend.subversion.LocalConfig-class.html#__init__ CedarBackup3.extend.subversion.LocalConfig.__cmp__ CedarBackup3.extend.subversion.LocalConfig-class.html#__cmp__ CedarBackup3.extend.subversion.LocalConfig.subversion CedarBackup3.extend.subversion.LocalConfig-class.html#subversion CedarBackup3.extend.subversion.LocalConfig._parseRepositories CedarBackup3.extend.subversion.LocalConfig-class.html#_parseRepositories CedarBackup3.extend.subversion.LocalConfig._setSubversion CedarBackup3.extend.subversion.LocalConfig-class.html#_setSubversion CedarBackup3.extend.subversion.LocalConfig._parseSubversion CedarBackup3.extend.subversion.LocalConfig-class.html#_parseSubversion CedarBackup3.extend.subversion.LocalConfig.addConfig CedarBackup3.extend.subversion.LocalConfig-class.html#addConfig CedarBackup3.extend.subversion.LocalConfig.__gt__ CedarBackup3.extend.subversion.LocalConfig-class.html#__gt__ CedarBackup3.extend.subversion.LocalConfig.validate CedarBackup3.extend.subversion.LocalConfig-class.html#validate CedarBackup3.extend.subversion.LocalConfig.__eq__ CedarBackup3.extend.subversion.LocalConfig-class.html#__eq__ CedarBackup3.extend.subversion.LocalConfig._addRepository CedarBackup3.extend.subversion.LocalConfig-class.html#_addRepository CedarBackup3.extend.subversion.LocalConfig._parseExclusions CedarBackup3.extend.subversion.LocalConfig-class.html#_parseExclusions CedarBackup3.extend.subversion.LocalConfig.__le__ CedarBackup3.extend.subversion.LocalConfig-class.html#__le__ 
CedarBackup3.extend.subversion.LocalConfig.__repr__ CedarBackup3.extend.subversion.LocalConfig-class.html#__repr__ CedarBackup3.extend.subversion.LocalConfig._parseRepositoryDirs CedarBackup3.extend.subversion.LocalConfig-class.html#_parseRepositoryDirs CedarBackup3.extend.subversion.LocalConfig._addRepositoryDir CedarBackup3.extend.subversion.LocalConfig-class.html#_addRepositoryDir CedarBackup3.extend.subversion.LocalConfig.__ge__ CedarBackup3.extend.subversion.LocalConfig-class.html#__ge__ CedarBackup3.extend.subversion.Repository CedarBackup3.extend.subversion.Repository-class.html CedarBackup3.extend.subversion.Repository._getCollectMode CedarBackup3.extend.subversion.Repository-class.html#_getCollectMode CedarBackup3.extend.subversion.Repository.repositoryType CedarBackup3.extend.subversion.Repository-class.html#repositoryType CedarBackup3.extend.subversion.Repository.__lt__ CedarBackup3.extend.subversion.Repository-class.html#__lt__ CedarBackup3.extend.subversion.Repository.__init__ CedarBackup3.extend.subversion.Repository-class.html#__init__ CedarBackup3.extend.subversion.Repository._setRepositoryType CedarBackup3.extend.subversion.Repository-class.html#_setRepositoryType CedarBackup3.extend.subversion.Repository.__cmp__ CedarBackup3.extend.subversion.Repository-class.html#__cmp__ CedarBackup3.extend.subversion.Repository._setCollectMode CedarBackup3.extend.subversion.Repository-class.html#_setCollectMode CedarBackup3.extend.subversion.Repository.__str__ CedarBackup3.extend.subversion.Repository-class.html#__str__ CedarBackup3.extend.subversion.Repository.__ge__ CedarBackup3.extend.subversion.Repository-class.html#__ge__ CedarBackup3.extend.subversion.Repository.compressMode CedarBackup3.extend.subversion.Repository-class.html#compressMode CedarBackup3.extend.subversion.Repository._setRepositoryPath CedarBackup3.extend.subversion.Repository-class.html#_setRepositoryPath CedarBackup3.extend.subversion.Repository._getRepositoryType 
CedarBackup3.extend.subversion.Repository-class.html#_getRepositoryType CedarBackup3.extend.subversion.Repository.__gt__ CedarBackup3.extend.subversion.Repository-class.html#__gt__ CedarBackup3.extend.subversion.Repository._setCompressMode CedarBackup3.extend.subversion.Repository-class.html#_setCompressMode CedarBackup3.extend.subversion.Repository.__eq__ CedarBackup3.extend.subversion.Repository-class.html#__eq__ CedarBackup3.extend.subversion.Repository.collectMode CedarBackup3.extend.subversion.Repository-class.html#collectMode CedarBackup3.extend.subversion.Repository._getCompressMode CedarBackup3.extend.subversion.Repository-class.html#_getCompressMode CedarBackup3.extend.subversion.Repository.repositoryPath CedarBackup3.extend.subversion.Repository-class.html#repositoryPath CedarBackup3.extend.subversion.Repository.__le__ CedarBackup3.extend.subversion.Repository-class.html#__le__ CedarBackup3.extend.subversion.Repository.__repr__ CedarBackup3.extend.subversion.Repository-class.html#__repr__ CedarBackup3.extend.subversion.Repository._getRepositoryPath CedarBackup3.extend.subversion.Repository-class.html#_getRepositoryPath CedarBackup3.extend.subversion.RepositoryDir CedarBackup3.extend.subversion.RepositoryDir-class.html CedarBackup3.extend.subversion.RepositoryDir.directoryPath CedarBackup3.extend.subversion.RepositoryDir-class.html#directoryPath CedarBackup3.extend.subversion.RepositoryDir._getCollectMode CedarBackup3.extend.subversion.RepositoryDir-class.html#_getCollectMode CedarBackup3.extend.subversion.RepositoryDir._getCompressMode CedarBackup3.extend.subversion.RepositoryDir-class.html#_getCompressMode CedarBackup3.extend.subversion.RepositoryDir.repositoryType CedarBackup3.extend.subversion.RepositoryDir-class.html#repositoryType CedarBackup3.extend.subversion.RepositoryDir._setExcludePatterns CedarBackup3.extend.subversion.RepositoryDir-class.html#_setExcludePatterns CedarBackup3.extend.subversion.RepositoryDir.__lt__ 
CedarBackup3.extend.subversion.RepositoryDir-class.html#__lt__ CedarBackup3.extend.subversion.RepositoryDir.__init__ CedarBackup3.extend.subversion.RepositoryDir-class.html#__init__ CedarBackup3.extend.subversion.RepositoryDir._setCollectMode CedarBackup3.extend.subversion.RepositoryDir-class.html#_setCollectMode CedarBackup3.extend.subversion.RepositoryDir.__cmp__ CedarBackup3.extend.subversion.RepositoryDir-class.html#__cmp__ CedarBackup3.extend.subversion.RepositoryDir._setRepositoryType CedarBackup3.extend.subversion.RepositoryDir-class.html#_setRepositoryType CedarBackup3.extend.subversion.RepositoryDir.__str__ CedarBackup3.extend.subversion.RepositoryDir-class.html#__str__ CedarBackup3.extend.subversion.RepositoryDir._getDirectoryPath CedarBackup3.extend.subversion.RepositoryDir-class.html#_getDirectoryPath CedarBackup3.extend.subversion.RepositoryDir.relativeExcludePaths CedarBackup3.extend.subversion.RepositoryDir-class.html#relativeExcludePaths CedarBackup3.extend.subversion.RepositoryDir.compressMode CedarBackup3.extend.subversion.RepositoryDir-class.html#compressMode CedarBackup3.extend.subversion.RepositoryDir._getRepositoryType CedarBackup3.extend.subversion.RepositoryDir-class.html#_getRepositoryType CedarBackup3.extend.subversion.RepositoryDir._getRelativeExcludePaths CedarBackup3.extend.subversion.RepositoryDir-class.html#_getRelativeExcludePaths CedarBackup3.extend.subversion.RepositoryDir._setDirectoryPath CedarBackup3.extend.subversion.RepositoryDir-class.html#_setDirectoryPath CedarBackup3.extend.subversion.RepositoryDir.__gt__ CedarBackup3.extend.subversion.RepositoryDir-class.html#__gt__ CedarBackup3.extend.subversion.RepositoryDir._setCompressMode CedarBackup3.extend.subversion.RepositoryDir-class.html#_setCompressMode CedarBackup3.extend.subversion.RepositoryDir._setRelativeExcludePaths CedarBackup3.extend.subversion.RepositoryDir-class.html#_setRelativeExcludePaths CedarBackup3.extend.subversion.RepositoryDir.__eq__ 
CedarBackup3.extend.subversion.RepositoryDir-class.html#__eq__ CedarBackup3.extend.subversion.RepositoryDir.collectMode CedarBackup3.extend.subversion.RepositoryDir-class.html#collectMode CedarBackup3.extend.subversion.RepositoryDir._getExcludePatterns CedarBackup3.extend.subversion.RepositoryDir-class.html#_getExcludePatterns CedarBackup3.extend.subversion.RepositoryDir.excludePatterns CedarBackup3.extend.subversion.RepositoryDir-class.html#excludePatterns CedarBackup3.extend.subversion.RepositoryDir.__le__ CedarBackup3.extend.subversion.RepositoryDir-class.html#__le__ CedarBackup3.extend.subversion.RepositoryDir.__repr__ CedarBackup3.extend.subversion.RepositoryDir-class.html#__repr__ CedarBackup3.extend.subversion.RepositoryDir.__ge__ CedarBackup3.extend.subversion.RepositoryDir-class.html#__ge__ CedarBackup3.extend.subversion.SubversionConfig CedarBackup3.extend.subversion.SubversionConfig-class.html CedarBackup3.extend.subversion.SubversionConfig._getCollectMode CedarBackup3.extend.subversion.SubversionConfig-class.html#_getCollectMode CedarBackup3.extend.subversion.SubversionConfig._getCompressMode CedarBackup3.extend.subversion.SubversionConfig-class.html#_getCompressMode CedarBackup3.extend.subversion.SubversionConfig.__str__ CedarBackup3.extend.subversion.SubversionConfig-class.html#__str__ CedarBackup3.extend.subversion.SubversionConfig._getRepositories CedarBackup3.extend.subversion.SubversionConfig-class.html#_getRepositories CedarBackup3.extend.subversion.SubversionConfig.__lt__ CedarBackup3.extend.subversion.SubversionConfig-class.html#__lt__ CedarBackup3.extend.subversion.SubversionConfig.__init__ CedarBackup3.extend.subversion.SubversionConfig-class.html#__init__ CedarBackup3.extend.subversion.SubversionConfig._setCollectMode CedarBackup3.extend.subversion.SubversionConfig-class.html#_setCollectMode CedarBackup3.extend.subversion.SubversionConfig.__cmp__ CedarBackup3.extend.subversion.SubversionConfig-class.html#__cmp__ 
CedarBackup3.extend.subversion.SubversionConfig.repositoryDirs CedarBackup3.extend.subversion.SubversionConfig-class.html#repositoryDirs CedarBackup3.extend.subversion.SubversionConfig.compressMode CedarBackup3.extend.subversion.SubversionConfig-class.html#compressMode CedarBackup3.extend.subversion.SubversionConfig.__gt__ CedarBackup3.extend.subversion.SubversionConfig-class.html#__gt__ CedarBackup3.extend.subversion.SubversionConfig._setCompressMode CedarBackup3.extend.subversion.SubversionConfig-class.html#_setCompressMode CedarBackup3.extend.subversion.SubversionConfig.__eq__ CedarBackup3.extend.subversion.SubversionConfig-class.html#__eq__ CedarBackup3.extend.subversion.SubversionConfig._getRepositoryDirs CedarBackup3.extend.subversion.SubversionConfig-class.html#_getRepositoryDirs CedarBackup3.extend.subversion.SubversionConfig.collectMode CedarBackup3.extend.subversion.SubversionConfig-class.html#collectMode CedarBackup3.extend.subversion.SubversionConfig.repositories CedarBackup3.extend.subversion.SubversionConfig-class.html#repositories CedarBackup3.extend.subversion.SubversionConfig._setRepositoryDirs CedarBackup3.extend.subversion.SubversionConfig-class.html#_setRepositoryDirs CedarBackup3.extend.subversion.SubversionConfig.__le__ CedarBackup3.extend.subversion.SubversionConfig-class.html#__le__ CedarBackup3.extend.subversion.SubversionConfig.__repr__ CedarBackup3.extend.subversion.SubversionConfig-class.html#__repr__ CedarBackup3.extend.subversion.SubversionConfig._setRepositories CedarBackup3.extend.subversion.SubversionConfig-class.html#_setRepositories CedarBackup3.extend.subversion.SubversionConfig.__ge__ CedarBackup3.extend.subversion.SubversionConfig-class.html#__ge__ CedarBackup3.filesystem.BackupFileList CedarBackup3.filesystem.BackupFileList-class.html CedarBackup3.filesystem.FilesystemList._addDirContentsInternal CedarBackup3.filesystem.FilesystemList-class.html#_addDirContentsInternal CedarBackup3.filesystem.BackupFileList.removeUnchanged 
CedarBackup3.filesystem.BackupFileList-class.html#removeUnchanged CedarBackup3.filesystem.FilesystemList._getExcludeBasenamePatterns CedarBackup3.filesystem.FilesystemList-class.html#_getExcludeBasenamePatterns CedarBackup3.filesystem.BackupFileList.generateFitted CedarBackup3.filesystem.BackupFileList-class.html#generateFitted CedarBackup3.filesystem.FilesystemList.addDirContents CedarBackup3.filesystem.FilesystemList-class.html#addDirContents CedarBackup3.filesystem.FilesystemList._getExcludePatterns CedarBackup3.filesystem.FilesystemList-class.html#_getExcludePatterns CedarBackup3.filesystem.FilesystemList.excludePatterns CedarBackup3.filesystem.FilesystemList-class.html#excludePatterns CedarBackup3.filesystem.FilesystemList._setExcludeFiles CedarBackup3.filesystem.FilesystemList-class.html#_setExcludeFiles CedarBackup3.filesystem.BackupFileList.generateSizeMap CedarBackup3.filesystem.BackupFileList-class.html#generateSizeMap CedarBackup3.filesystem.FilesystemList.ignoreFile CedarBackup3.filesystem.FilesystemList-class.html#ignoreFile CedarBackup3.filesystem.BackupFileList.totalSize CedarBackup3.filesystem.BackupFileList-class.html#totalSize CedarBackup3.filesystem.BackupFileList.addDir CedarBackup3.filesystem.BackupFileList-class.html#addDir CedarBackup3.filesystem.FilesystemList.removeFiles CedarBackup3.filesystem.FilesystemList-class.html#removeFiles CedarBackup3.filesystem.FilesystemList.removeLinks CedarBackup3.filesystem.FilesystemList-class.html#removeLinks CedarBackup3.filesystem.BackupFileList.generateTarfile CedarBackup3.filesystem.BackupFileList-class.html#generateTarfile CedarBackup3.filesystem.FilesystemList.removeMatch CedarBackup3.filesystem.FilesystemList-class.html#removeMatch CedarBackup3.filesystem.FilesystemList.excludeLinks CedarBackup3.filesystem.FilesystemList-class.html#excludeLinks CedarBackup3.filesystem.FilesystemList._getExcludeDirs CedarBackup3.filesystem.FilesystemList-class.html#_getExcludeDirs 
CedarBackup3.filesystem.FilesystemList.excludeBasenamePatterns CedarBackup3.filesystem.FilesystemList-class.html#excludeBasenamePatterns CedarBackup3.filesystem.BackupFileList._getKnapsackFunction CedarBackup3.filesystem.BackupFileList-class.html#_getKnapsackFunction CedarBackup3.filesystem.FilesystemList._setIgnoreFile CedarBackup3.filesystem.FilesystemList-class.html#_setIgnoreFile CedarBackup3.filesystem.FilesystemList._getIgnoreFile CedarBackup3.filesystem.FilesystemList-class.html#_getIgnoreFile CedarBackup3.filesystem.FilesystemList.addFile CedarBackup3.filesystem.FilesystemList-class.html#addFile CedarBackup3.filesystem.BackupFileList.generateDigestMap CedarBackup3.filesystem.BackupFileList-class.html#generateDigestMap CedarBackup3.filesystem.FilesystemList.removeInvalid CedarBackup3.filesystem.FilesystemList-class.html#removeInvalid CedarBackup3.filesystem.FilesystemList._setExcludePatterns CedarBackup3.filesystem.FilesystemList-class.html#_setExcludePatterns CedarBackup3.filesystem.FilesystemList.removeDirs CedarBackup3.filesystem.FilesystemList-class.html#removeDirs CedarBackup3.filesystem.BackupFileList.__init__ CedarBackup3.filesystem.BackupFileList-class.html#__init__ CedarBackup3.filesystem.FilesystemList.normalize CedarBackup3.filesystem.FilesystemList-class.html#normalize CedarBackup3.filesystem.FilesystemList.excludeFiles CedarBackup3.filesystem.FilesystemList-class.html#excludeFiles CedarBackup3.filesystem.FilesystemList._getExcludeLinks CedarBackup3.filesystem.FilesystemList-class.html#_getExcludeLinks CedarBackup3.filesystem.FilesystemList.verify CedarBackup3.filesystem.FilesystemList-class.html#verify CedarBackup3.filesystem.FilesystemList.excludeDirs CedarBackup3.filesystem.FilesystemList-class.html#excludeDirs CedarBackup3.filesystem.FilesystemList._setExcludeDirs CedarBackup3.filesystem.FilesystemList-class.html#_setExcludeDirs CedarBackup3.filesystem.FilesystemList._setExcludeBasenamePatterns 
CedarBackup3.filesystem.FilesystemList-class.html#_setExcludeBasenamePatterns CedarBackup3.filesystem.BackupFileList.generateSpan CedarBackup3.filesystem.BackupFileList-class.html#generateSpan CedarBackup3.filesystem.FilesystemList._getExcludePaths CedarBackup3.filesystem.FilesystemList-class.html#_getExcludePaths CedarBackup3.filesystem.FilesystemList._setExcludePaths CedarBackup3.filesystem.FilesystemList-class.html#_setExcludePaths CedarBackup3.filesystem.BackupFileList._getKnapsackTable CedarBackup3.filesystem.BackupFileList-class.html#_getKnapsackTable CedarBackup3.filesystem.FilesystemList._setExcludeLinks CedarBackup3.filesystem.FilesystemList-class.html#_setExcludeLinks CedarBackup3.filesystem.FilesystemList.excludePaths CedarBackup3.filesystem.FilesystemList-class.html#excludePaths CedarBackup3.filesystem.BackupFileList._generateDigest CedarBackup3.filesystem.BackupFileList-class.html#_generateDigest CedarBackup3.filesystem.FilesystemList._getExcludeFiles CedarBackup3.filesystem.FilesystemList-class.html#_getExcludeFiles CedarBackup3.filesystem.FilesystemList CedarBackup3.filesystem.FilesystemList-class.html CedarBackup3.filesystem.FilesystemList._setExcludeFiles CedarBackup3.filesystem.FilesystemList-class.html#_setExcludeFiles CedarBackup3.filesystem.FilesystemList._addDirContentsInternal CedarBackup3.filesystem.FilesystemList-class.html#_addDirContentsInternal CedarBackup3.filesystem.FilesystemList.removeInvalid CedarBackup3.filesystem.FilesystemList-class.html#removeInvalid CedarBackup3.filesystem.FilesystemList.excludeLinks CedarBackup3.filesystem.FilesystemList-class.html#excludeLinks CedarBackup3.filesystem.FilesystemList._getExcludeDirs CedarBackup3.filesystem.FilesystemList-class.html#_getExcludeDirs CedarBackup3.filesystem.FilesystemList._setExcludePatterns CedarBackup3.filesystem.FilesystemList-class.html#_setExcludePatterns CedarBackup3.filesystem.FilesystemList.excludeBasenamePatterns 
CedarBackup3.filesystem.FilesystemList-class.html#excludeBasenamePatterns CedarBackup3.filesystem.FilesystemList.removeDirs CedarBackup3.filesystem.FilesystemList-class.html#removeDirs CedarBackup3.filesystem.FilesystemList.__init__ CedarBackup3.filesystem.FilesystemList-class.html#__init__ CedarBackup3.filesystem.FilesystemList.normalize CedarBackup3.filesystem.FilesystemList-class.html#normalize CedarBackup3.filesystem.FilesystemList.excludeFiles CedarBackup3.filesystem.FilesystemList-class.html#excludeFiles CedarBackup3.filesystem.FilesystemList._getExcludeLinks CedarBackup3.filesystem.FilesystemList-class.html#_getExcludeLinks CedarBackup3.filesystem.FilesystemList.verify CedarBackup3.filesystem.FilesystemList-class.html#verify CedarBackup3.filesystem.FilesystemList.addDir CedarBackup3.filesystem.FilesystemList-class.html#addDir CedarBackup3.filesystem.FilesystemList._setIgnoreFile CedarBackup3.filesystem.FilesystemList-class.html#_setIgnoreFile CedarBackup3.filesystem.FilesystemList.removeFiles CedarBackup3.filesystem.FilesystemList-class.html#removeFiles CedarBackup3.filesystem.FilesystemList.excludeDirs CedarBackup3.filesystem.FilesystemList-class.html#excludeDirs CedarBackup3.filesystem.FilesystemList._setExcludeDirs CedarBackup3.filesystem.FilesystemList-class.html#_setExcludeDirs CedarBackup3.filesystem.FilesystemList.ignoreFile CedarBackup3.filesystem.FilesystemList-class.html#ignoreFile CedarBackup3.filesystem.FilesystemList._setExcludeBasenamePatterns CedarBackup3.filesystem.FilesystemList-class.html#_setExcludeBasenamePatterns CedarBackup3.filesystem.FilesystemList.removeLinks CedarBackup3.filesystem.FilesystemList-class.html#removeLinks CedarBackup3.filesystem.FilesystemList._getExcludePaths CedarBackup3.filesystem.FilesystemList-class.html#_getExcludePaths CedarBackup3.filesystem.FilesystemList._getExcludeBasenamePatterns CedarBackup3.filesystem.FilesystemList-class.html#_getExcludeBasenamePatterns 
CedarBackup3.filesystem.FilesystemList._setExcludePaths CedarBackup3.filesystem.FilesystemList-class.html#_setExcludePaths CedarBackup3.filesystem.FilesystemList._getIgnoreFile CedarBackup3.filesystem.FilesystemList-class.html#_getIgnoreFile CedarBackup3.filesystem.FilesystemList._setExcludeLinks CedarBackup3.filesystem.FilesystemList-class.html#_setExcludeLinks CedarBackup3.filesystem.FilesystemList.addDirContents CedarBackup3.filesystem.FilesystemList-class.html#addDirContents CedarBackup3.filesystem.FilesystemList.excludePaths CedarBackup3.filesystem.FilesystemList-class.html#excludePaths CedarBackup3.filesystem.FilesystemList.addFile CedarBackup3.filesystem.FilesystemList-class.html#addFile CedarBackup3.filesystem.FilesystemList._getExcludePatterns CedarBackup3.filesystem.FilesystemList-class.html#_getExcludePatterns CedarBackup3.filesystem.FilesystemList.excludePatterns CedarBackup3.filesystem.FilesystemList-class.html#excludePatterns CedarBackup3.filesystem.FilesystemList.removeMatch CedarBackup3.filesystem.FilesystemList-class.html#removeMatch CedarBackup3.filesystem.FilesystemList._getExcludeFiles CedarBackup3.filesystem.FilesystemList-class.html#_getExcludeFiles CedarBackup3.filesystem.PurgeItemList CedarBackup3.filesystem.PurgeItemList-class.html CedarBackup3.filesystem.FilesystemList._setExcludeFiles CedarBackup3.filesystem.FilesystemList-class.html#_setExcludeFiles CedarBackup3.filesystem.FilesystemList._addDirContentsInternal CedarBackup3.filesystem.FilesystemList-class.html#_addDirContentsInternal CedarBackup3.filesystem.FilesystemList.removeInvalid CedarBackup3.filesystem.FilesystemList-class.html#removeInvalid CedarBackup3.filesystem.FilesystemList.excludeLinks CedarBackup3.filesystem.FilesystemList-class.html#excludeLinks CedarBackup3.filesystem.FilesystemList._getExcludeDirs CedarBackup3.filesystem.FilesystemList-class.html#_getExcludeDirs CedarBackup3.filesystem.FilesystemList._setExcludePatterns 
CedarBackup3.filesystem.FilesystemList-class.html#_setExcludePatterns CedarBackup3.filesystem.FilesystemList.excludeBasenamePatterns CedarBackup3.filesystem.FilesystemList-class.html#excludeBasenamePatterns CedarBackup3.filesystem.FilesystemList.removeDirs CedarBackup3.filesystem.FilesystemList-class.html#removeDirs CedarBackup3.filesystem.PurgeItemList.__init__ CedarBackup3.filesystem.PurgeItemList-class.html#__init__ CedarBackup3.filesystem.FilesystemList.normalize CedarBackup3.filesystem.FilesystemList-class.html#normalize CedarBackup3.filesystem.FilesystemList.excludeFiles CedarBackup3.filesystem.FilesystemList-class.html#excludeFiles CedarBackup3.filesystem.FilesystemList._getExcludeLinks CedarBackup3.filesystem.FilesystemList-class.html#_getExcludeLinks CedarBackup3.filesystem.FilesystemList.verify CedarBackup3.filesystem.FilesystemList-class.html#verify CedarBackup3.filesystem.FilesystemList.addDir CedarBackup3.filesystem.FilesystemList-class.html#addDir CedarBackup3.filesystem.FilesystemList._setIgnoreFile CedarBackup3.filesystem.FilesystemList-class.html#_setIgnoreFile CedarBackup3.filesystem.FilesystemList.removeFiles CedarBackup3.filesystem.FilesystemList-class.html#removeFiles CedarBackup3.filesystem.FilesystemList.excludeDirs CedarBackup3.filesystem.FilesystemList-class.html#excludeDirs CedarBackup3.filesystem.FilesystemList._setExcludeDirs CedarBackup3.filesystem.FilesystemList-class.html#_setExcludeDirs CedarBackup3.filesystem.PurgeItemList.removeYoungFiles CedarBackup3.filesystem.PurgeItemList-class.html#removeYoungFiles CedarBackup3.filesystem.FilesystemList.ignoreFile CedarBackup3.filesystem.FilesystemList-class.html#ignoreFile CedarBackup3.filesystem.FilesystemList._setExcludeBasenamePatterns CedarBackup3.filesystem.FilesystemList-class.html#_setExcludeBasenamePatterns CedarBackup3.filesystem.FilesystemList.removeLinks CedarBackup3.filesystem.FilesystemList-class.html#removeLinks CedarBackup3.filesystem.PurgeItemList.purgeItems 
CedarBackup3.filesystem.PurgeItemList-class.html#purgeItems CedarBackup3.filesystem.FilesystemList._getExcludePaths CedarBackup3.filesystem.FilesystemList-class.html#_getExcludePaths CedarBackup3.filesystem.FilesystemList._getExcludeBasenamePatterns CedarBackup3.filesystem.FilesystemList-class.html#_getExcludeBasenamePatterns CedarBackup3.filesystem.FilesystemList._setExcludePaths CedarBackup3.filesystem.FilesystemList-class.html#_setExcludePaths CedarBackup3.filesystem.FilesystemList._getIgnoreFile CedarBackup3.filesystem.FilesystemList-class.html#_getIgnoreFile CedarBackup3.filesystem.FilesystemList._setExcludeLinks CedarBackup3.filesystem.FilesystemList-class.html#_setExcludeLinks CedarBackup3.filesystem.FilesystemList.excludePaths CedarBackup3.filesystem.FilesystemList-class.html#excludePaths CedarBackup3.filesystem.PurgeItemList.addDirContents CedarBackup3.filesystem.PurgeItemList-class.html#addDirContents CedarBackup3.filesystem.FilesystemList.addFile CedarBackup3.filesystem.FilesystemList-class.html#addFile CedarBackup3.filesystem.FilesystemList._getExcludePatterns CedarBackup3.filesystem.FilesystemList-class.html#_getExcludePatterns CedarBackup3.filesystem.FilesystemList.excludePatterns CedarBackup3.filesystem.FilesystemList-class.html#excludePatterns CedarBackup3.filesystem.FilesystemList.removeMatch CedarBackup3.filesystem.FilesystemList-class.html#removeMatch CedarBackup3.filesystem.FilesystemList._getExcludeFiles CedarBackup3.filesystem.FilesystemList-class.html#_getExcludeFiles CedarBackup3.filesystem.SpanItem CedarBackup3.filesystem.SpanItem-class.html CedarBackup3.filesystem.SpanItem.__init__ CedarBackup3.filesystem.SpanItem-class.html#__init__ CedarBackup3.peer.LocalPeer CedarBackup3.peer.LocalPeer-class.html CedarBackup3.peer.LocalPeer._copyLocalFile CedarBackup3.peer.LocalPeer-class.html#_copyLocalFile CedarBackup3.peer.LocalPeer._setIgnoreFailureMode CedarBackup3.peer.LocalPeer-class.html#_setIgnoreFailureMode CedarBackup3.peer.LocalPeer._getName 
CedarBackup3.peer.LocalPeer-class.html#_getName CedarBackup3.peer.LocalPeer.__init__ CedarBackup3.peer.LocalPeer-class.html#__init__ CedarBackup3.peer.LocalPeer.checkCollectIndicator CedarBackup3.peer.LocalPeer-class.html#checkCollectIndicator CedarBackup3.peer.LocalPeer.writeStageIndicator CedarBackup3.peer.LocalPeer-class.html#writeStageIndicator CedarBackup3.peer.LocalPeer._getIgnoreFailureMode CedarBackup3.peer.LocalPeer-class.html#_getIgnoreFailureMode CedarBackup3.peer.LocalPeer._copyLocalDir CedarBackup3.peer.LocalPeer-class.html#_copyLocalDir CedarBackup3.peer.LocalPeer.ignoreFailureMode CedarBackup3.peer.LocalPeer-class.html#ignoreFailureMode CedarBackup3.peer.LocalPeer._getCollectDir CedarBackup3.peer.LocalPeer-class.html#_getCollectDir CedarBackup3.peer.LocalPeer.name CedarBackup3.peer.LocalPeer-class.html#name CedarBackup3.peer.LocalPeer.collectDir CedarBackup3.peer.LocalPeer-class.html#collectDir CedarBackup3.peer.LocalPeer._setCollectDir CedarBackup3.peer.LocalPeer-class.html#_setCollectDir CedarBackup3.peer.LocalPeer.stagePeer CedarBackup3.peer.LocalPeer-class.html#stagePeer CedarBackup3.peer.LocalPeer._setName CedarBackup3.peer.LocalPeer-class.html#_setName CedarBackup3.peer.RemotePeer CedarBackup3.peer.RemotePeer-class.html CedarBackup3.peer.RemotePeer._getWorkingDir CedarBackup3.peer.RemotePeer-class.html#_getWorkingDir CedarBackup3.peer.RemotePeer._setLocalUser CedarBackup3.peer.RemotePeer-class.html#_setLocalUser CedarBackup3.peer.RemotePeer._getLocalUser CedarBackup3.peer.RemotePeer-class.html#_getLocalUser CedarBackup3.peer.RemotePeer._getRcpCommand CedarBackup3.peer.RemotePeer-class.html#_getRcpCommand CedarBackup3.peer.RemotePeer._copyRemoteFile CedarBackup3.peer.RemotePeer-class.html#_copyRemoteFile CedarBackup3.peer.RemotePeer._buildCbackCommand CedarBackup3.peer.RemotePeer-class.html#_buildCbackCommand CedarBackup3.peer.RemotePeer.cbackCommand CedarBackup3.peer.RemotePeer-class.html#cbackCommand 
CedarBackup3.peer.RemotePeer._setIgnoreFailureMode CedarBackup3.peer.RemotePeer-class.html#_setIgnoreFailureMode CedarBackup3.peer.RemotePeer.localUser CedarBackup3.peer.RemotePeer-class.html#localUser CedarBackup3.peer.RemotePeer.executeRemoteCommand CedarBackup3.peer.RemotePeer-class.html#executeRemoteCommand CedarBackup3.peer.RemotePeer._getName CedarBackup3.peer.RemotePeer-class.html#_getName CedarBackup3.peer.RemotePeer.__init__ CedarBackup3.peer.RemotePeer-class.html#__init__ CedarBackup3.peer.RemotePeer.writeStageIndicator CedarBackup3.peer.RemotePeer-class.html#writeStageIndicator CedarBackup3.peer.RemotePeer._setCbackCommand CedarBackup3.peer.RemotePeer-class.html#_setCbackCommand CedarBackup3.peer.RemotePeer._getCbackCommand CedarBackup3.peer.RemotePeer-class.html#_getCbackCommand CedarBackup3.peer.RemotePeer.remoteUser CedarBackup3.peer.RemotePeer-class.html#remoteUser CedarBackup3.peer.RemotePeer.workingDir CedarBackup3.peer.RemotePeer-class.html#workingDir CedarBackup3.peer.RemotePeer.checkCollectIndicator CedarBackup3.peer.RemotePeer-class.html#checkCollectIndicator CedarBackup3.peer.RemotePeer._getDirContents CedarBackup3.peer.RemotePeer-class.html#_getDirContents CedarBackup3.peer.RemotePeer._copyRemoteDir CedarBackup3.peer.RemotePeer-class.html#_copyRemoteDir CedarBackup3.peer.RemotePeer.executeManagedAction CedarBackup3.peer.RemotePeer-class.html#executeManagedAction CedarBackup3.peer.RemotePeer._getIgnoreFailureMode CedarBackup3.peer.RemotePeer-class.html#_getIgnoreFailureMode CedarBackup3.peer.RemotePeer.ignoreFailureMode CedarBackup3.peer.RemotePeer-class.html#ignoreFailureMode CedarBackup3.peer.RemotePeer._setWorkingDir CedarBackup3.peer.RemotePeer-class.html#_setWorkingDir CedarBackup3.peer.RemotePeer.rcpCommand CedarBackup3.peer.RemotePeer-class.html#rcpCommand CedarBackup3.peer.RemotePeer.rshCommand CedarBackup3.peer.RemotePeer-class.html#rshCommand CedarBackup3.peer.RemotePeer.name CedarBackup3.peer.RemotePeer-class.html#name 
CedarBackup3.peer.RemotePeer._getCollectDir CedarBackup3.peer.RemotePeer-class.html#_getCollectDir CedarBackup3.peer.RemotePeer._setRemoteUser CedarBackup3.peer.RemotePeer-class.html#_setRemoteUser CedarBackup3.peer.RemotePeer._setRcpCommand CedarBackup3.peer.RemotePeer-class.html#_setRcpCommand CedarBackup3.peer.RemotePeer._executeRemoteCommand CedarBackup3.peer.RemotePeer-class.html#_executeRemoteCommand CedarBackup3.peer.RemotePeer.collectDir CedarBackup3.peer.RemotePeer-class.html#collectDir CedarBackup3.peer.RemotePeer._setCollectDir CedarBackup3.peer.RemotePeer-class.html#_setCollectDir CedarBackup3.peer.RemotePeer._getRemoteUser CedarBackup3.peer.RemotePeer-class.html#_getRemoteUser CedarBackup3.peer.RemotePeer.stagePeer CedarBackup3.peer.RemotePeer-class.html#stagePeer CedarBackup3.peer.RemotePeer._pushLocalFile CedarBackup3.peer.RemotePeer-class.html#_pushLocalFile CedarBackup3.peer.RemotePeer._setName CedarBackup3.peer.RemotePeer-class.html#_setName CedarBackup3.peer.RemotePeer._getRshCommand CedarBackup3.peer.RemotePeer-class.html#_getRshCommand CedarBackup3.peer.RemotePeer._setRshCommand CedarBackup3.peer.RemotePeer-class.html#_setRshCommand CedarBackup3.tools.amazons3.Options CedarBackup3.tools.amazons3.Options-class.html CedarBackup3.tools.amazons3.Options._getMode CedarBackup3.tools.amazons3.Options-class.html#_getMode CedarBackup3.tools.amazons3.Options.stacktrace CedarBackup3.tools.amazons3.Options-class.html#stacktrace CedarBackup3.tools.amazons3.Options.help CedarBackup3.tools.amazons3.Options-class.html#help CedarBackup3.tools.amazons3.Options.__str__ CedarBackup3.tools.amazons3.Options-class.html#__str__ CedarBackup3.tools.amazons3.Options._setS3BucketUrl CedarBackup3.tools.amazons3.Options-class.html#_setS3BucketUrl CedarBackup3.tools.amazons3.Options._setStacktrace CedarBackup3.tools.amazons3.Options-class.html#_setStacktrace CedarBackup3.tools.amazons3.Options.verifyOnly CedarBackup3.tools.amazons3.Options-class.html#verifyOnly 
CedarBackup3.tools.amazons3.Options.owner CedarBackup3.tools.amazons3.Options-class.html#owner CedarBackup3.tools.amazons3.Options._setQuiet CedarBackup3.tools.amazons3.Options-class.html#_setQuiet CedarBackup3.tools.amazons3.Options._setVersion CedarBackup3.tools.amazons3.Options-class.html#_setVersion CedarBackup3.tools.amazons3.Options.__lt__ CedarBackup3.tools.amazons3.Options-class.html#__lt__ CedarBackup3.tools.amazons3.Options._getVerbose CedarBackup3.tools.amazons3.Options-class.html#_getVerbose CedarBackup3.tools.amazons3.Options.verbose CedarBackup3.tools.amazons3.Options-class.html#verbose CedarBackup3.tools.amazons3.Options._setHelp CedarBackup3.tools.amazons3.Options-class.html#_setHelp CedarBackup3.tools.amazons3.Options._getVerifyOnly CedarBackup3.tools.amazons3.Options-class.html#_getVerifyOnly CedarBackup3.tools.amazons3.Options._getDebug CedarBackup3.tools.amazons3.Options-class.html#_getDebug CedarBackup3.tools.amazons3.Options.sourceDir CedarBackup3.tools.amazons3.Options-class.html#sourceDir CedarBackup3.tools.amazons3.Options._parseArgumentList CedarBackup3.tools.amazons3.Options-class.html#_parseArgumentList CedarBackup3.tools.amazons3.Options.buildArgumentList CedarBackup3.tools.amazons3.Options-class.html#buildArgumentList CedarBackup3.tools.amazons3.Options.__cmp__ CedarBackup3.tools.amazons3.Options-class.html#__cmp__ CedarBackup3.tools.amazons3.Options._getStacktrace CedarBackup3.tools.amazons3.Options-class.html#_getStacktrace CedarBackup3.tools.amazons3.Options._setOwner CedarBackup3.tools.amazons3.Options-class.html#_setOwner CedarBackup3.tools.amazons3.Options._setMode CedarBackup3.tools.amazons3.Options-class.html#_setMode CedarBackup3.tools.amazons3.Options.__init__ CedarBackup3.tools.amazons3.Options-class.html#__init__ CedarBackup3.tools.amazons3.Options._getQuiet CedarBackup3.tools.amazons3.Options-class.html#_getQuiet CedarBackup3.tools.amazons3.Options.mode CedarBackup3.tools.amazons3.Options-class.html#mode 
CedarBackup3.tools.amazons3.Options._getVersion CedarBackup3.tools.amazons3.Options-class.html#_getVersion CedarBackup3.tools.amazons3.Options._getLogfile CedarBackup3.tools.amazons3.Options-class.html#_getLogfile CedarBackup3.tools.amazons3.Options._setOutput CedarBackup3.tools.amazons3.Options-class.html#_setOutput CedarBackup3.tools.amazons3.Options.version CedarBackup3.tools.amazons3.Options-class.html#version CedarBackup3.tools.amazons3.Options._setVerifyOnly CedarBackup3.tools.amazons3.Options-class.html#_setVerifyOnly CedarBackup3.tools.amazons3.Options.debug CedarBackup3.tools.amazons3.Options-class.html#debug CedarBackup3.tools.amazons3.Options.ignoreWarnings CedarBackup3.tools.amazons3.Options-class.html#ignoreWarnings CedarBackup3.tools.amazons3.Options._setDiagnostics CedarBackup3.tools.amazons3.Options-class.html#_setDiagnostics CedarBackup3.tools.amazons3.Options._setSourceDir CedarBackup3.tools.amazons3.Options-class.html#_setSourceDir CedarBackup3.tools.amazons3.Options.__gt__ CedarBackup3.tools.amazons3.Options-class.html#__gt__ CedarBackup3.tools.amazons3.Options.validate CedarBackup3.tools.amazons3.Options-class.html#validate CedarBackup3.tools.amazons3.Options.logfile CedarBackup3.tools.amazons3.Options-class.html#logfile CedarBackup3.tools.amazons3.Options.__eq__ CedarBackup3.tools.amazons3.Options-class.html#__eq__ CedarBackup3.tools.amazons3.Options.buildArgumentString CedarBackup3.tools.amazons3.Options-class.html#buildArgumentString CedarBackup3.tools.amazons3.Options._setDebug CedarBackup3.tools.amazons3.Options-class.html#_setDebug CedarBackup3.tools.amazons3.Options._setIgnoreWarnings CedarBackup3.tools.amazons3.Options-class.html#_setIgnoreWarnings CedarBackup3.tools.amazons3.Options._getSourceDir CedarBackup3.tools.amazons3.Options-class.html#_getSourceDir CedarBackup3.tools.amazons3.Options._getOwner CedarBackup3.tools.amazons3.Options-class.html#_getOwner CedarBackup3.tools.amazons3.Options.s3BucketUrl 
CedarBackup3.tools.amazons3.Options-class.html#s3BucketUrl CedarBackup3.tools.amazons3.Options._getOutput CedarBackup3.tools.amazons3.Options-class.html#_getOutput CedarBackup3.tools.amazons3.Options._setLogfile CedarBackup3.tools.amazons3.Options-class.html#_setLogfile CedarBackup3.tools.amazons3.Options.quiet CedarBackup3.tools.amazons3.Options-class.html#quiet CedarBackup3.tools.amazons3.Options.__repr__ CedarBackup3.tools.amazons3.Options-class.html#__repr__ CedarBackup3.tools.amazons3.Options.diagnostics CedarBackup3.tools.amazons3.Options-class.html#diagnostics CedarBackup3.tools.amazons3.Options._getDiagnostics CedarBackup3.tools.amazons3.Options-class.html#_getDiagnostics CedarBackup3.tools.amazons3.Options.output CedarBackup3.tools.amazons3.Options-class.html#output CedarBackup3.tools.amazons3.Options._setVerbose CedarBackup3.tools.amazons3.Options-class.html#_setVerbose CedarBackup3.tools.amazons3.Options._getHelp CedarBackup3.tools.amazons3.Options-class.html#_getHelp CedarBackup3.tools.amazons3.Options._getIgnoreWarnings CedarBackup3.tools.amazons3.Options-class.html#_getIgnoreWarnings CedarBackup3.tools.amazons3.Options._getS3BucketUrl CedarBackup3.tools.amazons3.Options-class.html#_getS3BucketUrl CedarBackup3.tools.span.SpanOptions CedarBackup3.tools.span.SpanOptions-class.html CedarBackup3.cli.Options._getMode CedarBackup3.cli.Options-class.html#_getMode CedarBackup3.cli.Options.stacktrace CedarBackup3.cli.Options-class.html#stacktrace CedarBackup3.cli.Options.managed CedarBackup3.cli.Options-class.html#managed CedarBackup3.cli.Options.help CedarBackup3.cli.Options-class.html#help CedarBackup3.cli.Options._getFull CedarBackup3.cli.Options-class.html#_getFull CedarBackup3.cli.Options.__str__ CedarBackup3.cli.Options-class.html#__str__ CedarBackup3.cli.Options._setStacktrace CedarBackup3.cli.Options-class.html#_setStacktrace CedarBackup3.cli.Options.actions CedarBackup3.cli.Options-class.html#actions CedarBackup3.cli.Options.owner 
CedarBackup3.cli.Options-class.html#owner CedarBackup3.cli.Options.__lt__ CedarBackup3.cli.Options-class.html#__lt__ CedarBackup3.cli.Options._setVersion CedarBackup3.cli.Options-class.html#_setVersion CedarBackup3.cli.Options._setQuiet CedarBackup3.cli.Options-class.html#_setQuiet CedarBackup3.cli.Options._getVerbose CedarBackup3.cli.Options-class.html#_getVerbose CedarBackup3.cli.Options.verbose CedarBackup3.cli.Options-class.html#verbose CedarBackup3.cli.Options._setHelp CedarBackup3.cli.Options-class.html#_setHelp CedarBackup3.cli.Options._getDiagnostics CedarBackup3.cli.Options-class.html#_getDiagnostics CedarBackup3.cli.Options._getDebug CedarBackup3.cli.Options-class.html#_getDebug CedarBackup3.cli.Options._parseArgumentList CedarBackup3.cli.Options-class.html#_parseArgumentList CedarBackup3.cli.Options.buildArgumentList CedarBackup3.cli.Options-class.html#buildArgumentList CedarBackup3.cli.Options._getManagedOnly CedarBackup3.cli.Options-class.html#_getManagedOnly CedarBackup3.cli.Options.__cmp__ CedarBackup3.cli.Options-class.html#__cmp__ CedarBackup3.cli.Options._setOutput CedarBackup3.cli.Options-class.html#_setOutput CedarBackup3.cli.Options._setOwner CedarBackup3.cli.Options-class.html#_setOwner CedarBackup3.cli.Options._setMode CedarBackup3.cli.Options-class.html#_setMode CedarBackup3.cli.Options.__init__ CedarBackup3.cli.Options-class.html#__init__ CedarBackup3.cli.Options._getQuiet CedarBackup3.cli.Options-class.html#_getQuiet CedarBackup3.cli.Options.managedOnly CedarBackup3.cli.Options-class.html#managedOnly CedarBackup3.cli.Options._getManaged CedarBackup3.cli.Options-class.html#_getManaged CedarBackup3.cli.Options.config CedarBackup3.cli.Options-class.html#config CedarBackup3.cli.Options.__repr__ CedarBackup3.cli.Options-class.html#__repr__ CedarBackup3.cli.Options._getVersion CedarBackup3.cli.Options-class.html#_getVersion CedarBackup3.cli.Options._getLogfile CedarBackup3.cli.Options-class.html#_getLogfile CedarBackup3.cli.Options.full 
CedarBackup3.cli.Options-class.html#full CedarBackup3.cli.Options._getConfig CedarBackup3.cli.Options-class.html#_getConfig CedarBackup3.cli.Options._setConfig CedarBackup3.cli.Options-class.html#_setConfig CedarBackup3.cli.Options._getStacktrace CedarBackup3.cli.Options-class.html#_getStacktrace CedarBackup3.cli.Options._setFull CedarBackup3.cli.Options-class.html#_setFull CedarBackup3.cli.Options.version CedarBackup3.cli.Options-class.html#version CedarBackup3.cli.Options._setManagedOnly CedarBackup3.cli.Options-class.html#_setManagedOnly CedarBackup3.cli.Options._setDiagnostics CedarBackup3.cli.Options-class.html#_setDiagnostics CedarBackup3.cli.Options.__gt__ CedarBackup3.cli.Options-class.html#__gt__ CedarBackup3.tools.span.SpanOptions.validate CedarBackup3.tools.span.SpanOptions-class.html#validate CedarBackup3.cli.Options.logfile CedarBackup3.cli.Options-class.html#logfile CedarBackup3.cli.Options.__eq__ CedarBackup3.cli.Options-class.html#__eq__ CedarBackup3.cli.Options.buildArgumentString CedarBackup3.cli.Options-class.html#buildArgumentString CedarBackup3.cli.Options._setDebug CedarBackup3.cli.Options-class.html#_setDebug CedarBackup3.cli.Options._setManaged CedarBackup3.cli.Options-class.html#_setManaged CedarBackup3.cli.Options._setActions CedarBackup3.cli.Options-class.html#_setActions CedarBackup3.cli.Options._getHelp CedarBackup3.cli.Options-class.html#_getHelp CedarBackup3.cli.Options._getOwner CedarBackup3.cli.Options-class.html#_getOwner CedarBackup3.cli.Options._setLogfile CedarBackup3.cli.Options-class.html#_setLogfile CedarBackup3.cli.Options.quiet CedarBackup3.cli.Options-class.html#quiet CedarBackup3.cli.Options.__le__ CedarBackup3.cli.Options-class.html#__le__ CedarBackup3.cli.Options.mode CedarBackup3.cli.Options-class.html#mode CedarBackup3.cli.Options.diagnostics CedarBackup3.cli.Options-class.html#diagnostics CedarBackup3.cli.Options.debug CedarBackup3.cli.Options-class.html#debug CedarBackup3.cli.Options.output 
CedarBackup3.cli.Options-class.html#output CedarBackup3.cli.Options._setVerbose CedarBackup3.cli.Options-class.html#_setVerbose CedarBackup3.cli.Options._getOutput CedarBackup3.cli.Options-class.html#_getOutput CedarBackup3.cli.Options._getActions CedarBackup3.cli.Options-class.html#_getActions CedarBackup3.cli.Options.__ge__ CedarBackup3.cli.Options-class.html#__ge__ CedarBackup3.util.AbsolutePathList CedarBackup3.util.AbsolutePathList-class.html CedarBackup3.util.UnorderedList.__lt__ CedarBackup3.util.UnorderedList-class.html#__lt__ CedarBackup3.util.UnorderedList.mixedsort CedarBackup3.util.UnorderedList-class.html#mixedsort CedarBackup3.util.AbsolutePathList.append CedarBackup3.util.AbsolutePathList-class.html#append CedarBackup3.util.UnorderedList.__ne__ CedarBackup3.util.UnorderedList-class.html#__ne__ CedarBackup3.util.AbsolutePathList.extend CedarBackup3.util.AbsolutePathList-class.html#extend CedarBackup3.util.UnorderedList.__gt__ CedarBackup3.util.UnorderedList-class.html#__gt__ CedarBackup3.util.UnorderedList.__eq__ CedarBackup3.util.UnorderedList-class.html#__eq__ CedarBackup3.util.AbsolutePathList.insert CedarBackup3.util.AbsolutePathList-class.html#insert CedarBackup3.util.UnorderedList.mixedkey CedarBackup3.util.UnorderedList-class.html#mixedkey CedarBackup3.util.UnorderedList.__le__ CedarBackup3.util.UnorderedList-class.html#__le__ CedarBackup3.util.UnorderedList.__ge__ CedarBackup3.util.UnorderedList-class.html#__ge__ CedarBackup3.util.Diagnostics CedarBackup3.util.Diagnostics-class.html CedarBackup3.util.Diagnostics._getEncoding CedarBackup3.util.Diagnostics-class.html#_getEncoding CedarBackup3.util.Diagnostics.encoding CedarBackup3.util.Diagnostics-class.html#encoding CedarBackup3.util.Diagnostics.locale CedarBackup3.util.Diagnostics-class.html#locale CedarBackup3.util.Diagnostics.__str__ CedarBackup3.util.Diagnostics-class.html#__str__ CedarBackup3.util.Diagnostics.getValues CedarBackup3.util.Diagnostics-class.html#getValues 
CedarBackup3.util.Diagnostics.interpreter CedarBackup3.util.Diagnostics-class.html#interpreter CedarBackup3.util.Diagnostics.__init__ CedarBackup3.util.Diagnostics-class.html#__init__ CedarBackup3.util.Diagnostics.platform CedarBackup3.util.Diagnostics-class.html#platform CedarBackup3.util.Diagnostics.version CedarBackup3.util.Diagnostics-class.html#version CedarBackup3.util.Diagnostics.printDiagnostics CedarBackup3.util.Diagnostics-class.html#printDiagnostics CedarBackup3.util.Diagnostics._getVersion CedarBackup3.util.Diagnostics-class.html#_getVersion CedarBackup3.util.Diagnostics._getTimestamp CedarBackup3.util.Diagnostics-class.html#_getTimestamp CedarBackup3.util.Diagnostics.timestamp CedarBackup3.util.Diagnostics-class.html#timestamp CedarBackup3.util.Diagnostics._getPlatform CedarBackup3.util.Diagnostics-class.html#_getPlatform CedarBackup3.util.Diagnostics.logDiagnostics CedarBackup3.util.Diagnostics-class.html#logDiagnostics CedarBackup3.util.Diagnostics._buildDiagnosticLines CedarBackup3.util.Diagnostics-class.html#_buildDiagnosticLines CedarBackup3.util.Diagnostics._getInterpreter CedarBackup3.util.Diagnostics-class.html#_getInterpreter CedarBackup3.util.Diagnostics._getMaxLength CedarBackup3.util.Diagnostics-class.html#_getMaxLength CedarBackup3.util.Diagnostics._getLocale CedarBackup3.util.Diagnostics-class.html#_getLocale CedarBackup3.util.Diagnostics.__repr__ CedarBackup3.util.Diagnostics-class.html#__repr__ CedarBackup3.util.DirectedGraph CedarBackup3.util.DirectedGraph-class.html CedarBackup3.util.DirectedGraph._DISCOVERED CedarBackup3.util.DirectedGraph-class.html#_DISCOVERED CedarBackup3.util.DirectedGraph.__str__ CedarBackup3.util.DirectedGraph-class.html#__str__ CedarBackup3.util.DirectedGraph.topologicalSort CedarBackup3.util.DirectedGraph-class.html#topologicalSort CedarBackup3.util.DirectedGraph._EXPLORED CedarBackup3.util.DirectedGraph-class.html#_EXPLORED CedarBackup3.util.DirectedGraph.__lt__ 
CedarBackup3.util.DirectedGraph-class.html#__lt__ CedarBackup3.util.DirectedGraph._getName CedarBackup3.util.DirectedGraph-class.html#_getName CedarBackup3.util.DirectedGraph.__init__ CedarBackup3.util.DirectedGraph-class.html#__init__ CedarBackup3.util.DirectedGraph.__cmp__ CedarBackup3.util.DirectedGraph-class.html#__cmp__ CedarBackup3.util.DirectedGraph._UNDISCOVERED CedarBackup3.util.DirectedGraph-class.html#_UNDISCOVERED CedarBackup3.util.DirectedGraph.createVertex CedarBackup3.util.DirectedGraph-class.html#createVertex CedarBackup3.util.DirectedGraph._topologicalSort CedarBackup3.util.DirectedGraph-class.html#_topologicalSort CedarBackup3.util.DirectedGraph.__gt__ CedarBackup3.util.DirectedGraph-class.html#__gt__ CedarBackup3.util.DirectedGraph.createEdge CedarBackup3.util.DirectedGraph-class.html#createEdge CedarBackup3.util.DirectedGraph.__eq__ CedarBackup3.util.DirectedGraph-class.html#__eq__ CedarBackup3.util.DirectedGraph.name CedarBackup3.util.DirectedGraph-class.html#name CedarBackup3.util.DirectedGraph.__le__ CedarBackup3.util.DirectedGraph-class.html#__le__ CedarBackup3.util.DirectedGraph.__repr__ CedarBackup3.util.DirectedGraph-class.html#__repr__ CedarBackup3.util.DirectedGraph.__ge__ CedarBackup3.util.DirectedGraph-class.html#__ge__ CedarBackup3.util.ObjectTypeList CedarBackup3.util.ObjectTypeList-class.html CedarBackup3.util.UnorderedList.__lt__ CedarBackup3.util.UnorderedList-class.html#__lt__ CedarBackup3.util.ObjectTypeList.append CedarBackup3.util.ObjectTypeList-class.html#append CedarBackup3.util.UnorderedList.mixedsort CedarBackup3.util.UnorderedList-class.html#mixedsort CedarBackup3.util.ObjectTypeList.__init__ CedarBackup3.util.ObjectTypeList-class.html#__init__ CedarBackup3.util.UnorderedList.__ne__ CedarBackup3.util.UnorderedList-class.html#__ne__ CedarBackup3.util.ObjectTypeList.extend CedarBackup3.util.ObjectTypeList-class.html#extend CedarBackup3.util.UnorderedList.__gt__ CedarBackup3.util.UnorderedList-class.html#__gt__ 
CedarBackup3.util.UnorderedList.__eq__ CedarBackup3.util.UnorderedList-class.html#__eq__ CedarBackup3.util.ObjectTypeList.insert CedarBackup3.util.ObjectTypeList-class.html#insert CedarBackup3.util.UnorderedList.mixedkey CedarBackup3.util.UnorderedList-class.html#mixedkey CedarBackup3.util.UnorderedList.__le__ CedarBackup3.util.UnorderedList-class.html#__le__ CedarBackup3.util.UnorderedList.__ge__ CedarBackup3.util.UnorderedList-class.html#__ge__ CedarBackup3.util.PathResolverSingleton CedarBackup3.util.PathResolverSingleton-class.html CedarBackup3.util.PathResolverSingleton._Helper CedarBackup3.util.PathResolverSingleton._Helper-class.html CedarBackup3.util.PathResolverSingleton.getInstance CedarBackup3.util.PathResolverSingleton-class.html#getInstance CedarBackup3.util.PathResolverSingleton._instance CedarBackup3.util.PathResolverSingleton-class.html#_instance CedarBackup3.util.PathResolverSingleton.lookup CedarBackup3.util.PathResolverSingleton-class.html#lookup CedarBackup3.util.PathResolverSingleton._mapping CedarBackup3.util.PathResolverSingleton-class.html#_mapping CedarBackup3.util.PathResolverSingleton.__init__ CedarBackup3.util.PathResolverSingleton-class.html#__init__ CedarBackup3.util.PathResolverSingleton.fill CedarBackup3.util.PathResolverSingleton-class.html#fill CedarBackup3.util.PathResolverSingleton._Helper CedarBackup3.util.PathResolverSingleton._Helper-class.html CedarBackup3.util.PathResolverSingleton._Helper.__call__ CedarBackup3.util.PathResolverSingleton._Helper-class.html#__call__ CedarBackup3.util.PathResolverSingleton._Helper.__init__ CedarBackup3.util.PathResolverSingleton._Helper-class.html#__init__ CedarBackup3.util.Pipe CedarBackup3.util.Pipe-class.html CedarBackup3.util.Pipe.__init__ CedarBackup3.util.Pipe-class.html#__init__ CedarBackup3.util.RegexList CedarBackup3.util.RegexList-class.html CedarBackup3.util.UnorderedList.__lt__ CedarBackup3.util.UnorderedList-class.html#__lt__ CedarBackup3.util.UnorderedList.mixedsort 
CedarBackup3.util.UnorderedList-class.html#mixedsort CedarBackup3.util.RegexList.append CedarBackup3.util.RegexList-class.html#append CedarBackup3.util.UnorderedList.__ne__ CedarBackup3.util.UnorderedList-class.html#__ne__ CedarBackup3.util.RegexList.extend CedarBackup3.util.RegexList-class.html#extend CedarBackup3.util.UnorderedList.__gt__ CedarBackup3.util.UnorderedList-class.html#__gt__ CedarBackup3.util.UnorderedList.__eq__ CedarBackup3.util.UnorderedList-class.html#__eq__ CedarBackup3.util.RegexList.insert CedarBackup3.util.RegexList-class.html#insert CedarBackup3.util.UnorderedList.mixedkey CedarBackup3.util.UnorderedList-class.html#mixedkey CedarBackup3.util.UnorderedList.__le__ CedarBackup3.util.UnorderedList-class.html#__le__ CedarBackup3.util.UnorderedList.__ge__ CedarBackup3.util.UnorderedList-class.html#__ge__ CedarBackup3.util.RegexMatchList CedarBackup3.util.RegexMatchList-class.html CedarBackup3.util.UnorderedList.__lt__ CedarBackup3.util.UnorderedList-class.html#__lt__ CedarBackup3.util.RegexMatchList.append CedarBackup3.util.RegexMatchList-class.html#append CedarBackup3.util.UnorderedList.mixedsort CedarBackup3.util.UnorderedList-class.html#mixedsort CedarBackup3.util.RegexMatchList.__init__ CedarBackup3.util.RegexMatchList-class.html#__init__ CedarBackup3.util.UnorderedList.__ne__ CedarBackup3.util.UnorderedList-class.html#__ne__ CedarBackup3.util.RegexMatchList.extend CedarBackup3.util.RegexMatchList-class.html#extend CedarBackup3.util.UnorderedList.__gt__ CedarBackup3.util.UnorderedList-class.html#__gt__ CedarBackup3.util.UnorderedList.__eq__ CedarBackup3.util.UnorderedList-class.html#__eq__ CedarBackup3.util.RegexMatchList.insert CedarBackup3.util.RegexMatchList-class.html#insert CedarBackup3.util.UnorderedList.mixedkey CedarBackup3.util.UnorderedList-class.html#mixedkey CedarBackup3.util.UnorderedList.__le__ CedarBackup3.util.UnorderedList-class.html#__le__ CedarBackup3.util.UnorderedList.__ge__ 
CedarBackup3.util.UnorderedList-class.html#__ge__ CedarBackup3.util.RestrictedContentList CedarBackup3.util.RestrictedContentList-class.html CedarBackup3.util.UnorderedList.__lt__ CedarBackup3.util.UnorderedList-class.html#__lt__ CedarBackup3.util.RestrictedContentList.append CedarBackup3.util.RestrictedContentList-class.html#append CedarBackup3.util.UnorderedList.mixedsort CedarBackup3.util.UnorderedList-class.html#mixedsort CedarBackup3.util.RestrictedContentList.__init__ CedarBackup3.util.RestrictedContentList-class.html#__init__ CedarBackup3.util.UnorderedList.__ne__ CedarBackup3.util.UnorderedList-class.html#__ne__ CedarBackup3.util.RestrictedContentList.extend CedarBackup3.util.RestrictedContentList-class.html#extend CedarBackup3.util.UnorderedList.__gt__ CedarBackup3.util.UnorderedList-class.html#__gt__ CedarBackup3.util.UnorderedList.__eq__ CedarBackup3.util.UnorderedList-class.html#__eq__ CedarBackup3.util.RestrictedContentList.insert CedarBackup3.util.RestrictedContentList-class.html#insert CedarBackup3.util.UnorderedList.mixedkey CedarBackup3.util.UnorderedList-class.html#mixedkey CedarBackup3.util.UnorderedList.__le__ CedarBackup3.util.UnorderedList-class.html#__le__ CedarBackup3.util.UnorderedList.__ge__ CedarBackup3.util.UnorderedList-class.html#__ge__ CedarBackup3.util.UnorderedList CedarBackup3.util.UnorderedList-class.html CedarBackup3.util.UnorderedList.__lt__ CedarBackup3.util.UnorderedList-class.html#__lt__ CedarBackup3.util.UnorderedList.mixedsort CedarBackup3.util.UnorderedList-class.html#mixedsort CedarBackup3.util.UnorderedList.__ne__ CedarBackup3.util.UnorderedList-class.html#__ne__ CedarBackup3.util.UnorderedList.__gt__ CedarBackup3.util.UnorderedList-class.html#__gt__ CedarBackup3.util.UnorderedList.__eq__ CedarBackup3.util.UnorderedList-class.html#__eq__ CedarBackup3.util.UnorderedList.mixedkey CedarBackup3.util.UnorderedList-class.html#mixedkey CedarBackup3.util.UnorderedList.__le__ CedarBackup3.util.UnorderedList-class.html#__le__ 
CedarBackup3.util.UnorderedList.__ge__ CedarBackup3.util.UnorderedList-class.html#__ge__ CedarBackup3.util._Vertex CedarBackup3.util._Vertex-class.html CedarBackup3.util._Vertex.__init__ CedarBackup3.util._Vertex-class.html#__init__ CedarBackup3.writers.cdwriter.CdWriter CedarBackup3.writers.cdwriter.CdWriter-class.html CedarBackup3.writers.cdwriter.CdWriter._createImage CedarBackup3.writers.cdwriter.CdWriter-class.html#_createImage CedarBackup3.writers.cdwriter.CdWriter._calculateCapacity CedarBackup3.writers.cdwriter.CdWriter-class.html#_calculateCapacity CedarBackup3.writers.cdwriter.CdWriter._buildPropertiesArgs CedarBackup3.writers.cdwriter.CdWriter-class.html#_buildPropertiesArgs CedarBackup3.writers.cdwriter.CdWriter.writeImage CedarBackup3.writers.cdwriter.CdWriter-class.html#writeImage CedarBackup3.writers.cdwriter.CdWriter.deviceHasTray CedarBackup3.writers.cdwriter.CdWriter-class.html#deviceHasTray CedarBackup3.writers.cdwriter.CdWriter.openTray CedarBackup3.writers.cdwriter.CdWriter-class.html#openTray CedarBackup3.writers.cdwriter.CdWriter.addImageEntry CedarBackup3.writers.cdwriter.CdWriter-class.html#addImageEntry CedarBackup3.writers.cdwriter.CdWriter._buildWriteArgs CedarBackup3.writers.cdwriter.CdWriter-class.html#_buildWriteArgs CedarBackup3.writers.cdwriter.CdWriter.unlockTray CedarBackup3.writers.cdwriter.CdWriter-class.html#unlockTray CedarBackup3.writers.cdwriter.CdWriter._parseBoundariesOutput CedarBackup3.writers.cdwriter.CdWriter-class.html#_parseBoundariesOutput CedarBackup3.writers.cdwriter.CdWriter._getHardwareId CedarBackup3.writers.cdwriter.CdWriter-class.html#_getHardwareId CedarBackup3.writers.cdwriter.CdWriter.refreshMedia CedarBackup3.writers.cdwriter.CdWriter-class.html#refreshMedia CedarBackup3.writers.cdwriter.CdWriter.closeTray CedarBackup3.writers.cdwriter.CdWriter-class.html#closeTray CedarBackup3.writers.cdwriter.CdWriter.initializeImage CedarBackup3.writers.cdwriter.CdWriter-class.html#initializeImage 
CedarBackup3.writers.cdwriter.CdWriter.deviceCanEject CedarBackup3.writers.cdwriter.CdWriter-class.html#deviceCanEject CedarBackup3.writers.cdwriter.CdWriter.__init__ CedarBackup3.writers.cdwriter.CdWriter-class.html#__init__ CedarBackup3.writers.cdwriter.CdWriter.refreshMediaDelay CedarBackup3.writers.cdwriter.CdWriter-class.html#refreshMediaDelay CedarBackup3.writers.cdwriter.CdWriter._buildCloseTrayArgs CedarBackup3.writers.cdwriter.CdWriter-class.html#_buildCloseTrayArgs CedarBackup3.writers.cdwriter.CdWriter._getDeviceHasTray CedarBackup3.writers.cdwriter.CdWriter-class.html#_getDeviceHasTray CedarBackup3.writers.cdwriter.CdWriter.getEstimatedImageSize CedarBackup3.writers.cdwriter.CdWriter-class.html#getEstimatedImageSize CedarBackup3.writers.cdwriter.CdWriter.media CedarBackup3.writers.cdwriter.CdWriter-class.html#media CedarBackup3.writers.cdwriter.CdWriter._retrieveProperties CedarBackup3.writers.cdwriter.CdWriter-class.html#_retrieveProperties CedarBackup3.writers.cdwriter.CdWriter.deviceVendor CedarBackup3.writers.cdwriter.CdWriter-class.html#deviceVendor CedarBackup3.writers.cdwriter.CdWriter.hardwareId CedarBackup3.writers.cdwriter.CdWriter-class.html#hardwareId CedarBackup3.writers.cdwriter.CdWriter._getDeviceCanEject CedarBackup3.writers.cdwriter.CdWriter-class.html#_getDeviceCanEject CedarBackup3.writers.cdwriter.CdWriter._getMedia CedarBackup3.writers.cdwriter.CdWriter-class.html#_getMedia CedarBackup3.writers.cdwriter.CdWriter.isRewritable CedarBackup3.writers.cdwriter.CdWriter-class.html#isRewritable CedarBackup3.writers.cdwriter.CdWriter.deviceType CedarBackup3.writers.cdwriter.CdWriter-class.html#deviceType CedarBackup3.writers.cdwriter.CdWriter.setImageNewDisc CedarBackup3.writers.cdwriter.CdWriter-class.html#setImageNewDisc CedarBackup3.writers.cdwriter.CdWriter.driveSpeed CedarBackup3.writers.cdwriter.CdWriter-class.html#driveSpeed CedarBackup3.writers.cdwriter.CdWriter._getDevice CedarBackup3.writers.cdwriter.CdWriter-class.html#_getDevice 
CedarBackup3.writers.cdwriter.CdWriter.deviceBufferSize CedarBackup3.writers.cdwriter.CdWriter-class.html#deviceBufferSize CedarBackup3.writers.cdwriter.CdWriter._getDeviceType CedarBackup3.writers.cdwriter.CdWriter-class.html#_getDeviceType CedarBackup3.writers.cdwriter.CdWriter._getDeviceSupportsMulti CedarBackup3.writers.cdwriter.CdWriter-class.html#_getDeviceSupportsMulti CedarBackup3.writers.cdwriter.CdWriter._getScsiId CedarBackup3.writers.cdwriter.CdWriter-class.html#_getScsiId CedarBackup3.writers.cdwriter.CdWriter._buildBlankArgs CedarBackup3.writers.cdwriter.CdWriter-class.html#_buildBlankArgs CedarBackup3.writers.cdwriter.CdWriter._getDriveSpeed CedarBackup3.writers.cdwriter.CdWriter-class.html#_getDriveSpeed CedarBackup3.writers.cdwriter.CdWriter._getDeviceVendor CedarBackup3.writers.cdwriter.CdWriter-class.html#_getDeviceVendor CedarBackup3.writers.cdwriter.CdWriter._writeImage CedarBackup3.writers.cdwriter.CdWriter-class.html#_writeImage CedarBackup3.writers.cdwriter.CdWriter.deviceId CedarBackup3.writers.cdwriter.CdWriter-class.html#deviceId CedarBackup3.writers.cdwriter.CdWriter._blankMedia CedarBackup3.writers.cdwriter.CdWriter-class.html#_blankMedia CedarBackup3.writers.cdwriter.CdWriter._buildOpenTrayArgs CedarBackup3.writers.cdwriter.CdWriter-class.html#_buildOpenTrayArgs CedarBackup3.writers.cdwriter.CdWriter._getDeviceBufferSize CedarBackup3.writers.cdwriter.CdWriter-class.html#_getDeviceBufferSize CedarBackup3.writers.cdwriter.CdWriter.deviceSupportsMulti CedarBackup3.writers.cdwriter.CdWriter-class.html#deviceSupportsMulti CedarBackup3.writers.cdwriter.CdWriter._getEjectDelay CedarBackup3.writers.cdwriter.CdWriter-class.html#_getEjectDelay CedarBackup3.writers.cdwriter.CdWriter._getRefreshMediaDelay CedarBackup3.writers.cdwriter.CdWriter-class.html#_getRefreshMediaDelay CedarBackup3.writers.cdwriter.CdWriter.scsiId CedarBackup3.writers.cdwriter.CdWriter-class.html#scsiId CedarBackup3.writers.cdwriter.CdWriter._buildUnlockTrayArgs 
CedarBackup3.writers.cdwriter.CdWriter-class.html#_buildUnlockTrayArgs CedarBackup3.writers.cdwriter.CdWriter.device CedarBackup3.writers.cdwriter.CdWriter-class.html#device CedarBackup3.writers.cdwriter.CdWriter._getDeviceId CedarBackup3.writers.cdwriter.CdWriter-class.html#_getDeviceId CedarBackup3.writers.cdwriter.CdWriter.retrieveCapacity CedarBackup3.writers.cdwriter.CdWriter-class.html#retrieveCapacity CedarBackup3.writers.cdwriter.CdWriter._getBoundaries CedarBackup3.writers.cdwriter.CdWriter-class.html#_getBoundaries CedarBackup3.writers.cdwriter.CdWriter._buildBoundariesArgs CedarBackup3.writers.cdwriter.CdWriter-class.html#_buildBoundariesArgs CedarBackup3.writers.cdwriter.CdWriter._parsePropertiesOutput CedarBackup3.writers.cdwriter.CdWriter-class.html#_parsePropertiesOutput CedarBackup3.writers.cdwriter.CdWriter.ejectDelay CedarBackup3.writers.cdwriter.CdWriter-class.html#ejectDelay CedarBackup3.writers.cdwriter.MediaCapacity CedarBackup3.writers.cdwriter.MediaCapacity-class.html CedarBackup3.writers.cdwriter.MediaCapacity._getBytesUsed CedarBackup3.writers.cdwriter.MediaCapacity-class.html#_getBytesUsed CedarBackup3.writers.cdwriter.MediaCapacity.bytesUsed CedarBackup3.writers.cdwriter.MediaCapacity-class.html#bytesUsed CedarBackup3.writers.cdwriter.MediaCapacity.bytesAvailable CedarBackup3.writers.cdwriter.MediaCapacity-class.html#bytesAvailable CedarBackup3.writers.cdwriter.MediaCapacity.__str__ CedarBackup3.writers.cdwriter.MediaCapacity-class.html#__str__ CedarBackup3.writers.cdwriter.MediaCapacity.utilized CedarBackup3.writers.cdwriter.MediaCapacity-class.html#utilized CedarBackup3.writers.cdwriter.MediaCapacity.__init__ CedarBackup3.writers.cdwriter.MediaCapacity-class.html#__init__ CedarBackup3.writers.cdwriter.MediaCapacity._getTotalCapacity CedarBackup3.writers.cdwriter.MediaCapacity-class.html#_getTotalCapacity CedarBackup3.writers.cdwriter.MediaCapacity.boundaries CedarBackup3.writers.cdwriter.MediaCapacity-class.html#boundaries 
CedarBackup3.writers.cdwriter.MediaCapacity._getUtilized CedarBackup3.writers.cdwriter.MediaCapacity-class.html#_getUtilized CedarBackup3.writers.cdwriter.MediaCapacity._getBytesAvailable CedarBackup3.writers.cdwriter.MediaCapacity-class.html#_getBytesAvailable CedarBackup3.writers.cdwriter.MediaCapacity.totalCapacity CedarBackup3.writers.cdwriter.MediaCapacity-class.html#totalCapacity CedarBackup3.writers.cdwriter.MediaCapacity._getBoundaries CedarBackup3.writers.cdwriter.MediaCapacity-class.html#_getBoundaries CedarBackup3.writers.cdwriter.MediaDefinition CedarBackup3.writers.cdwriter.MediaDefinition-class.html CedarBackup3.writers.cdwriter.MediaDefinition.initialLeadIn CedarBackup3.writers.cdwriter.MediaDefinition-class.html#initialLeadIn CedarBackup3.writers.cdwriter.MediaDefinition.rewritable CedarBackup3.writers.cdwriter.MediaDefinition-class.html#rewritable CedarBackup3.writers.cdwriter.MediaDefinition.__init__ CedarBackup3.writers.cdwriter.MediaDefinition-class.html#__init__ CedarBackup3.writers.cdwriter.MediaDefinition.capacity CedarBackup3.writers.cdwriter.MediaDefinition-class.html#capacity CedarBackup3.writers.cdwriter.MediaDefinition.leadIn CedarBackup3.writers.cdwriter.MediaDefinition-class.html#leadIn CedarBackup3.writers.cdwriter.MediaDefinition.mediaType CedarBackup3.writers.cdwriter.MediaDefinition-class.html#mediaType CedarBackup3.writers.cdwriter.MediaDefinition._setValues CedarBackup3.writers.cdwriter.MediaDefinition-class.html#_setValues CedarBackup3.writers.cdwriter.MediaDefinition._getMediaType CedarBackup3.writers.cdwriter.MediaDefinition-class.html#_getMediaType CedarBackup3.writers.cdwriter.MediaDefinition._getInitialLeadIn CedarBackup3.writers.cdwriter.MediaDefinition-class.html#_getInitialLeadIn CedarBackup3.writers.cdwriter.MediaDefinition._getLeadIn CedarBackup3.writers.cdwriter.MediaDefinition-class.html#_getLeadIn CedarBackup3.writers.cdwriter.MediaDefinition._getCapacity 
CedarBackup3.writers.cdwriter.MediaDefinition-class.html#_getCapacity CedarBackup3.writers.cdwriter.MediaDefinition._getRewritable CedarBackup3.writers.cdwriter.MediaDefinition-class.html#_getRewritable CedarBackup3.writers.cdwriter._ImageProperties CedarBackup3.writers.cdwriter._ImageProperties-class.html CedarBackup3.writers.cdwriter._ImageProperties.__init__ CedarBackup3.writers.cdwriter._ImageProperties-class.html#__init__ CedarBackup3.writers.dvdwriter.DvdWriter CedarBackup3.writers.dvdwriter.DvdWriter-class.html CedarBackup3.writers.dvdwriter.DvdWriter._buildWriteArgs CedarBackup3.writers.dvdwriter.DvdWriter-class.html#_buildWriteArgs CedarBackup3.writers.dvdwriter.DvdWriter.refreshMedia CedarBackup3.writers.dvdwriter.DvdWriter-class.html#refreshMedia CedarBackup3.writers.dvdwriter.DvdWriter.writeImage CedarBackup3.writers.dvdwriter.DvdWriter-class.html#writeImage CedarBackup3.writers.dvdwriter.DvdWriter.deviceHasTray CedarBackup3.writers.dvdwriter.DvdWriter-class.html#deviceHasTray CedarBackup3.writers.dvdwriter.DvdWriter.openTray CedarBackup3.writers.dvdwriter.DvdWriter-class.html#openTray CedarBackup3.writers.dvdwriter.DvdWriter.addImageEntry CedarBackup3.writers.dvdwriter.DvdWriter-class.html#addImageEntry CedarBackup3.writers.dvdwriter.DvdWriter.unlockTray CedarBackup3.writers.dvdwriter.DvdWriter-class.html#unlockTray CedarBackup3.writers.dvdwriter.DvdWriter._getHardwareId CedarBackup3.writers.dvdwriter.DvdWriter-class.html#_getHardwareId CedarBackup3.writers.dvdwriter.DvdWriter.closeTray CedarBackup3.writers.dvdwriter.DvdWriter-class.html#closeTray CedarBackup3.writers.dvdwriter.DvdWriter.initializeImage CedarBackup3.writers.dvdwriter.DvdWriter-class.html#initializeImage CedarBackup3.writers.dvdwriter.DvdWriter.deviceCanEject CedarBackup3.writers.dvdwriter.DvdWriter-class.html#deviceCanEject CedarBackup3.writers.dvdwriter.DvdWriter.__init__ CedarBackup3.writers.dvdwriter.DvdWriter-class.html#__init__ 
CedarBackup3.writers.dvdwriter.DvdWriter.refreshMediaDelay CedarBackup3.writers.dvdwriter.DvdWriter-class.html#refreshMediaDelay CedarBackup3.writers.dvdwriter.DvdWriter._getDeviceHasTray CedarBackup3.writers.dvdwriter.DvdWriter-class.html#_getDeviceHasTray CedarBackup3.writers.dvdwriter.DvdWriter.getEstimatedImageSize CedarBackup3.writers.dvdwriter.DvdWriter-class.html#getEstimatedImageSize CedarBackup3.writers.dvdwriter.DvdWriter.media CedarBackup3.writers.dvdwriter.DvdWriter-class.html#media CedarBackup3.writers.dvdwriter.DvdWriter._parseSectorsUsed CedarBackup3.writers.dvdwriter.DvdWriter-class.html#_parseSectorsUsed CedarBackup3.writers.dvdwriter.DvdWriter.hardwareId CedarBackup3.writers.dvdwriter.DvdWriter-class.html#hardwareId CedarBackup3.writers.dvdwriter.DvdWriter._getDeviceCanEject CedarBackup3.writers.dvdwriter.DvdWriter-class.html#_getDeviceCanEject CedarBackup3.writers.dvdwriter.DvdWriter._getMedia CedarBackup3.writers.dvdwriter.DvdWriter-class.html#_getMedia CedarBackup3.writers.dvdwriter.DvdWriter.isRewritable CedarBackup3.writers.dvdwriter.DvdWriter-class.html#isRewritable CedarBackup3.writers.dvdwriter.DvdWriter.setImageNewDisc CedarBackup3.writers.dvdwriter.DvdWriter-class.html#setImageNewDisc CedarBackup3.writers.dvdwriter.DvdWriter.driveSpeed CedarBackup3.writers.dvdwriter.DvdWriter-class.html#driveSpeed CedarBackup3.writers.dvdwriter.DvdWriter._getDevice CedarBackup3.writers.dvdwriter.DvdWriter-class.html#_getDevice CedarBackup3.writers.dvdwriter.DvdWriter._getScsiId CedarBackup3.writers.dvdwriter.DvdWriter-class.html#_getScsiId CedarBackup3.writers.dvdwriter.DvdWriter._getDriveSpeed CedarBackup3.writers.dvdwriter.DvdWriter-class.html#_getDriveSpeed CedarBackup3.writers.dvdwriter.DvdWriter._writeImage CedarBackup3.writers.dvdwriter.DvdWriter-class.html#_writeImage CedarBackup3.writers.dvdwriter.DvdWriter.ejectDelay CedarBackup3.writers.dvdwriter.DvdWriter-class.html#ejectDelay CedarBackup3.writers.dvdwriter.DvdWriter._searchForOverburn 
CedarBackup3.writers.dvdwriter.DvdWriter-class.html#_searchForOverburn CedarBackup3.writers.dvdwriter.DvdWriter.device CedarBackup3.writers.dvdwriter.DvdWriter-class.html#device CedarBackup3.writers.dvdwriter.DvdWriter._getEjectDelay CedarBackup3.writers.dvdwriter.DvdWriter-class.html#_getEjectDelay CedarBackup3.writers.dvdwriter.DvdWriter._getRefreshMediaDelay CedarBackup3.writers.dvdwriter.DvdWriter-class.html#_getRefreshMediaDelay CedarBackup3.writers.dvdwriter.DvdWriter.scsiId CedarBackup3.writers.dvdwriter.DvdWriter-class.html#scsiId CedarBackup3.writers.dvdwriter.DvdWriter.retrieveCapacity CedarBackup3.writers.dvdwriter.DvdWriter-class.html#retrieveCapacity CedarBackup3.writers.dvdwriter.DvdWriter._retrieveSectorsUsed CedarBackup3.writers.dvdwriter.DvdWriter-class.html#_retrieveSectorsUsed CedarBackup3.writers.dvdwriter.DvdWriter._getEstimatedImageSize CedarBackup3.writers.dvdwriter.DvdWriter-class.html#_getEstimatedImageSize CedarBackup3.writers.dvdwriter.MediaCapacity CedarBackup3.writers.dvdwriter.MediaCapacity-class.html CedarBackup3.writers.dvdwriter.MediaCapacity._getBytesUsed CedarBackup3.writers.dvdwriter.MediaCapacity-class.html#_getBytesUsed CedarBackup3.writers.dvdwriter.MediaCapacity.bytesUsed CedarBackup3.writers.dvdwriter.MediaCapacity-class.html#bytesUsed CedarBackup3.writers.dvdwriter.MediaCapacity.bytesAvailable CedarBackup3.writers.dvdwriter.MediaCapacity-class.html#bytesAvailable CedarBackup3.writers.dvdwriter.MediaCapacity.__str__ CedarBackup3.writers.dvdwriter.MediaCapacity-class.html#__str__ CedarBackup3.writers.dvdwriter.MediaCapacity.utilized CedarBackup3.writers.dvdwriter.MediaCapacity-class.html#utilized CedarBackup3.writers.dvdwriter.MediaCapacity.__init__ CedarBackup3.writers.dvdwriter.MediaCapacity-class.html#__init__ CedarBackup3.writers.dvdwriter.MediaCapacity._getTotalCapacity CedarBackup3.writers.dvdwriter.MediaCapacity-class.html#_getTotalCapacity CedarBackup3.writers.dvdwriter.MediaCapacity._getUtilized 
CedarBackup3.writers.dvdwriter.MediaCapacity-class.html#_getUtilized CedarBackup3.writers.dvdwriter.MediaCapacity._getBytesAvailable CedarBackup3.writers.dvdwriter.MediaCapacity-class.html#_getBytesAvailable CedarBackup3.writers.dvdwriter.MediaCapacity.totalCapacity CedarBackup3.writers.dvdwriter.MediaCapacity-class.html#totalCapacity CedarBackup3.writers.dvdwriter.MediaDefinition CedarBackup3.writers.dvdwriter.MediaDefinition-class.html CedarBackup3.writers.dvdwriter.MediaDefinition.capacity CedarBackup3.writers.dvdwriter.MediaDefinition-class.html#capacity CedarBackup3.writers.dvdwriter.MediaDefinition.mediaType CedarBackup3.writers.dvdwriter.MediaDefinition-class.html#mediaType CedarBackup3.writers.dvdwriter.MediaDefinition._setValues CedarBackup3.writers.dvdwriter.MediaDefinition-class.html#_setValues CedarBackup3.writers.dvdwriter.MediaDefinition._getMediaType CedarBackup3.writers.dvdwriter.MediaDefinition-class.html#_getMediaType CedarBackup3.writers.dvdwriter.MediaDefinition._getRewritable CedarBackup3.writers.dvdwriter.MediaDefinition-class.html#_getRewritable CedarBackup3.writers.dvdwriter.MediaDefinition.rewritable CedarBackup3.writers.dvdwriter.MediaDefinition-class.html#rewritable CedarBackup3.writers.dvdwriter.MediaDefinition.__init__ CedarBackup3.writers.dvdwriter.MediaDefinition-class.html#__init__ CedarBackup3.writers.dvdwriter.MediaDefinition._getCapacity CedarBackup3.writers.dvdwriter.MediaDefinition-class.html#_getCapacity CedarBackup3.writers.dvdwriter._ImageProperties CedarBackup3.writers.dvdwriter._ImageProperties-class.html CedarBackup3.writers.dvdwriter._ImageProperties.__init__ CedarBackup3.writers.dvdwriter._ImageProperties-class.html#__init__ CedarBackup3.writers.util.IsoImage CedarBackup3.writers.util.IsoImage-class.html CedarBackup3.writers.util.IsoImage.preparerId CedarBackup3.writers.util.IsoImage-class.html#preparerId CedarBackup3.writers.util.IsoImage._buildWriteArgs CedarBackup3.writers.util.IsoImage-class.html#_buildWriteArgs 
CedarBackup3.writers.util.IsoImage.writeImage CedarBackup3.writers.util.IsoImage-class.html#writeImage CedarBackup3.writers.util.IsoImage._setVolumeId CedarBackup3.writers.util.IsoImage-class.html#_setVolumeId CedarBackup3.writers.util.IsoImage._setBiblioFile CedarBackup3.writers.util.IsoImage-class.html#_setBiblioFile CedarBackup3.writers.util.IsoImage._setDevice CedarBackup3.writers.util.IsoImage-class.html#_setDevice CedarBackup3.writers.util.IsoImage.getEstimatedSize CedarBackup3.writers.util.IsoImage-class.html#getEstimatedSize CedarBackup3.writers.util.IsoImage._getGraftPoint CedarBackup3.writers.util.IsoImage-class.html#_getGraftPoint CedarBackup3.writers.util.IsoImage._setUseRockRidge CedarBackup3.writers.util.IsoImage-class.html#_setUseRockRidge CedarBackup3.writers.util.IsoImage.addEntry CedarBackup3.writers.util.IsoImage-class.html#addEntry CedarBackup3.writers.util.IsoImage.graftPoint CedarBackup3.writers.util.IsoImage-class.html#graftPoint CedarBackup3.writers.util.IsoImage.applicationId CedarBackup3.writers.util.IsoImage-class.html#applicationId CedarBackup3.writers.util.IsoImage.__init__ CedarBackup3.writers.util.IsoImage-class.html#__init__ CedarBackup3.writers.util.IsoImage.biblioFile CedarBackup3.writers.util.IsoImage-class.html#biblioFile CedarBackup3.writers.util.IsoImage._buildGeneralArgs CedarBackup3.writers.util.IsoImage-class.html#_buildGeneralArgs CedarBackup3.writers.util.IsoImage._getUseRockRidge CedarBackup3.writers.util.IsoImage-class.html#_getUseRockRidge CedarBackup3.writers.util.IsoImage._getPublisherId CedarBackup3.writers.util.IsoImage-class.html#_getPublisherId CedarBackup3.writers.util.IsoImage._getEstimatedSize CedarBackup3.writers.util.IsoImage-class.html#_getEstimatedSize CedarBackup3.writers.util.IsoImage._setPreparerId CedarBackup3.writers.util.IsoImage-class.html#_setPreparerId CedarBackup3.writers.util.IsoImage.boundaries CedarBackup3.writers.util.IsoImage-class.html#boundaries CedarBackup3.writers.util.IsoImage._getDevice 
CedarBackup3.writers.util.IsoImage-class.html#_getDevice CedarBackup3.writers.util.IsoImage._getApplicationId CedarBackup3.writers.util.IsoImage-class.html#_getApplicationId CedarBackup3.writers.util.IsoImage._setBoundaries CedarBackup3.writers.util.IsoImage-class.html#_setBoundaries CedarBackup3.writers.util.IsoImage.volumeId CedarBackup3.writers.util.IsoImage-class.html#volumeId CedarBackup3.writers.util.IsoImage._buildDirEntries CedarBackup3.writers.util.IsoImage-class.html#_buildDirEntries CedarBackup3.writers.util.IsoImage._setPublisherId CedarBackup3.writers.util.IsoImage-class.html#_setPublisherId CedarBackup3.writers.util.IsoImage.device CedarBackup3.writers.util.IsoImage-class.html#device CedarBackup3.writers.util.IsoImage._setGraftPoint CedarBackup3.writers.util.IsoImage-class.html#_setGraftPoint CedarBackup3.writers.util.IsoImage._setApplicationId CedarBackup3.writers.util.IsoImage-class.html#_setApplicationId CedarBackup3.writers.util.IsoImage._buildSizeArgs CedarBackup3.writers.util.IsoImage-class.html#_buildSizeArgs CedarBackup3.writers.util.IsoImage._getVolumeId CedarBackup3.writers.util.IsoImage-class.html#_getVolumeId CedarBackup3.writers.util.IsoImage.publisherId CedarBackup3.writers.util.IsoImage-class.html#publisherId CedarBackup3.writers.util.IsoImage._getBoundaries CedarBackup3.writers.util.IsoImage-class.html#_getBoundaries CedarBackup3.writers.util.IsoImage._getPreparerId CedarBackup3.writers.util.IsoImage-class.html#_getPreparerId CedarBackup3.writers.util.IsoImage.useRockRidge CedarBackup3.writers.util.IsoImage-class.html#useRockRidge CedarBackup3.writers.util.IsoImage._getBiblioFile CedarBackup3.writers.util.IsoImage-class.html#_getBiblioFile CedarBackup3.xmlutil.Serializer CedarBackup3.xmlutil.Serializer-class.html CedarBackup3.xmlutil.Serializer._visitNodeList CedarBackup3.xmlutil.Serializer-class.html#_visitNodeList CedarBackup3.xmlutil.Serializer.serialize CedarBackup3.xmlutil.Serializer-class.html#serialize 
CedarBackup3.xmlutil.Serializer._visitEntityReference CedarBackup3.xmlutil.Serializer-class.html#_visitEntityReference CedarBackup3.xmlutil.Serializer._visitDocumentFragment CedarBackup3.xmlutil.Serializer-class.html#_visitDocumentFragment CedarBackup3.xmlutil.Serializer._visitElement CedarBackup3.xmlutil.Serializer-class.html#_visitElement CedarBackup3.xmlutil.Serializer.__init__ CedarBackup3.xmlutil.Serializer-class.html#__init__ CedarBackup3.xmlutil.Serializer._visitCDATASection CedarBackup3.xmlutil.Serializer-class.html#_visitCDATASection CedarBackup3.xmlutil.Serializer._visitDocumentType CedarBackup3.xmlutil.Serializer-class.html#_visitDocumentType CedarBackup3.xmlutil.Serializer._visitNamedNodeMap CedarBackup3.xmlutil.Serializer-class.html#_visitNamedNodeMap CedarBackup3.xmlutil.Serializer._visitAttr CedarBackup3.xmlutil.Serializer-class.html#_visitAttr CedarBackup3.xmlutil.Serializer._visitProlog CedarBackup3.xmlutil.Serializer-class.html#_visitProlog CedarBackup3.xmlutil.Serializer._tryIndent CedarBackup3.xmlutil.Serializer-class.html#_tryIndent CedarBackup3.xmlutil.Serializer._visitDocument CedarBackup3.xmlutil.Serializer-class.html#_visitDocument CedarBackup3.xmlutil.Serializer._visitNotation CedarBackup3.xmlutil.Serializer-class.html#_visitNotation CedarBackup3.xmlutil.Serializer._visitEntity CedarBackup3.xmlutil.Serializer-class.html#_visitEntity CedarBackup3.xmlutil.Serializer._write CedarBackup3.xmlutil.Serializer-class.html#_write CedarBackup3.xmlutil.Serializer._visitProcessingInstruction CedarBackup3.xmlutil.Serializer-class.html#_visitProcessingInstruction CedarBackup3.xmlutil.Serializer._visitComment CedarBackup3.xmlutil.Serializer-class.html#_visitComment CedarBackup3.xmlutil.Serializer._visit CedarBackup3.xmlutil.Serializer-class.html#_visit CedarBackup3.xmlutil.Serializer._visitText CedarBackup3.xmlutil.Serializer-class.html#_visitText 
CedarBackup3-3.1.6/doc/interface/CedarBackup3.extend.subversion.SubversionConfig-class.html: CedarBackup3.extend.subversion.SubversionConfig
    Package CedarBackup3 :: Package extend :: Module subversion :: Class SubversionConfig

    Class SubversionConfig

    source code

    object --+
             |
            SubversionConfig
    

    Class representing Subversion configuration.

    Subversion configuration is used for backing up Subversion repositories.

    The following restrictions exist on data in this class:

    • The collect mode must be one of the values in VALID_COLLECT_MODES.
    • The compress mode must be one of the values in VALID_COMPRESS_MODES.
    • The repositories list must be a list of Repository objects.
    • The repositoryDirs list must be a list of RepositoryDir objects.

    For the two lists, validation is accomplished through the util.ObjectTypeList list implementation that overrides common list methods and transparently ensures that each element has the correct type.


    Note: Lists within this class are "unordered" for equality comparisons.
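The type enforcement that util.ObjectTypeList provides for the two lists can be sketched as a list subclass that rejects elements of the wrong type. This is an illustrative reconstruction with hypothetical names, not the real implementation:

```python
# Hypothetical sketch of a type-checked list like util.ObjectTypeList;
# names and behavior are illustrative, not the actual CedarBackup3 code.

class TypedList(list):
    """List subclass that only accepts elements of a given type."""

    def __init__(self, objectType, objectName):
        super().__init__()
        self.objectType = objectType    # required element type
        self.objectName = objectName    # name used in error messages

    def append(self, item):
        if not isinstance(item, self.objectType):
            raise ValueError("Item must be a %s." % self.objectName)
        super().append(item)

    def insert(self, index, item):
        if not isinstance(item, self.objectType):
            raise ValueError("Item must be a %s." % self.objectName)
        super().insert(index, item)

class Repository:
    pass

repositories = TypedList(Repository, "Repository")
repositories.append(Repository())        # accepted
try:
    repositories.append("not a repository")
except ValueError as e:
    print(e)                             # Item must be a Repository.
```

Because the checks live inside the list itself, callers assign and append normally and still get a ValueError the moment an element of the wrong type slips in.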

    Instance Methods
     
    __init__(self, collectMode=None, compressMode=None, repositories=None, repositoryDirs=None)
    Constructor for the SubversionConfig class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Original Python 2 comparison operator.
    source code
     
    __eq__(self, other)
    Equals operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __lt__(self, other)
    Less-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    __gt__(self, other)
    Greater-than operator, implemented in terms of original Python 2 compare operator.
    source code
     
    _setCollectMode(self, value)
    Property target used to set the collect mode.
    source code
     
    _getCollectMode(self)
    Property target used to get the collect mode.
    source code
     
    _setCompressMode(self, value)
    Property target used to set the compress mode.
    source code
     
    _getCompressMode(self)
    Property target used to get the compress mode.
    source code
     
    _setRepositories(self, value)
    Property target used to set the repositories list.
    source code
     
    _getRepositories(self)
    Property target used to get the repositories list.
    source code
     
    _setRepositoryDirs(self, value)
    Property target used to set the repositoryDirs list.
    source code
     
    _getRepositoryDirs(self)
    Property target used to get the repositoryDirs list.
    source code
     
    __ge__(x, y)
    x>=y
     
    __le__(x, y)
    x<=y

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      collectMode
    Default collect mode.
      compressMode
    Default compress mode.
      repositories
    List of Subversion repositories to back up.
      repositoryDirs
    List of Subversion parent directories to back up.

    Inherited from object: __class__

    Method Details

    __init__(self, collectMode=None, compressMode=None, repositories=None, repositoryDirs=None)
    (Constructor)

    source code 

    Constructor for the SubversionConfig class.

    Parameters:
    • collectMode - Default collect mode.
    • compressMode - Default compress mode.
    • repositories - List of Subversion repositories to back up.
    • repositoryDirs - List of Subversion parent directories to back up.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Original Python 2 comparison operator. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.
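Since Python 3 never calls __cmp__ itself, the pattern described here (rich comparison operators implemented in terms of a __cmp__-style method that returns -1/0/1) can be sketched with a self-contained illustrative class, not the actual source:

```python
# Sketch of the "rich operators on top of __cmp__" pattern; the class and
# its value field are illustrative, not CedarBackup3's real implementation.

class Comparable:
    def __init__(self, value):
        self.value = value

    def __cmp__(self, other):
        """Return -1/0/1 depending on whether self is <, = or > other."""
        if other is None:
            return 1
        if self.value < other.value:
            return -1
        if self.value > other.value:
            return 1
        return 0

    def __eq__(self, other):
        return self.__cmp__(other) == 0

    def __lt__(self, other):
        return self.__cmp__(other) < 0

    def __gt__(self, other):
        return self.__cmp__(other) > 0

assert Comparable(1) < Comparable(2)
assert Comparable(2) == Comparable(2)
```

This keeps the single three-way comparison as the source of truth, so the rich operators can never disagree with each other.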

    _setCollectMode(self, value)

    source code 

    Property target used to set the collect mode. If not None, the mode must be one of the values in VALID_COLLECT_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setCompressMode(self, value)

    source code 

    Property target used to set the compress mode. If not None, the mode must be one of the values in VALID_COMPRESS_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setRepositories(self, value)

    source code 

    Property target used to set the repositories list. Either the value must be None or each element must be a Repository.

    Raises:
    • ValueError - If the value is not a Repository.

    _setRepositoryDirs(self, value)

    source code 

    Property target used to set the repositoryDirs list. Either the value must be None or each element must be a RepositoryDir.

    Raises:
    • ValueError - If the value is not a RepositoryDir.

    Property Details

    collectMode

    Default collect mode.

    Get Method:
    _getCollectMode(self) - Property target used to get the collect mode.
    Set Method:
    _setCollectMode(self, value) - Property target used to set the collect mode.

    compressMode

    Default compress mode.

    Get Method:
    _getCompressMode(self) - Property target used to get the compress mode.
    Set Method:
    _setCompressMode(self, value) - Property target used to set the compress mode.

    repositories

    List of Subversion repositories to back up.

    Get Method:
    _getRepositories(self) - Property target used to get the repositories list.
    Set Method:
    _setRepositories(self, value) - Property target used to set the repositories list.

    repositoryDirs

    List of Subversion parent directories to back up.

    Get Method:
    _getRepositoryDirs(self) - Property target used to get the repositoryDirs list.
    Set Method:
    _setRepositoryDirs(self, value) - Property target used to set the repositoryDirs list.

CedarBackup3-3.1.6/doc/interface/CedarBackup3.extend.sysinfo-module.html: CedarBackup3.extend.sysinfo
    Package CedarBackup3 :: Package extend :: Module sysinfo

    Module sysinfo

    source code

    Provides an extension to save off important system recovery information.

    This is a simple Cedar Backup extension used to save off important system recovery information. It saves off three types of information:

    • Currently-installed Debian packages via dpkg --get-selections
    • Disk partition information via fdisk -l
    • System-wide mounted filesystem contents, via ls -laR

    The saved-off information is placed into the collect directory and is compressed using bzip2 to save space.

    This extension relies on the options and collect configurations in the standard Cedar Backup configuration file, but requires no new configuration of its own. No public functions other than the action are exposed since all of this is pretty simple.


    Note: If the dpkg or fdisk commands cannot be found in their normal locations or executed by the current user, those steps will be skipped and a note will be logged at the INFO level.
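The skip-if-missing behavior in the note can be sketched with the standard library; the helper name below is hypothetical, and the real extension's internals may differ:

```python
# Hedged sketch of "skip the step and log at INFO if the command cannot be
# found or executed"; the helper name is illustrative, not the real code.
import logging
import shutil

logger = logging.getLogger("CedarBackup3.log.extend.sysinfo")

def commandAvailable(path):
    """Return True if the command exists and is executable, else log and skip."""
    # shutil.which() on an absolute path checks that the file exists and
    # is executable by the current user.
    if shutil.which(path) is None:
        logger.info("Command %s not found; step will be skipped.", path)
        return False
    return True

# A nonexistent command is skipped rather than raising an error.
print(commandAvailable("/no/such/command"))   # False
```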

    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Functions
     
    executeAction(configPath, options, config)
    Executes the sysinfo backup action.
    source code
     
    _dumpDebianPackages(targetDir, backupUser, backupGroup, compress=True)
    Dumps a list of currently installed Debian packages via dpkg.
    source code
     
    _dumpPartitionTable(targetDir, backupUser, backupGroup, compress=True)
    Dumps information about the partition table via fdisk.
    source code
     
    _dumpFilesystemContents(targetDir, backupUser, backupGroup, compress=True)
    Dumps complete listing of filesystem contents via ls -laR.
    source code
     
    _getOutputFile(targetDir, name, compress=True)
    Opens the output file used for saving a dump to the filesystem.
    source code
    Variables
      logger = logging.getLogger("CedarBackup3.log.extend.sysinfo")
      DPKG_PATH = '/usr/bin/dpkg'
      FDISK_PATH = '/sbin/fdisk'
      DPKG_COMMAND = ['/usr/bin/dpkg', '--get-selections']
      FDISK_COMMAND = ['/sbin/fdisk', '-l']
      LS_COMMAND = ['ls', '-laR', '/']
      __package__ = 'CedarBackup3.extend'
    Function Details

    executeAction(configPath, options, config)

    source code 

    Executes the sysinfo backup action.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If the backup process fails for some reason.

    _dumpDebianPackages(targetDir, backupUser, backupGroup, compress=True)

    source code 

    Dumps a list of currently installed Debian packages via dpkg.

    Parameters:
    • targetDir - Directory to write output file into.
    • backupUser - User which should own the resulting file.
    • backupGroup - Group which should own the resulting file.
    • compress - Indicates whether to compress the output file.
    Raises:
    • IOError - If the dump fails for some reason.

    _dumpPartitionTable(targetDir, backupUser, backupGroup, compress=True)

    source code 

    Dumps information about the partition table via fdisk.

    Parameters:
    • targetDir - Directory to write output file into.
    • backupUser - User which should own the resulting file.
    • backupGroup - Group which should own the resulting file.
    • compress - Indicates whether to compress the output file.
    Raises:
    • IOError - If the dump fails for some reason.

    _dumpFilesystemContents(targetDir, backupUser, backupGroup, compress=True)

    source code 

    Dumps complete listing of filesystem contents via ls -laR.

    Parameters:
    • targetDir - Directory to write output file into.
    • backupUser - User which should own the resulting file.
    • backupGroup - Group which should own the resulting file.
    • compress - Indicates whether to compress the output file.
    Raises:
    • IOError - If the dump fails for some reason.

    _getOutputFile(targetDir, name, compress=True)

    source code 

    Opens the output file used for saving a dump to the filesystem.

    The filename will be name.txt (or name.txt.bz2 if compress is True), written in the target directory.

    Parameters:
    • targetDir - Target directory to write file in.
    • name - Name of the file to create.
    • compress - Indicates whether to write compressed output.
    Returns:
    Tuple of (Output file object, filename), file opened in binary mode for use with executeCommand()
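A minimal sketch of this behavior, assuming the bz2 module provides the compression described above; the helper here is an assumption-based illustration, not the actual source:

```python
# Hypothetical reconstruction of _getOutputFile(): open name.txt, or
# name.txt.bz2 through bz2, always in binary mode.
import bz2
import os
import tempfile

def getOutputFile(targetDir, name, compress=True):
    """Open the dump output file in binary mode, bz2-compressed if requested."""
    if compress:
        filename = os.path.join(targetDir, "%s.txt.bz2" % name)
        outputFile = bz2.open(filename, "wb")
    else:
        filename = os.path.join(targetDir, "%s.txt" % name)
        outputFile = open(filename, "wb")
    return (outputFile, filename)

with tempfile.TemporaryDirectory() as tmp:
    f, path = getOutputFile(tmp, "fdisk")
    with f:
        f.write(b"partition table dump\n")          # caller writes the dump
    with bz2.open(path, "rb") as check:             # verify round trip
        contents = check.read()

print(contents)   # b'partition table dump\n'
```

Returning the filename alongside the open file lets the caller chown the result to the backup user and group afterwards.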

CedarBackup3-3.1.6/doc/interface/CedarBackup3.extend.subversion.FSFSRepository-class.html: CedarBackup3.extend.subversion.FSFSRepository
    Package CedarBackup3 :: Package extend :: Module subversion :: Class FSFSRepository

    Class FSFSRepository

    source code

    object --+    
             |    
    Repository --+
                 |
                FSFSRepository
    

    Class representing Subversion FSFS repository configuration. This object is deprecated. Use a simple Repository instead.

    Instance Methods
     
    __init__(self, repositoryPath=None, collectMode=None, compressMode=None)
    Constructor for the FSFSRepository class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code

    Inherited from Repository: __cmp__, __eq__, __ge__, __gt__, __le__, __lt__, __str__

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties

    Inherited from Repository: collectMode, compressMode, repositoryPath, repositoryType

    Inherited from object: __class__

    Method Details

    __init__(self, repositoryPath=None, collectMode=None, compressMode=None)
    (Constructor)

    source code 

    Constructor for the FSFSRepository class.

    Parameters:
    • repositoryType - Type of repository, for reference
    • repositoryPath - Absolute path to a Subversion repository on disk.
    • collectMode - Overridden collect mode for this directory.
    • compressMode - Overridden compression mode for this directory.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

CedarBackup3-3.1.6/doc/interface/CedarBackup3.config-module.html: CedarBackup3.config
    Package CedarBackup3 :: Module config

    Module config

    source code

    Provides configuration-related objects.

    Summary

    Cedar Backup stores all of its configuration in an XML document typically called cback3.conf. The standard location for this document is in /etc, but users can specify a different location if they want to.

    The Config class is a Python object representation of a Cedar Backup XML configuration file. The representation is two-way: XML data can be used to create a Config object, and then changes to the object can be propagated back to disk. A Config object can even be used to create a configuration file from scratch programmatically.

    The Config class is intended to be the only Python-language interface to Cedar Backup configuration on disk. Cedar Backup will use the class as its internal representation of configuration, and applications external to Cedar Backup itself (such as a hypothetical third-party configuration tool written in Python or a third party extension module) should also use the class when they need to read and write configuration files.

    Backwards Compatibility

    The configuration file format has changed between Cedar Backup 1.x and Cedar Backup 2.x. Any Cedar Backup 1.x configuration file is also a valid Cedar Backup 2.x configuration file. However, it doesn't work to go the other direction, as 2.x configuration files contain additional configuration that is not accepted by older versions of the software.

    XML Configuration Structure

    A Config object can either be created "empty", or can be created based on XML input (either in the form of a string or read in from a file on disk). Generally speaking, the XML input must result in a Config object which passes the validations laid out below in the Validation section.

    An XML configuration file is composed of eight sections:

    • reference: specifies reference information about the file (author, revision, etc)
    • extensions: specifies mappings to Cedar Backup extensions (external code)
    • options: specifies global configuration options
    • peers: specifies the set of peers in a master's backup pool
    • collect: specifies configuration related to the collect action
    • stage: specifies configuration related to the stage action
    • store: specifies configuration related to the store action
    • purge: specifies configuration related to the purge action

    Each section is represented by a class in this module, and then the overall Config class is a composition of the various other classes.

    Any configuration section that is missing in the XML document (or has not been filled into an "empty" document) will just be set to None in the object representation. The same goes for individual fields within each configuration section. Keep in mind that the document might not be completely valid if some sections or fields aren't filled in - but that won't matter until validation takes place (see the Validation section below).

    Unicode vs. String Data

    By default, all string data that comes out of XML documents in Python is unicode data (i.e. u"whatever"). This is fine for many things, but when it comes to filesystem paths, it can cause us some problems. We really want strings to be encoded in the filesystem encoding rather than being unicode. So, most elements in configuration which represent filesystem paths are converted to plain strings using util.encodePath. The main exception is the various absoluteExcludePath and relativeExcludePath lists. These are not converted, because they are generally only used for filtering, not for filesystem operations.

    Validation

    There are two main levels of validation in the Config class and its children. The first is field-level validation. Field-level validation comes into play when a given field in an object is assigned to or updated. We use Python's property functionality to enforce specific validations on field values, and in some places we even use customized list classes to enforce validations on list members. You should expect to catch a ValueError exception when making assignments to configuration class fields.
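The field-level validation pattern described here can be sketched with a minimal property-based example; the class and error text are illustrative, not the actual source:

```python
# Minimal sketch of field-level validation via Python properties: invalid
# values raise ValueError at assignment time. Names are illustrative.
VALID_COLLECT_MODES = ['daily', 'weekly', 'incr']

class CollectConfigSketch:
    def __init__(self):
        self._collectMode = None

    def _setCollectMode(self, value):
        """Property target used to set the collect mode."""
        if value is not None and value not in VALID_COLLECT_MODES:
            raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES)
        self._collectMode = value

    def _getCollectMode(self):
        """Property target used to get the collect mode."""
        return self._collectMode

    collectMode = property(_getCollectMode, _setCollectMode, None, "Collect mode.")

config = CollectConfigSketch()
config.collectMode = "daily"        # accepted
try:
    config.collectMode = "hourly"   # rejected at assignment time
except ValueError as e:
    error = str(e)
```

This is why clients should expect to catch ValueError when assigning to configuration fields, long before post-completion validation ever runs.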

    The second level of validation is post-completion validation. Certain validations don't make sense until a document is fully "complete". We don't want these validations to apply all of the time, because it would make building up a document from scratch a real pain. For instance, we might have to do things in the right order to keep from throwing exceptions, etc.

    All of these post-completion validations are encapsulated in the Config.validate method. This method can be called at any time by a client, and will always be called immediately after creating a Config object from XML data and before exporting a Config object to XML. This way, we get decent ease-of-use but we also don't accept or emit invalid configuration files.

    The Config.validate implementation actually takes two passes to completely validate a configuration document. The first pass at validation is to ensure that the proper sections are filled into the document. There are default requirements, but the caller has the opportunity to override these defaults.

    The second pass at validation ensures that any filled-in section contains valid data. Any section which is not set to None is validated according to the rules for that section (see below).

    Reference Validations

    No validations.

    Extensions Validations

    The list of actions may be either None or an empty list [] if desired. Each extended action must include a name, a module and a function. Then, an extended action must include either an index or dependency information. Which one is required depends on which order mode is configured.

    Options Validations

    All fields must be filled in except the rsh command. The rcp and rsh commands are used as default values for all remote peers. Remote peers can also rely on the backup user as the default remote user name if they choose.

    Peers Validations

    Local peers must be completely filled in, including both name and collect directory. Remote peers must also fill in the name and collect directory, but can leave the remote user and rcp command unset. In this case, the remote user is assumed to match the backup user from the options section and rcp command is taken directly from the options section.

    Collect Validations

    The target directory must be filled in. The collect mode, archive mode and ignore file are all optional. The list of absolute paths to exclude and patterns to exclude may be either None or an empty list [] if desired.

    Each collect directory entry must contain an absolute path to collect, and then must either be able to take collect mode, archive mode and ignore file configuration from the parent CollectConfig object, or must set each value on its own. The list of absolute paths to exclude, relative paths to exclude and patterns to exclude may be either None or an empty list [] if desired. Any list of absolute paths to exclude or patterns to exclude will be combined with the same list in the CollectConfig object to make the complete list for a given directory.

    Stage Validations

    The target directory must be filled in. There must be at least one peer (remote or local) between the two lists of peers. A list with no entries can be either None or an empty list [] if desired.

    If a set of peers is provided, this configuration completely overrides configuration in the peers configuration section, and the same validations apply.

    Store Validations

    The device type and drive speed are optional, and all other values are required (missing booleans will be set to defaults, which is OK).

    The image writer functionality in the writer module is supposed to be able to handle a device speed of None. Any caller which needs a "real" (non-None) value for the device type can use DEFAULT_DEVICE_TYPE, which is guaranteed to be sensible.

    Purge Validations

    The list of purge directories may be either None or an empty list [] if desired. All purge directories must contain a path and a retain days value.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Classes
      ActionDependencies
    Class representing dependencies associated with an extended action.
      ActionHook
    Class representing a hook associated with an action.
      PreActionHook
    Class representing a pre-action hook associated with an action.
      PostActionHook
    Class representing a post-action hook associated with an action.
      ExtendedAction
    Class representing an extended action.
      CommandOverride
    Class representing a piece of Cedar Backup command override configuration.
      CollectFile
    Class representing a Cedar Backup collect file.
      CollectDir
    Class representing a Cedar Backup collect directory.
      PurgeDir
    Class representing a Cedar Backup purge directory.
      LocalPeer
    Class representing a Cedar Backup peer.
      RemotePeer
    Class representing a Cedar Backup peer.
      ReferenceConfig
    Class representing a Cedar Backup reference configuration.
      ExtensionsConfig
    Class representing Cedar Backup extensions configuration.
      OptionsConfig
    Class representing a Cedar Backup global options configuration.
      PeersConfig
    Class representing Cedar Backup global peer configuration.
      CollectConfig
    Class representing a Cedar Backup collect configuration.
      StageConfig
    Class representing a Cedar Backup stage configuration.
      StoreConfig
    Class representing a Cedar Backup store configuration.
      PurgeConfig
    Class representing a Cedar Backup purge configuration.
      Config
    Class representing a Cedar Backup XML configuration document.
      ByteQuantity
    Class representing a byte quantity.
      BlankBehavior
    Class representing optimized store-action media blanking behavior.
    Functions
     
    readByteQuantity(parent, name)
    Read a byte size value from an XML document.
    source code
     
    addByteQuantityNode(xmlDom, parentNode, nodeName, byteQuantity)
    Adds a text node as the next child of a parent, to contain a byte size.
    source code
    Variables
      DEFAULT_DEVICE_TYPE = 'cdwriter'
    The default device type.
      DEFAULT_MEDIA_TYPE = 'cdrw-74'
    The default media type.
      VALID_DEVICE_TYPES = ['cdwriter', 'dvdwriter']
    List of valid device types.
      VALID_MEDIA_TYPES = ['cdr-74', 'cdrw-74', 'cdr-80', 'cdrw-80',...
    List of valid media types.
      VALID_COLLECT_MODES = ['daily', 'weekly', 'incr']
    List of valid collect modes.
      VALID_ARCHIVE_MODES = ['tar', 'targz', 'tarbz2']
    List of valid archive modes.
      VALID_ORDER_MODES = ['index', 'dependency']
    List of valid extension order modes.
      logger = logging.getLogger("CedarBackup3.log.config")
      VALID_CD_MEDIA_TYPES = ['cdr-74', 'cdrw-74', 'cdr-80', 'cdrw-80']
      VALID_DVD_MEDIA_TYPES = ['dvd+r', 'dvd+rw']
      VALID_COMPRESS_MODES = ['none', 'gzip', 'bzip2']
    List of valid compress modes.
      VALID_BLANK_MODES = ['daily', 'weekly']
      VALID_BYTE_UNITS = [0, 1, 2, 4]
      VALID_FAILURE_MODES = ['none', 'all', 'daily', 'weekly']
      REWRITABLE_MEDIA_TYPES = ['cdrw-74', 'cdrw-80', 'dvd+rw']
      ACTION_NAME_REGEX = '^[a-z0-9]*$'
      __package__ = 'CedarBackup3'
    Function Details

    readByteQuantity(parent, name)

    source code 

    Read a byte size value from an XML document.

    A byte size value is an interpreted string value. If the string value ends with "MB" or "GB", then the string before that is interpreted as megabytes or gigabytes. Otherwise, it is interpreted as bytes.

    Parameters:
    • parent - Parent node to search beneath.
    • name - Name of node to search for.
    Returns:
    ByteQuantity parsed from XML document
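The interpretation rule above can be sketched as a small string parser; the helper name is hypothetical (the real readByteQuantity works on XML nodes), and the 1024-based scaling is an assumption:

```python
# Hedged sketch of the byte-size interpretation rule: "MB"/"GB" suffixes
# scale the value, anything else is taken as raw bytes. The helper name
# and the 1024-based units are assumptions, not the real implementation.

def parseByteQuantity(text):
    """Interpret a string like '2.5 GB', '10 MB' or '423413' as bytes."""
    text = text.strip()
    if text.upper().endswith("GB"):
        return float(text[:-2]) * 1024 * 1024 * 1024
    if text.upper().endswith("MB"):
        return float(text[:-2]) * 1024 * 1024
    return float(text)

print(parseByteQuantity("2.5 GB"))   # 2684354560.0
print(parseByteQuantity("423413"))   # 423413.0
```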

    addByteQuantityNode(xmlDom, parentNode, nodeName, byteQuantity)

    source code 

    Adds a text node as the next child of a parent, to contain a byte size.

    If the byteQuantity is None, then the node will be created, but will be empty (i.e. will contain no text node child).

    The size in bytes will be normalized. If it is larger than 1.0 GB, it will be shown in GB ("1.0 GB"). If it is larger than 1.0 MB, it will be shown in MB ("1.0 MB"). Otherwise, it will be shown in bytes ("423413").

    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent node to create child for.
    • nodeName - Name of the new container node.
    • byteQuantity - ByteQuantity object to put into the XML document
    Returns:
    Reference to the newly-created node.
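The normalization rule above can be sketched as follows; the helper is hypothetical and the exact formatting of the real code may differ:

```python
# Hedged sketch of the normalization rule: show GB above 1.0 GB, MB above
# 1.0 MB, otherwise raw bytes. Helper name and "%.2f" precision are
# assumptions, not the actual implementation.

def normalizeBytes(quantity):
    """Render a byte count per the rule above: GB, then MB, then raw bytes."""
    GB = 1024.0 * 1024.0 * 1024.0
    MB = 1024.0 * 1024.0
    if quantity > GB:
        return "%.2f GB" % (quantity / GB)
    if quantity > MB:
        return "%.2f MB" % (quantity / MB)
    return "%d" % quantity

print(normalizeBytes(2684354560))   # 2.50 GB
print(normalizeBytes(423413))       # 423413
```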

    Variables Details

    VALID_MEDIA_TYPES

    List of valid media types.
    Value:
    ['cdr-74', 'cdrw-74', 'cdr-80', 'cdrw-80', 'dvd+r', 'dvd+rw']
    

CedarBackup3-3.1.6/doc/manual/ch05s04.html: Setting up a Client Peer Node

    Setting up a Client Peer Node

    Cedar Backup has been designed to back up entire pools of machines. In any given pool, there is one master and some number of clients. Most of the work takes place on the master, so configuring a client is a little simpler than configuring a master.

    Backups are designed to take place over an RSH or SSH connection. Because RSH is generally considered insecure, you are encouraged to use SSH rather than RSH. This document will only describe how to configure Cedar Backup to use SSH; if you want to use RSH, you're on your own.

    Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or the user that receives root's email). If you don't receive any emails, then you know your backup worked.

    Note: all of these configuration steps should be run as the root user, unless otherwise indicated.

    Note

    See Appendix D, Securing Password-less SSH Connections for some important notes on how to optionally further secure password-less SSH connections to your clients.

    Step 1: Decide when you will run your backup.

    There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of setting off these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later.

    Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly. Choose a backup time that will not interfere with normal use of your computer. Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc.

    Warning

    Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week. This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week. If you skip running Cedar Backup on the first day of the week, your backups will likely be confused until the next week begins, or until you re-run the backup using the --full flag.

    Step 2: Make sure email works.

    Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job. Since by default Cedar Backup only writes output to the terminal if errors occur, this neatly ensures that notification emails will only be sent out if errors occur.

    In order to receive problem notifications, you must make sure that email works for the user which is running the Cedar Backup cron jobs (typically root). Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors.

    Step 3: Configure the master in your backup pool.

    You will not be able to complete the client configuration until at least step 3 of the master's configuration has been completed. In particular, you will need to know the master's public SSH identity to fully configure a client.

    To find the master's public SSH identity, log in as the backup user on the master and cat the public identity file ~/.ssh/id_rsa.pub:

    user@machine> cat ~/.ssh/id_rsa.pub
    ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEA0vOKjlfwohPg1oPRdrmwHk75l3mI9Tb/WRZfVnu2Pw69
    uyphM9wBLRo6QfOC2T8vZCB8o/ZIgtAM3tkM0UgQHxKBXAZ+H36TOgg7BcI20I93iGtzpsMA/uXQy8kH
    HgZooYqQ9pw+ZduXgmPcAAv2b5eTm07wRqFt/U84k6bhTzs= user@machine
             

    Step 4: Configure your backup user.

    Choose a user to be used for backups. Some platforms may come with a "ready made" backup user. For other platforms, you may have to create a user yourself. You may choose any id you like, but a descriptive name such as backup or cback is a good choice. See your distribution's documentation for information on how to add a user.

    Note

    Standard Debian systems come with a user named backup. You may choose to stay with this user or create another one.

    Once you have created your backup user, you must create an SSH keypair for it. Log in as your backup user, and then run the command ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa:

    user@machine> ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    Generating public/private rsa key pair.
    Created directory '/home/user/.ssh'.
    Your identification has been saved in /home/user/.ssh/id_rsa.
    Your public key has been saved in /home/user/.ssh/id_rsa.pub.
    The key fingerprint is:
    11:3e:ad:72:95:fe:96:dc:1e:3b:f4:cc:2c:ff:15:9e user@machine
             

    The default permissions for this directory should be fine. However, if the directory existed before you ran ssh-keygen, then you may need to modify the permissions. Make sure that the ~/.ssh directory is readable only by the backup user (i.e. mode 700), that the ~/.ssh/id_rsa file is readable and writable only by the backup user (i.e. mode 600), and that the ~/.ssh/id_rsa.pub file is writable only by the backup user (i.e. mode 600 or mode 644).

    Finally, take the master's public SSH identity (which you found in step 3) and cut-and-paste it into the file ~/.ssh/authorized_keys. Make sure the identity value is pasted into the file all on one line, and that the authorized_keys file is owned by your backup user and has permissions 600.

    If you have other preferences or standard ways of setting up your users' SSH configuration (i.e. different key type, etc.), feel free to do things your way. The important part is that the master must be able to SSH into a client with no password entry required.
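    The key-installation part of this step can be sketched as a short shell session, run as the backup user on the client. The key value below is a placeholder, not a real identity; substitute the master's actual public key.

    ```shell
    # Create ~/.ssh if needed and lock down its permissions (mode 700)
    mkdir -p "${HOME}/.ssh"
    chmod 700 "${HOME}/.ssh"

    # Append the master's public identity (all on one line) -- placeholder key
    echo "ssh-rsa AAAAB3...placeholder... backup@master" >> "${HOME}/.ssh/authorized_keys"
    chmod 600 "${HOME}/.ssh/authorized_keys"
    ```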

    Step 5: Create your backup tree.

    Cedar Backup requires a backup directory tree on disk. This directory tree must be roughly as big as the amount of data that will be backed up on a nightly basis (more if you elect not to purge it all every night).

    You should create a collect directory and a working (temporary) directory. One recommended layout is this:

    /opt/
         backup/
                collect/
                tmp/
             

    If you will be backing up sensitive information (i.e. password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700.
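    A sketch of this step as shell commands. BACKUP_ROOT defaults to a temporary directory here so the sketch is safe to run anywhere; in practice you would use /opt/backup (or your preferred location) and chown the tree to your backup user.

    ```shell
    # Create the collect and working directories with restrictive permissions
    BACKUP_ROOT="${BACKUP_ROOT:-$(mktemp -d)/backup}"
    mkdir -p "${BACKUP_ROOT}/collect" "${BACKUP_ROOT}/tmp"
    chmod 700 "${BACKUP_ROOT}" "${BACKUP_ROOT}/collect" "${BACKUP_ROOT}/tmp"
    # chown -R backup:backup "${BACKUP_ROOT}"   # once your backup user exists
    echo "created ${BACKUP_ROOT}"
    ```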

    Note

    You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my dumping ground for filesystems that Debian does not manage.

    Some users have requested that the Debian packages set up a more "standard" location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package. If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp.

    Step 6: Create the Cedar Backup configuration file.

    Following the instructions in the section called “Configuration File Format” (above), create a configuration file for your machine. Since you are working with a client, you only need to configure the action-specific sections for the collect and purge actions.

    The usual location for the Cedar Backup config file is /etc/cback3.conf. If you change the location, make sure you edit your cronjobs (below) to point the cback3 script at the correct config file (using the --config option).

    Warning

    Configuration files should always be writable only by root (or by the file owner, if the owner is not root).

    If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately. For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root).

    Step 7: Validate the Cedar Backup configuration file.

    Use the command cback3 validate to validate your configuration file. This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems. This command only validates configuration on the one client, not the master or any other clients in a pool.

    Note: the most common cause of configuration problems is in not closing XML tags properly. Any XML tag that is opened must be closed appropriately.

    Step 8: Test your backup.

    Use the command cback3 --full collect purge. If the command completes with no output, then the backup was run successfully. Just to be sure that everything worked properly, check the logfile (/var/log/cback3.log) for errors.

    Step 9: Modify the backup cron jobs.

    Since Cedar Backup should be run as root, you should add a set of lines like this to your /etc/crontab file:

    30 00 * * * root  cback3 collect
    30 06 * * * root  cback3 purge
             

    You should consider adding the --output or -O switch to your cback3 command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously.
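    For example, the entries above with the --output switch and an explicit (default) config file location might look like this:

    ```
    30 00 * * * root  cback3 --output --config /etc/cback3.conf collect
    30 06 * * * root  cback3 --output --config /etc/cback3.conf purge
    ```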

    You will need to coordinate the collect and purge actions on the client so that the collect action completes before the master attempts to stage, and so that the purge action does not begin until after the master has completed staging. Usually, allowing an hour or two between steps should be sufficient. [23]

    Note

    For general information about using cron, see the manpage for crontab(5).

    On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup3. As installed, this file contains several different settings, all commented out. Uncomment the Client machine entries in the file, and change the lines so that the backup goes off when you want it to.


    Configuration File Format

    Cedar Backup is configured through an XML [19] configuration file, usually called /etc/cback3.conf. The configuration file contains the following sections: reference, options, collect, stage, store, purge and extensions.

    All configuration files must contain the two general configuration sections, the reference section and the options section. Besides that, administrators need only configure actions they intend to use. For instance, on a client machine, administrators will generally only configure the collect and purge sections, while on a master machine they will have to configure all four action-related sections. [20] The extensions section is always optional and can be omitted unless extensions are in use.

    Note

    Even though the Mac OS X (darwin) filesystem is not case-sensitive, Cedar Backup configuration is generally case-sensitive on that platform, just like on all other platforms. For instance, even though the files Ken and ken might be the same on the Mac OS X filesystem, an exclusion in Cedar Backup configuration for ken will only match the file if it is actually on the filesystem with a lower-case k as its first letter. This won't surprise the typical UNIX user, but might surprise someone who's gotten into the Mac Mindset.

    Sample Configuration File

    Both the Python source distribution and the Debian package come with a sample configuration file. The Debian package includes its sample in /usr/share/doc/cedar-backup3/examples/cback3.conf.sample.

    This is a sample configuration file similar to the one provided in the source package. Documentation below provides more information about each of the individual configuration sections.

    <?xml version="1.0"?>
    <cb_config>
       <reference>
          <author>Kenneth J. Pronovici</author>
          <revision>1.3</revision>
          <description>Sample</description>
       </reference>
       <options>
          <starting_day>tuesday</starting_day>
          <working_dir>/opt/backup/tmp</working_dir>
          <backup_user>backup</backup_user>
          <backup_group>group</backup_group>
          <rcp_command>/usr/bin/scp -B</rcp_command>
       </options>
       <peers>
          <peer>
             <name>debian</name>
             <type>local</type>
             <collect_dir>/opt/backup/collect</collect_dir>
          </peer>
       </peers>
       <collect>
          <collect_dir>/opt/backup/collect</collect_dir>
          <collect_mode>daily</collect_mode>
          <archive_mode>targz</archive_mode>
          <ignore_file>.cbignore</ignore_file>
          <dir>
             <abs_path>/etc</abs_path>
             <collect_mode>incr</collect_mode>
          </dir>
          <file>
             <abs_path>/home/root/.profile</abs_path>
             <collect_mode>weekly</collect_mode>
          </file>
       </collect>
       <stage>
          <staging_dir>/opt/backup/staging</staging_dir>
       </stage>
       <store>
          <source_dir>/opt/backup/staging</source_dir>
          <media_type>cdrw-74</media_type>
          <device_type>cdwriter</device_type>
          <target_device>/dev/cdrw</target_device>
          <target_scsi_id>0,0,0</target_scsi_id>
          <drive_speed>4</drive_speed>
          <check_data>Y</check_data>
          <check_media>Y</check_media>
          <warn_midnite>Y</warn_midnite>
       </store>
       <purge>
          <dir>
             <abs_path>/opt/backup/stage</abs_path>
             <retain_days>7</retain_days>
          </dir>
          <dir>
             <abs_path>/opt/backup/collect</abs_path>
             <retain_days>0</retain_days>
          </dir>
       </purge>
    </cb_config>
             

    Reference Configuration

    The reference configuration section contains free-text elements that exist only for reference. The section itself is required, but the individual elements may be left blank if desired.

    This is an example reference configuration section:

    <reference>
       <author>Kenneth J. Pronovici</author>
       <revision>Revision 1.3</revision>
       <description>Sample</description>
   <generator>Yet to be Written Config Tool (tm)</generator>
    </reference>
             

    The following elements are part of the reference configuration section:

    author

    Author of the configuration file.

    Restrictions: None

    revision

    Revision of the configuration file.

    Restrictions: None

    description

    Description of the configuration file.

    Restrictions: None

    generator

    Tool that generated the configuration file, if any.

    Restrictions: None

    Options Configuration

    The options configuration section contains configuration options that are not specific to any one action.

    This is an example options configuration section:

    <options>
       <starting_day>tuesday</starting_day>
       <working_dir>/opt/backup/tmp</working_dir>
       <backup_user>backup</backup_user>
       <backup_group>backup</backup_group>
       <rcp_command>/usr/bin/scp -B</rcp_command>
       <rsh_command>/usr/bin/ssh</rsh_command>
       <cback_command>/usr/bin/cback</cback_command>
       <managed_actions>collect, purge</managed_actions>
       <override>
          <command>cdrecord</command>
          <abs_path>/opt/local/bin/cdrecord</abs_path>
       </override>
       <override>
          <command>mkisofs</command>
          <abs_path>/opt/local/bin/mkisofs</abs_path>
       </override>
       <pre_action_hook>
          <action>collect</action>
          <command>echo "I AM A PRE-ACTION HOOK RELATED TO COLLECT"</command>
       </pre_action_hook>
       <post_action_hook>
          <action>collect</action>
          <command>echo "I AM A POST-ACTION HOOK RELATED TO COLLECT"</command>
       </post_action_hook>
    </options>
             

    The following elements are part of the options configuration section:

    starting_day

    Day that starts the week.

    Cedar Backup is built around the idea of weekly backups. The starting day of week is the day that media will be rebuilt from scratch and that incremental backup information will be cleared.

    Restrictions: Must be a day of the week in English, i.e. monday, tuesday, etc. The validation is case-sensitive.

    working_dir

    Working (temporary) directory to use for backups.

    This directory is used for writing temporary files, such as tar file or ISO filesystem images as they are being built. It is also used to store day-to-day information about incremental backups.

    The working directory should contain enough free space to hold temporary tar files (on a client) or to build an ISO filesystem image (on a master).

    Restrictions: Must be an absolute path

    backup_user

    Effective user that backups should run as.

    This user must exist on the machine which is being configured and should not be root (although that restriction is not enforced).

    This value is also used as the default remote backup user for remote peers.

    Restrictions: Must be non-empty

    backup_group

    Effective group that backups should run as.

    This group must exist on the machine which is being configured, and should not be root or some other powerful group (although that restriction is not enforced).

    Restrictions: Must be non-empty

    rcp_command

    Default rcp-compatible copy command for staging.

    The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it the -B option, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B.

    This value is used as the default value for all remote peers. Technically, this value is not needed by clients, but we require it for all config files anyway.

    Restrictions: Must be non-empty

    rsh_command

    Default rsh-compatible command to use for remote shells.

    The rsh command should be the exact command used for remote shells, including any required options.

    This value is used as the default value for all managed clients. It is optional, because it is only used when executing actions on managed clients. However, each managed client must either be able to read the value from options configuration or must set the value explicitly.

    Restrictions: Must be non-empty

    cback_command

    Default cback-compatible command to use on managed remote clients.

    The cback command should be the exact command used for executing cback on a remote managed client, including any required command-line options. Do not list any actions in the command line, and do not include the --full command-line option.

    This value is used as the default value for all managed clients. It is optional, because it is only used when executing actions on managed clients. However, each managed client must either be able to read the value from options configuration or must set the value explicitly.

    Note: if this command-line is complicated, it is often better to create a simple shell script on the remote host to encapsulate all of the options. Then, just reference the shell script in configuration.

    Restrictions: Must be non-empty

    managed_actions

    Default set of actions that are managed on remote clients.

    This is a comma-separated list of actions that the master will manage on behalf of remote clients. Typically, it would include only collect-like actions and purge.

    This value is used as the default value for all managed clients. It is optional, because it is only used when executing actions on managed clients. However, each managed client must either be able to read the value from options configuration or must set the value explicitly.

    Restrictions: Must be non-empty.

    override

    Command to override with a customized path.

    This is a subsection which contains a command to override with a customized path. This functionality would be used if root's $PATH does not include a particular required command, or if there is a need to use a version of a command that is different than the one listed on the $PATH. Most users will only use this section when directed to, in order to fix a problem.

    This section is optional, and can be repeated as many times as necessary.

    This subsection must contain the following two fields:

    command

    Name of the command to be overridden, i.e. cdrecord.

    Restrictions: Must be a non-empty string.

    abs_path

    The absolute path where the overridden command can be found.

    Restrictions: Must be an absolute path.

    pre_action_hook

    Hook configuring a command to be executed before an action.

    This is a subsection which configures a command to be executed immediately before a named action. It provides a way for administrators to associate their own custom functionality with standard Cedar Backup actions or with arbitrary extensions.

    This section is optional, and can be repeated as many times as necessary.

    This subsection must contain the following two fields:

    action

    Name of the Cedar Backup action that the hook is associated with. The action can be a standard backup action (collect, stage, etc.) or can be an extension action. No validation is done to ensure that the configured action actually exists.

    Restrictions: Must be a non-empty string.

    command

    Name of the command to be executed. This item can either specify the path to a shell script of some sort (the recommended approach) or can include a complete shell command.

    Note: if you choose to provide a complete shell command rather than the path to a script, you need to be aware of some limitations of Cedar Backup's command-line parser. You cannot use a subshell (via the `command` or $(command) syntaxes) or any shell variable in your command line. Additionally, the command-line parser only recognizes the double-quote character (") to delimit groupings or strings on the command-line. The bottom line is, you are probably best off writing a shell script of some sort for anything more sophisticated than very simple shell commands.

    Restrictions: Must be a non-empty string.
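    For anything non-trivial, the recommended approach above amounts to something like the following. The script path and contents are illustrative only; use whatever location suits your system.

    ```shell
    # Write a simple wrapper script; the shell, not Cedar Backup's
    # command-line parser, interprets subshells and variables inside it
    HOOK_SCRIPT="${HOOK_SCRIPT:-/tmp/collect-hook.sh}"
    cat > "${HOOK_SCRIPT}" <<'EOF'
    #!/bin/sh
    # Runs immediately before the collect action
    echo "collect starting on $(hostname) at $(date)"
    EOF
    chmod 755 "${HOOK_SCRIPT}"
    # Then reference the script path in the <command> element of the hook
    ```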

    post_action_hook

    Hook configuring a command to be executed after an action.

    This is a subsection which configures a command to be executed immediately after a named action. It provides a way for administrators to associate their own custom functionality with standard Cedar Backup actions or with arbitrary extensions.

    This section is optional, and can be repeated as many times as necessary.

    This subsection must contain the following two fields:

    action

    Name of the Cedar Backup action that the hook is associated with. The action can be a standard backup action (collect, stage, etc.) or can be an extension action. No validation is done to ensure that the configured action actually exists.

    Restrictions: Must be a non-empty string.

    command

    Name of the command to be executed. This item can either specify the path to a shell script of some sort (the recommended approach) or can include a complete shell command.

    Note: if you choose to provide a complete shell command rather than the path to a script, you need to be aware of some limitations of Cedar Backup's command-line parser. You cannot use a subshell (via the `command` or $(command) syntaxes) or any shell variable in your command line. Additionally, the command-line parser only recognizes the double-quote character (") to delimit groupings or strings on the command-line. The bottom line is, you are probably best off writing a shell script of some sort for anything more sophisticated than very simple shell commands.

    Restrictions: Must be a non-empty string.

    Peers Configuration

    The peers configuration section contains a list of the peers managed by a master. This section is only required on a master.

    This is an example peers configuration section:

    <peers>
       <peer>
          <name>machine1</name>
          <type>local</type>
          <collect_dir>/opt/backup/collect</collect_dir>
       </peer>
       <peer>
          <name>machine2</name>
          <type>remote</type>
          <backup_user>backup</backup_user>
          <collect_dir>/opt/backup/collect</collect_dir>
          <ignore_failures>all</ignore_failures>
       </peer>
       <peer>
          <name>machine3</name>
          <type>remote</type>
          <managed>Y</managed>
          <backup_user>backup</backup_user>
          <collect_dir>/opt/backup/collect</collect_dir>
          <rcp_command>/usr/bin/scp</rcp_command>
          <rsh_command>/usr/bin/ssh</rsh_command>
          <cback_command>/usr/bin/cback</cback_command>
          <managed_actions>collect, purge</managed_actions>
       </peer>
    </peers>
             

    The following elements are part of the peers configuration section:

    peer (local version)

    Local client peer in a backup pool.

    This is a subsection which contains information about a specific local client peer managed by a master.

    This section can be repeated as many times as is necessary. At least one remote or local peer must be configured.

    The local peer subsection must contain the following fields:

    name

    Name of the peer, typically a valid hostname.

    For local peers, this value is only used for reference. However, it is good practice to list the peer's hostname here, for consistency with remote peers.

    Restrictions: Must be non-empty, and unique among all peers.

    type

    Type of this peer.

    This value identifies the type of the peer. For a local peer, it must always be local.

    Restrictions: Must be local.

    collect_dir

    Collect directory to stage from for this peer.

    The master will copy all files in this directory into the appropriate staging directory. Since this is a local peer, the directory is assumed to be reachable via normal filesystem operations (i.e. cp).

    Restrictions: Must be an absolute path.

    ignore_failures

    Ignore failure mode for this peer

    The ignore failure mode indicates whether "not ready to be staged" errors should be ignored for this peer. This option is intended to be used for peers that are up only intermittently, to cut down on the number of error emails received by the Cedar Backup administrator.

    The "none" mode means that all errors will be reported. This is the default behavior. The "all" mode means to ignore all failures. The "weekly" mode means to ignore failures for a start-of-week or full backup. The "daily" mode means to ignore failures for any backup that is not either a full backup or a start-of-week backup.

    Restrictions: If set, must be one of "none", "all", "daily", or "weekly".
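    The four modes can be sketched as a small decision function. The name and signature below are illustrative, not Cedar Backup's actual API; start_of_week is True for a full or start-of-week backup.

    ```python
    def ignore_failure(mode, start_of_week):
        """Decide whether a "not ready to be staged" error is ignored."""
        if mode is None or mode == "none":
            return False              # report all errors (the default)
        if mode == "all":
            return True               # ignore every failure
        if mode == "weekly":
            return start_of_week      # ignore start-of-week/full failures only
        if mode == "daily":
            return not start_of_week  # ignore failures on all other days
        raise ValueError("Invalid ignore_failures mode: %s" % mode)
    ```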

    peer (remote version)

    Remote client peer in a backup pool.

    This is a subsection which contains information about a specific remote client peer managed by a master. A remote peer is one which can be reached via an rsh-based network call.

    This section can be repeated as many times as is necessary. At least one remote or local peer must be configured.

    The remote peer subsection must contain the following fields:

    name

    Hostname of the peer.

    For remote peers, this must be a valid DNS hostname or IP address which can be resolved during an rsh-based network call.

    Restrictions: Must be non-empty, and unique among all peers.

    type

    Type of this peer.

    This value identifies the type of the peer. For a remote peer, it must always be remote.

    Restrictions: Must be remote.

    managed

    Indicates whether this peer is managed.

    A managed peer (or managed client) is a peer for which the master manages all of the backup activities via a remote shell.

    This field is optional. If it doesn't exist, then N will be assumed.

    Restrictions: Must be a boolean (Y or N).

    collect_dir

    Collect directory to stage from for this peer.

    The master will copy all files in this directory into the appropriate staging directory. Since this is a remote peer, the directory is assumed to be reachable via rsh-based network operations (i.e. scp or the configured rcp command).

    Restrictions: Must be an absolute path.

    ignore_failures

    Ignore failure mode for this peer

    The ignore failure mode indicates whether "not ready to be staged" errors should be ignored for this peer. This option is intended to be used for peers that are up only intermittently, to cut down on the number of error emails received by the Cedar Backup administrator.

    The "none" mode means that all errors will be reported. This is the default behavior. The "all" mode means to ignore all failures. The "weekly" mode means to ignore failures for a start-of-week or full backup. The "daily" mode means to ignore failures for any backup that is not either a full backup or a start-of-week backup.

    Restrictions: If set, must be one of "none", "all", "daily", or "weekly".

    backup_user

    Name of backup user on the remote peer.

    This username will be used when copying files from the remote peer via an rsh-based network connection.

    This field is optional. If it doesn't exist, the backup will use the default backup user from the options section.

    Restrictions: Must be non-empty.

    rcp_command

    The rcp-compatible copy command for this peer.

    The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it the -B option, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B.

    This field is optional. If it doesn't exist, the backup will use the default rcp command from the options section.

    Restrictions: Must be non-empty.

    rsh_command

    The rsh-compatible command for this peer.

    The rsh command should be the exact command used for remote shells, including any required options.

    This value only applies if the peer is managed.

    This field is optional. If it doesn't exist, the backup will use the default rsh command from the options section.

    Restrictions: Must be non-empty

    cback_command

    The cback-compatible command for this peer.

    The cback command should be the exact command used for executing cback on the peer as part of a managed backup. This value must include any required command-line options. Do not list any actions in the command line, and do not include the --full command-line option.

    This value only applies if the peer is managed.

    This field is optional. If it doesn't exist, the backup will use the default cback command from the options section.

    Note: if this command-line is complicated, it is often better to create a simple shell script on the remote host to encapsulate all of the options. Then, just reference the shell script in configuration.

    Restrictions: Must be non-empty

    managed_actions

    Set of actions that are managed for this peer.

    This is a comma-separated list of actions that the master will manage on behalf of this peer. Typically, it would include only collect-like actions and purge.

    This value only applies if the peer is managed.

    This field is optional. If it doesn't exist, the backup will use the default list of managed actions from the options section.

    Restrictions: Must be non-empty.

    Collect Configuration

    The collect configuration section contains configuration options related to the collect action. This section contains a variable number of elements, including an optional exclusion section and a repeating subsection used to specify which directories and/or files to collect. You can also configure an ignore indicator file, which lets users mark their own directories as not backed up.

    In order to actually execute the collect action, you must have configured at least one collect directory or one collect file. However, if you are only including collect configuration for use by an extension, then it's OK to leave out these sections. The validation will take place only when the collect action is executed.

    This is an example collect configuration section:

    <collect>
       <collect_dir>/opt/backup/collect</collect_dir>
       <collect_mode>daily</collect_mode>
       <archive_mode>targz</archive_mode>
       <ignore_file>.cbignore</ignore_file>
       <exclude>
          <abs_path>/etc</abs_path>
          <pattern>.*\.conf</pattern>
       </exclude>
       <file>
          <abs_path>/home/root/.profile</abs_path>
       </file>
       <dir>
          <abs_path>/etc</abs_path>
       </dir>
       <dir>
          <abs_path>/var/log</abs_path>
          <collect_mode>incr</collect_mode>
       </dir>
       <dir>
          <abs_path>/opt</abs_path>
          <collect_mode>weekly</collect_mode>
          <exclude>
             <abs_path>/opt/large</abs_path>
             <rel_path>backup</rel_path>
             <pattern>.*tmp</pattern>
          </exclude>
       </dir>
    </collect>
             

    The following elements are part of the collect configuration section:

    collect_dir

    Directory to collect files into.

    On a client, this is the directory which tarfiles for individual collect directories are written into. The master then stages files from this directory into its own staging directory.

    This field is always required. It must contain enough free space to collect all of the backed-up files on the machine in a compressed form.

    Restrictions: Must be an absolute path

    collect_mode

    Default collect mode.

    The collect mode describes how frequently a directory is backed up. See the section called “The Collect Action” (in Chapter 2, Basic Concepts) for more information.

    This value is the collect mode that will be used by default during the collect process. Individual collect directories (below) may override this value. If all individual directories provide their own value, then this default value may be omitted from configuration.

    Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data.

    Restrictions: Must be one of daily, weekly or incr.

    archive_mode

    Default archive mode for collect files.

    The archive mode maps to the way that a backup file is stored. A value tar means just a tarfile (file.tar); a value targz means a gzipped tarfile (file.tar.gz); and a value tarbz2 means a bzipped tarfile (file.tar.bz2).

    This value is the archive mode that will be used by default during the collect process. Individual collect directories (below) may override this value. If all individual directories provide their own value, then this default value may be omitted from configuration.

    Restrictions: Must be one of tar, targz or tarbz2.

    ignore_file

    Default ignore file name.

    The ignore file is an indicator file. If it exists in a given directory, then that directory will be recursively excluded from the backup as if it were explicitly excluded in configuration.

    The ignore file provides a way for individual users (who might not have access to Cedar Backup configuration) to control which of their own directories get backed up. For instance, users with a ~/tmp directory might not want it backed up. If they create an ignore file in their directory (e.g. ~/tmp/.cbignore), then Cedar Backup will ignore it.

    This value is the ignore file name that will be used by default during the collect process. Individual collect directories (below) may override this value. If all individual directories provide their own value, then this default value may be omitted from configuration.

    Restrictions: Must be non-empty
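
    For instance, a user who has no access to Cedar Backup configuration can opt their ~/tmp directory out of the backup with nothing more than a touch. This sketch assumes the ignore file name is configured as .cbignore, as in the example above:

```shell
# Opt the user's ~/tmp directory out of the backup by creating the
# configured ignore file inside it. Assumes ignore_file is ".cbignore",
# as in the example configuration.
mkdir -p "$HOME/tmp"
touch "$HOME/tmp/.cbignore"
```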

    recursion_level

    Recursion level to use when collecting directories.

    This is an integer value that Cedar Backup will consider when generating archive files for a configured collect directory.

    Normally, Cedar Backup generates one archive file per collect directory. So, if you collect /etc you get etc.tar.gz. Most of the time, this is what you want. However, you may sometimes wish to generate multiple archive files for a single collect directory.

    The most obvious example is for /home. By default, Cedar Backup will generate home.tar.gz. If instead, you want one archive file per home directory you can set a recursion level of 1. Cedar Backup will generate home-user1.tar.gz, home-user2.tar.gz, etc.

    Higher recursion levels (2, 3, etc.) are legal, and it doesn't matter if the configured recursion level is deeper than the directory tree that is being collected. You can use a negative recursion level (like -1) to specify an infinite level of recursion. This will exhaust the tree in the same way as if the recursion level is set too high.

    This field is optional. If it doesn't exist, the backup will use the default recursion level of zero.

    Restrictions: Must be an integer.
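
    Following the idiom of the collect example above, a minimal sketch of per-home-directory archives might look like this (values are illustrative; with a recursion level of 1, collecting /home yields one archive per home directory):

    <collect>
       <collect_dir>/opt/backup/collect</collect_dir>
       <collect_mode>daily</collect_mode>
       <archive_mode>targz</archive_mode>
       <ignore_file>.cbignore</ignore_file>
       <recursion_level>1</recursion_level>
       <dir>
          <abs_path>/home</abs_path>
       </dir>
    </collect>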

    exclude

    List of paths or patterns to exclude from the backup.

    This is a subsection which contains a set of absolute paths and patterns to be excluded across all configured directories. For a given directory, the set of absolute paths and patterns to exclude is built from this list and any list that exists on the directory itself. Directories cannot override or remove entries that are in this list, however.

    This section is optional, and if it exists can also be empty.

    The exclude subsection can contain one or more of each of the following fields:

    abs_path

    An absolute path to be recursively excluded from the backup.

    If a directory is excluded, then all of its children are also recursively excluded. For instance, a value /var/log/apache would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be an absolute path.

    pattern

    A pattern to be recursively excluded from the backup.

    The pattern must be a Python regular expression. [21] It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $).

    If the pattern causes a directory to be excluded, then all of the children of that directory are also recursively excluded. For instance, a value .*apache.* might match the /var/log/apache directory. This would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty
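
    The implicit anchoring described above can be mimicked in plain Python with re.fullmatch. This is a sketch of the matching semantics, not Cedar Backup's actual matching code:

```python
import re

# Cedar Backup treats a pattern as if it were bounded by ^ and $.
# re.fullmatch() applies the same whole-string semantics.
pattern = r".*apache.*"

print(bool(re.fullmatch(pattern, "/var/log/apache")))   # whole string matches: True
print(bool(re.fullmatch(pattern, "/var/log")))          # no 'apache' anywhere: False
print(bool(re.fullmatch(r"apache", "/var/log/apache"))) # only a substring matches: False
```

    In other words, a bare word like apache never excludes anything by itself; to match it anywhere in a path, write .*apache.*.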

    file

    A file to be collected.

    This is a subsection which contains information about a specific file to be collected (backed up).

    This section can be repeated as many times as is necessary. At least one collect directory or collect file must be configured when the collect action is executed.

    The collect file subsection contains the following fields:

    abs_path

    Absolute path of the file to collect.

    Restrictions: Must be an absolute path.

    collect_mode

    Collect mode for this file.

    The collect mode describes how frequently a file is backed up. See the section called “The Collect Action” (in Chapter 2, Basic Concepts) for more information.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Note: if your backup device does not support multisession discs, then you should probably confine yourself to the daily collect mode, to avoid losing data.

    Restrictions: Must be one of daily, weekly or incr.

    archive_mode

    Archive mode for this file.

    The archive mode maps to the way that a backup file is stored. A value tar means just a tarfile (file.tar); a value targz means a gzipped tarfile (file.tar.gz); and a value tarbz2 means a bzipped tarfile (file.tar.bz2).

    This field is optional. If it doesn't exist, the backup will use the default archive mode.

    Restrictions: Must be one of tar, targz or tarbz2.

    dir

    A directory to be collected.

    This is a subsection which contains information about a specific directory to be collected (backed up).

    This section can be repeated as many times as is necessary. At least one collect directory or collect file must be configured when the collect action is executed.

    The collect directory subsection contains the following fields:

    abs_path

    Absolute path of the directory to collect.

    The path may be either a directory, a soft link to a directory, or a hard link to a directory. All three are treated the same at this level.

    The contents of the directory will be recursively collected. The backup will contain all of the files in the directory, as well as the contents of all of the subdirectories within the directory, etc.

    Soft links within the directory are treated as files, i.e. they are copied verbatim (as a link) and their contents are not backed up.

    Restrictions: Must be an absolute path.

    collect_mode

    Collect mode for this directory.

    The collect mode describes how frequently a directory is backed up. See the section called “The Collect Action” (in Chapter 2, Basic Concepts) for more information.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Note: if your backup device does not support multisession discs, then you should probably confine yourself to the daily collect mode, to avoid losing data.

    Restrictions: Must be one of daily, weekly or incr.

    archive_mode

    Archive mode for this directory.

    The archive mode maps to the way that a backup file is stored. A value tar means just a tarfile (file.tar); a value targz means a gzipped tarfile (file.tar.gz); and a value tarbz2 means a bzipped tarfile (file.tar.bz2).

    This field is optional. If it doesn't exist, the backup will use the default archive mode.

    Restrictions: Must be one of tar, targz or tarbz2.

    ignore_file

    Ignore file name for this directory.

    The ignore file is an indicator file. If it exists in a given directory, then that directory will be recursively excluded from the backup as if it were explicitly excluded in configuration.

    The ignore file provides a way for individual users (who might not have access to Cedar Backup configuration) to control which of their own directories get backed up. For instance, users with a ~/tmp directory might not want it backed up. If they create an ignore file in their directory (e.g. ~/tmp/.cbignore), then Cedar Backup will ignore it.

    This field is optional. If it doesn't exist, the backup will use the default ignore file name.

    Restrictions: Must be non-empty

    link_depth

    Link depth value to use for this directory.

    The link depth is the maximum depth of the tree at which soft links should be followed. So, a depth of 0 does not follow any soft links within the collect directory, a depth of 1 follows only links immediately within the collect directory, a depth of 2 follows the links at the next level down, etc.

    This field is optional. If it doesn't exist, the backup will assume a value of zero, meaning that soft links within the collect directory will never be followed.

    Restrictions: If set, must be an integer ≥ 0.

    dereference

    Whether to dereference soft links.

    If this flag is set, links that are being followed will be dereferenced before being added to the backup. The link will be added (as a link), and then the directory or file that the link points at will be added as well.

    This value only applies to a directory where soft links are being followed (per the link_depth configuration option). It never applies to a configured collect directory itself, only to other directories within the collect directory.

    This field is optional. If it doesn't exist, the backup will assume that links should never be dereferenced.

    Restrictions: Must be a boolean (Y or N).
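
    A sketch combining the two fields above: follow soft links one level below the collect directory, and dereference what the followed links point at (the path and values are illustrative):

    <dir>
       <abs_path>/opt/web</abs_path>
       <link_depth>1</link_depth>
       <dereference>Y</dereference>
    </dir>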

    exclude

    List of paths or patterns to exclude from the backup.

    This is a subsection which contains a set of paths and patterns to be excluded within this collect directory. This list is combined with the program-wide list to build a complete list for the directory.

    This section is entirely optional, and if it exists can also be empty.

    The exclude subsection can contain one or more of each of the following fields:

    abs_path

    An absolute path to be recursively excluded from the backup.

    If a directory is excluded, then all of its children are also recursively excluded. For instance, a value /var/log/apache would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be an absolute path.

    rel_path

    A relative path to be recursively excluded from the backup.

    The path is assumed to be relative to the collect directory itself. For instance, if the configured directory is /opt/web a configured relative path of something/else would exclude the path /opt/web/something/else.

    If a directory is excluded, then all of its children are also recursively excluded. For instance, a value something/else would exclude any files within something/else as well as files within other directories under something/else.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty.

    pattern

    A pattern to be excluded from the backup.

    The pattern must be a Python regular expression. [21] It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $).

    If the pattern causes a directory to be excluded, then all of the children of that directory are also recursively excluded. For instance, a value .*apache.* might match the /var/log/apache directory. This would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty

    Stage Configuration

    The stage configuration section contains configuration options related to the stage action. The section indicates where data from peers can be staged.

    This section can also (optionally) override the list of peers so that not all peers are staged. If you provide any peers in this section, then the list of peers here completely replaces the list of peers in the peers configuration section for the purposes of staging.

    This is an example stage configuration section for the simple case where the list of peers is taken from peers configuration:

    <stage>
       <staging_dir>/opt/backup/stage</staging_dir>
    </stage>
             

    This is an example stage configuration section that overrides the default list of peers:

    <stage>
       <staging_dir>/opt/backup/stage</staging_dir>
       <peer>
          <name>machine1</name>
          <type>local</type>
          <collect_dir>/opt/backup/collect</collect_dir>
       </peer>
       <peer>
          <name>machine2</name>
          <type>remote</type>
          <backup_user>backup</backup_user>
          <collect_dir>/opt/backup/collect</collect_dir>
       </peer>
    </stage>
             

    The following elements are part of the stage configuration section:

    staging_dir

    Directory to stage files into.

    This is the directory into which the master stages collected data from each of the clients. Within the staging directory, data is staged into date-based directories by peer name. For instance, peer daystrom backed up on 19 Feb 2005 would be staged into something like 2005/02/19/daystrom relative to the staging directory itself.

    This field is always required. The directory must contain enough free space to stage all of the files collected from all of the various machines in a backup pool. Many administrators set up purging to keep staging directories around for a week or more, which requires even more space.

    Restrictions: Must be an absolute path

    peer (local version)

    Local client peer in a backup pool.

    This is a subsection which contains information about a specific local client peer to be staged (backed up). A local peer is one whose collect directory can be reached without requiring any rsh-based network calls. It is possible that a remote peer might be staged as a local peer if its collect directory is mounted to the master via NFS, AFS or some other method.

    This section can be repeated as many times as is necessary. At least one remote or local peer must be configured.

    Remember, if you provide any local or remote peer in staging configuration, the global peer configuration is completely replaced by the staging peer configuration.

    The local peer subsection must contain the following fields:

    name

    Name of the peer, typically a valid hostname.

    For local peers, this value is only used for reference. However, it is good practice to list the peer's hostname here, for consistency with remote peers.

    Restrictions: Must be non-empty, and unique among all peers.

    type

    Type of this peer.

    This value identifies the type of the peer. For a local peer, it must always be local.

    Restrictions: Must be local.

    collect_dir

    Collect directory to stage from for this peer.

    The master will copy all files in this directory into the appropriate staging directory. Since this is a local peer, the directory is assumed to be reachable via normal filesystem operations (i.e. cp).

    Restrictions: Must be an absolute path.

    peer (remote version)

    Remote client peer in a backup pool.

    This is a subsection which contains information about a specific remote client peer to be staged (backed up). A remote peer is one whose collect directory can only be reached via an rsh-based network call.

    This section can be repeated as many times as is necessary. At least one remote or local peer must be configured.

    Remember, if you provide any local or remote peer in staging configuration, the global peer configuration is completely replaced by the staging peer configuration.

    The remote peer subsection must contain the following fields:

    name

    Hostname of the peer.

    For remote peers, this must be a valid DNS hostname or IP address which can be resolved during an rsh-based network call.

    Restrictions: Must be non-empty, and unique among all peers.

    type

    Type of this peer.

    This value identifies the type of the peer. For a remote peer, it must always be remote.

    Restrictions: Must be remote.

    collect_dir

    Collect directory to stage from for this peer.

    The master will copy all files in this directory into the appropriate staging directory. Since this is a remote peer, the directory is assumed to be reachable via rsh-based network operations (i.e. scp or the configured rcp command).

    Restrictions: Must be an absolute path.

    backup_user

    Name of backup user on the remote peer.

    This username will be used when copying files from the remote peer via an rsh-based network connection.

    This field is optional. If it doesn't exist, the backup will use the default backup user from the options section.

    Restrictions: Must be non-empty.

    rcp_command

    The rcp-compatible copy command for this peer.

    The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it the -B option, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B.

    This field is optional. If it doesn't exist, the backup will use the default rcp command from the options section.

    Restrictions: Must be non-empty.

    Store Configuration

    The store configuration section contains configuration options related to the store action. This section contains several optional fields. Most fields control the way media is written using the writer device.

    This is an example store configuration section:

    <store>
       <source_dir>/opt/backup/stage</source_dir>
       <media_type>cdrw-74</media_type>
       <device_type>cdwriter</device_type>
       <target_device>/dev/cdrw</target_device>
       <target_scsi_id>0,0,0</target_scsi_id>
       <drive_speed>4</drive_speed>
       <check_data>Y</check_data>
       <check_media>Y</check_media>
       <warn_midnite>Y</warn_midnite>
       <no_eject>N</no_eject>
       <refresh_media_delay>15</refresh_media_delay>
       <eject_delay>2</eject_delay>
       <blank_behavior>
          <mode>weekly</mode>
          <factor>1.3</factor>
       </blank_behavior>
    </store>
             

    The following elements are part of the store configuration section:

    source_dir

    Directory whose contents should be written to media.

    This directory must be a Cedar Backup staging directory, as configured in the staging configuration section. Only certain data from that directory (typically, data from the current day) will be written to disc.

    Restrictions: Must be an absolute path

    device_type

    Type of the device used to write the media.

    This field controls which type of writer device will be used by Cedar Backup. Currently, Cedar Backup supports CD writers (cdwriter) and DVD writers (dvdwriter).

    This field is optional. If it doesn't exist, the cdwriter device type is assumed.

    Restrictions: If set, must be either cdwriter or dvdwriter.

    media_type

    Type of the media in the device.

    Unless you want to throw away a backup disc every week, you are probably best off using rewritable media.

    You must choose a media type that is appropriate for the device type you chose above. For more information on media types, see the section called “Media and Device Types” (in Chapter 2, Basic Concepts).

    Restrictions: Must be one of cdr-74, cdrw-74, cdr-80 or cdrw-80 if device type is cdwriter; or one of dvd+r or dvd+rw if device type is dvdwriter.

    target_device

    Filesystem device name for writer device.

    This value is required for both CD writers and DVD writers.

    This is the UNIX device name for the writer drive, for instance /dev/scd0 or a symlink like /dev/cdrw.

    In some cases, this device name is used to directly write to media. This is true all of the time for DVD writers, and is true for CD writers when a SCSI id (see below) has not been specified.

    Besides this, the device name is also needed in order to do several pre-write checks (such as whether the device might already be mounted) as well as the post-write consistency check, if enabled.

    Note: some users have reported intermittent problems when using a symlink as the target device on Linux, especially with DVD media. If you experience problems, try using the real device name rather than the symlink.

    Restrictions: Must be an absolute path.

    target_scsi_id

    SCSI id for the writer device.

    This value is optional for CD writers and is ignored for DVD writers.

    If you have configured your CD writer hardware to work through the normal filesystem device path, then you can leave this parameter unset. Cedar Backup will just use the target device (above) when talking to cdrecord.

    Otherwise, if you have SCSI CD writer hardware or you have configured your non-SCSI hardware to operate like a SCSI device, then you need to provide Cedar Backup with a SCSI id it can use when talking with cdrecord.

    For the purposes of Cedar Backup, a valid SCSI identifier must either be in the standard SCSI identifier form scsibus,target,lun or in the specialized-method form <method>:scsibus,target,lun.

    An example of a standard SCSI identifier is 1,6,2. Today, the two most common examples of the specialized-method form are ATA:scsibus,target,lun and ATAPI:scsibus,target,lun, but you may occasionally see other values (like OLDATAPI in some forks of cdrecord).

    See the section called “Configuring your Writer Device” for more information on writer devices and how they are configured.

    Restrictions: If set, must be a valid SCSI identifier.
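
    For instance, a non-SCSI writer addressed through the specialized-method form might be configured like this (the ATA method and the bus/target/lun values are illustrative; use the identifier that matches your hardware):

    <target_device>/dev/cdrw</target_device>
    <target_scsi_id>ATA:1,0,0</target_scsi_id>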

    drive_speed

    Speed of the drive, e.g. 2 for a 2x device.

    This field is optional. If it doesn't exist, the underlying device-related functionality will use the default drive speed.

    For DVD writers, it is best to leave this value unset, so growisofs can pick an appropriate speed. For CD writers, since media can be speed-sensitive, it is probably best to set a sensible value based on your specific writer and media.

    Restrictions: If set, must be an integer ≥ 1.

    check_data

    Whether the media should be validated.

    This field indicates whether a resulting image on the media should be validated after the write completes, by running a consistency check against it. If this check is enabled, the contents of the staging directory are directly compared to the media, and an error is reported if there is a mismatch.

    Practice shows that some drives can encounter an error when writing a multisession disc, but not report any problems. This consistency check allows us to catch the problem. By default, the consistency check is disabled, but most users should choose to enable it unless they have a good reason not to.

    This field is optional. If it doesn't exist, then N will be assumed.

    Restrictions: Must be a boolean (Y or N).

    check_media

    Whether the media should be checked before writing to it.

    By default, Cedar Backup does not check its media before writing to it. It will write to any media in the backup device. If you set this flag to Y, Cedar Backup will make sure that the media has been initialized before writing to it. (Rewritable media is initialized using the initialize action.)

    If the configured media is not rewritable (like CD-R), then this behavior is modified slightly. For this kind of media, the check passes either if the media has been initialized or if the media appears unused.

    This field is optional. If it doesn't exist, then N will be assumed.

    Restrictions: Must be a boolean (Y or N).

    warn_midnite

    Whether to generate warnings for crossing midnite.

    This field indicates whether warnings should be generated if the store operation has to cross a midnite boundary in order to find data to write to disc. For instance, a warning would be generated if valid store data was only found in the day before or day after the current day.

    Configuration for some users is such that the store operation will always cross a midnite boundary, so they will not care about this warning. Other users will expect to never cross a boundary, and want to be notified that something strange might have happened.

    This field is optional. If it doesn't exist, then N will be assumed.

    Restrictions: Must be a boolean (Y or N).

    no_eject

    Indicates that the writer device should not be ejected.

    Under some circumstances, Cedar Backup ejects (opens and closes) the writer device. This is done because some writer devices need to re-load the media before noticing a media state change (like a new session).

    For most writer devices this is safe, because they have a tray that can be opened and closed. If your writer device does not have a tray and Cedar Backup does not properly detect this, then set this flag. Cedar Backup will not ever issue an eject command to your writer.

    Note: this could cause problems with your backup. For instance, with many writers, the check data step may fail if the media is not reloaded first. If this happens to you, you may need to get a different writer device.

    This field is optional. If it doesn't exist, then N will be assumed.

    Restrictions: Must be a boolean (Y or N).

    refresh_media_delay

    Number of seconds to delay after refreshing media.

    This field is optional. If it doesn't exist, no delay will occur.

    Some devices seem to take a little while to stabilize after refreshing the media (i.e. closing and opening the tray). During this period, operations on the media may fail. If your device behaves like this, you can try setting a delay of 10-15 seconds.

    Restrictions: If set, must be an integer ≥ 1.

    eject_delay

    Number of seconds to delay after ejecting the tray.

    This field is optional. If it doesn't exist, no delay will occur.

    If your system seems to have problems opening and closing the tray, one possibility is that the open/close sequence is happening too quickly — either the tray isn't fully open when Cedar Backup tries to close it, or it doesn't report being open. To work around that problem, set an eject delay of a few seconds.

    Restrictions: If set, must be an integer ≥ 1.

    blank_behavior

    Optimized blanking strategy.

    For more information about Cedar Backup's optimized blanking strategy, see the section called “Optimized Blanking Strategy”.

    This entire configuration section is optional. However, if you choose to provide it, you must configure both a blanking mode and a blanking factor.

    blank_mode

    Blanking mode.

    Restrictions: Must be one of daily or weekly.

    blank_factor

    Blanking factor.

    Restrictions: Must be a floating point number ≥ 0.

    Purge Configuration

    The purge configuration section contains configuration options related to the purge action. This section contains a set of directories to be purged, along with information about the schedule at which they should be purged.

    Typically, Cedar Backup should be configured to purge collect directories daily (a retain_days value of 0).

    If you are tight on space, staging directories can also be purged daily. However, if you have space to spare, you should consider purging about once per week. That way, if your backup media is damaged, you will be able to recreate the week's backup using the rebuild action.

    You should also purge the working directory periodically, once every few weeks or once per month. This way, if any unneeded files are left around, perhaps because a backup was interrupted or because configuration changed, they will eventually be removed. The working directory should not be purged any more frequently than once per week, otherwise you will risk destroying data used for incremental backups.

    This is an example purge configuration section:

    <purge>
       <dir>
          <abs_path>/opt/backup/stage</abs_path>
          <retain_days>7</retain_days>
       </dir>
       <dir>
          <abs_path>/opt/backup/collect</abs_path>
          <retain_days>0</retain_days>
       </dir>
    </purge>
             

    The following elements are part of the purge configuration section:

    dir

    A directory to purge within.

    This is a subsection which contains information about a specific directory to purge within.

    This section can be repeated as many times as is necessary. At least one purge directory must be configured.

    The purge directory subsection contains the following fields:

    abs_path

    Absolute path of the directory to purge within.

    The contents of the directory will be purged based on age. The purge will remove any files that were last modified more than retain_days days ago. Empty directories will also eventually be removed. The purge directory itself will never be removed.

    The path may be either a directory, a soft link to a directory, or a hard link to a directory. Soft links within the directory (if any) are treated as files.

    Restrictions: Must be an absolute path.

    retain_days

    Number of days to retain old files.

    Once it has been more than this many days since a file was last modified, it is a candidate for removal.

    Restrictions: Must be an integer ≥ 0.

    Extensions Configuration

    The extensions configuration section is used to configure third-party extensions to Cedar Backup. If you don't intend to use any extensions, or don't know what extensions are, then you can safely leave this section out of your configuration file. It is optional.

    Extensions configuration is used to specify extended actions implemented by code external to Cedar Backup. An administrator can use this section to map command-line Cedar Backup actions to third-party extension functions.

    Each extended action has a name, which is mapped to a Python function within a particular module. Each action also has an index associated with it. This index is used to properly order execution when more than one action is specified on the command line. The standard actions have predefined indexes, and extended actions are interleaved into the normal order of execution using those indexes. The collect action has index 100, the stage action has index 200, the store action has index 300 and the purge action has index 400.

    Warning

    Extended actions should always be configured to run before the standard action they are associated with. This is because of the way indicator files are used in Cedar Backup. For instance, the staging process considers the collect action to be complete for a peer if the file cback.collect can be found in that peer's collect directory.

    If you were to run the standard collect action before your other collect-like actions, the indicator file would be written after the collect action completes but before all of the other actions even run. Because of this, there's a chance the stage process might back up the collect directory before the entire set of collect-like actions have completed — and you would get no warning about this in your email!

    So, imagine that a third-party developer provided a Cedar Backup extension to back up a certain kind of database repository, and you wanted to map that extension to the database command-line action. You have been told that this function is called foo.bar(). You think of this backup as a collect kind of action, so you want it to be performed immediately before the collect action.

    To configure this extension, you would list an action with a name database, a module foo, a function name bar and an index of 99.

    This is how the hypothetical action would be configured:

    <extensions>
       <action>
          <name>database</name>
          <module>foo</module>
          <function>bar</function>
          <index>99</index>
       </action>
    </extensions>
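At run time, a module/function mapping like this amounts to importing the module and looking up the function by name. The sketch below shows that mechanism using the standard importlib module; the real Cedar Backup extension interface passes specific arguments to the resolved function that are not shown here.

```python
import importlib

def load_extension(module_name, function_name):
    """Resolve an extension function from its configured module and
    function names.  Sketch only: the actual Cedar Backup extension
    interface calls the function with specific arguments not shown
    here."""
    module = importlib.import_module(module_name)
    return getattr(module, function_name)
```

With the hypothetical configuration above, `load_extension("foo", "bar")` would return the callable `foo.bar`, assuming a module named foo is importable.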
             

    The following elements are part of the extensions configuration section:

    action

    This is a subsection that contains configuration related to a single extended action.

    This section can be repeated as many times as is necessary.

    The action subsection contains the following fields:

    name

    Name of the extended action.

    Restrictions: Must be a non-empty string consisting of only lower-case letters and digits.

    module

    Name of the Python module associated with the extension function.

    Restrictions: Must be a non-empty string and a valid Python identifier.

    function

    Name of the Python extension function within the module.

    Restrictions: Must be a non-empty string and a valid Python identifier.

    index

    Index of action, for execution ordering.

    Restrictions: Must be an integer ≥ 0.


    History

    Cedar Backup began life in late 2000 as a set of Perl scripts called kbackup. These scripts met an immediate need (which was to back up skyjammer.com and some personal machines) but proved to be unstable, overly verbose and rather difficult to maintain.

    In early 2002, work began on a rewrite of kbackup. The goal was to address many of the shortcomings of the original application, as well as to clean up the code and make it available to the general public. While doing research related to code I could borrow or base the rewrite on, I discovered that there was already an existing backup package with the name kbackup, so I decided to change the name to Cedar Backup instead.

    Because I had become fed up with the prospect of maintaining a large volume of Perl code, I decided to abandon that language in favor of Python. [3] At the time, I chose Python mostly because I was interested in learning it, but in retrospect it turned out to be a very good decision. From my perspective, Python has almost all of the strengths of Perl, but few of its inherent weaknesses (I feel that primarily, Python code often ends up being much more readable than Perl code).

    Around this same time, skyjammer.com and cedar-solutions.com were converted to run Debian GNU/Linux (potato) [4] and I entered the Debian new maintainer queue, so I also made it a goal to implement Debian packages along with a Python source distribution for the new release.

    Version 1.0 of Cedar Backup was released in June of 2002. We immediately began using it to back up skyjammer.com and cedar-solutions.com, where it proved to be much more stable than the original code.

In the meantime, I continued to improve as a Python programmer and also started doing a significant amount of professional development in Java. It soon became obvious that the internal structure of Cedar Backup 1.0, while much better than kbackup, still left something to be desired. In November 2003, I began an attempt at cleaning up the codebase. I converted all of the internal documentation to use Epydoc, [5] and updated the code to use the newly-released Python logging package [6] after having a good experience with Java's log4j. However, I was still not satisfied with the code, which did not lend itself to the automated regression testing I had used when working with JUnit in my Java code.

    So, rather than releasing the cleaned-up code, I instead began another ground-up rewrite in May 2004. With this rewrite, I applied everything I had learned from other Java and Python projects I had undertaken over the last few years. I structured the code to take advantage of Python's unique ability to blend procedural code with object-oriented code, and I made automated unit testing a primary requirement. The result was the 2.0 release, which is cleaner, more compact, better focused, and better documented than any release before it. Utility code is less application-specific, and is now usable as a general-purpose library. The 2.0 release also includes a complete regression test suite of over 3000 tests, which will help to ensure that quality is maintained as development continues into the future. [7]

    The 3.0 release of Cedar Backup is a Python 3 conversion of the 2.0 release, with minimal additional functionality. The conversion from Python 2 to Python 3 started in mid-2015, about 5 years before the anticipated deprecation of Python 2 in 2020. Most users should consider transitioning to the 3.0 release.



    [4] Debian's stable releases are named after characters in the Toy Story movie.

    [5] Epydoc is a Python code documentation tool. See http://epydoc.sourceforge.net/.

    [7] Tests are implemented using Python's unit test framework. See http://docs.python.org/lib/module-unittest.html.


    The Backup Process

    The Cedar Backup backup process is structured in terms of a set of decoupled actions which execute independently (based on a schedule in cron) rather than through some highly coordinated flow of control.

    This design decision has both positive and negative consequences. On the one hand, the code is much simpler and can choose to simply abort or log an error if its expectations are not met. On the other hand, the administrator must coordinate the various actions during initial set-up. See the section called “Coordination between Master and Clients” (later in this chapter) for more information on this subject.

    A standard backup run consists of four steps (actions), some of which execute on the master machine, and some of which execute on one or more client machines. These actions are: collect, stage, store and purge.

    In general, more than one action may be specified on the command-line. If more than one action is specified, then actions will be taken in a sensible order (generally collect, stage, store, purge). A special all action is also allowed, which implies all of the standard actions in the same sensible order.

    The cback3 command also supports several actions that are not part of the standard backup run and cannot be executed along with any other actions. These actions are validate, initialize and rebuild. All of the various actions are discussed further below.

    See Chapter 5, Configuration for more information on how a backup run is configured.

    The Collect Action

    The collect action is the first action in a standard backup run. It executes on both master and client nodes. Based on configuration, this action traverses the peer's filesystem and gathers files to be backed up. Each configured high-level directory is collected up into its own tar file in the collect directory. The tarfiles can either be uncompressed (.tar) or compressed with either gzip (.tar.gz) or bzip2 (.tar.bz2).

    There are three supported collect modes: daily, weekly and incremental. Directories configured for daily backups are backed up every day. Directories configured for weekly backups are backed up on the first day of the week. Directories configured for incremental backups are traversed every day, but only the files which have changed (based on a saved-off SHA hash) are actually backed up.
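The incremental mode described above can be sketched as comparing each file's digest against a saved-off map from the previous run. This sketch uses SHA-256 for illustration; Cedar Backup's actual digest choice and on-disk storage format are not shown here.

```python
import hashlib

def changed_files(paths, saved_digests):
    """Return the files whose digest differs from the saved value,
    updating the map as we go.  Sketch of the incremental idea only;
    not Cedar Backup's actual implementation."""
    changed = []
    for path in paths:
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if saved_digests.get(path) != digest:
            changed.append(path)
            saved_digests[path] = digest
    return changed
```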

    Collect configuration also allows for a variety of ways to filter files and directories out of the backup. For instance, administrators can configure an ignore indicator file [9] or specify absolute paths or filename patterns [10] to be excluded. You can even configure a backup link farm rather than explicitly listing files and directories in configuration.

    This action is optional on the master. You only need to configure and execute the collect action on the master if you have data to back up on that machine. If you plan to use the master only as a consolidation point to collect data from other machines, then there is no need to execute the collect action there. If you run the collect action on the master, it behaves the same there as anywhere else, and you have to stage the master's collected data just like any other client (typically by configuring a local peer in the stage action).

    The Stage Action

    The stage action is the second action in a standard backup run. It executes on the master peer node. The master works down the list of peers in its backup pool and stages (copies) the collected backup files from each of them into a daily staging directory by peer name.

    For the purposes of this action, the master node can be configured to treat itself as a client node. If you intend to back up data on the master, configure the master as a local peer. Otherwise, just configure each of the clients as a remote peer.

    Local and remote client peers are treated differently. Local peer collect directories are assumed to be accessible via normal copy commands (i.e. on a mounted filesystem) while remote peer collect directories are accessed via an RSH-compatible command such as ssh.

    If a given peer is not ready to be staged, the stage process will log an error, abort the backup for that peer, and then move on to its other peers. This way, one broken peer cannot break a backup for other peers which are up and running.
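The error-isolation pattern described here — log, abort that peer, and move on — can be sketched as follows. The stage_peer callable is hypothetical; this is only an illustration of the pattern, not Cedar Backup's stage code.

```python
import logging

def stage_all(peers, stage_peer):
    """Stage each peer independently, so one broken peer cannot
    break the backup for the others.  stage_peer is a hypothetical
    callable; sketch of the error-isolation pattern only."""
    staged = []
    for peer in peers:
        try:
            stage_peer(peer)
            staged.append(peer)
        except Exception:
            logging.exception("Unable to stage peer %s; continuing", peer)
    return staged
```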

    Keep in mind that Cedar Backup is flexible about what actions must be executed as part of a backup. If you would prefer, you can stop the backup process at this step, and skip the store step. In this case, the staged directories will represent your backup rather than a disc.

    Note

    Directories collected by another process can be staged by Cedar Backup. If the file cback.collect exists in a collect directory when the stage action is taken, then that directory will be staged.
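The indicator-file rule is simple enough to express directly. A minimal sketch, assuming the indicator filename cback.collect from the text:

```python
import os

COLLECT_INDICATOR = "cback.collect"

def is_ready_to_stage(collect_dir):
    """A collect directory is considered complete (and stageable)
    when the indicator file exists.  Sketch of the rule above."""
    return os.path.isfile(os.path.join(collect_dir, COLLECT_INDICATOR))
```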

    The Store Action

    The store action is the third action in a standard backup run. It executes on the master peer node. The master machine determines the location of the current staging directory, and then writes the contents of that staging directory to disc. After the contents of the directory have been written to disc, an optional validation step ensures that the write was successful.

    If the backup is running on the first day of the week, if the drive does not support multisession discs, or if the --full option is passed to the cback3 command, the disc will be rebuilt from scratch. Otherwise, a new ISO session will be added to the disc each day the backup runs.

    This action is entirely optional. If you would prefer to just stage backup data from a set of peers to a master machine, and have the staged directories represent your backup rather than a disc, this is fine.

    Warning

    The store action is not supported on the Mac OS X (darwin) platform. On that platform, the automount function of the Finder interferes significantly with Cedar Backup's ability to mount and unmount media and write to the CD or DVD hardware. The Cedar Backup writer and image functionality works on this platform, but the effort required to fight the operating system about who owns the media and the device makes it nearly impossible to execute the store action successfully.

    The Purge Action

    The purge action is the fourth and final action in a standard backup run. It executes both on the master and client peer nodes. Configuration specifies how long to retain files in certain directories, and older files and empty directories are purged.

    Typically, collect directories are purged daily, and stage directories are purged weekly or slightly less often (if a disc gets corrupted, older backups may still be available on the master). Some users also choose to purge the configured working directory (which is used for temporary files) to eliminate any leftover files which might have resulted from changes to configuration.

    The All Action

    The all action is a pseudo-action which causes all of the actions in a standard backup run to be executed together in order. It cannot be combined with any other actions on the command line.

    Extensions cannot be executed as part of the all action. If you need to execute an extended action, you must specify the other actions you want to run individually on the command line. [11]

    The all action does not have its own configuration. Instead, it relies on the individual configuration sections for all of the other actions.

    The Validate Action

    The validate action is used to validate configuration on a particular peer node, either master or client. It cannot be combined with any other actions on the command line.

    The validate action checks that the configuration file can be found, that the configuration file is valid, and that certain portions of the configuration file make sense (for instance, making sure that specified users exist, directories are readable and writable as necessary, etc.).

    The Initialize Action

    The initialize action is used to initialize media for use with Cedar Backup. This is an optional step. By default, Cedar Backup does not need to use initialized media and will write to whatever media exists in the writer device.

    However, if the check media store configuration option is set to true, Cedar Backup will check the media before writing to it and will error out if the media has not been initialized.

    Initializing the media consists of writing a mostly-empty image using a known media label (the media label will begin with CEDAR BACKUP).

    Note that only rewritable media (CD-RW, DVD+RW) can be initialized. It doesn't make any sense to initialize media that cannot be rewritten (CD-R, DVD+R), since Cedar Backup would then not be able to use that media for a backup. You can still configure Cedar Backup to check non-rewritable media; in this case, the check will also pass if the media is apparently unused (i.e. has no media label).

    The Rebuild Action

    The rebuild action is an exception-handling action that is executed independent of a standard backup run. It cannot be combined with any other actions on the command line.

    The rebuild action attempts to rebuild this week's disc from any remaining unpurged staging directories. Typically, it is used to make a copy of a backup, replace lost or damaged media, or to switch to new media mid-week for some other reason.

    To decide what data to write to disc again, the rebuild action looks back and finds the first day of the current week. Then, it finds any remaining staging directories between that date and the current date. If any staging directories are found, they are all written to disc in one big ISO session.
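The "first day of the current week" lookup can be sketched with the standard datetime module. This sketch assumes a Monday-based week for illustration; Cedar Backup actually derives the starting day of the week from its own configuration.

```python
import datetime

def week_start(today, start_weekday=0):
    """Return the most recent occurrence of start_weekday
    (0 = Monday) on or before today.  Sketch only; the real
    starting day comes from Cedar Backup configuration."""
    offset = (today.weekday() - start_weekday) % 7
    return today - datetime.timedelta(days=offset)
```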

    The rebuild action does not have its own configuration. It relies on configuration for other actions, especially the store action.



    [9] Analogous to .cvsignore in CVS

    [10] In terms of Python regular expressions

    [11] Some users find this surprising, because extensions are configured with sequence numbers. I did it this way because I felt that running extensions as part of the all action would sometimes result in surprising behavior. I am not planning to change the way this works.


    Media and Device Types

    Cedar Backup is focused around writing backups to CD or DVD media using a standard SCSI or IDE writer. In Cedar Backup terms, the disc itself is referred to as the media, and the CD/DVD drive is referred to as the device or sometimes the backup device. [12]

    When using a new enough backup device, a new multisession ISO image [13] is written to the media on the first day of the week, and then additional multisession images are added to the media each day that Cedar Backup runs. This way, the media is complete and usable at the end of every backup run, but a single disc can be used all week long. If your backup device does not support multisession images — which is really unusual today — then a new ISO image will be written to the media each time Cedar Backup runs (and you should probably confine yourself to the daily backup mode to avoid losing data).

    Cedar Backup currently supports four different kinds of CD media:

    cdr-74

    74-minute non-rewritable CD media

    cdrw-74

    74-minute rewritable CD media

    cdr-80

    80-minute non-rewritable CD media

    cdrw-80

    80-minute rewritable CD media

    I have chosen to support just these four types of CD media because they seem to be the most standard of the various types commonly sold in the U.S. as of this writing (early 2005). If you regularly use an unsupported media type and would like Cedar Backup to support it, send me information about the capacity of the media in megabytes (MB) and whether it is rewritable.

    Cedar Backup also supports two kinds of DVD media:

    dvd+r

    Single-layer non-rewritable DVD+R media

    dvd+rw

    Single-layer rewritable DVD+RW media

    The underlying growisofs utility does support other kinds of media (including DVD-R, DVD-RW and Blu-ray) which work somewhat differently than standard DVD+R and DVD+RW media. I don't support these other kinds of media because I haven't had any opportunity to work with them. The same goes for dual-layer media of any type.
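The six supported media types can be summarized as data. The capacity figures below are common approximations (74-minute CDs hold roughly 650 MB, 80-minute CDs roughly 700 MB, single-layer DVDs roughly 4.7 GB decimal), not values taken from this manual.

```python
# Approximate nominal capacities in MB for the supported media
# types.  These figures are common approximations, NOT values
# from the Cedar Backup manual.
MEDIA_CAPACITY_MB = {
    "cdr-74": 650,
    "cdrw-74": 650,
    "cdr-80": 700,
    "cdrw-80": 700,
    "dvd+r": 4482,   # single-layer, ~4.7 GB decimal
    "dvd+rw": 4482,
}

# Only these types are rewritable, and hence initializable.
REWRITABLE = {"cdrw-74", "cdrw-80", "dvd+rw"}
```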



    [12] My original backup device was an old Sony CRX140E 4X CD-RW drive. It has since died, and I currently develop using a Lite-On 1673S DVDRW drive.

    [13] An ISO image is the standard way of creating a filesystem to be copied to a CD or DVD. It is essentially a filesystem-within-a-file and many UNIX operating systems can actually mount ISO image files just like hard drives, floppy disks or actual CDs. See Wikipedia for more information: http://en.wikipedia.org/wiki/ISO_image.


    Installing on a Debian System

    The easiest way to install Cedar Backup onto a Debian system is by using a tool such as apt-get or aptitude.

    If you are running a Debian release which contains Cedar Backup, you can use your normal Debian mirror as an APT data source. (The Debian jessie release is the first release to contain Cedar Backup 3.) Otherwise, you need to install from the Cedar Solutions APT data source. [15] To do this, add the Cedar Solutions APT data source to your /etc/apt/sources.list file.

    After you have configured the proper APT data source, install Cedar Backup using this set of commands:

    $ apt-get update
    $ apt-get install cedar-backup3 cedar-backup3-doc
          

    Several of the Cedar Backup dependencies are listed as recommended rather than required. If you are installing Cedar Backup on a master machine, you must install some or all of the recommended dependencies, depending on which actions you intend to execute. The stage action normally requires ssh, and the store action requires eject and either cdrecord/mkisofs or dvd+rw-tools. Clients must also install some sort of ssh server if a remote master will collect backups from them.

    If you would prefer, you can also download the .deb files and install them by hand with a tool such as dpkg. You can find these files in the Cedar Solutions APT source.

    In either case, once the package has been installed, you can proceed to configuration as described in Chapter 5, Configuration.

    Note

    The Debian package-management tools must generally be run as root. It is safe to install Cedar Backup to a non-standard location and run it as a non-root user. However, to do this, you must install the source distribution instead of the Debian package.


    Conventions Used in This Book

    This section covers the various conventions used in this manual.

    Typographic Conventions

    Term

    Used for first use of important terms.

    Command

    Used for commands, command output, and switches

    Replaceable

    Used for replaceable items in code and text

    Filenames

    Used for file and directory names

    Icons

    Note

    This icon designates a note relating to the surrounding text.

    Tip

    This icon designates a helpful tip relating to the surrounding text.

    Warning

    This icon designates a warning relating to the surrounding text.


    MySQL Extension

    The MySQL Extension is a Cedar Backup extension used to back up MySQL [26] databases via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action.

    Note

    This extension always produces a full backup. There is currently no facility for making incremental backups. If/when someone has a need for this and can describe how to do it, I will update this extension or provide another.

    The backup is done via the mysqldump command included with the MySQL product. Output can be compressed using gzip or bzip2. Administrators can configure the extension either to back up all databases or to back up only specific databases.
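The mysqldump invocation can be sketched as argument-list construction. This is a sketch only: --all-databases and --databases are real mysqldump switches, but the recommended setup leaves user unset so that credentials come from MySQL's own configuration file rather than the command line.

```python
def mysqldump_command(all_databases=True, databases=(), user=None):
    """Build a mysqldump argument list.  Sketch only; in the
    recommended setup, user stays None so credentials come from
    MySQL's own configuration file and never appear in the
    process listing."""
    cmd = ["mysqldump"]
    if user is not None:
        cmd.append("--user=%s" % user)
    if all_databases:
        cmd.append("--all-databases")
    else:
        cmd.append("--databases")
        cmd.extend(databases)
    return cmd
```

The resulting command's output would then be compressed with gzip or bzip2, per the compress_mode configuration.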

    The extension assumes that all configured databases can be backed up by a single user. Often, the root database user will be used. An alternative is to create a separate MySQL backup user and grant that user rights to read (but not write) various databases as needed. This second option is probably your best choice.

    Warning

    The extension accepts a username and password in configuration. However, you probably do not want to list those values in Cedar Backup configuration. This is because Cedar Backup will provide these values to mysqldump via the command-line --user and --password switches, which will be visible to other users in the process listing.

    Instead, you should configure the username and password in one of MySQL's configuration files. Typically, that would be done by putting a stanza like this in /root/.my.cnf:

    [mysqldump]
    user     = root
    password = <secret>
             

    Of course, if you are executing the backup as a user other than root, then you would create the file in that user's home directory instead.

    As a side note, it is also possible to configure .my.cnf such that Cedar Backup can back up a remote database server:

    [mysqldump]
    host = remote.host
             

    For this to work, you will also need to grant privileges properly for the user which is executing the backup. See your MySQL documentation for more information about how this can be done.

    Regardless of whether you are using ~/.my.cnf or /etc/cback3.conf to store database login and password information, you should be careful about who is allowed to view that information. Typically, this means locking down permissions so that only the file owner can read the file contents (i.e. use mode 0600).
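The mode 0600 advice can be checked programmatically. A minimal sketch: a credentials file is private when no group or other permission bits are set.

```python
import os
import stat

def credentials_file_is_private(path):
    """Return True when only the owner can access the file, i.e.
    no group/other permission bits are set (mode 0600 or
    stricter).  Sketch of the advice above."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return (mode & 0o077) == 0
```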

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>mysql</name>
          <module>CedarBackup3.extend.mysql</module>
          <function>executeAction</function>
          <index>99</index>
       </action>
    </extensions>
          

    This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own mysql configuration section. This is an example MySQL configuration section:

    <mysql>
       <compress_mode>bzip2</compress_mode>
       <all>Y</all>
    </mysql>
          

    If you have decided to configure login information in Cedar Backup rather than using MySQL configuration, then you would add the username and password fields to configuration:

    <mysql>
       <user>root</user>
       <password>password</password>
       <compress_mode>bzip2</compress_mode>
       <all>Y</all>
    </mysql>
          

    The following elements are part of the MySQL configuration section:

    user

    Database user.

    The database user that the backup should be executed as. Even if you list more than one database (below) all backups must be done as the same user. Typically, this would be root (i.e. the database root user, not the system root user).

    This value is optional. You should probably configure the username and password in MySQL configuration instead, as discussed above.

    Restrictions: If provided, must be non-empty.

    password

    Password associated with the database user.

    This value is optional. You should probably configure the username and password in MySQL configuration instead, as discussed above.

    Restrictions: If provided, must be non-empty.

    compress_mode

    Compress mode.

    MySQL database dumps are just specially-formatted text files, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all.

    Restrictions: Must be one of none, gzip or bzip2.

    all

    Indicates whether to back up all databases.

    If this value is Y, then all MySQL databases will be backed up. If this value is N, then one or more specific databases must be specified (see below).

    If you choose this option, the entire database backup will go into one big dump file.

    Restrictions: Must be a boolean (Y or N).

    database

    Named database to be backed up.

    If you choose to specify individual databases rather than all databases, then each database will be backed up into its own dump file.

    This field can be repeated as many times as is necessary. At least one database must be configured if the all option (above) is set to N. You may not configure any individual databases if the all option is set to Y.

    Restrictions: Must be non-empty.


    Capacity Extension

    The capacity extension checks the current capacity of the media in the writer and prints a warning if the media exceeds an indicated capacity. The capacity is indicated either by a maximum percentage utilized or by a minimum number of bytes that must remain unused.

    This action can be run at any time, but is probably best run as the last action on any given day, so you get as much notice as possible that your media is full and needs to be replaced.

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>capacity</name>
          <module>CedarBackup3.extend.capacity</module>
          <function>executeAction</function>
          <index>299</index>
       </action>
    </extensions>
          

    This extension relies on the options and store configuration sections in the standard Cedar Backup configuration file, and then also requires its own capacity configuration section. This is an example Capacity configuration section that configures the extension to warn if the media is more than 95.5% full:

    <capacity>
       <max_percentage>95.5</max_percentage>
    </capacity>
          

    This example configures the extension to warn if the media has fewer than 16 MB free:

    <capacity>
       <min_bytes>16 MB</min_bytes>
    </capacity>
          

    The following elements are part of the Capacity configuration section:

    max_percentage

    Maximum percentage of the media that may be utilized.

    You must provide either this value or the min_bytes value.

    Restrictions: Must be a floating point number between 0.0 and 100.0

    min_bytes

    Minimum number of free bytes that must be available.

    You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB).

    Valid examples are 10240, 250 MB or 1.1 GB.

    You must provide either this value or the max_percentage value.

    Restrictions: Must be a byte quantity as described above.
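Parsing the byte-quantity format described above is straightforward. A minimal sketch, assuming 1 KB = 1024 bytes — which may differ from Cedar Backup's actual ByteQuantity convention:

```python
# Assumed binary multipliers; Cedar Backup's actual convention
# may differ.
UNITS = {"KB": 1024, "MB": 1024 ** 2, "GB": 1024 ** 3}

def parse_byte_quantity(value):
    """Parse quantities like "10240", "250 MB" or "1.1 GB" into a
    number of bytes.  Sketch of the documented format only."""
    parts = value.split()
    if len(parts) == 1:
        return float(parts[0])
    number, unit = parts
    return float(number) * UNITS[unit.upper()]
```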


    The cback3 command

    Introduction

    Cedar Backup's primary command-line interface is the cback3 command. It controls the entire backup process.

    Syntax

    The cback3 command has the following syntax:

     Usage: cback3 [switches] action(s)
    
     The following switches are accepted:
    
       -h, --help         Display this usage/help listing
       -V, --version      Display version information
       -b, --verbose      Print verbose output as well as logging to disk
       -q, --quiet        Run quietly (display no output to the screen)
       -c, --config       Path to config file (default: /etc/cback3.conf)
       -f, --full         Perform a full backup, regardless of configuration
       -M, --managed      Include managed clients when executing actions
       -N, --managed-only Include ONLY managed clients when executing actions
       -l, --logfile      Path to logfile (default: /var/log/cback3.log)
       -o, --owner        Logfile ownership, user:group (default: root:adm)
       -m, --mode         Octal logfile permissions mode (default: 640)
       -O, --output       Record some sub-command (i.e. cdrecord) output to the log
       -d, --debug        Write debugging information to the log (implies --output)
       -s, --stack        Dump a Python stack trace instead of swallowing exceptions
       -D, --diagnostics  Print runtime diagnostics to the screen and exit
    
     The following actions may be specified:
    
       all                Take all normal actions (collect, stage, store, purge)
       collect            Take the collect action
       stage              Take the stage action
       store              Take the store action
       purge              Take the purge action
       rebuild            Rebuild "this week's" disc if possible
       validate           Validate configuration only
       initialize         Initialize media for use with Cedar Backup
    
     You may also specify extended actions that have been defined in
     configuration.
    
     You must specify at least one action to take.  More than one of
     the "collect", "stage", "store" or "purge" actions and/or
     extended actions may be specified in any arbitrary order; they
     will be executed in a sensible order.  The "all", "rebuild",
     "validate", and "initialize" actions may not be combined with
     other actions.
             

    Note that the all action only executes the standard four actions. It never executes any of the configured extensions. [18]

    Switches

    -h, --help

    Display usage/help listing.

    -V, --version

    Display version information.

    -b, --verbose

    Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen.

    -q, --quiet

    Run quietly (display no output to the screen).

    -c, --config

    Specify the path to an alternate configuration file. The default configuration file is /etc/cback3.conf.

    -f, --full

    Perform a full backup, regardless of configuration. For the collect action, this means that any existing information related to incremental backups will be ignored and rewritten; for the store action, this means that a new disc will be started.

    -M, --managed

    Include managed clients when executing actions. If the action being executed is listed as a managed action for a managed client, execute the action on that client after executing the action locally.

    -N, --managed-only

    Include only managed clients when executing actions. If the action being executed is listed as a managed action for a managed client, execute the action on that client — but do not execute the action locally.

    -l, --logfile

    Specify the path to an alternate logfile. The default logfile is /var/log/cback3.log.

    -o, --owner

    Specify the ownership of the logfile, in the form user:group. The default ownership is root:adm, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback3 command is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values.

    -m, --mode

    Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is 0640 (-rw-r-----). This value will only be used when creating a new logfile. If the logfile already exists when the cback3 command is executed, it will retain its existing ownership and mode.
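    The create-only semantics described above can be sketched in Python. This is an illustration of the documented behavior, not Cedar Backup's actual logging code, and the open_logfile name is hypothetical:

    ```python
    import os

    def open_logfile(path, mode=0o640):
        """Open a logfile for appending, applying the permission mode only on creation.

        Mirrors the documented behavior: if the file already exists, its current
        ownership and mode are left untouched.  (Illustrative sketch only.)
        """
        existed = os.path.exists(path)
        # O_CREAT applies the mode argument only when the file is newly created;
        # an existing file keeps whatever permissions it already has.  Note that
        # the process umask still masks the mode bits at creation time.
        fd = os.open(path, os.O_WRONLY | os.O_APPEND | os.O_CREAT, mode)
        return os.fdopen(fd, "a"), existed
    ```

    An existing logfile opened this way keeps its prior mode even if a different mode is requested.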

    -O, --output

    Record some sub-command output to the logfile. When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference.

    -d, --debug

    Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the --output option, as well.

    -s, --stack

    Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report.

    -D, --diagnostics

    Display runtime diagnostic information and then exit. This diagnostic information is often useful when filing a bug report.

    Actions

    You can find more information about the various actions in the section called “The Backup Process” (in Chapter 2, Basic Concepts). In general, you may specify any combination of the collect, stage, store or purge actions, and the specified actions will be executed in a sensible order. Or, you can specify one of the all, rebuild, validate, or initialize actions (but these actions may not be combined with other actions).

    If you have configured any Cedar Backup extensions, then the actions associated with those extensions may also be specified on the command line. If you specify any other actions along with an extended action, the actions will be executed in a sensible order per configuration. The all action never executes extended actions, however.



    [18] Some users find this surprising, because extensions are configured with sequence numbers. I did it this way because I felt that running extensions as part of the all action would sometimes result in surprising behavior. Better to be definitive than confusing.


    Cedar Backup Pools

    There are two kinds of machines in a Cedar Backup pool. One machine (the master) has a CD or DVD writer on it and writes the backup to disc. The others (clients) collect data to be written to disc by the master. Collectively, the master and client machines in a pool are called peer machines.

    Cedar Backup has been designed primarily for situations where there is a single master and a set of other clients that the master interacts with. However, it will just as easily work for a single machine (a backup pool of one) and in fact more users seem to use it like this than any other way.


    Optimized Blanking Strategy

    When the optimized blanking strategy has not been configured, Cedar Backup uses a simplistic approach: rewritable media is blanked at the beginning of every week, period.

    Since rewritable media can be blanked only a finite number of times before becoming unusable, some users — especially users of rewritable DVD media with its large capacity — may prefer to blank the media less often.

    If the optimized blanking strategy is configured, Cedar Backup will use a blanking factor and attempt to determine whether future backups will fit on the current media. If it looks like backups will fit, then the media will not be blanked.

    This feature will only be useful (assuming a single disc is used for the whole week's backups) if the estimated total size of the weekly backup is considerably smaller than the capacity of the media (no more than 50% of the total media capacity), and only if the size of the backup can be expected to remain fairly constant over time (no frequent rapid growth expected).

    There are two blanking modes: daily and weekly. If the weekly blanking mode is set, Cedar Backup will only estimate future capacity (and potentially blank the disc) once per week, on the starting day of the week. If the daily blanking mode is set, Cedar Backup will estimate future capacity (and potentially blank the disc) every time it is run. You should only use the daily blanking mode in conjunction with daily collect configuration; otherwise, you will risk losing data.

    If you are using the daily blanking mode, you can typically set the blanking value to 1.0. This will cause Cedar Backup to blank the media whenever there is not enough space to store the current day's backup.

    If you are using the weekly blanking mode, then finding the correct blanking factor will require some experimentation. Cedar Backup estimates future capacity based on the configured blanking factor. The disc will be blanked if the following relationship is true:

    bytes available / (1 + bytes required) ≤ blanking factor
          

    Another way to look at this is to consider the blanking factor as a sort of (upper) backup growth estimate:

    Total size of weekly backup / Full backup size at the start of the week
          

    This ratio can be estimated using a week or two of previous backups. For instance, take this example, where March 10 is the start of the week and March 4 through March 9 represent the incremental backups from the previous week:

    /opt/backup/staging# du -s 2007/03/*
    3040    2007/03/01
    3044    2007/03/02
    6812    2007/03/03
    3044    2007/03/04
    3152    2007/03/05
    3056    2007/03/06
    3060    2007/03/07
    3056    2007/03/08
    4776    2007/03/09
    6812    2007/03/10
    11824   2007/03/11
          

    In this case, the ratio is approximately 4:

    (6812 + (3044 + 3152 + 3056 + 3060 + 3056 + 4776)) / 6812 = 3.9571
          

    To be safe, you might choose to configure a factor of 5.0.
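    Both calculations can be sketched in Python. This is a hedged illustration of the documented formulas only, and the function names are hypothetical:

    ```python
    def should_blank(bytes_available, bytes_required, blanking_factor):
        """Decide whether to blank rewritable media, per the documented relationship:

            bytes available / (1 + bytes required) <= blanking factor

        (Illustrative sketch of the documented rule, not Cedar Backup's code.)
        """
        return bytes_available / (1 + bytes_required) <= blanking_factor

    def growth_ratio(full_backup_size, incremental_sizes):
        """Estimate a weekly blanking factor from a previous week's backups:
        (full backup + sum of incrementals) / full backup."""
        return (full_backup_size + sum(incremental_sizes)) / full_backup_size
    ```

    For the example above, growth_ratio(6812, [3044, 3152, 3056, 3060, 3056, 4776]) evaluates to roughly 3.9571, and with the daily mode's factor of 1.0, should_blank() is true only when the day's backup no longer fits on the media.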

    Setting a higher value reduces the risk of exceeding media capacity mid-week but might result in blanking the media more often than is necessary.

    If you run out of space mid-week, then the solution is to run the rebuild action. If this happens frequently, a higher blanking factor value should be used.


    Appendix A. Extension Architecture Interface

    The Cedar Backup Extension Architecture Interface is the application programming interface used by third-party developers to write Cedar Backup extensions. This appendix briefly specifies the interface in enough detail for someone to successfully implement an extension.

    You will recall that Cedar Backup extensions are third-party pieces of code which extend Cedar Backup's functionality. Extensions can be invoked from the Cedar Backup command line and are allowed to place their configuration in Cedar Backup's configuration file.

    There is a one-to-one mapping between a command-line extended action and an extension function. The mapping is configured in the Cedar Backup configuration file using a section something like this:

    <extensions>
       <action>
          <name>database</name>
          <module>foo</module>
          <function>bar</function>
          <index>101</index>
       </action> 
    </extensions>
          

    In this case, the action database has been mapped to the extension function foo.bar().
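    Conceptually, resolving such a mapping amounts to a dynamic import. The sketch below illustrates the idea only — it is not Cedar Backup's actual loader, and it uses os.path as a stand-in module:

    ```python
    import importlib

    def resolve_extension(module_name, function_name):
        """Resolve an extension function from its configured module and function
        names, the way a mapping of module=foo, function=bar identifies foo.bar().
        (Illustration of the concept, not Cedar Backup's loader.)
        """
        module = importlib.import_module(module_name)
        return getattr(module, function_name)
    ```

    For example, resolve_extension("os.path", "join") returns the familiar os.path.join function.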

    Extension functions can take any actions they would like to once they have been invoked, but must abide by these rules:

    1. Extensions may not write to stdout or stderr using functions such as print or sys.stdout.write.

    2. All logging must take place using the Python logging facility. Flow-of-control logging should happen on the CedarBackup3.log topic. Authors can assume that ERROR will always go to the terminal, that INFO and WARN will always be logged, and that DEBUG will be ignored unless debugging is enabled.

    3. Any time an extension invokes a command-line utility, it must be done through the CedarBackup3.util.executeCommand function. This will help keep Cedar Backup safer from format-string attacks, and will make it easier to consistently log command-line process output.

    4. Extensions may not return any value.

    5. Extensions must throw a Python exception containing a descriptive message if processing fails. Extension authors can use their judgement as to what constitutes failure; however, any problems during execution should result in either a thrown exception or a logged message.

    6. Extensions may rely only on Cedar Backup functionality that is advertised as being part of the public interface. This means that extensions cannot directly make use of methods, functions or values starting with the _ character. Furthermore, extensions should only rely on parts of the public interface that are documented in the online Epydoc documentation.

    7. Extension authors are encouraged to extend the Cedar Backup public interface through normal methods of inheritance. However, no extension is allowed to directly change Cedar Backup code in a way that would affect how Cedar Backup itself executes when the extension has not been invoked. For instance, extensions would not be allowed to add new command-line options or new writer types.

    8. Extensions must be written to assume an empty locale set (no $LC_* settings) and $LANG=C. For the typical open-source software project, this would imply writing output-parsing code against the English localization (if any). The executeCommand function does sanitize the environment to enforce this configuration.

    Extension functions take three arguments: the path to configuration on disk, a CedarBackup3.cli.Options object representing the command-line options in effect, and a CedarBackup3.config.Config object representing parsed standard configuration.

    def function(configPath, options, config):
       """Sample extension function."""
       pass
          

    This interface is structured so that simple extensions can use standard configuration without having to parse it for themselves, but more complicated extensions can get at the configuration file on disk and parse it again as needed.
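    A minimal skeleton that follows the rules above might look like this. It is a hypothetical example — the logger name and the function body are assumptions, not shipped code:

    ```python
    import logging

    # Rule 2: flow-of-control logging goes to the CedarBackup3.log topic.
    logger = logging.getLogger("CedarBackup3.log.extend.database")

    def executeAction(configPath, options, config):
        """Hypothetical extension entry point: no stdout/stderr writes, no return
        value, and a descriptive exception raised on failure (rules 1, 4 and 5)."""
        logger.info("Executing the hypothetical database extended action.")
        try:
            pass  # real work goes here, shelling out via CedarBackup3.util.executeCommand
        except Exception as e:
            raise RuntimeError("Database action failed: %s" % e)
    ```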

    The interface to the CedarBackup3.cli.Options and CedarBackup3.config.Config classes has been thoroughly documented using Epydoc, and the documentation is available on the Cedar Backup website. The interface is guaranteed to change only in backwards-compatible ways unless the Cedar Backup major version number is bumped (e.g. from 2 to 3).

    If an extension needs to add its own configuration information to the Cedar Backup configuration file, this extra configuration must be added in a new configuration section using a name that does not conflict with standard configuration or other known extensions.

    For instance, our hypothetical database extension might require configuration indicating the path to some repositories to back up. This information might go into a section something like this:

    <database>
       <repository>/path/to/repo1</repository>
       <repository>/path/to/repo2</repository>
    </database>
          

    In order to read this new configuration, the extension code can either inherit from the Config object and create a subclass that knows how to parse the new database config section, or can write its own code to parse whatever it needs out of the file. Either way, the resulting code is completely independent of the standard Cedar Backup functionality.
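    For the second approach — parsing the file independently — a sketch using the standard library might look like this. It is illustrative only and assumes the hypothetical <database> section sits directly under the document root:

    ```python
    import xml.etree.ElementTree as ET

    def parse_repositories(config_path):
        """Pull repository paths out of a hypothetical <database> section,
        parsing the configuration file independently of the standard Config
        class.  (Sketch only; element layout is an assumption.)
        """
        tree = ET.parse(config_path)
        section = tree.getroot().find("database")
        if section is None:
            return []
        return [repo.text for repo in section.findall("repository")]
    ```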


    Mbox Extension

    The Mbox Extension is a Cedar Backup extension used to incrementally back up UNIX-style mbox mail folders via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action.

    Mbox mail folders are not well-suited to being backed up by the normal Cedar Backup incremental backup process. This is because active folders are typically appended to on a daily basis. This forces the incremental backup process to back them up every day in order to avoid losing data. This can result in quite a bit of wasted space when backing up large mail folders.

    What the mbox extension does is leverage the grepmail utility to back up only email messages which have been received since the last incremental backup. This way, even if a folder is added to every day, only the recently-added messages are backed up. This can potentially save a lot of space.

    Each configured mbox file or directory can be backed up using the same collect modes allowed for filesystems in the standard Cedar Backup collect action (weekly, daily, incremental) and the output can be compressed using either gzip or bzip2.
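    The compression side of this can be illustrated with the standard library. This is a sketch of the concept only, not the extension's actual code, and compress_backup is a hypothetical name:

    ```python
    import bz2
    import gzip
    import shutil

    def compress_backup(source_path, compress_mode):
        """Compress a collected file according to a compress mode of none,
        gzip, or bzip2, using Python's standard gzip and bz2 modules.
        (Illustrative sketch of the concept.)
        """
        if compress_mode == "none":
            return source_path
        openers = {"gzip": (gzip.open, ".gz"), "bzip2": (bz2.open, ".bz2")}
        if compress_mode not in openers:
            raise ValueError("Unknown compress mode: %s" % compress_mode)
        opener, suffix = openers[compress_mode]
        target_path = source_path + suffix
        with open(source_path, "rb") as source, opener(target_path, "wb") as target:
            shutil.copyfileobj(source, target)  # stream, so large folders fit in memory
        return target_path
    ```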

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>mbox</name>
          <module>CedarBackup3.extend.mbox</module>
          <function>executeAction</function>
          <index>99</index>
       </action>
    </extensions>
          

    This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own mbox configuration section. This is an example mbox configuration section:

    <mbox>
       <collect_mode>incr</collect_mode>
       <compress_mode>gzip</compress_mode>
       <file>
          <abs_path>/home/user1/mail/greylist</abs_path>
          <collect_mode>daily</collect_mode>
       </file>
       <dir>
          <abs_path>/home/user2/mail</abs_path>
       </dir>
       <dir>
          <abs_path>/home/user3/mail</abs_path>
          <exclude>
             <rel_path>spam</rel_path>
             <pattern>.*debian.*</pattern>
          </exclude>
       </dir>
    </mbox>
          

    Configuration is much like the standard collect action. Differences come from the fact that mbox directories are not collected recursively.

    Unlike collect configuration, exclusion information can only be configured at the mbox directory level (there are no global exclusions). Another difference is that no absolute exclusion paths are allowed — only relative path exclusions and patterns.

    The following elements are part of the mbox configuration section:

    collect_mode

    Default collect mode.

    The collect mode describes how frequently an mbox file or directory is backed up. The mbox extension recognizes the same collect modes as the standard Cedar Backup collect action (see Chapter2, Basic Concepts).

    This value is the collect mode that will be used by default during the backup process. Individual files or directories (below) may override this value. If all individual files or directories provide their own value, then this default value may be omitted from configuration.

    Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Default compress mode.

    Mbox file or directory backups are just text, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all.

    This value is the compress mode that will be used by default during the backup process. Individual files or directories (below) may override this value. If all individual files or directories provide their own value, then this default value may be omitted from configuration.

    Restrictions: Must be one of none, gzip or bzip2.

    file

    An individual mbox file to be collected.

    This is a subsection which contains information about an individual mbox file to be backed up.

    This section can be repeated as many times as is necessary. At least one mbox file or directory must be configured.

    The file subsection contains the following fields:

    collect_mode

    Collect mode for this file.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Compress mode for this file.

    This field is optional. If it doesn't exist, the backup will use the default compress mode.

    Restrictions: Must be one of none, gzip or bzip2.

    abs_path

    Absolute path of the mbox file to back up.

    Restrictions: Must be an absolute path.

    dir

    An mbox directory to be collected.

    This is a subsection which contains information about an mbox directory to be backed up. An mbox directory is a directory containing mbox files. Every file in an mbox directory is assumed to be an mbox file. Mbox directories are not collected recursively. Only the files immediately within the configured directory will be backed up, and any subdirectories will be ignored.

    This section can be repeated as many times as is necessary. At least one mbox file or directory must be configured.

    The dir subsection contains the following fields:

    collect_mode

    Collect mode for this directory.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Compress mode for this directory.

    This field is optional. If it doesn't exist, the backup will use the default compress mode.

    Restrictions: Must be one of none, gzip or bzip2.

    abs_path

    Absolute path of the mbox directory to back up.

    Restrictions: Must be an absolute path.

    exclude

    List of paths or patterns to exclude from the backup.

    This is a subsection which contains a set of paths and patterns to be excluded within this mbox directory.

    This section is entirely optional, and if it exists can also be empty.

    The exclude subsection can contain one or more of each of the following fields:

    rel_path

    A relative path to be excluded from the backup.

    The path is assumed to be relative to the mbox directory itself. For instance, if the configured mbox directory is /home/user2/mail, a configured relative path of SPAM would exclude the path /home/user2/mail/SPAM.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty.

    pattern

    A pattern to be excluded from the backup.

    The pattern must be a Python regular expression. [21] It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $).

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty.
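    The anchoring behavior described above corresponds to Python's re.fullmatch. A small illustration, where matches_exclusion is a hypothetical name:

    ```python
    import re

    def matches_exclusion(pattern, name):
        """Apply an exclusion pattern as if bounded by ^ and $, which is
        exactly what re.fullmatch does.  (Sketch of the documented anchoring.)
        """
        return re.fullmatch(pattern, name) is not None
    ```

    So the pattern .*debian.* from the example excludes any file whose entire name contains "debian", while a bare pattern of debian excludes only a file named exactly debian.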


    Organization of This Manual

    Chapter 1, Introduction

    Provides some general history about Cedar Backup, what needs it is intended to meet, how to get support, and how to migrate from version 2 to version 3.

    Chapter 2, Basic Concepts

    Discusses the basic concepts of a Cedar Backup infrastructure, and specifies terms used throughout the rest of the manual.

    Chapter 3, Installation

    Explains how to install the Cedar Backup package either from the Python source distribution or from the Debian package.

    Chapter 4, Command Line Tools

    Discusses the various Cedar Backup command-line tools, including the primary cback3 command.

    Chapter 5, Configuration

    Provides detailed information about how to configure Cedar Backup.

    Chapter 6, Official Extensions

    Describes each of the officially-supported Cedar Backup extensions.

    Appendix A, Extension Architecture Interface

    Specifies the Cedar Backup extension architecture interface, through which third party developers can write extensions to Cedar Backup.

    Appendix B, Dependencies

    Provides some additional information about the packages which Cedar Backup relies on, including information about how to find documentation and packages on non-Debian systems.

    Appendix C, Data Recovery

    Cedar Backup provides no facility for restoring backups, assuming the administrator can handle this infrequent task. This appendix provides some notes for administrators to work from.

    Appendix D, Securing Password-less SSH Connections

    Password-less SSH connections are a necessary evil when remote backup processes need to execute without human interaction. This appendix describes some ways that you can reduce the risk to your backup pool should your master machine be compromised.


    Appendix D. Securing Password-less SSH Connections

    Cedar Backup relies on password-less public key SSH connections to make various parts of its backup process work. Password-less scp is used to stage files from remote clients to the master, and password-less ssh is used to execute actions on managed clients.

    Normally, it is a good idea to avoid password-less SSH connections in favor of using an SSH agent. The SSH agent manages your SSH connections so that you don't need to type your passphrase over and over. You get most of the benefits of a password-less connection without the risk. Unfortunately, because Cedar Backup has to execute without human involvement (through a cron job), use of an agent really isn't feasible. We have to rely on true password-less public keys to give the master access to the client peers.

    Traditionally, Cedar Backup has relied on a segmenting strategy to minimize the risk. Although the backup typically runs as root — so that all parts of the filesystem can be backed up — we don't use the root user for network connections. Instead, we use a dedicated backup user on the master to initiate network connections, and dedicated users on each of the remote peers to accept network connections.

    With this strategy in place, an attacker with access to the backup user on the master (or even root access, really) can at best only get access to the backup user on the remote peers. We still concede a local attack vector, but at least that vector is restricted to an unprivileged user.

    Some Cedar Backup users may not be comfortable with this risk, and others may not be able to implement the segmentation strategy — they simply may not have a way to create a login which is only used for backups.

    So, what are these users to do? Fortunately there is a solution. The SSH authorized keys file supports a way to put a filter in place on an SSH connection. This excerpt is from the AUTHORIZED_KEYS FILE FORMAT section of man 8 sshd:

    command="command"
       Specifies that the command is executed whenever this key is used for
       authentication.  The command supplied by the user (if any) is ignored.  The
       command is run on a pty if the client requests a pty; otherwise it is run
       without a tty.  If an 8-bit clean channel is required, one must not request
       a pty or should specify no-pty.  A quote may be included in the command by
       quoting it with a backslash.  This option might be useful to restrict
       certain public keys to perform just a specific operation.  An example might
       be a key that permits remote backups but nothing else.  Note that the client
       may specify TCP and/or X11 forwarding unless they are explicitly prohibited.
       Note that this option applies to shell, command or subsystem execution.
          

    Essentially, this gives us a way to authenticate the commands that are being executed. We can either accept or reject commands, and we can even provide a readable error message for commands we reject. The filter is applied on the remote peer, to the key that provides the master access to the remote peer.

    So, let's imagine that we have two hosts: master mickey, and peer minnie. Here is the original ~/.ssh/authorized_keys file for the backup user on minnie (remember, this is all on one line in the file):

    ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAxw7EnqVULBFgPcut3WYp3MsSpVB9q9iZ+awek120391k;mm0c221=3=km
    =m=askdalkS82mlF7SusBTcXiCk1BGsg7axZ2sclgK+FfWV1Jm0/I9yo9FtAZ9U+MmpL901231asdkl;ai1-923ma9s=9=
    1-2341=-a0sd=-sa0=1z= backup@mickey
          

    This line is the public key that minnie can use to identify the backup user on mickey. Assuming that there is no passphrase on the private key back on mickey, the backup user on mickey can get direct access to minnie.

    To put the filter in place, we add a command option to the key, like this:

    command="/opt/backup/validate-backup" ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAxw7EnqVULBFgPcut3WYp
    3MsSpVB9q9iZ+awek120391k;mm0c221=3=km=m=askdalkS82mlF7SusBTcXiCk1BGsg7axZ2sclgK+FfWV1Jm0/I9yo9F
    tAZ9U+MmpL901231asdkl;ai1-923ma9s=9=1-2341=-a0sd=-sa0=1z= backup@mickey
          

    Basically, the command option says that whenever this key is used to successfully initiate a connection, the /opt/backup/validate-backup command will be run instead of the real command that came over the SSH connection. Fortunately, the interface gives the command access to certain shell variables that can be used to invoke the original command if you want to.

    A very basic validate-backup script might look something like this:

    #!/bin/bash
    if [[ "${SSH_ORIGINAL_COMMAND}" == "ls -l" ]] ; then
        ${SSH_ORIGINAL_COMMAND}
    else
        echo "Security policy does not allow command [${SSH_ORIGINAL_COMMAND}]."
        exit 1
    fi
          

    This script allows exactly ls -l and nothing else. If the user attempts some other command, they get a nice error message telling them that their command has been disallowed.
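    The same allowlist idea can be sketched in Python; this partial example uses fnmatch so that glob-style entries (such as an scp pattern ending in *) behave like shell globs. It is only an illustration under assumed paths — a real policy script should be stricter, for instance rejecting commands containing "..":

    ```python
    import fnmatch
    import os
    import sys

    # Hypothetical allowlist; /path/to/collect is a placeholder, not a real path.
    ALLOWED = [
        "ls -l",
        "scp -f /path/to/collect/cback.collect",
        "scp -f /path/to/collect/*",
        "scp -t /path/to/collect/cback.stage",
    ]

    def command_allowed(command, allowed=ALLOWED):
        """Return True if the command matches one of the allowed patterns
        (fnmatch-style, so a '*' entry behaves like a shell glob)."""
        return any(fnmatch.fnmatch(command, pattern) for pattern in allowed)

    def main():
        # An SSH forced command sees the caller's request in SSH_ORIGINAL_COMMAND.
        command = os.environ.get("SSH_ORIGINAL_COMMAND", "")
        if not command_allowed(command):
            print("Security policy does not allow command [%s]." % command)
            sys.exit(1)
        os.system(command)  # run only the vetted command
    ```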

    For remote commands executed over ssh, the original command is exactly what the caller attempted to invoke. For remote copies, the commands are either scp -f file (copy from the peer to the master) or scp -t file (copy to the peer from the master).

    If you want, you can see what command SSH thinks it is executing by using ssh -v or scp -v. The command will be right at the top, something like this:

    Executing: program /usr/bin/ssh host mickey, user (unspecified), command scp -v -f .profile
    OpenSSH_4.3p2 Debian-9, OpenSSL 0.9.8c 05 Sep 2006
    debug1: Reading configuration data /home/backup/.ssh/config
    debug1: Applying options for daystrom
    debug1: Reading configuration data /etc/ssh/ssh_config
    debug1: Applying options for *
    debug2: ssh_connect: needpriv 0
          

    Omit the -v and you have your command: scp -f .profile.

    For a normal, non-managed setup, you need to allow the following commands, where /path/to/collect/ is replaced with the real path to the collect directory on the remote peer:

    scp -f /path/to/collect/cback.collect
    scp -f /path/to/collect/*
    scp -t /path/to/collect/cback.stage
          

    If you are configuring a managed client, then you also need to list the exact command lines that the master will be invoking on the managed client. You are guaranteed that the master will invoke one action at a time, so if you list two lines per action (full and non-full) you should be fine. Here's an example for the collect action:

    /usr/bin/cback3 --full collect
    /usr/bin/cback3 collect
          

    Of course, you would have to list the actual path to the cback3 executable — exactly the one listed in the <cback_command> configuration option for your managed peer.

    I hope that there is enough information here for interested users to implement something that makes them comfortable. I have resisted providing a complete example script, because I think everyone's setup will be different. However, feel free to write if you are working through this and you have questions.

    /* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
     *
     *              C E D A R
     *          S O L U T I O N S       "Software done right."
     *           S O F T W A R E
     *
     * Author   : Kenneth J. Pronovici
     * Language : CSS
     * Project  : Cedar Backup, release 3
     * Purpose  : Custom stylesheet applied to user manual in HTML form.
     *
     * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */

    /* This stylesheet was originally taken from the Subversion project's book
       (http://svnbook.red-bean.com/).  I have not made any modifications to the
       sheet for use with Cedar Backup.  The original stylesheet was
       (c) 2000-2004 CollabNet (see CREDITS). */

    BODY { background: white; margin: 0.5in; font-family: arial,helvetica,sans-serif; }
    H1.title { font-size: 250%; font-style: normal; font-weight: bold; color: black; }
    H2.subtitle { font-size: 150%; font-style: italic; color: black; }
    H2.title { font-size: 150%; font-style: normal; font-weight: bold; color: black; }
    H3.title { font-size: 125%; font-style: normal; font-weight: bold; color: black; }
    H4.title { font-size: 100%; font-style: normal; font-weight: bold; color: black; }
    .toc B { font-size: 125%; font-style: normal; font-weight: bold; color: black; }
    P,LI,UL,OL,DD,DT { font-style: normal; font-weight: normal; color: black; }
    TT,PRE { font-family: courier new,courier,fixed; }
    .command, .screen, .programlisting { font-family: courier new,courier,fixed; font-style: normal; font-weight: normal; }
    .filename { font-family: arial,helvetica,sans-serif; font-style: italic; }
    A { color: blue; text-decoration: underline; }
    A:hover { background: rgb(75%,75%,100%); color: blue; text-decoration: underline; }
    A:visited { color: purple; text-decoration: underline; }
    IMG { border: none; }
    .figure, .example, .table { margin: 0.125in 0.5in; }
    .table TABLE { border: 1px rgb(180,180,200) solid; border-spacing: 0px; }
    .table TD { border: 1px rgb(180,180,200) solid; }
    .table TH { background: rgb(180,180,200); border: 1px rgb(180,180,200) solid; }
    .table P.title, .figure P.title, .example P.title { text-align: left !important; font-size: 100% !important; }
    .author { font-size: 100%; font-style: italic; font-weight: normal; color: black; }
    .sidebar { border: 2px black solid; background: rgb(230,230,235); padding: 0.12in; margin: 0 0.5in; }
    .sidebar P.title { text-align: center; font-size: 125%; }
    .tip { border: black solid 1px; background: url(./images/info.png) no-repeat; margin: 0.12in 0; padding: 0 55px; }
    .warning { border: black solid 1px; background: url(./images/warning.png) no-repeat; margin: 0.12in 0; padding: 0 55px; }
    .note { border: black solid 1px; background: url(./images/note.png) no-repeat; margin: 0.12in 0; padding: 0 55px; }
    .programlisting, .screen { font-family: courier new,courier,fixed; font-style: normal; font-weight: normal; font-size: 90%; color: black; margin: 0 0.5in; }
    .navheader, .navfooter { border: black solid 1px; background: rgb(180,180,200); }
    .navheader HR, .navfooter HR { display: none; }

    Appendix B. Dependencies

    Python 3.4 (or later)

    If you can't find a package for your system, install from the package source, using the upstream link.

    RSH Server and Client

    Although Cedar Backup will technically work with any RSH-compatible server and client pair (such as the classic rsh client), most users should only use an SSH (secure shell) server and client.

    The de facto standard today is OpenSSH. Some systems package the server and the client together, and others package the server and the client separately. Note that master nodes need an SSH client, and client nodes need to run an SSH server.

    If you can't find SSH client or server packages for your system, install from the package source, using the upstream link.

    mkisofs

    The mkisofs command is used to create ISO filesystem images that can later be written to backup media.

    On Debian platforms, mkisofs is not distributed and genisoimage is used instead. The Debian package takes care of this for you.

    If you can't find a package for your system, install from the package source, using the upstream link.

    cdrecord

    The cdrecord command is used to write ISO images to CD media in a backup device.

    On Debian platforms, cdrecord is not distributed and wodim is used instead. The Debian package takes care of this for you.

    If you can't find a package for your system, install from the package source, using the upstream link.

    dvd+rw-tools

    The dvd+rw-tools package provides the growisofs utility, which is used to write ISO images to DVD media in a backup device.

    If you can't find a package for your system, install from the package source, using the upstream link.

    eject and volname

    The eject command is used to open and close the tray on a backup device (if the backup device has a tray). Sometimes, the tray must be opened and closed in order to "reset" the device so it notices recent changes to a disc.

    The volname command is used to determine the volume name of media in a backup device.

    If you can't find a package for your system, install from the package source, using the upstream link.

    mount and umount

    The mount and umount commands are used to mount and unmount CD/DVD media after it has been written, in order to run a consistency check.

    If you can't find a package for your system, install from the package source, using the upstream link.

    grepmail

    The grepmail command is used by the mbox extension to pull out only recent messages from mbox mail folders.

    If you can't find a package for your system, install from the package source, using the upstream link.

    gpg

    The gpg command is used by the encrypt extension to encrypt files.

    If you can't find a package for your system, install from the package source, using the upstream link.

    split

    The split command is used by the split extension to split up large files.

    This command is typically part of the core operating system install and is not distributed in a separate package.

    AWS CLI

    AWS CLI is Amazon's official command-line tool for interacting with the Amazon Web Services infrastructure. Cedar Backup uses AWS CLI to copy backup data up to Amazon S3 cloud storage.

    After you install AWS CLI, you need to configure your connection to AWS with an appropriate access id and access key. Amazon provides a good setup guide.

    The initial implementation of the amazons3 extension was written using AWS CLI 1.4. As of this writing, not all Linux distributions include a package for this version. On these platforms, the easiest way to install it is via pip: apt-get install python3-pip, and then pip3 install awscli. The Debian package includes an appropriate dependency starting with the jessie release.

    Chardet

    The cback3-amazons3-sync command relies on the Chardet Python package to check filename encoding. You only need this package if you are going to use the sync tool.

    CedarBackup3-3.1.6/doc/manual/ch05s05.html

    Setting up a Master Peer Node

    Cedar Backup has been designed to back up entire pools of machines. In any given pool, there is one master and some number of clients. Most of the work takes place on the master, so configuring a master is somewhat more complicated than configuring a client.

    Backups are designed to take place over an RSH or SSH connection. Because RSH is generally considered insecure, you are encouraged to use SSH rather than RSH. This document will only describe how to configure Cedar Backup to use SSH; if you want to use RSH, you're on your own.

    Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or whichever other user receives root's email). If you don't receive any emails, then you know your backup worked.

    Note: all of these configuration steps should be run as the root user, unless otherwise indicated.

    Tip

    This setup procedure discusses how to set up Cedar Backup in the normal case for a master. If you would like to modify the way Cedar Backup works (for instance, by ignoring the store stage and just letting your backup sit in a staging directory), you can do that. You'll just have to modify the procedure below based on information in the remainder of the manual.

    Step 1: Decide when you will run your backup.

    There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of setting off these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later.

    Keep in mind that you do not necessarily have to run the collect action on the master. See notes further below for more information.

    Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly. Choose a backup time that will not interfere with normal use of your computer. Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc.

    Warning

    Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week. This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week. If you skip running Cedar Backup on the first day of the week, your backups will likely be confused until the next week begins, or until you re-run the backup using the --full flag.

    Step 2: Make sure email works.

    Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job. Since by default Cedar Backup only writes output to the terminal if errors occur, this neatly ensures that notification emails will only be sent out if errors occur.

    In order to receive problem notifications, you must make sure that email works for the user which is running the Cedar Backup cron jobs (typically root). Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors.

    Step 3: Configure your writer device.

    Before using Cedar Backup, your writer device must be properly configured. If you have configured your CD/DVD writer hardware to work through the normal filesystem device path, then you just need to know the path to the device on disk (something like /dev/cdrw). Cedar Backup will use this device path both when talking to a command like cdrecord and when doing filesystem operations like running media validation.

    Your other option is to configure your CD writer hardware like a SCSI device (either because it is a SCSI device or because you are using some sort of interface that makes it look like one). In this case, Cedar Backup will use the SCSI id when talking to cdrecord and the device path when running filesystem operations.

    See the section called “Configuring your Writer Device” for more information on writer devices and how they are configured.

    Note

    There is no need to set up your CD/DVD device if you have decided not to execute the store action.

    Due to the underlying utilities that Cedar Backup uses, the SCSI id may only be used for CD writers, not DVD writers.

    Step 4: Configure your backup user.

    Choose a user to be used for backups. Some platforms may come with a ready-made backup user. For other platforms, you may have to create a user yourself. You may choose any id you like, but a descriptive name such as backup or cback is a good choice. See your distribution's documentation for information on how to add a user.

    Note

    Standard Debian systems come with a user named backup. You may choose to stay with this user or create another one.

    Once you have created your backup user, you must create an SSH keypair for it. Log in as your backup user, and then run the command ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa:

    user@machine> ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    Generating public/private rsa key pair.
    Created directory '/home/user/.ssh'.
    Your identification has been saved in /home/user/.ssh/id_rsa.
    Your public key has been saved in /home/user/.ssh/id_rsa.pub.
    The key fingerprint is:
    11:3e:ad:72:95:fe:96:dc:1e:3b:f4:cc:2c:ff:15:9e user@machine
             

    The default permissions for this directory should be fine. However, if the directory existed before you ran ssh-keygen, then you may need to modify the permissions. Make sure that the ~/.ssh directory is readable only by the backup user (i.e. mode 700), that the ~/.ssh/id_rsa file is only readable and writable by the backup user (i.e. mode 600) and that the ~/.ssh/id_rsa.pub file is writable only by the backup user (i.e. mode 600 or mode 644).
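    The recommended modes can be applied or double-checked with a few shell commands. This is a sketch against a scratch directory standing in for the backup user's home; on a real system you would run the chmod commands against ~/.ssh directly.

    ```shell
    # Demonstrate the recommended modes against a scratch "home" directory;
    # substitute the backup user's real ~/.ssh on a live system.
    HOME_DIR="$(mktemp -d)"                 # stand-in for /home/backup
    mkdir -p "$HOME_DIR/.ssh"
    touch "$HOME_DIR/.ssh/id_rsa" "$HOME_DIR/.ssh/id_rsa.pub"
    chmod 700 "$HOME_DIR/.ssh"              # directory readable only by owner
    chmod 600 "$HOME_DIR/.ssh/id_rsa"       # private key: owner read/write only
    chmod 644 "$HOME_DIR/.ssh/id_rsa.pub"   # public key: world-readable is fine
    stat -c '%a %n' "$HOME_DIR/.ssh" "$HOME_DIR/.ssh"/id_rsa*
    ```

    The stat command at the end just prints the resulting octal modes so you can confirm them at a glance.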

    If you have other preferences or standard ways of setting up your users' SSH configuration (i.e. different key type, etc.), feel free to do things your way. The important part is that the master must be able to SSH into a client with no password entry required.

    Step 5: Create your backup tree.

    Cedar Backup requires a backup directory tree on disk. This directory tree must be large enough to hold roughly twice as much data as will be backed up from the entire pool on a given night, plus space for whatever is collected on the master itself. This will allow all three operations - collect, stage and store - enough space to complete. Note that if you elect not to purge the staging directory every night, you will need even more space.

    You should create a collect directory, a staging directory and a working (temporary) directory. One recommended layout is this:

    /opt/
         backup/
                collect/
                stage/
                tmp/
             

    If you will be backing up sensitive information (i.e. password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700.
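    As a sketch, the layout and permissions above can be created with a couple of commands. The scratch root below is illustrative; on a real system, substitute /opt/backup (or your chosen location) and chown the tree to your backup user.

    ```shell
    # Create the collect/stage/tmp tree with owner-only permissions.
    # BACKUP_ROOT here is a scratch directory for illustration; use
    # /opt/backup (or similar) for real.
    BACKUP_ROOT="$(mktemp -d)"
    mkdir -p "$BACKUP_ROOT"/collect "$BACKUP_ROOT"/stage "$BACKUP_ROOT"/tmp
    chmod 700 "$BACKUP_ROOT" "$BACKUP_ROOT"/collect "$BACKUP_ROOT"/stage "$BACKUP_ROOT"/tmp
    ls "$BACKUP_ROOT"
    ```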

    Note

    You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my dumping ground for filesystems that Debian does not manage.

    Some users have requested that the Debian packages set up a more standard location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package. If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp.

    Step 6: Create the Cedar Backup configuration file.

    Following the instructions in the section called “Configuration File Format” (above), create a configuration file for your machine. Since you are working with a master machine, you would typically configure all four action-specific sections: collect, stage, store and purge.

    Note

    Note that the master can treat itself as a client peer for certain actions. As an example, if you run the collect action on the master, then you will stage that data by configuring a local peer representing the master.

    Something else to keep in mind is that you do not really have to run the collect action on the master. For instance, you may prefer to use your master machine simply as a consolidation point that collects data from the other client machines in a backup pool. In that case, there is no need to collect data on the master itself.

    The usual location for the Cedar Backup config file is /etc/cback3.conf. If you change the location, make sure you edit your cronjobs (below) to point the cback3 script at the correct config file (using the --config option).

    Warning

    Configuration files should always be writable only by root (or by the file owner, if the owner is not root).

    If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately. For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root).

    Step 7: Validate the Cedar Backup configuration file.

    Use the command cback3 validate to validate your configuration file. This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems, such as invalid CD/DVD device entries. This command only validates configuration on the master, not any clients that the master might be configured to connect to.

    Note: the most common cause of configuration problems is in not closing XML tags properly. Any XML tag that is opened must be closed appropriately.

    Step 8: Test connectivity to client machines.

    This step must wait until after your client machines have been at least partially configured. Once the backup user(s) have been configured on the client machine(s) in a pool, attempt an SSH connection to each client.

    Log in as the backup user on the master, and then use the command ssh user@machine, where user is the name of the backup user on the client machine, and machine is the name of the client machine.

    If you are able to log in successfully to each client without entering a password, then things have been configured properly. Otherwise, double-check that you followed the user setup instructions for the master and the clients.

    Step 9: Test your backup.

    Make sure that you have configured all of the clients in your backup pool. On all of the clients, execute cback3 --full collect. (You will probably have already tested this command on each of the clients, so it should succeed.)

    When all of the client backups have completed, place a valid CD/DVD disc in your drive, and then use the command cback3 --full all. You should execute this command as root. If the command completes with no output, then the backup was run successfully.

    Just to be sure that everything worked properly, check the logfile (/var/log/cback3.log) on the master and each of the clients, and also mount the CD/DVD disc on the master to be sure it can be read.

    You may also want to run cback3 purge on the master and each client once you have finished validating that everything worked.

    If Cedar Backup ever completes normally but the disc that is created is not usable, please report this as a bug. [22] To be safe, always enable the consistency check option in the store configuration section.

    Step 10: Modify the backup cron jobs.

    Since Cedar Backup should be run as root, you should add a set of lines like this to your /etc/crontab file:

    30 00 * * * root  cback3 collect
    30 02 * * * root  cback3 stage
    30 04 * * * root  cback3 store
    30 06 * * * root  cback3 purge
             

    You should consider adding the --output or -O switch to your cback3 command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously.

    You will need to coordinate the collect and purge actions on clients so that their collect actions complete before the master attempts to stage, and so that their purge actions do not begin until after the master has completed staging. Usually, allowing an hour or two between steps should be sufficient. [23]
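    For example, a client crontab consistent with the master schedule above might look like this. The times are illustrative assumptions; the point is that the client's collect finishes well before the master stages at 02:30, and the client's purge waits until the master has long since finished staging:

    ```
    30 00 * * * root  cback3 collect
    30 06 * * * root  cback3 purge
    ```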

    Note

    For general information about using cron, see the manpage for crontab(5).

    On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup3. As installed, this file contains several different settings, all commented out. Uncomment the Master machine entries in the file, and change the lines so that the backup goes off when you want it to.

    CedarBackup3-3.1.6/doc/manual/ch06s08.html

    Split Extension

    The Split Extension is a Cedar Backup extension used to split up large files within staging directories. It is probably only useful in combination with the cback3-span command, which requires individual files within staging directories to each be smaller than a single disc.

    You would normally run this action immediately after the standard stage action, but you could also choose to run it by hand immediately before running cback3-span.

    The split extension uses the standard UNIX split tool to split the large files up. This tool simply splits files at fixed byte offsets; it has no knowledge of file formats.

    Note: this means that in order to recover the data in your original large file, you must have every file that the original file was split into. Think carefully about whether this is what you want. It doesn't sound like a huge limitation. However, cback3-span might put an individual file on any disc in a set, and the files split from one larger file will not necessarily be together. That means you will probably need every disc in your backup set in order to recover any data from the backup set.
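    To make the recovery requirement concrete, here is a sketch of what split does and how the pieces are reassembled by hand. The file names and sizes are illustrative; during a backup, Cedar Backup performs the splitting for you.

    ```shell
    # Split a file into fixed-size chunks, then reassemble and verify.
    workdir="$(mktemp -d)"
    cd "$workdir"
    head -c 1000000 /dev/urandom > bigfile.dat    # stand-in for a large staged file
    split -b 300000 bigfile.dat bigfile.dat_      # chunks: bigfile.dat_aa, _ab, ...
    cat bigfile.dat_* > restored.dat              # every chunk is required, in order
    cmp bigfile.dat restored.dat && echo "files match"
    ```

    If even one chunk is missing, the cat step produces a corrupt file, which is why you need every disc that holds a piece of the original.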

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions> 
       <action>
          <name>split</name>
          <module>CedarBackup3.extend.split</module>
          <function>executeAction</function>
          <index>299</index>
       </action>
    </extensions>
          

    This extension relies on the options and staging configuration sections in the standard Cedar Backup configuration file, and then also requires its own split configuration section. This is an example Split configuration section:

    <split>
       <size_limit>250 MB</size_limit>
       <split_size>100 MB</split_size>
    </split>
          

    The following elements are part of the Split configuration section:

    size_limit

    Size limit.

    Files with a size strictly larger than this limit will be split by the extension.

    You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB).

    Valid examples are 10240, 250 MB or 1.1 GB.

    Restrictions: Must be a size as described above.

    split_size

    Split size.

    This is the size of the chunks that a large file will be split into. The final chunk may be smaller if the split size doesn't divide evenly into the file size.

    You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB).

    Valid examples are 10240, 250 MB or 1.1 GB.

    Restrictions: Must be a size as described above.

    CedarBackup3-3.1.6/doc/manual/ch06s07.html

    Encrypt Extension

    The Encrypt Extension is a Cedar Backup extension used to encrypt backups. It does this by encrypting the contents of a master's staging directory each day after the stage action is run. This way, backed-up data is encrypted both when sitting on the master and when written to disc. This extension must be run before the standard store action, otherwise unencrypted data will be written to disc.

    There are several different ways encryption could have been built in to or layered on to Cedar Backup. I asked the mailing list for opinions on the subject in January 2007 and did not get a lot of feedback, so I chose the option that was simplest to understand and simplest to implement. If other encryption use cases make themselves known in the future, this extension can be enhanced or replaced.

    Currently, this extension supports only GPG. However, it would be straightforward to support other public-key encryption mechanisms, such as OpenSSL.

    Warning

    If you decide to encrypt your backups, be absolutely sure that you have your GPG secret key saved off someplace safe — someplace other than on your backup disc. If you lose your secret key, your backup will be useless.

    I suggest that before you rely on this extension, you should execute a dry run and make sure you can successfully decrypt the backup that is written to disc.

    Before configuring the Encrypt extension, you must configure GPG. Either create a new keypair or use an existing one. Determine which user will execute your backup (typically root) and have that user import and lsign the public half of the keypair. Then, save off the secret half of the keypair someplace safe, apart from your backup (i.e. on a floppy disk or USB drive). Make sure you know the recipient name associated with the public key because you'll need it to configure Cedar Backup. (If you can run gpg -e -r "Recipient Name" file.txt and it executes cleanly with no user interaction required, you should be OK.)
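    The gpg -e -r check above can be rehearsed end-to-end in a throwaway keyring, which is also a good way to practice the decryption you would need during a recovery. This sketch assumes GnuPG 2.x is installed; "Backup User" is a hypothetical recipient name, and on a real system you would use your actual key rather than generating a disposable one.

    ```shell
    # Rehearse encrypt/decrypt in an isolated, disposable GNUPGHOME.
    export GNUPGHOME="$(mktemp -d)"
    chmod 700 "$GNUPGHOME"
    cat > "$GNUPGHOME/keyspec" <<'EOF'
    %no-protection
    Key-Type: RSA
    Key-Length: 2048
    Subkey-Type: RSA
    Subkey-Length: 2048
    Name-Real: Backup User
    Expire-Date: 0
    %commit
    EOF
    gpg --batch --gen-key "$GNUPGHOME/keyspec"          # throwaway, unprotected key
    echo 'important data' > "$GNUPGHOME/file.txt"
    gpg --batch --trust-model always -e -r 'Backup User' "$GNUPGHOME/file.txt"
    gpg --batch -o "$GNUPGHOME/restored.txt" -d "$GNUPGHOME/file.txt.gpg"
    cmp "$GNUPGHOME/file.txt" "$GNUPGHOME/restored.txt" && echo "round trip OK"
    ```

    The same -d invocation, run as a user with access to your real secret key, is what you would use to decrypt the .gpg files on your backup disc during a recovery.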

    An encrypted backup has the same file structure as a normal backup, so all of the instructions in Appendix C, Data Recovery apply. The only difference is that encrypted files will have an additional .gpg extension (so for instance file.tar.gz becomes file.tar.gz.gpg). To recover decrypted data, simply log on as a user which has access to the secret key and decrypt the .gpg file that you are interested in. Then, recover the data as usual.

    Note: I am being intentionally vague about how to configure and use GPG, because I do not want to encourage neophytes to blindly use this extension. If you do not already understand GPG well enough to follow the two paragraphs above, do not use this extension. Instead, before encrypting your backups, check out the excellent GNU Privacy Handbook at http://www.gnupg.org/gph/en/manual.html and gain an understanding of how encryption can help you or hurt you.

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>encrypt</name>
          <module>CedarBackup3.extend.encrypt</module>
          <function>executeAction</function>
          <index>301</index>
       </action>
    </extensions>
          

    This extension relies on the options and staging configuration sections in the standard Cedar Backup configuration file, and then also requires its own encrypt configuration section. This is an example Encrypt configuration section:

    <encrypt>
       <encrypt_mode>gpg</encrypt_mode>
       <encrypt_target>Backup User</encrypt_target>
    </encrypt>
          

    The following elements are part of the Encrypt configuration section:

    encrypt_mode

    Encryption mode.

    This value specifies which encryption mechanism will be used by the extension.

    Currently, only the GPG public-key encryption mechanism is supported.

    Restrictions: Must be gpg.

    encrypt_target

    Encryption target.

    The value in this field is dependent on the encryption mode. For the gpg mode, this is the name of the recipient whose public key will be used to encrypt the backup data, i.e. the value accepted by gpg -r.

    CedarBackup3-3.1.6/doc/manual/ch02s06.html

    Managed Backups

    Cedar Backup also supports an optional feature called the managed backup. This feature is intended for use with remote clients where cron is not available.

    When managed backups are enabled, managed clients must still be configured as usual. However, rather than using a cron job on the client to execute the collect and purge actions, the master executes these actions on the client via a remote shell.

    To make this happen, first set up one or more managed clients in Cedar Backup configuration. Then, invoke Cedar Backup with the --managed command-line option. Whenever Cedar Backup invokes an action locally, it will invoke the same action on each of the managed clients.

    Technically, this feature works for any client, not just clients that don't have cron available. Used this way, it can simplify the setup process, because cron only has to be configured on the master. For some users, that may be motivation enough to use this feature all of the time.

    However, please keep in mind that this feature depends on a stable network. If your network connection drops, your backup will be interrupted and will not be complete. It is even possible that some of the Cedar Backup metadata (like incremental backup state) will be corrupted. The risk is not high, but it is something you need to be aware of if you choose to use this optional feature.

    CedarBackup3-3.1.6/doc/manual/ch05.html

    Chapter 5. Configuration

    Table of Contents

    Overview
    Configuration File Format
    Sample Configuration File
    Reference Configuration
    Options Configuration
    Peers Configuration
    Collect Configuration
    Stage Configuration
    Store Configuration
    Purge Configuration
    Extensions Configuration
    Setting up a Pool of One
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure your writer device.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test your backup.
    Step 9: Modify the backup cron jobs.
    Setting up a Client Peer Node
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure the master in your backup pool.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test your backup.
    Step 9: Modify the backup cron jobs.
    Setting up a Master Peer Node
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure your writer device.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test connectivity to client machines.
    Step 9: Test your backup.
    Step 10: Modify the backup cron jobs.
    Configuring your Writer Device
    Device Types
    Devices identified by device name
    Devices identified by SCSI id
    Linux Notes
    Finding your Linux CD Writer
    Mac OS X Notes
    Optimized Blanking Strategy

    Overview

    Configuring Cedar Backup is unfortunately somewhat complicated. The good news is that once you get through the initial configuration process, you'll hardly ever have to change anything. Even better, the most typical changes (i.e. adding and removing directories from a backup) are easy.

    First, familiarize yourself with the concepts in Chapter 2, Basic Concepts. In particular, be sure that you understand the differences between a master and a client. (If you only have one machine, then your machine will act as both a master and a client, and we'll refer to your setup as a pool of one.) Then, install Cedar Backup per the instructions in Chapter 3, Installation.

    Once everything has been installed, you are ready to begin configuring Cedar Backup. Look over the section called “The cback3 command” (in Chapter 4, Command Line Tools) to become familiar with the command line interface. Then, look over the section called “Configuration File Format” (below) and create a configuration file for each peer in your backup pool. To start with, create a very simple configuration file, then expand it later. Decide now whether you will store the configuration file in the standard place (/etc/cback3.conf) or in some other location.

    After you have all of the configuration files in place, configure each of your machines, following the instructions in the appropriate section below (for master, client or pool of one). Since the master and client(s) must communicate over the network, you won't be able to fully configure the master without configuring each client and vice-versa. The instructions are clear on what needs to be done.

    CedarBackup3-3.1.6/doc/manual/ch01.html

    Chapter 1. Introduction

    Only wimps use tape backup: real men just upload their important stuff on ftp, and let the rest of the world mirror it.— Linus Torvalds, at the release of Linux 2.0.8 in July of 1996.

    What is Cedar Backup?

    Cedar Backup is a software package designed to manage system backups for a pool of local and remote machines. Cedar Backup understands how to back up filesystem data as well as MySQL and PostgreSQL databases and Subversion repositories. It can also be easily extended to support other kinds of data sources.

    Cedar Backup is focused around weekly backups to a single CD or DVD disc, with the expectation that the disc will be changed or overwritten at the beginning of each week. If your hardware is new enough (and almost all hardware is today), Cedar Backup can write multisession discs, allowing you to add incremental data to a disc on a daily basis.

    Alternately, Cedar Backup can write your backups to the Amazon S3 cloud rather than relying on physical media.

    Besides offering command-line utilities to manage the backup process, Cedar Backup provides a well-organized library of backup-related functionality, written in the Python 3 programming language.

    There are many different backup software implementations out there in the open source world. Cedar Backup aims to fill a niche: it aims to be a good fit for people who need to back up a limited amount of important data on a regular basis. Cedar Backup isn't for you if you want to back up your huge MP3 collection every night, or if you want to back up a few hundred machines. However, if you administer a small set of machines and you want to run daily incremental backups for things like system configuration, current email, small web sites, Subversion or Mercurial repositories, or small MySQL databases, then Cedar Backup is probably worth your time.

    Cedar Backup has been developed on a Debian GNU/Linux system and is primarily supported on Debian and other Linux systems. However, since it is written in portable Python 3, it should run without problems on just about any UNIX-like operating system. In particular, full Cedar Backup functionality is known to work on Debian and SuSE Linux systems, and client functionality is also known to work on FreeBSD and Mac OS X systems.

    To run a Cedar Backup client, you really just need a working Python 3 installation. To run a Cedar Backup master, you will also need a set of other executables, most of which are related to building and writing CD/DVD images or talking to the Amazon S3 infrastructure. A full list of dependencies is provided in the section called “Installing Dependencies”.


    Amazon S3 Extension

    The Amazon S3 extension writes data to Amazon S3 cloud storage rather than to physical media. It is intended to replace the store action, but you can also use it alongside the store action if you'd prefer to back up your data in more than one place. This extension must be run after the stage action.

    The underlying functionality relies on the AWS CLI toolset. Before you use this extension, you need to set up your Amazon S3 account and configure the AWS CLI as detailed in Amazon's setup guide. The extension assumes that the backup is being executed as root, and switches over to the configured backup user to run the aws program. So, make sure you configure the AWS CLI tools as the backup user and not as root. (This is different from the amazons3 sync tool, which executes AWS CLI commands as the same user that is running the tool.)

    When using physical media via the standard store action, there is an implicit limit to the size of a backup, since a backup must fit on a single disc. Since there is no physical media, no such limit exists for Amazon S3 backups. This leaves open the possibility that Cedar Backup might construct an unexpectedly large backup that the administrator is not aware of. Over time, this might become expensive, either in terms of network bandwidth or in terms of Amazon S3 storage and I/O charges. To mitigate this risk, set a reasonable maximum size using the configuration elements shown below. If the backup fails, you have a chance to review what made the backup larger than you expected, and you can either correct the problem (e.g., remove a large temporary directory that was inadvertently included in the backup) or change configuration to take into account the new "normal" maximum size.

    You can optionally configure Cedar Backup to encrypt data before sending it to S3. To do that, provide a complete command line using the ${input} and ${output} variables to represent the original input file and the encrypted output file. This command will be executed as the backup user.

    For instance, you can use something like this with GPG:

    /usr/bin/gpg -c --no-use-agent --batch --yes --passphrase-file /home/backup/.passphrase -o ${output} ${input}
          

    The GPG mechanism depends on a strong passphrase for security. One way to generate a strong passphrase is to use your system random number generator, for example:

    dd if=/dev/urandom count=20 bs=1 | xxd -ps
          

    (See StackExchange for more details about that advice.) If you decide to use encryption, make sure you save off the passphrase in a safe place, so you can get at your backup data later if you need to. And obviously, make sure to set permissions on the passphrase file so it can only be read by the backup user.
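    The passphrase advice above can be sketched in a couple of shell commands. This is a minimal illustration, not part of Cedar Backup: it uses the POSIX od in place of xxd, and a throwaway path (/tmp/cb3_demo_passphrase) purely for demonstration; in practice you would write to the path referenced by your encrypt command and make the file owned by the backup user.

    ```shell
    # Generate 40 hex characters of passphrase material from the system RNG
    # (od -An -tx1 is a portable stand-in for xxd -ps).
    dd if=/dev/urandom bs=1 count=20 2>/dev/null | od -An -tx1 | tr -d ' \n' \
       > /tmp/cb3_demo_passphrase

    # Lock the file down so only its owner can read it.
    chmod 600 /tmp/cb3_demo_passphrase
    ```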

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>amazons3</name>
          <module>CedarBackup3.extend.amazons3</module>
          <function>executeAction</function>
          <index>201</index> <!-- just after stage -->
       </action>
    </extensions>
          

    This extension relies on the options and staging configuration sections in the standard Cedar Backup configuration file, and then also requires its own amazons3 configuration section. This is an example configuration section with encryption disabled:

    <amazons3>
          <s3_bucket>example.com-backup/staging</s3_bucket>
    </amazons3>
          

    The following elements are part of the Amazon S3 configuration section:

    warn_midnite

    Whether to generate warnings for crossing midnite.

    This field indicates whether warnings should be generated if the Amazon S3 operation has to cross a midnite boundary in order to find data to write to the cloud. For instance, a warning would be generated if valid data was only found in the day before or day after the current day.

    Configuration for some users is such that the amazons3 operation will always cross a midnite boundary, so they will not care about this warning. Other users will expect to never cross a boundary, and want to be notified that something strange might have happened.

    This field is optional. If it doesn't exist, then N will be assumed.

    Restrictions: Must be a boolean (Y or N).

    s3_bucket

    The name of the Amazon S3 bucket that data will be written to.

    This field configures the S3 bucket that your data will be written to. In S3, buckets are named globally. For uniqueness, you would typically use the name of your domain followed by some suffix, such as example.com-backup. If you want, you can specify a subdirectory within the bucket, such as example.com-backup/staging.

    Restrictions: Must be non-empty.

    encrypt

    Command used to encrypt backup data before upload to S3

    If this field is provided, then data will be encrypted before it is uploaded to Amazon S3. You must provide the entire command used to encrypt a file, including the ${input} and ${output} variables. An example GPG command is shown above, but you can use any mechanism you choose. The command will be run as the configured backup user.

    Restrictions: If provided, must be non-empty.

    full_size_limit

    Maximum size of a full backup

    If this field is provided, then a size limit will be applied to full backups. If the total size of the selected staging directory is greater than the limit, then the backup will fail.

    You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB).

    Valid examples are 10240, 250 MB or 1.1 GB.

    Restrictions: Must be a value as described above, greater than zero.

    incr_size_limit

    Maximum size of an incremental backup

    If this field is provided, then a size limit will be applied to incremental backups. If the total size of the selected staging directory is greater than the limit, then the backup will fail.

    You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB).

    Valid examples are 10240, 250 MB or 1.1 GB.

    Restrictions: Must be a value as described above, greater than zero.
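    Putting these elements together, a complete amazons3 section with encryption and both size limits enabled might look like the following. The bucket name, passphrase path, and limits are illustrative values, not recommendations:

    ```xml
    <amazons3>
          <warn_midnite>Y</warn_midnite>
          <s3_bucket>example.com-backup/staging</s3_bucket>
          <encrypt>/usr/bin/gpg -c --no-use-agent --batch --yes --passphrase-file /home/backup/.passphrase -o ${output} ${input}</encrypt>
          <full_size_limit>1.1 GB</full_size_limit>
          <incr_size_limit>250 MB</incr_size_limit>
    </amazons3>
    ```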


    Installing from Source

    On platforms other than Debian, Cedar Backup is installed from a Python source distribution. [16] You will have to manage dependencies on your own.

    Tip

    Many UNIX-like distributions provide an automatic or semi-automatic way to install packages like the ones Cedar Backup requires (think RPMs for Mandrake or RedHat, Gentoo's Portage system, the Fink project for Mac OS X, or the BSD ports system). If you are not sure how to install these packages on your system, you might want to check out Appendix B, Dependencies. This appendix provides links to upstream source packages, plus as much information as I have been able to gather about packages for non-Debian platforms.

    Installing Dependencies

    Cedar Backup requires a number of external packages in order to function properly. Before installing Cedar Backup, you must make sure that these dependencies are met.

    Cedar Backup is written in Python 3 and requires version 3.4 or greater of the language.

    Additionally, remote client peer nodes must be running an RSH-compatible server, such as the ssh server, and master nodes must have an RSH-compatible client installed if they need to connect to remote peer machines.

    Master machines also require several other system utilities, most having to do with writing and validating CD/DVD media. On master machines, you must make sure that these utilities are available if you want to run the store action:

    • mkisofs

    • eject

    • mount

    • umount

    • volname

    Then, you need this utility if you are writing CD media:

    • cdrecord

    or these utilities if you are writing DVD media:

    • growisofs

    All of these utilities are common and are easy to find for almost any UNIX-like operating system.

    Installing the Source Package

    Python source packages are fairly easy to install. They are distributed as .tar.gz files which contain Python source code, a manifest and an installation script called setup.py.

    Once you have downloaded the source package from the Cedar Solutions website, [15] untar it:

    $ zcat CedarBackup3-3.0.0.tar.gz | tar xvf -
             

    This will create a directory called (in this case) CedarBackup3-3.0.0. The version number in the directory will always match the version number in the filename.

    If you have root access and want to install the package to the standard Python location on your system, then you can install the package in two simple steps:

    $ cd CedarBackup3-3.0.0
    $ python3 setup.py install
             

    Make sure that you are using Python 3.4 or better to execute setup.py.
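    One quick way to confirm that, assuming python3 on your PATH is the interpreter you will use to run setup.py:

    ```shell
    # Exit status is 0 only if the interpreter is at least Python 3.4.
    python3 -c 'import sys; sys.exit(0 if sys.version_info >= (3, 4) else 1)' \
       && echo "Python 3.4 or later: OK"
    ```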

    You may also wish to run the unit tests before actually installing anything. Run them like so:

    $ python3 util/test.py
             

    If any unit test reports a failure on your system, please email me the output from the unit test, so I can fix the problem. [17] This is particularly important for non-Linux platforms where I do not have a test system available to me.

    Some users might want to choose a different install location or change other install parameters. To get more information about how setup.py works, use the --help option:

    $ python3 setup.py --help
    $ python3 setup.py install --help
             

    In any case, once the package has been installed, you can proceed to configuration as described in Chapter 5, Configuration.


    Extensions

    Imagine that there is a third party developer who understands how to back up a certain kind of database repository. This third party might want to integrate his or her specialized backup into the Cedar Backup process, perhaps thinking of the database backup as a sort of collect step.

    Prior to Cedar Backup version 2, any such integration would have been completely independent of Cedar Backup itself. The external backup functionality would have had to maintain its own configuration and would not have had access to any Cedar Backup configuration.

    Starting with version 2, Cedar Backup allows extensions to the backup process. An extension is an action that isn't part of the standard backup process (i.e. not collect, stage, store or purge), but can be executed by Cedar Backup when properly configured.

    Extension authors implement an action process function with a certain interface, and are allowed to add their own sections to the Cedar Backup configuration file, so that all backup configuration can be centralized. Then, the action process function is associated with an action name which can be executed from the cback3 command line like any other action.

    Hopefully, as the Cedar Backup user community grows, users will contribute their own extensions back to the community. Well-written general-purpose extensions will be accepted into the official codebase.

    Note

    Users should see Chapter 5, Configuration for more information on how extensions are configured, and Chapter 6, Official Extensions for details on all of the officially-supported extensions.

    Developers may be interested in Appendix A, Extension Architecture Interface.


    Recovering Filesystem Data

    Filesystem data is gathered by the standard Cedar Backup collect action. This data is placed into files of the form *.tar. The first part of the name (before .tar) represents the path to the directory. For example, boot.tar would contain data from /boot, and var-lib-jspwiki.tar would contain data from /var/lib/jspwiki. (As a special case, data from the root directory would be placed in -.tar.) Remember that your tarfile might have a bzip2 (.bz2) or gzip (.gz) extension, depending on what compression you specified in configuration.
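    The naming rule can be expressed as a small shell helper. This is a hypothetical illustration, not part of Cedar Backup:

    ```shell
    # Map a collected directory path to its tarfile name: drop the leading
    # slash, turn remaining slashes into dashes; the root directory is "-".
    path_to_tarname() {
       case "$1" in
          /) echo "-.tar" ;;
          *) echo "$1" | sed -e 's|^/||' -e 's|/|-|g' -e 's|$|.tar|' ;;
       esac
    }

    path_to_tarname /boot              # -> boot.tar
    path_to_tarname /var/lib/jspwiki   # -> var-lib-jspwiki.tar
    path_to_tarname /                  # -> -.tar
    ```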

    If you are using full backups every day, the latest backup data is always within the latest daily directory stored on your backup media or within your staging directory. If you have some or all of your directories configured to do incremental backups, then the first day of the week holds the full backups and the other days represent incremental differences relative to that first day of the week.

    Full Restore

    To do a full system restore, find the newest applicable full backup and extract it. If you have some incremental backups, extract them into the same place as the full backup, one by one starting from oldest to newest. (This way, if a file changed every day you will always get the latest one.)

    All of the backed-up files are stored in the tar file in a relative fashion, so you can extract from the tar file either directly into the filesystem, or into a temporary location.
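    The oldest-to-newest ordering matters because later extractions overwrite earlier ones. The throwaway sketch below (hypothetical data under /tmp/cb3_restore_demo, not real backup files) shows a "full" and an "incremental" tarball restored in order, so the newer copy of a changed file wins:

    ```shell
    set -e
    work=/tmp/cb3_restore_demo
    rm -rf "$work" && mkdir -p "$work/src" "$work/restore"

    echo "monday"  > "$work/src/config"
    tar -C "$work/src" -czf "$work/full.tar.gz" config    # full backup (first day)
    echo "tuesday" > "$work/src/config"                   # file changes mid-week
    tar -C "$work/src" -czf "$work/incr.tar.gz" config    # incremental backup

    cd "$work/restore"
    zcat "$work/full.tar.gz" | tar xf -   # extract the full backup first...
    zcat "$work/incr.tar.gz" | tar xf -   # ...then incrementals, oldest to newest
    cat config                            # now contains the latest version
    ```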

    For example, to restore boot.tar.bz2 directly into /boot, execute tar from your root directory (/):

    root:/# bzcat boot.tar.bz2 | tar xvf -
             

    Of course, use zcat or just cat, depending on what kind of compression is in use.

    If you want to extract boot.tar.bz2 into a temporary location like /tmp/boot instead, just change directories first. In this case, you'd execute the tar command from within /tmp instead of /.

    root:/tmp# bzcat boot.tar.bz2 | tar xvf -
             

    Again, use zcat or just cat as appropriate.

    For more information, you might want to check out the manpage or GNU info documentation for the tar command.

    Partial Restore

    Most users will need to do a partial restore much more frequently than a full restore. Perhaps you accidentally removed your home directory, or forgot to check in some version of a file before deleting it. Or, perhaps the person who packaged Apache for your system blew away your web server configuration on upgrade (it happens). The solution to these and other kinds of problems is a partial restore (assuming you've backed up the proper things).

    The procedure is similar to a full restore. The specific steps depend on how much information you have about the file you are looking for. Whereas with a full restore you can confidently extract the full backup followed by each of the incremental backups, this might not be what you want when doing a partial restore. You may need to take more care in finding the right version of a file — since the same file, if changed frequently, will appear in more than one backup.

    Start by finding the backup media that contains the file you are looking for. If you rotate your backup media, and your last known contact with the file was a while ago, you may need to look on older media to find it. This may take some effort if you are not sure when the change you are trying to correct took place.

    Once you have decided to look at a particular piece of backup media, find the correct peer (host), and look for the file in the full backup:

    root:/tmp# bzcat boot.tar.bz2 | tar tvf - path/to/file
             

    Of course, use zcat or just cat, depending on what kind of compression is in use.

    The tvf flags tell tar to search for the file in question and just list the results rather than extracting the file. Note that the filename is relative (with no leading /). Alternately, you can omit the path/to/file and page through the full listing using more or less.

    If you haven't found what you are looking for, work your way through the incremental files for the directory in question. One of them may also have the file if it changed during the course of the backup. Or, move to older or newer media and see if you can find the file there.

    Once you have found your file, extract it using xvf:

    root:/tmp# bzcat boot.tar.bz2 | tar xvf - path/to/file
             

    Again, use zcat or just cat as appropriate.

    Inspect the file and make sure it's what you're looking for. Again, you may need to move to older or newer media to find the exact version of your file.

    For more information, you might want to check out the manpage or GNU info documentation for the tar command.


    How to Get Support

    Cedar Backup is open source software that is provided to you at no cost. It is provided with no warranty, not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. However, that said, someone can usually help you solve whatever problems you might see.

    If you experience a problem, your best bet is to file an issue in the issue tracker at BitBucket. [1] When the source code was hosted at SourceForge, there was a mailing list. However, it was very lightly used in the last years before I abandoned SourceForge, and I have decided not to replace it.

    If you are not comfortable discussing your problem in public or listing it in a public database, or if you need to send along information that you do not want made public, then you can write to the support address; that mail will go directly to me. If you write the support address about a bug, a scrubbed bug report will eventually end up in the public bug database anyway, so if at all possible you should use the public reporting mechanisms. One of the strengths of the open-source software development model is its transparency.

    Regardless of how you report your problem, please try to provide as much information as possible about the behavior you observed and the environment in which the problem behavior occurred. [2]

    In particular, you should provide: the version of Cedar Backup that you are using; how you installed Cedar Backup (i.e. Debian package, source package, etc.); the exact command line that you executed; any error messages you received, including Python stack traces (if any); and relevant sections of the Cedar Backup log. It would be even better if you could describe exactly how to reproduce the problem, for instance by including your entire configuration file and/or specific information about your system that might relate to the problem. However, please do not provide huge sections of debugging logs unless you are sure they are relevant or unless someone asks for them.

    Tip

    Sometimes, the error that Cedar Backup displays can be rather cryptic. This is because under internal error conditions, the text related to an exception might get propagated all of the way up to the user interface. If the message you receive doesn't make much sense, or if you suspect that it results from an internal error, you might want to re-run Cedar Backup with the --stack option. This forces Cedar Backup to dump the entire Python stack trace associated with the error, rather than just printing the last message it received. This is good information to include along with a bug report, as well.


    The cback3-span command

    Introduction

    Cedar Backup was designed — and is still primarily focused — around weekly backups to a single CD or DVD. Most users who back up more data than fits on a single disc seem to stop their backup process at the stage step, using Cedar Backup as an easy way to collect data.

    However, some users have expressed a need to write these large kinds of backups to disc — if not every day, then at least occasionally. The cback3-span tool was written to meet those needs. If you have staged more data than fits on a single CD or DVD, you can use cback3-span to split that data between multiple discs.

    cback3-span is not a general-purpose disc-splitting tool. It is a specialized program that requires Cedar Backup configuration to run. All it can do is read Cedar Backup configuration, find any staging directories that have not yet been written to disc, and split the files in those directories between discs.

    cback3-span accepts many of the same command-line options as cback3, but must be run interactively. It cannot be run from cron. This is intentional. It is intended to be a useful tool, not a new part of the backup process (that is the purpose of an extension).

    In order to use cback3-span, you must configure your backup such that the largest individual backup file can fit on a single disc. The command will not split a single file onto more than one disc. All it can do is split large directories onto multiple discs. Files in those directories will be distributed between discs in whatever arrangement utilizes space most efficiently.

    Syntax

    The cback3-span command has the following syntax:

     Usage: cback3-span [switches]
    
     Cedar Backup 'span' tool.
    
     This Cedar Backup utility spans staged data between multiple discs.
     It is a utility, not an extension, and requires user interaction.
    
     The following switches are accepted, mostly to set up underlying
     Cedar Backup functionality:
    
       -h, --help     Display this usage/help listing
       -V, --version  Display version information
       -b, --verbose  Print verbose output as well as logging to disk
       -c, --config   Path to config file (default: /etc/cback3.conf)
       -l, --logfile  Path to logfile (default: /var/log/cback3.log)
       -o, --owner    Logfile ownership, user:group (default: root:adm)
       -m, --mode     Octal logfile permissions mode (default: 640)
       -O, --output   Record some sub-command (i.e. cdrecord) output to the log
       -d, --debug    Write debugging information to the log (implies --output)
       -s, --stack    Dump a Python stack trace instead of swallowing exceptions
             

    Switches

    -h, --help

    Display usage/help listing.

    -V, --version

    Display version information.

    -b, --verbose

    Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen.

    -c, --config

    Specify the path to an alternate configuration file. The default configuration file is /etc/cback3.conf.

    -l, --logfile

    Specify the path to an alternate logfile. The default logfile is /var/log/cback3.log.

    -o, --owner

    Specify the ownership of the logfile, in the form user:group. The default ownership is root:adm, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback3 command is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values.

    -m, --mode

    Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is 0640 (-rw-r-----). This value will only be used when creating a new logfile. If the logfile already exists when the cback3 command is executed, it will retain its existing ownership and mode.

    -O, --output

    Record some sub-command output to the logfile. When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference. Cedar Backup uses system commands mostly for dealing with the CD/DVD recorder and its media.

    -d, --debug

    Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the --output option, as well.

    -s, --stack

    Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report.

    Using cback3-span

    As discussed above, cback3-span is an interactive command. It cannot be run from cron.

    You can typically use the default answer for most questions. The only two questions that you may not want the default answer for are the fit algorithm and the cushion percentage.

    The cushion percentage is used by cback3-span to determine what capacity to shoot for when splitting up your staging directories. A 650 MB disc does not actually hold a full 650 MB of data; usable capacity is usually more like 627 MB. The cushion percentage tells cback3-span how much overhead to reserve for the filesystem. The default of 4% is usually OK, but if you have problems you may need to increase it slightly.

    The fit algorithm tells cback3-span how it should determine which items should be placed on each disc. If you don't like the result from one algorithm, you can reject that solution and choose a different algorithm.

    The four available fit algorithms are:

    worst

    The worst-fit algorithm.

    The worst-fit algorithm proceeds through a sorted list of items (sorted from smallest to largest) until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the maximum number of items possible in its search for optimal capacity utilization. It tends to be somewhat slower than either the best-fit or alternate-fit algorithm, probably because on average it has to look at more items before completing.

    best

    The best-fit algorithm.

    The best-fit algorithm proceeds through a sorted list of items (sorted from largest to smallest) until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the minimum number of items possible in its search for optimal capacity utilization. For large lists of mixed-size items, it's not unusual to see the algorithm achieve 100% capacity utilization by including fewer than 1% of the items. Probably because it often has to look at fewer of the items before completing, it tends to be a little faster than the worst-fit or alternate-fit algorithms.

    first

    The first-fit algorithm.

    The first-fit algorithm proceeds through an unsorted list of items until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. This algorithm generally performs more poorly than the other algorithms both in terms of capacity utilization and item utilization, but can be as much as an order of magnitude faster on large lists of items because it doesn't require any sorting.

    alternate

    A hybrid algorithm that I call alternate-fit.

    This algorithm tries to balance small and large items to achieve better end-of-disk performance. Instead of just working one direction through a list, it alternately works from the start and end of a sorted list (sorted from smallest to largest), throwing away any item which causes capacity to be exceeded. The algorithm tends to be slower than the best-fit and first-fit algorithms, and slightly faster than the worst-fit algorithm, probably because of the number of items it considers on average before completing. It often achieves slightly better capacity utilization than the worst-fit algorithm, while including slightly fewer items.
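    All four algorithms share the same inner step: when an item would exceed the remaining capacity, it is thrown away and the walk continues with the next item. A toy shell sketch of that step (not Cedar Backup's actual implementation; the sizes are made up):

    ```shell
    capacity=100
    used=0
    picked=""
    for size in 10 20 30 40 50 60; do    # sorted smallest to largest, as in worst-fit
       if [ $((used + size)) -le "$capacity" ]; then
          used=$((used + size))          # item fits: keep it
          picked="$picked $size"
       fi                                # item would overflow: skip it, try the next
    done
    echo "picked:$picked (total $used of $capacity)"
    ```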

    Sample run

    Below is a log showing a sample cback3-span run.

    ================================================
               Cedar Backup 'span' tool
    ================================================
    
    This the Cedar Backup span tool.  It is used to split up staging
    data when that staging data does not fit onto a single disc.
    
    This utility operates using Cedar Backup configuration.  Configuration
    specifies which staging directory to look at and which writer device
    and media type to use.
    
    Continue? [Y/n]: 
    ===
    
    Cedar Backup store configuration looks like this:
    
       Source Directory...: /tmp/staging
       Media Type.........: cdrw-74
       Device Type........: cdwriter
       Device Path........: /dev/cdrom
       Device SCSI ID.....: None
       Drive Speed........: None
       Check Data Flag....: True
       No Eject Flag......: False
    
    Is this OK? [Y/n]: 
    ===
    
    Please wait, indexing the source directory (this may take a while)...
    ===
    
    The following daily staging directories have not yet been written to disc:
    
       /tmp/staging/2007/02/07
       /tmp/staging/2007/02/08
       /tmp/staging/2007/02/09
       /tmp/staging/2007/02/10
       /tmp/staging/2007/02/11
       /tmp/staging/2007/02/12
       /tmp/staging/2007/02/13
       /tmp/staging/2007/02/14
    
    The total size of the data in these directories is 1.00 GB.
    
    Continue? [Y/n]: 
    ===
    
    Based on configuration, the capacity of your media is 650.00 MB.
    
    Since estimates are not perfect and there is some uncertainly in
    media capacity calculations, it is good to have a "cushion",
    a percentage of capacity to set aside.  The cushion reduces the
    capacity of your media, so a 1.5% cushion leaves 98.5% remaining.
    
    What cushion percentage? [4.00]: 
    ===
    
    The real capacity, taking into account the 4.00% cushion, is 627.25 MB.
    It will take at least 2 disc(s) to store your 1.00 GB of data.
    
    Continue? [Y/n]: 
    ===
    
    Which algorithm do you want to use to span your data across
    multiple discs?
    
    The following algorithms are available:
    
       first....: The "first-fit" algorithm
       best.....: The "best-fit" algorithm
       worst....: The "worst-fit" algorithm
       alternate: The "alternate-fit" algorithm
    
    If you don't like the results you will have a chance to try a
    different one later.
    
    Which algorithm? [worst]: 
    ===
    
    Please wait, generating file lists (this may take a while)...
    ===
    
    Using the "worst-fit" algorithm, Cedar Backup can split your data
    into 2 discs.
    
    Disc 1: 246 files, 615.97 MB, 98.20% utilization
    Disc 2: 8 files, 412.96 MB, 65.84% utilization
    
    Accept this solution? [Y/n]: n
    ===
    
    Which algorithm do you want to use to span your data across
    multiple discs?
    
    The following algorithms are available:
    
       first....: The "first-fit" algorithm
       best.....: The "best-fit" algorithm
       worst....: The "worst-fit" algorithm
       alternate: The "alternate-fit" algorithm
    
    If you don't like the results you will have a chance to try a
    different one later.
    
    Which algorithm? [worst]: alternate
    ===
    
    Please wait, generating file lists (this may take a while)...
    ===
    
    Using the "alternate-fit" algorithm, Cedar Backup can split your data
    into 2 discs.
    
    Disc 1: 73 files, 627.25 MB, 100.00% utilization
    Disc 2: 181 files, 401.68 MB, 64.04% utilization
    
    Accept this solution? [Y/n]: y
    ===
    
    Please place the first disc in your backup device.
    Press return when ready.
    ===
    
    Initializing image...
    Writing image to disc...
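
    The cushion arithmetic shown in the transcript can be sketched as
    follows. This is only an illustration with hypothetical helper names,
    not the cback3-span implementation itself; the real tool works from
    unrounded media capacities, so its figures differ slightly.

```python
# Illustrative sketch of the cushion arithmetic (hypothetical helper
# names, not the cback3-span API).  The cushion percentage is set aside
# from the media capacity, and the disc count is the ceiling of the
# total data size over the remaining capacity.
import math

def real_capacity(media_capacity_mb, cushion_percent):
    """Media capacity remaining after the cushion percentage is set aside."""
    return media_capacity_mb * (1.0 - cushion_percent / 100.0)

def discs_needed(total_mb, media_capacity_mb, cushion_percent):
    """Minimum number of discs needed for the given amount of staging data."""
    return math.ceil(total_mb / real_capacity(media_capacity_mb, cushion_percent))
```

    With a nominal 650 MB disc and a 4% cushion, about 624 MB remains per
    disc, so 1.00 GB (1024 MB) of staging data needs at least 2 discs,
    consistent with the transcript above.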
             

    Acknowledgments

    The structure of this manual and some of the basic boilerplate have been taken from the book Version Control with Subversion. Thanks to the authors (and O'Reilly) for making this excellent reference available under a free and open license.


    Audience

    This manual has been written for computer-literate administrators who need to use and configure Cedar Backup on their Linux or UNIX-like system. The examples in this manual assume the reader is relatively comfortable with UNIX and command-line interfaces.


    Cedar Backup 3 Software Manual

    Kenneth J. Pronovici

    This work is free; you can redistribute it and/or modify it under the terms of the GNU General Public License (the "GPL"), Version 2, as published by the Free Software Foundation.

    For the purposes of the GPL, the "preferred form of modification" for this work is the original Docbook XML text files. If you choose to distribute this work in a compiled form (i.e. if you distribute HTML, PDF or Postscript documents based on the original Docbook XML text files), you must also consider image files to be "source code" if those images are required in order to construct a complete and readable compiled version of the work.

    This work is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

    Copies of the GNU General Public License are available from the Free Software Foundation website, http://www.gnu.org/. You may also write the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA


    Table of Contents

    Preface
    Purpose
    Audience
    Conventions Used in This Book
    Typographic Conventions
    Icons
    Organization of This Manual
    Acknowledgments
    1. Introduction
    What is Cedar Backup?
    Migrating from Version 2 to Version 3
    How to Get Support
    History
    2. Basic Concepts
    General Architecture
    Data Recovery
    Cedar Backup Pools
    The Backup Process
    The Collect Action
    The Stage Action
    The Store Action
    The Purge Action
    The All Action
    The Validate Action
    The Initialize Action
    The Rebuild Action
    Coordination between Master and Clients
    Managed Backups
    Media and Device Types
    Incremental Backups
    Extensions
    3. Installation
    Background
    Installing on a Debian System
    Installing from Source
    Installing Dependencies
    Installing the Source Package
    4. Command Line Tools
    Overview
    The cback3 command
    Introduction
    Syntax
    Switches
    Actions
    The cback3-amazons3-sync command
    Introduction
    Syntax
    Switches
    The cback3-span command
    Introduction
    Syntax
    Switches
    Using cback3-span
    Sample run
    5. Configuration
    Overview
    Configuration File Format
    Sample Configuration File
    Reference Configuration
    Options Configuration
    Peers Configuration
    Collect Configuration
    Stage Configuration
    Store Configuration
    Purge Configuration
    Extensions Configuration
    Setting up a Pool of One
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure your writer device.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test your backup.
    Step 9: Modify the backup cron jobs.
    Setting up a Client Peer Node
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure the master in your backup pool.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test your backup.
    Step 9: Modify the backup cron jobs.
    Setting up a Master Peer Node
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure your writer device.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test connectivity to client machines.
    Step 9: Test your backup.
    Step 10: Modify the backup cron jobs.
    Configuring your Writer Device
    Device Types
    Devices identified by device name
    Devices identified by SCSI id
    Linux Notes
    Finding your Linux CD Writer
    Mac OS X Notes
    Optimized Blanking Strategy
    6. Official Extensions
    System Information Extension
    Amazon S3 Extension
    Subversion Extension
    MySQL Extension
    PostgreSQL Extension
    Mbox Extension
    Encrypt Extension
    Split Extension
    Capacity Extension
    A. Extension Architecture Interface
    B. Dependencies
    C. Data Recovery
    Finding your Data
    Recovering Filesystem Data
    Full Restore
    Partial Restore
    Recovering MySQL Data
    Recovering Subversion Data
    Recovering Mailbox Data
    Recovering Data split by the Split Extension
    D. Securing Password-less SSH Connections
    E. Copyright

    The cback3-amazons3-sync command

    Introduction

    The cback3-amazons3-sync tool is used for synchronizing entire directories of files up to an Amazon S3 cloud storage bucket, outside of the normal Cedar Backup process.

    This might be a good option for some types of data, as long as you understand the limitations around retrieving previous versions of objects that get modified or deleted as part of a sync. S3 does support versioning, but it won't be quite as easy to get at those previous versions as with an explicit incremental backup like cback3 provides. Cedar Backup does not provide any tooling that would help you retrieve previous versions.

    The underlying functionality relies on the AWS CLI toolset. Before you use this extension, you need to set up your Amazon S3 account and configure AWS CLI as detailed in Amazon's setup guide. The aws command will be executed as the same user that is executing the cback3-amazons3-sync command, so make sure you configure it as the proper user. (This is different from the amazons3 extension, which is designed to execute as root and switches over to the configured backup user to execute AWS CLI commands.)

    Syntax

    The cback3-amazons3-sync command has the following syntax:

     Usage: cback3-amazons3-sync [switches] sourceDir s3BucketUrl
    
     Cedar Backup Amazon S3 sync tool.
    
     This Cedar Backup utility synchronizes a local directory to an Amazon S3
     bucket.  After the sync is complete, a validation step is taken.  An
     error is reported if the contents of the bucket do not match the
     source directory, or if the indicated size for any file differs.
     This tool is a wrapper over the AWS CLI command-line tool.
    
     The following arguments are required:
    
       sourceDir            The local source directory on disk (must exist)
       s3BucketUrl          The URL to the target Amazon S3 bucket
    
     The following switches are accepted:
    
       -h, --help           Display this usage/help listing
       -V, --version        Display version information
       -b, --verbose        Print verbose output as well as logging to disk
       -q, --quiet          Run quietly (display no output to the screen)
       -l, --logfile        Path to logfile (default: /var/log/cback3.log)
       -o, --owner          Logfile ownership, user:group (default: root:adm)
       -m, --mode           Octal logfile permissions mode (default: 640)
       -O, --output         Record some sub-command (i.e. aws) output to the log
       -d, --debug          Write debugging information to the log (implies --output)
       -s, --stack          Dump Python stack trace instead of swallowing exceptions
       -D, --diagnostics    Print runtime diagnostics to the screen and exit
       -v, --verifyOnly     Only verify the S3 bucket contents, do not make changes
       -w, --ignoreWarnings Ignore warnings about problematic filename encodings
    
     Typical usage would be something like:
    
       cback3-amazons3-sync /home/myuser s3://example.com-backup/myuser
    
     This will sync the contents of /home/myuser into the indicated bucket.
             

    Switches

    -h, --help

    Display usage/help listing.

    -V, --version

    Display version information.

    -b, --verbose

    Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen.

    -q, --quiet

    Run quietly (display no output to the screen).

    -l, --logfile

    Specify the path to an alternate logfile. The default logfile is /var/log/cback3.log.

    -o, --owner

    Specify the ownership of the logfile, in the form user:group. The default ownership is root:adm, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback3-amazons3-sync command is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values.

    -m, --mode

    Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is 0640 (-rw-r-----). This value will only be used when creating a new logfile. If the logfile already exists when the cback3-amazons3-sync command is executed, it will retain its existing ownership and mode.

    -O, --output

    Record some sub-command output to the logfile. When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference.

    -d, --debug

    Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the --output option, as well.

    -s, --stack

    Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report.

    -D, --diagnostics

    Display runtime diagnostic information and then exit. This diagnostic information is often useful when filing a bug report.

    -v, --verifyOnly

    Only verify the S3 bucket contents against the directory on disk. Do not make any changes to the S3 bucket or transfer any files. This is intended as a quick check to see whether the sync is up-to-date.

    Although no files are transferred, the tool will still execute the source filename encoding check, discussed below along with --ignoreWarnings.

    -w, --ignoreWarnings

    The AWS CLI S3 sync process is very picky about filename encoding. Files that the Linux filesystem handles with no problems can cause problems in S3 if the filename cannot be encoded properly in your configured locale. As of this writing, such filenames will cause the sync process to abort without transferring all files as expected.

    To avoid confusion, cback3-amazons3-sync tries to guess which files in the source directory will cause problems, and refuses to execute the AWS CLI S3 sync if any problematic files exist. If you'd rather proceed anyway, use --ignoreWarnings.

    If problematic files are found, then you have basically two options: either correct your locale (i.e. if you have set LANG=C) or rename the file so it can be encoded properly in your locale. The error messages will tell you the expected encoding (from your locale) and the actual detected encoding for the filename.
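
    A rough sketch of this kind of pre-flight check is shown below. The
    helper name and logic are assumptions for illustration, not the
    tool's actual implementation: the key idea is that a filename which
    cannot be encoded in the configured locale raises UnicodeEncodeError.

```python
# Illustrative sketch -- assumed helper name and logic, not the actual
# cback3-amazons3-sync implementation.  A filename that cannot be
# encoded in the configured locale raises UnicodeEncodeError, which is
# the same condition that would later make the AWS CLI sync abort.
import locale
import os

def find_problematic_filenames(source_dir):
    """Return paths whose names cannot be encoded in the current locale."""
    encoding = locale.getpreferredencoding(False)
    problems = []
    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            try:
                name.encode(encoding)
            except UnicodeEncodeError:
                problems.append(os.path.join(root, name))
    return problems
```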


    Recovering Mailbox Data

    Mailbox data is gathered by the Cedar Backup mbox extension. Cedar Backup will create either full or incremental backups, but both kinds of backups are treated identically when restoring.

    Individual mbox files and mbox directories are treated a little differently: individual files are simply compressed, while directories are collected into a tar archive.

    First, find the backup or backups you are interested in. Typically, you will need the full backup from the first day of the week and each incremental backup from the other days of the week.

    The mbox extension creates files of the form mbox-*. Backup files for individual mbox files might have a .gz or .bz2 extension depending on what kind of compression you specified in configuration. Backup files for mbox directories will have a .tar, .tar.gz or .tar.bz2 extension, again depending on what kind of compression you specified in configuration.

    There is one backup file for each configured mbox file or directory. The backup file name represents the name of the file or directory and the date it was backed up. So, the file mbox-20060624-home-user-mail-greylist represents the backup for /home/user/mail/greylist run on 24 Jun 2006. Likewise, mbox-20060624-home-user-mail.tar represents the backup for the /home/user/mail directory run on that same date.
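
    The naming convention described above can be sketched as follows.
    This is a hypothetical helper for illustration, not part of the mbox
    extension: the name is "mbox-", the backup date in YYYYMMDD form,
    then the absolute path with slashes turned into dashes.

```python
# Illustrative sketch (hypothetical helper, not the mbox extension's
# API) of the backup file naming convention described above.
def mbox_backup_name(path, date):
    """Build a backup file name like mbox-20060624-home-user-mail-greylist."""
    return "mbox-%s%s" % (date, path.replace("/", "-"))
```

    For example, mbox_backup_name("/home/user/mail/greylist", "20060624")
    yields mbox-20060624-home-user-mail-greylist; directory backups then
    carry an additional .tar (and possibly compression) extension.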

    Once you have found the files you are looking for, the restoration procedure is fairly simple. First, concatenate all of the backup files together. Then, use grepmail to eliminate duplicate messages (if any).

    Here is an example for a single backed-up file:

    root:/tmp# rm restore.mbox # make sure it's not left over
    root:/tmp# cat mbox-20060624-home-user-mail-greylist >> restore.mbox
    root:/tmp# cat mbox-20060625-home-user-mail-greylist >> restore.mbox
    root:/tmp# cat mbox-20060626-home-user-mail-greylist >> restore.mbox
    root:/tmp# grepmail -a -u restore.mbox > nodups.mbox
          

    At this point, nodups.mbox contains all of the backed-up messages from /home/user/mail/greylist.

    Of course, if your backups are compressed, you'll have to use zcat or bzcat rather than just cat.

    If you are backing up mbox directories rather than individual files, see the filesystem instructions for notes on how to extract the individual files from inside tar archives. Extract the files you are interested in, and then concatenate them together just as shown above for the individual case.


    PostgreSQL Extension

    The PostgreSQL Extension is a Cedar Backup extension used to back up PostgreSQL [27] databases via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action.

    The backup is done via the pg_dump or pg_dumpall commands included with the PostgreSQL product. Output can be compressed using gzip or bzip2. Administrators can configure the extension either to back up all databases or to back up only specific databases.

    The extension assumes that the current user has passwordless access to the database since there is no easy way to pass a password to the pg_dump client. This can be accomplished using appropriate configuration in the pg_hba.conf file.
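
    For reference, passwordless local access for the backup user can be
    arranged with the peer authentication method in pg_hba.conf. The
    fragment below is only an illustrative sketch (the user name is a
    placeholder, and your security requirements may call for something
    stricter):

```
# Illustrative pg_hba.conf fragment: allow local socket connections for
# the "username" role when the connecting OS user has the same name.
# TYPE  DATABASE  USER      METHOD
local   all       username  peer
```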

    This extension always produces a full backup. There is currently no facility for making incremental backups.

    Warning

    Once you place PostgreSQL configuration into the Cedar Backup configuration file, you should be careful about who is allowed to see that information. This is because PostgreSQL configuration will contain information about available PostgreSQL databases and usernames. Typically, you might want to lock down permissions so that only the file owner can read the file contents (i.e. use mode 0600).

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>postgresql</name>
          <module>CedarBackup3.extend.postgresql</module>
          <function>executeAction</function>
          <index>99</index>
       </action>
    </extensions>
          

    This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own postgresql configuration section. This is an example PostgreSQL configuration section:

    <postgresql>
       <compress_mode>bzip2</compress_mode>
       <user>username</user>
       <all>Y</all>
    </postgresql>
          

    If you decide to back up specific databases, then you would list them individually, like this:

    <postgresql>
       <compress_mode>bzip2</compress_mode>
       <user>username</user>
       <all>N</all>
       <database>db1</database>
       <database>db2</database>
    </postgresql>
          

    The following elements are part of the PostgreSQL configuration section:

    user

    Database user.

    The database user that the backup should be executed as. Even if you list more than one database (below) all backups must be done as the same user.

    This value is optional.

    Consult your PostgreSQL documentation for information on how to configure a default database user outside of Cedar Backup, and for information on how to specify a database password when you configure a user within Cedar Backup. You will probably want to modify pg_hba.conf.

    Restrictions: If provided, must be non-empty.

    compress_mode

    Compress mode.

    PostgreSQL database dumps are just specially-formatted text files, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all.

    Restrictions: Must be one of none, gzip or bzip2.

    all

    Indicates whether to back up all databases.

    If this value is Y, then all PostgreSQL databases will be backed up. If this value is N, then one or more specific databases must be specified (see below).

    If you choose this option, the entire database backup will go into one big dump file.

    Restrictions: Must be a boolean (Y or N).

    database

    Named database to be backed up.

    If you choose to specify individual databases rather than all databases, then each database will be backed up into its own dump file.

    This field can be repeated as many times as is necessary. At least one database must be configured if the all option (above) is set to N. You may not configure any individual databases if the all option is set to Y.

    Restrictions: Must be non-empty.


    Chapter 2. Basic Concepts

    General Architecture

    Cedar Backup is architected as a Python package (library) and a single executable (a Python script). The Python package provides both application-specific code and general utilities that can be used by programs other than Cedar Backup. It also includes modules that can be used by third parties to extend Cedar Backup or provide related functionality.

    The cback3 script is designed to run as root, since otherwise it's difficult to back up system directories or write to the CD/DVD device. However, pains are taken to use the backup user's effective user id (specified in configuration) when appropriate. Note: this does not mean that cback3 runs setuid[8] or setgid. However, all files on disk will be owned by the backup user, and all rsh-based network connections will take place as the backup user.

    The cback3 script is configured via command-line options and an XML configuration file on disk. The configuration file is normally stored in /etc/cback3.conf, but this path can be overridden at runtime. See Chapter 5, Configuration for more information on how Cedar Backup is configured.

    Warning

    You should be aware that backups to CD/DVD media can probably be read by any user which has permissions to mount the CD/DVD writer. If you intend to leave the backup disc in the drive at all times, you may want to consider this when setting up device permissions on your machine. See also the section called “Encrypt Extension”.


    Cedar Backup 3 Software Manual


    Preface

    Purpose

    This software manual has been written to document version 3 of Cedar Backup. Version 3 is a conversion to Python 3 of Cedar Backup version 2, which was originally released in early 2005.

    Audience

    This manual has been written for computer-literate administrators who need to use and configure Cedar Backup on their Linux or UNIX-like system. The examples in this manual assume the reader is relatively comfortable with UNIX and command-line interfaces.

    Conventions Used in This Book

    This section covers the various conventions used in this manual.

    Typographic Conventions

    Term

    Used for first use of important terms.

    Command

    Used for commands, command output, and switches

    Replaceable

    Used for replaceable items in code and text

    Filenames

    Used for file and directory names

    Icons

    Note

    This icon designates a note relating to the surrounding text.

    Tip

    This icon designates a helpful tip relating to the surrounding text.

    Warning

    This icon designates a warning relating to the surrounding text.

    Organization of This Manual

    Chapter 1, Introduction

    Provides some general history about Cedar Backup, what needs it is intended to meet, how to get support, and how to migrate from version 2 to version 3.

    Chapter 2, Basic Concepts

    Discusses the basic concepts of a Cedar Backup infrastructure, and specifies terms used throughout the rest of the manual.

    Chapter 3, Installation

    Explains how to install the Cedar Backup package either from the Python source distribution or from the Debian package.

    Chapter 4, Command Line Tools

    Discusses the various Cedar Backup command-line tools, including the primary cback3 command.

    Chapter 5, Configuration

    Provides detailed information about how to configure Cedar Backup.

    Chapter 6, Official Extensions

    Describes each of the officially-supported Cedar Backup extensions.

    Appendix A, Extension Architecture Interface

    Specifies the Cedar Backup extension architecture interface, through which third party developers can write extensions to Cedar Backup.

    Appendix B, Dependencies

    Provides some additional information about the packages which Cedar Backup relies on, including information about how to find documentation and packages on non-Debian systems.

    Appendix C, Data Recovery

    Cedar Backup provides no facility for restoring backups, assuming the administrator can handle this infrequent task. This appendix provides some notes for administrators to work from.

    Appendix D, Securing Password-less SSH Connections

    Password-less SSH connections are a necessary evil when remote backup processes need to execute without human interaction. This appendix describes some ways that you can reduce the risk to your backup pool should your master machine be compromised.

    Acknowledgments

    The structure of this manual and some of the basic boilerplate have been taken from the book Version Control with Subversion. Thanks to the authors (and O'Reilly) for making this excellent reference available under a free and open license.

    Chapter 1. Introduction

    Only wimps use tape backup: real men just upload their important stuff on ftp, and let the rest of the world mirror it.— Linus Torvalds, at the release of Linux 2.0.8 in July of 1996.

    What is Cedar Backup?

    Cedar Backup is a software package designed to manage system backups for a pool of local and remote machines. Cedar Backup understands how to back up filesystem data as well as MySQL and PostgreSQL databases and Subversion repositories. It can also be easily extended to support other kinds of data sources.

    Cedar Backup is focused around weekly backups to a single CD or DVD disc, with the expectation that the disc will be changed or overwritten at the beginning of each week. If your hardware is new enough (and almost all hardware is today), Cedar Backup can write multisession discs, allowing you to add incremental data to a disc on a daily basis.

    Alternately, Cedar Backup can write your backups to the Amazon S3 cloud rather than relying on physical media.

    Besides offering command-line utilities to manage the backup process, Cedar Backup provides a well-organized library of backup-related functionality, written in the Python 3 programming language.

    There are many different backup software implementations out there in the open source world. Cedar Backup aims to fill a niche: to be a good fit for people who need to back up a limited amount of important data on a regular basis. Cedar Backup isn't for you if you want to back up your huge MP3 collection every night, or if you want to back up a few hundred machines. However, if you administer a small set of machines and you want to run daily incremental backups for things like system configuration, current email, small web sites, Subversion or Mercurial repositories, or small MySQL databases, then Cedar Backup is probably worth your time.

    Cedar Backup has been developed on a Debian GNU/Linux system and is primarily supported on Debian and other Linux systems. However, since it is written in portable Python 3, it should run without problems on just about any UNIX-like operating system. In particular, full Cedar Backup functionality is known to work on Debian and SuSE Linux systems, and client functionality is also known to work on FreeBSD and Mac OS X systems.

    To run a Cedar Backup client, you really just need a working Python 3 installation. To run a Cedar Backup master, you will also need a set of other executables, most of which are related to building and writing CD/DVD images or talking to the Amazon S3 infrastructure. A full list of dependencies is provided in the section called “Installing Dependencies”.

    Migrating from Version 2 to Version 3

    The main difference between Cedar Backup version 2 and Cedar Backup version 3 is the targeted Python interpreter. Cedar Backup version 2 was designed for Python 2, while version 3 is a conversion of the original code to Python 3. Other than that, both versions are functionally equivalent. The configuration format is unchanged, and you can mix-and-match masters and clients of different versions in the same backup pool. Both versions will be fully supported until around the time of the Python 2 end-of-life in 2020, but you should plan to migrate sooner than that if possible.

    A major design goal for version 3 was to facilitate easy migration testing for users, by making it possible to install version 3 on the same server where version 2 was already in use. A side effect of this design choice is that all of the executables, configuration files, and logs changed names in version 3. Where version 2 used "cback", version 3 uses "cback3": cback3.conf instead of cback.conf, cback3.log instead of cback.log, etc.

    So, while migrating from version 2 to version 3 is relatively straightforward, you will have to make some changes manually. You will need to create a new configuration file (or soft link to the old one), modify your cron jobs to use the new executable name, etc. You can migrate one server at a time in your pool with no ill effects, or even incrementally migrate a single server by using version 2 and version 3 on different days of the week or for different parts of the backup.

    How to Get Support

    Cedar Backup is open source software that is provided to you at no cost. It is provided with no warranty, not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. However, that said, someone can usually help you solve whatever problems you might see.

    If you experience a problem, your best bet is to file an issue in the issue tracker at BitBucket. [1] When the source code was hosted at SourceForge, there was a mailing list. However, it was very lightly used in the last years before I abandoned SourceForge, and I have decided not to replace it.

    If you are not comfortable discussing your problem in public or listing it in a public database, or if you need to send along information that you do not want made public, then you can write to the support address instead. That mail will go directly to me. If you write the support address about a bug, a scrubbed bug report will eventually end up in the public bug database anyway, so if at all possible you should use the public reporting mechanisms. One of the strengths of the open-source software development model is its transparency.

    Regardless of how you report your problem, please try to provide as much information as possible about the behavior you observed and the environment in which the problem behavior occurred. [2]

    In particular, you should provide: the version of Cedar Backup that you are using; how you installed Cedar Backup (i.e. Debian package, source package, etc.); the exact command line that you executed; any error messages you received, including Python stack traces (if any); and relevant sections of the Cedar Backup log. It would be even better if you could describe exactly how to reproduce the problem, for instance by including your entire configuration file and/or specific information about your system that might relate to the problem. However, please do not provide huge sections of debugging logs unless you are sure they are relevant or unless someone asks for them.

    Tip

    Sometimes, the error that Cedar Backup displays can be rather cryptic. This is because under internal error conditions, the text related to an exception might get propagated all of the way up to the user interface. If the message you receive doesn't make much sense, or if you suspect that it results from an internal error, you might want to re-run Cedar Backup with the --stack option. This forces Cedar Backup to dump the entire Python stack trace associated with the error, rather than just printing the last message it received. This is also good information to include with a bug report.

    History

    Cedar Backup began life in late 2000 as a set of Perl scripts called kbackup. These scripts met an immediate need (which was to back up skyjammer.com and some personal machines) but proved to be unstable, overly verbose and rather difficult to maintain.

    In early 2002, work began on a rewrite of kbackup. The goal was to address many of the shortcomings of the original application, as well as to clean up the code and make it available to the general public. While doing research related to code I could borrow or base the rewrite on, I discovered that there was already an existing backup package with the name kbackup, so I decided to change the name to Cedar Backup instead.

    Because I had become fed up with the prospect of maintaining a large volume of Perl code, I decided to abandon that language in favor of Python. [3] At the time, I chose Python mostly because I was interested in learning it, but in retrospect it turned out to be a very good decision. From my perspective, Python has almost all of the strengths of Perl, but few of its inherent weaknesses (primarily, I feel that Python code often ends up being much more readable than Perl code).

    Around this same time, skyjammer.com and cedar-solutions.com were converted to run Debian GNU/Linux (potato) [4] and I entered the Debian new maintainer queue, so I also made it a goal to implement Debian packages along with a Python source distribution for the new release.

    Version 1.0 of Cedar Backup was released in June of 2002. We immediately began using it to back up skyjammer.com and cedar-solutions.com, where it proved to be much more stable than the original code.

    In the meantime, I continued to improve as a Python programmer and also started doing a significant amount of professional development in Java. It soon became obvious that the internal structure of Cedar Backup 1.0, while much better than kbackup, still left something to be desired. In November 2003, I began an attempt at cleaning up the codebase. I converted all of the internal documentation to use Epydoc, [5] and updated the code to use the newly-released Python logging package [6] after having a good experience with Java's log4j. However, I was still not satisfied with the code, which did not lend itself to the automated regression testing I had used when working with junit in my Java code.

    So, rather than releasing the cleaned-up code, I instead began another ground-up rewrite in May 2004. With this rewrite, I applied everything I had learned from other Java and Python projects I had undertaken over the last few years. I structured the code to take advantage of Python's unique ability to blend procedural code with object-oriented code, and I made automated unit testing a primary requirement. The result was the 2.0 release, which is cleaner, more compact, better focused, and better documented than any release before it. Utility code is less application-specific, and is now usable as a general-purpose library. The 2.0 release also includes a complete regression test suite of over 3000 tests, which will help to ensure that quality is maintained as development continues into the future. [7]

    The 3.0 release of Cedar Backup is a Python 3 conversion of the 2.0 release, with minimal additional functionality. The conversion from Python 2 to Python 3 started in mid-2015, about 5 years before the anticipated deprecation of Python 2 in 2020. Most users should consider transitioning to the 3.0 release.



    [2] See Simon Tatham's excellent bug reporting tutorial: http://www.chiark.greenend.org.uk/~sgtatham/bugs.html .

    [4] Debian's stable releases are named after characters in the Toy Story movie.

    [5] Epydoc is a Python code documentation tool. See http://epydoc.sourceforge.net/.

    [7] Tests are implemented using Python's unit test framework. See http://docs.python.org/lib/module-unittest.html.

    Chapter 2. Basic Concepts

    General Architecture

    Cedar Backup is architected as a Python package (library) and a single executable (a Python script). The Python package provides both application-specific code and general utilities that can be used by programs other than Cedar Backup. It also includes modules that can be used by third parties to extend Cedar Backup or provide related functionality.

    The cback3 script is designed to run as root, since otherwise it's difficult to back up system directories or write to the CD/DVD device. However, pains are taken to use the backup user's effective user id (specified in configuration) when appropriate. Note: this does not mean that cback3 runs setuid [8] or setgid. However, all files on disk will be owned by the backup user, and all rsh-based network connections will take place as the backup user.

    The cback3 script is configured via command-line options and an XML configuration file on disk. The configuration file is normally stored in /etc/cback3.conf, but this path can be overridden at runtime. See Chapter 5, Configuration for more information on how Cedar Backup is configured.

    Warning

    You should be aware that backups to CD/DVD media can probably be read by any user who has permission to mount the CD/DVD writer. If you intend to leave the backup disc in the drive at all times, you may want to consider this when setting up device permissions on your machine. See also the section called “Encrypt Extension”.

    Data Recovery

    Cedar Backup does not include any facility to restore backups. Instead, it assumes that the administrator (using the procedures and references in Appendix C, Data Recovery) can handle the task of restoring their own system, using the standard system tools at hand.

    If I were to maintain recovery code in Cedar Backup, I would almost certainly end up in one of two situations. Either Cedar Backup would only support simple recovery tasks, and those via an interface a lot like that of the underlying system tools; or Cedar Backup would have to include a hugely complicated interface to support more specialized (and hence useful) recovery tasks like restoring individual files as of a certain point in time. In either case, I would end up trying to maintain critical functionality that would be rarely used, and hence would also be rarely tested by end-users. I am uncomfortable asking anyone to rely on functionality that falls into this category.

    My primary goal is to keep the Cedar Backup codebase as simple and focused as possible. I hope you can understand how the choice of providing documentation, but not code, seems to strike the best balance between managing code complexity and providing the functionality that end-users need.

    Cedar Backup Pools

    There are two kinds of machines in a Cedar Backup pool. One machine (the master) has a CD or DVD writer on it and writes the backup to disc. The others (clients) collect data to be written to disc by the master. Collectively, the master and client machines in a pool are called peer machines.

    Cedar Backup has been designed primarily for situations where there is a single master and a set of other clients that the master interacts with. However, it will just as easily work for a single machine (a backup pool of one) and in fact more users seem to use it like this than any other way.

    The Backup Process

    The Cedar Backup backup process is structured in terms of a set of decoupled actions which execute independently (based on a schedule in cron) rather than through some highly coordinated flow of control.

    This design decision has both positive and negative consequences. On the one hand, the code is much simpler and can choose to simply abort or log an error if its expectations are not met. On the other hand, the administrator must coordinate the various actions during initial set-up. See the section called “Coordination between Master and Clients” (later in this chapter) for more information on this subject.

    A standard backup run consists of four steps (actions), some of which execute on the master machine, and some of which execute on one or more client machines. These actions are: collect, stage, store and purge.

    In general, more than one action may be specified on the command-line. If more than one action is specified, then actions will be taken in a sensible order (generally collect, stage, store, purge). A special all action is also allowed, which implies all of the standard actions in the same sensible order.

    The cback3 command also supports several actions that are not part of the standard backup run and cannot be executed along with any other actions. These actions are validate, initialize and rebuild. All of the various actions are discussed further below.

    See Chapter 5, Configuration for more information on how a backup run is configured.

    The Collect Action

    The collect action is the first action in a standard backup run. It executes on both master and client nodes. Based on configuration, this action traverses the peer's filesystem and gathers files to be backed up. Each configured high-level directory is collected up into its own tar file in the collect directory. The tarfiles can either be uncompressed (.tar) or compressed with either gzip (.tar.gz) or bzip2 (.tar.bz2).
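
    In outline, the per-directory archiving step works like Python's standard tarfile module. The following is a simplified sketch for illustration only; the function name and arguments are hypothetical, not Cedar Backup's actual API:

```python
import os
import tarfile

def collect_directory(source_dir, collect_dir, compress="gz"):
    """Archive one configured directory into the collect directory.

    compress is "" for an uncompressed .tar, "gz" for .tar.gz,
    or "bz2" for .tar.bz2.
    """
    suffix = ".tar" + ("." + compress if compress else "")
    name = os.path.basename(os.path.abspath(source_dir))
    tarball = os.path.join(collect_dir, name + suffix)
    with tarfile.open(tarball, "w:" + compress) as tar:
        tar.add(source_dir, arcname=name)  # one tar file per high-level directory
    return tarball
```

    Passing compress="bz2" would yield a .tar.bz2 archive, mirroring the three formats described above.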

    There are three supported collect modes: daily, weekly and incremental. Directories configured for daily backups are backed up every day. Directories configured for weekly backups are backed up on the first day of the week. Directories configured for incremental backups are traversed every day, but only the files which have changed (based on a saved-off SHA hash) are actually backed up.

    Collect configuration also allows for a variety of ways to filter files and directories out of the backup. For instance, administrators can configure an ignore indicator file [9] or specify absolute paths or filename patterns [10] to be excluded. You can even configure a backup link farm rather than explicitly listing files and directories in configuration.

    This action is optional on the master. You only need to configure and execute the collect action on the master if you have data to back up on that machine. If you plan to use the master only as a consolidation point to collect data from other machines, then there is no need to execute the collect action there. If you run the collect action on the master, it behaves the same there as anywhere else, and you have to stage the master's collected data just like any other client (typically by configuring a local peer in the stage action).

    The Stage Action

    The stage action is the second action in a standard backup run. It executes on the master peer node. The master works down the list of peers in its backup pool and stages (copies) the collected backup files from each of them into a daily staging directory by peer name.

    For the purposes of this action, the master node can be configured to treat itself as a client node. If you intend to back up data on the master, configure the master as a local peer. Otherwise, just configure each of the clients as a remote peer.

    Local and remote client peers are treated differently. Local peer collect directories are assumed to be accessible via normal copy commands (i.e. on a mounted filesystem) while remote peer collect directories are accessed via an RSH-compatible command such as ssh.
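
    The difference can be sketched as follows. The command construction is illustrative (paths and the helper name are hypothetical), but it captures the local-copy versus remote-shell distinction:

```python
def stage_command(peer_name, collect_dir, staging_dir, remote_user=None):
    """Build the copy command used to stage one peer's collect directory.

    Local peers are reachable through the filesystem, so a plain copy
    works; remote peers are pulled over an RSH-compatible transport.
    """
    if remote_user is None:
        # local peer: collect directory is on a mounted filesystem
        return ["cp", "-r", collect_dir, staging_dir]
    # remote peer: copy over ssh (scp here; other rsh-style tools also work)
    source = "%s@%s:%s" % (remote_user, peer_name, collect_dir)
    return ["scp", "-r", source, staging_dir]
```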

    If a given peer is not ready to be staged, the stage process will log an error, abort the backup for that peer, and then move on to its other peers. This way, one broken peer cannot break a backup for other peers which are up and running.

    Keep in mind that Cedar Backup is flexible about what actions must be executed as part of a backup. If you would prefer, you can stop the backup process at this step, and skip the store step. In this case, the staged directories will represent your backup rather than a disc.

    Note

    Directories collected by another process can be staged by Cedar Backup. If the file cback.collect exists in a collect directory when the stage action is taken, then that directory will be staged.

    The Store Action

    The store action is the third action in a standard backup run. It executes on the master peer node. The master machine determines the location of the current staging directory, and then writes the contents of that staging directory to disc. After the contents of the directory have been written to disc, an optional validation step ensures that the write was successful.

    If the backup is running on the first day of the week, if the drive does not support multisession discs, or if the --full option is passed to the cback3 command, the disc will be rebuilt from scratch. Otherwise, a new ISO session will be added to the disc each day the backup runs.

    This action is entirely optional. If you would prefer to just stage backup data from a set of peers to a master machine, and have the staged directories represent your backup rather than a disc, this is fine.

    Warning

    The store action is not supported on the Mac OS X (darwin) platform. On that platform, the automount function of the Finder interferes significantly with Cedar Backup's ability to mount and unmount media and write to the CD or DVD hardware. The Cedar Backup writer and image functionality works on this platform, but the effort required to fight the operating system about who owns the media and the device makes it nearly impossible to execute the store action successfully.

    The Purge Action

    The purge action is the fourth and final action in a standard backup run. It executes on both the master and client peer nodes. Configuration specifies how long to retain files in certain directories, and older files and empty directories are purged.

    Typically, collect directories are purged daily, and stage directories are purged weekly or slightly less often (if a disc gets corrupted, older backups may still be available on the master). Some users also choose to purge the configured working directory (which is used for temporary files) to eliminate any leftover files which might have resulted from changes to configuration.
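
    The retention rule amounts to removing files older than a cutoff and then pruning empty directories. A minimal sketch, not Cedar Backup's actual implementation:

```python
import os
import time

def purge_old_files(directory, retain_days):
    """Remove files older than retain_days, then prune empty directories."""
    cutoff = time.time() - retain_days * 24 * 60 * 60
    for root, dirs, files in os.walk(directory, topdown=False):
        for name in files:
            path = os.path.join(root, name)
            if os.path.getmtime(path) < cutoff:
                os.remove(path)
        for name in dirs:
            path = os.path.join(root, name)
            if not os.listdir(path):  # prune directories left empty
                os.rmdir(path)
```

    Walking bottom-up (topdown=False) ensures that directories emptied by the purge are themselves removed on the same pass.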

    The All Action

    The all action is a pseudo-action which causes all of the actions in a standard backup run to be executed together in order. It cannot be combined with any other actions on the command line.

    Extensions cannot be executed as part of the all action. If you need to execute an extended action, you must specify the other actions you want to run individually on the command line. [11]

    The all action does not have its own configuration. Instead, it relies on the individual configuration sections for all of the other actions.

    The Validate Action

    The validate action is used to validate configuration on a particular peer node, either master or client. It cannot be combined with any other actions on the command line.

    The validate action checks that the configuration file can be found, that the configuration file is valid, and that certain portions of the configuration file make sense (for instance, making sure that specified users exist, directories are readable and writable as necessary, etc.).

    The Initialize Action

    The initialize action is used to initialize media for use with Cedar Backup. This is an optional step. By default, Cedar Backup does not need to use initialized media and will write to whatever media exists in the writer device.

    However, if the check media store configuration option is set to true, Cedar Backup will check the media before writing to it and will error out if the media has not been initialized.

    Initializing the media consists of writing a mostly-empty image using a known media label (the media label will begin with CEDAR BACKUP).

    Note that only rewritable media (CD-RW, DVD+RW) can be initialized. It doesn't make any sense to initialize media that cannot be rewritten (CD-R, DVD+R), since Cedar Backup would then not be able to use that media for a backup. You can still configure Cedar Backup to check non-rewritable media; in this case, the check will also pass if the media is apparently unused (i.e. has no media label).

    The Rebuild Action

    The rebuild action is an exception-handling action that is executed independent of a standard backup run. It cannot be combined with any other actions on the command line.

    The rebuild action attempts to rebuild this week's disc from any remaining unpurged staging directories. Typically, it is used to make a copy of a backup, replace lost or damaged media, or to switch to new media mid-week for some other reason.

    To decide what data to write to disc again, the rebuild action looks back and finds the first day of the current week. Then, it finds any remaining staging directories between that date and the current date. If any staging directories are found, they are all written to disc in one big ISO session.
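
    The date arithmetic can be sketched like this, assuming a mapping from dates to staging directories and a week starting on Monday (Cedar Backup's actual starting day is configurable):

```python
import datetime

def rebuild_candidates(staging_dirs, today=None):
    """Return this week's staging directories, from the first day of the
    week through today.  staging_dirs maps datetime.date to a directory
    path (a simplification of the real on-disk layout)."""
    today = today or datetime.date.today()
    start_of_week = today - datetime.timedelta(days=today.weekday())  # Monday
    return [path for day, path in sorted(staging_dirs.items())
            if start_of_week <= day <= today]
```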

    The rebuild action does not have its own configuration. It relies on configuration for other actions, especially the store action.

    Coordination between Master and Clients

    Unless you are using Cedar Backup to manage a pool of one, you will need to set up some coordination between your clients and master to make everything work properly. This coordination isn't difficult — it mostly consists of making sure that operations happen in the right order — but some users are surprised that it is required and want to know why Cedar Backup can't just “take care of it for me”.

    Essentially, each client must finish collecting all of its data before the master begins staging it, and the master must finish staging data from a client before that client purges its collected data. Administrators may need to experiment with the time between the collect and purge entries so that the master has enough time to stage data before it is purged.

    Managed Backups

    Cedar Backup also supports an optional feature called the managed backup. This feature is intended for use with remote clients where cron is not available.

    When managed backups are enabled, managed clients must still be configured as usual. However, rather than using a cron job on the client to execute the collect and purge actions, the master executes these actions on the client via a remote shell.

    To make this happen, first set up one or more managed clients in Cedar Backup configuration. Then, invoke Cedar Backup with the --managed command-line option. Whenever Cedar Backup invokes an action locally, it will invoke the same action on each of the managed clients.

    Technically, this feature works for any client, not just clients that don't have cron available. Used this way, it can simplify the setup process, because cron only has to be configured on the master. For some users, that may be motivation enough to use this feature all of the time.

    However, please keep in mind that this feature depends on a stable network. If your network connection drops, your backup will be interrupted and will not be complete. It is even possible that some of the Cedar Backup metadata (like incremental backup state) will be corrupted. The risk is not high, but it is something you need to be aware of if you choose to use this optional feature.

    Media and Device Types

    Cedar Backup is focused around writing backups to CD or DVD media using a standard SCSI or IDE writer. In Cedar Backup terms, the disc itself is referred to as the media, and the CD/DVD drive is referred to as the device or sometimes the backup device. [12]

    When using a new enough backup device, a new multisession ISO image [13] is written to the media on the first day of the week, and then additional multisession images are added to the media each day that Cedar Backup runs. This way, the media is complete and usable at the end of every backup run, but a single disc can be used all week long. If your backup device does not support multisession images — which is really unusual today — then a new ISO image will be written to the media each time Cedar Backup runs (and you should probably confine yourself to the daily backup mode to avoid losing data).

    Cedar Backup currently supports four different kinds of CD media:

    cdr-74

    74-minute non-rewritable CD media

    cdrw-74

    74-minute rewritable CD media

    cdr-80

    80-minute non-rewritable CD media

    cdrw-80

    80-minute rewritable CD media

    I have chosen to support just these four types of CD media because they seem to be the most standard of the various types commonly sold in the U.S. as of this writing (early 2005). If you regularly use an unsupported media type and would like Cedar Backup to support it, send me information about the capacity of the media in megabytes (MB) and whether it is rewritable.

    Cedar Backup also supports two kinds of DVD media:

    dvd+r

    Single-layer non-rewritable DVD+R media

    dvd+rw

    Single-layer rewritable DVD+RW media

    The underlying growisofs utility does support other kinds of media (including DVD-R, DVD-RW and Blu-ray) which work somewhat differently than standard DVD+R and DVD+RW media. I don't support these other kinds of media because I haven't had any opportunity to work with them. The same goes for dual-layer media of any type.

    Incremental Backups

    Cedar Backup supports three different kinds of backups for individual collect directories. These are daily, weekly and incremental backups. Directories using the daily mode are backed up every day. Directories using the weekly mode are only backed up on the first day of the week, or when the --full option is used. Directories using the incremental mode are always backed up on the first day of the week (like a weekly backup), but after that only the files which have changed are actually backed up on a daily basis.

    In Cedar Backup, incremental backups are not based on date, but are instead based on saved checksums, one for each backed-up file. When a full backup is run, Cedar Backup gathers a checksum value [14] for each backed-up file. The next time an incremental backup is run, Cedar Backup checks its list of file/checksum pairs for each file that might be backed up. If the file's checksum value does not match the saved value, or if the file does not appear in the list of file/checksum pairs, then it will be backed up and a new checksum value will be placed into the list. Otherwise, the file will be ignored and the checksum value will be left unchanged.

    Cedar Backup stores the file/checksum pairs in .sha files in its working directory, one file per configured collect directory. The mappings in these files are reset at the start of the week or when the --full option is used. Because these files are used for an entire week, you should never purge the working directory more frequently than once per week.
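
    Conceptually, the decision for each file looks something like this (a simplified sketch; Cedar Backup's real implementation manages the .sha files and weekly resets described above):

```python
import hashlib

def files_to_backup(paths, saved):
    """Decide which files need backup, given a dict of saved
    path -> SHA digest pairs; updates the dict in place."""
    changed = []
    for path in paths:
        with open(path, "rb") as f:
            digest = hashlib.sha1(f.read()).hexdigest()
        if saved.get(path) != digest:  # new, or modified since last run
            changed.append(path)
            saved[path] = digest
    return changed
```

    On the first (full) run the saved dict is empty, so every file is backed up; on subsequent runs only new or modified files are selected.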

    Extensions

    Imagine that there is a third party developer who understands how to back up a certain kind of database repository. This third party might want to integrate his or her specialized backup into the Cedar Backup process, perhaps thinking of the database backup as a sort of collect step.

    Prior to Cedar Backup version 2, any such integration would have been completely independent of Cedar Backup itself. The external backup functionality would have had to maintain its own configuration and would not have had access to any Cedar Backup configuration.

    Starting with version 2, Cedar Backup allows extensions to the backup process. An extension is an action that isn't part of the standard backup process (i.e. not collect, stage, store or purge), but can be executed by Cedar Backup when properly configured.

    Extension authors implement an action process function with a certain interface, and are allowed to add their own sections to the Cedar Backup configuration file, so that all backup configuration can be centralized. Then, the action process function is associated with an action name which can be executed from the cback3 command line like any other action.
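
    A skeletal extension module might look like the following. The three-argument function signature reflects the interface described in Appendix A, while the module and action names here are hypothetical:

```python
# A hypothetical extension module, e.g. "dbbackup.py".
import logging

logger = logging.getLogger(__name__)

def executeAction(configPath, options, config):
    """Back up a (hypothetical) database as an extended Cedar Backup action.

    Raises an exception on failure; returning normally indicates success.
    """
    logger.info("Executing the dbbackup extended action.")
    # ... read extension-specific settings from config, dump the database
    # into the configured collect directory, etc. ...
```

    Configuration would then associate an action name (say, "dbbackup") with this function, so it can be run from the cback3 command line like any other action.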

    Hopefully, as the Cedar Backup user community grows, users will contribute their own extensions back to the community. Well-written general-purpose extensions will be accepted into the official codebase.

    Note

    Users should see Chapter 5, Configuration for more information on how extensions are configured, and Chapter 6, Official Extensions for details on all of the officially-supported extensions.

    Developers may be interested in Appendix A, Extension Architecture Interface.



    [9] Analogous to .cvsignore in CVS

    [10] In terms of Python regular expressions

    [11] Some users find this surprising, because extensions are configured with sequence numbers. I did it this way because I felt that running extensions as part of the all action would sometimes result in surprising behavior. I am not planning to change the way this works.

    [12] My original backup device was an old Sony CRX140E 4X CD-RW drive. It has since died, and I currently develop using a Lite-On 1673S DVDRW drive.

    [13] An ISO image is the standard way of creating a filesystem to be copied to a CD or DVD. It is essentially a filesystem-within-a-file and many UNIX operating systems can actually mount ISO image files just like hard drives, floppy disks or actual CDs. See Wikipedia for more information: http://en.wikipedia.org/wiki/ISO_image.

    [14] The checksum is actually an SHA cryptographic hash. See Wikipedia for more information: http://en.wikipedia.org/wiki/SHA-1.

    Chapter 3. Installation

    Background

    There are two different ways to install Cedar Backup. The easiest way is to install the pre-built Debian packages. This method is painless and ensures that all of the correct dependencies are available, etc.

    If you are running a Linux distribution other than Debian or you are running some other platform like FreeBSD or Mac OS X, then you must use the Python source distribution to install Cedar Backup. When using this method, you need to manage all of the dependencies yourself.

    Installing on a Debian System

    The easiest way to install Cedar Backup onto a Debian system is by using a tool such as apt-get or aptitude.

    If you are running a Debian release which contains Cedar Backup, you can use your normal Debian mirror as an APT data source. (The Debian jessie release is the first release to contain Cedar Backup 3.) Otherwise, you need to install from the Cedar Solutions APT data source. [15] To do this, add the Cedar Solutions APT data source to your /etc/apt/sources.list file.

    After you have configured the proper APT data source, install Cedar Backup using this set of commands:

    $ apt-get update
    $ apt-get install cedar-backup3 cedar-backup3-doc
          

    Several of the Cedar Backup dependencies are listed as recommended rather than required. If you are installing Cedar Backup on a master machine, you must install some or all of the recommended dependencies, depending on which actions you intend to execute. The stage action normally requires ssh, and the store action requires eject and either cdrecord/mkisofs or dvd+rw-tools. Clients must also install some sort of ssh server if a remote master will collect backups from them.

    If you would prefer, you can also download the .deb files and install them by hand with a tool such as dpkg. You can find these files in the Cedar Solutions APT source.

    In either case, once the package has been installed, you can proceed to configuration as described in Chapter5, Configuration.

    Note

    The Debian package-management tools must generally be run as root. It is safe to install Cedar Backup to a non-standard location and run it as a non-root user. However, to do this, you must install the source distribution instead of the Debian package.

    Installing from Source

    On platforms other than Debian, Cedar Backup is installed from a Python source distribution. [16] You will have to manage dependencies on your own.

    Tip

    Many UNIX-like distributions provide an automatic or semi-automatic way to install packages like the ones Cedar Backup requires (think RPMs for Mandrake or RedHat, Gentoo's Portage system, the Fink project for Mac OS X, or the BSD ports system). If you are not sure how to install these packages on your system, you might want to check out AppendixB, Dependencies. This appendix provides links to upstream source packages, plus as much information as I have been able to gather about packages for non-Debian platforms.

    Installing Dependencies

    Cedar Backup requires a number of external packages in order to function properly. Before installing Cedar Backup, you must make sure that these dependencies are met.

    Cedar Backup is written in Python 3 and requires version 3.4 or greater of the language.

    Additionally, remote client peer nodes must be running an RSH-compatible server, such as the ssh server, and master nodes must have an RSH-compatible client installed if they need to connect to remote peer machines.

    Master machines also require several other system utilities, most having to do with writing and validating CD/DVD media. On master machines, you must make sure that these utilities are available if you want to run the store action:

    • mkisofs

    • eject

    • mount

    • umount

    • volname

    Then, you need this utility if you are writing CD media:

    • cdrecord

    or these utilities if you are writing DVD media:

    • growisofs

    All of these utilities are common and are easy to find for almost any UNIX-like operating system.
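    If you want a quick sanity check before relying on the store action, a short Python snippet (Cedar Backup itself is written in Python) can confirm that these utilities are on the PATH. This is just an illustration; the utility names below restate the lists above:

    ```python
    import shutil

    def missing_utilities(names):
        """Return the subset of utility names not found on the PATH."""
        return [name for name in names if shutil.which(name) is None]

    # Utilities needed for the store action; add cdrecord for CD media
    # or growisofs for DVD media.
    store_utils = ["mkisofs", "eject", "mount", "umount", "volname"]
    print(missing_utilities(store_utils))
    ```

    Anything printed by the snippet is a utility you still need to install before the store action will work.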

    Installing the Source Package

    Python source packages are fairly easy to install. They are distributed as .tar.gz files which contain Python source code, a manifest and an installation script called setup.py.

    Once you have downloaded the source package from the Cedar Solutions website, [15] untar it:

    $ zcat CedarBackup3-3.0.0.tar.gz | tar xvf -
             

    This will create a directory called (in this case) CedarBackup3-3.0.0. The version number in the directory will always match the version number in the filename.

    If you have root access and want to install the package to the standard Python location on your system, then you can install the package in two simple steps:

    $ cd CedarBackup3-3.0.0
    $ python3 setup.py install
             

    Make sure that you are using Python 3.4 or better to execute setup.py.

    You may also wish to run the unit tests before actually installing anything. Run them like so:

    $ python3 util/test.py
             

    If any unit test reports a failure on your system, please email me the output from the unit test, so I can fix the problem. [17] This is particularly important for non-Linux platforms where I do not have a test system available to me.

    Some users might want to choose a different install location or change other install parameters. To get more information about how setup.py works, use the --help option:

    $ python3 setup.py --help
    $ python3 setup.py install --help
             

    In any case, once the package has been installed, you can proceed to configuration as described in Chapter5, Configuration.

    Chapter4.Command Line Tools

    Overview

    Cedar Backup comes with three command-line programs: cback3, cback3-amazons3-sync, and cback3-span.

    The cback3 command is the primary command line interface and the only Cedar Backup program that most users will ever need.

    The cback3-amazons3-sync tool is used for synchronizing entire directories of files up to an Amazon S3 cloud storage bucket, outside of the normal Cedar Backup process.

    Users who have a lot of data to back up — more than will fit on a single CD or DVD — can use the interactive cback3-span tool to split their data between multiple discs.

    The cback3 command

    Introduction

    Cedar Backup's primary command-line interface is the cback3 command. It controls the entire backup process.

    Syntax

    The cback3 command has the following syntax:

     Usage: cback3 [switches] action(s)
    
     The following switches are accepted:
    
       -h, --help         Display this usage/help listing
       -V, --version      Display version information
       -b, --verbose      Print verbose output as well as logging to disk
       -q, --quiet        Run quietly (display no output to the screen)
       -c, --config       Path to config file (default: /etc/cback3.conf)
       -f, --full         Perform a full backup, regardless of configuration
       -M, --managed      Include managed clients when executing actions
       -N, --managed-only Include ONLY managed clients when executing actions
       -l, --logfile      Path to logfile (default: /var/log/cback3.log)
       -o, --owner        Logfile ownership, user:group (default: root:adm)
       -m, --mode         Octal logfile permissions mode (default: 640)
       -O, --output       Record some sub-command (i.e. cdrecord) output to the log
       -d, --debug        Write debugging information to the log (implies --output)
       -s, --stack        Dump a Python stack trace instead of swallowing exceptions
       -D, --diagnostics  Print runtime diagnostics to the screen and exit
    
     The following actions may be specified:
    
       all                Take all normal actions (collect, stage, store, purge)
       collect            Take the collect action
       stage              Take the stage action
       store              Take the store action
       purge              Take the purge action
       rebuild            Rebuild "this week's" disc if possible
       validate           Validate configuration only
       initialize         Initialize media for use with Cedar Backup
    
     You may also specify extended actions that have been defined in
     configuration.
    
     You must specify at least one action to take.  More than one of
     the "collect", "stage", "store" or "purge" actions and/or
     extended actions may be specified in any arbitrary order; they
     will be executed in a sensible order.  The "all", "rebuild",
     "validate", and "initialize" actions may not be combined with
     other actions.
             

    Note that the all action only executes the standard four actions. It never executes any of the configured extensions. [18]

    Switches

    -h, --help

    Display usage/help listing.

    -V, --version

    Display version information.

    -b, --verbose

    Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen.

    -q, --quiet

    Run quietly (display no output to the screen).

    -c, --config

    Specify the path to an alternate configuration file. The default configuration file is /etc/cback3.conf.

    -f, --full

    Perform a full backup, regardless of configuration. For the collect action, this means that any existing information related to incremental backups will be ignored and rewritten; for the store action, this means that a new disc will be started.

    -M, --managed

    Include managed clients when executing actions. If the action being executed is listed as a managed action for a managed client, execute the action on that client after executing the action locally.

    -N, --managed-only

    Include only managed clients when executing actions. If the action being executed is listed as a managed action for a managed client, execute the action on that client — but do not execute the action locally.

    -l, --logfile

    Specify the path to an alternate logfile. The default logfile is /var/log/cback3.log.

    -o, --owner

    Specify the ownership of the logfile, in the form user:group. The default ownership is root:adm, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback3 command is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values.

    -m, --mode

    Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is 0640 (-rw-r-----). This value will only be used when creating a new logfile. If the logfile already exists when the cback3 command is executed, it will retain its existing ownership and mode.
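    The create-only-if-missing behavior described for --owner and --mode can be sketched in a few lines of Python. This is an illustration of the documented semantics, not Cedar Backup's actual implementation:

    ```python
    import os

    def ensure_logfile(path, mode=0o640):
        """Create the logfile with the given mode only if it does not
        already exist; an existing logfile keeps its current ownership
        and mode, matching the documented --owner/--mode behavior."""
        if not os.path.exists(path):
            # O_CREAT|O_EXCL guarantees we only ever create a new file here
            fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, mode)
            os.close(fd)
            os.chmod(path, mode)  # apply the mode regardless of the umask
    ```

    Ownership changes would follow the same pattern, applied only when the file is first created and using user and group names rather than numeric ids.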

    -O, --output

    Record some sub-command output to the logfile. When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference.

    -d, --debug

    Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the --output option, as well.

    -s, --stack

    Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report.

    -D, --diagnostics

    Display runtime diagnostic information and then exit. This diagnostic information is often useful when filing a bug report.

    Actions

    You can find more information about the various actions in the section called “The Backup Process” (in Chapter2, Basic Concepts). In general, you may specify any combination of the collect, stage, store or purge actions, and the specified actions will be executed in a sensible order. Or, you can specify one of the all, rebuild, validate, or initialize actions (but these actions may not be combined with other actions).

    If you have configured any Cedar Backup extensions, then the actions associated with those extensions may also be specified on the command line. If you specify any other actions along with an extended action, the actions will be executed in a sensible order per configuration. The all action never executes extended actions, however.

    The cback3-amazons3-sync command

    Introduction

    The cback3-amazons3-sync tool is used for synchronizing entire directories of files up to an Amazon S3 cloud storage bucket, outside of the normal Cedar Backup process.

    This might be a good option for some types of data, as long as you understand the limitations around retrieving previous versions of objects that get modified or deleted as part of a sync. S3 does support versioning, but it won't be quite as easy to get at those previous versions as with an explicit incremental backup like cback3 provides. Cedar Backup does not provide any tooling that would help you retrieve previous versions.

    The underlying functionality relies on the AWS CLI toolset. Before you use this tool, you need to set up your Amazon S3 account and configure AWS CLI as detailed in Amazon's setup guide. The aws command will be executed as the same user that is executing the cback3-amazons3-sync command, so make sure you configure it as the proper user. (This is different from the amazons3 extension, which is designed to execute as root and switches over to the configured backup user to execute AWS CLI commands.)

    Syntax

    The cback3-amazons3-sync command has the following syntax:

     Usage: cback3-amazons3-sync [switches] sourceDir s3bucketUrl
    
     Cedar Backup Amazon S3 sync tool.
    
     This Cedar Backup utility synchronizes a local directory to an Amazon S3
     bucket.  After the sync is complete, a validation step is taken.  An
     error is reported if the contents of the bucket do not match the
     source directory, or if the indicated size for any file differs.
     This tool is a wrapper over the AWS CLI command-line tool.
    
     The following arguments are required:
    
       sourceDir            The local source directory on disk (must exist)
       s3BucketUrl          The URL to the target Amazon S3 bucket
    
     The following switches are accepted:
    
       -h, --help           Display this usage/help listing
       -V, --version        Display version information
       -b, --verbose        Print verbose output as well as logging to disk
       -q, --quiet          Run quietly (display no output to the screen)
       -l, --logfile        Path to logfile (default: /var/log/cback3.log)
       -o, --owner          Logfile ownership, user:group (default: root:adm)
       -m, --mode           Octal logfile permissions mode (default: 640)
       -O, --output         Record some sub-command (i.e. aws) output to the log
       -d, --debug          Write debugging information to the log (implies --output)
       -s, --stack          Dump Python stack trace instead of swallowing exceptions
       -D, --diagnostics    Print runtime diagnostics to the screen and exit
       -v, --verifyOnly     Only verify the S3 bucket contents, do not make changes
       -w, --ignoreWarnings Ignore warnings about problematic filename encodings
    
     Typical usage would be something like:
    
       cback3-amazons3-sync /home/myuser s3://example.com-backup/myuser
    
     This will sync the contents of /home/myuser into the indicated bucket.
             
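    The validation step mentioned in the usage text — comparing bucket contents and file sizes against the source directory — can be sketched roughly as follows. The bucket listing here is a hypothetical dict of relative path to size, standing in for whatever listing the AWS CLI actually produces:

    ```python
    import os

    def validate_sync(source_dir, bucket_listing):
        """Report files missing from the bucket listing or whose sizes
        differ; bucket_listing is a hypothetical {relpath: size} dict."""
        errors = []
        for root, _, files in os.walk(source_dir):
            for name in files:
                local = os.path.join(root, name)
                rel = os.path.relpath(local, source_dir)
                size = os.path.getsize(local)
                if rel not in bucket_listing:
                    errors.append("missing from bucket: %s" % rel)
                elif bucket_listing[rel] != size:
                    errors.append("size mismatch: %s" % rel)
        return errors
    ```

    An empty result means the source directory and the listing agree; anything else would be reported as an error, as the usage text describes.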

    Switches

    -h, --help

    Display usage/help listing.

    -V, --version

    Display version information.

    -b, --verbose

    Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen.

    -q, --quiet

    Run quietly (display no output to the screen).

    -l, --logfile

    Specify the path to an alternate logfile. The default logfile is /var/log/cback3.log.

    -o, --owner

    Specify the ownership of the logfile, in the form user:group. The default ownership is root:adm, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback3-amazons3-sync command is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values.

    -m, --mode

    Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is 0640 (-rw-r-----). This value will only be used when creating a new logfile. If the logfile already exists when the cback3-amazons3-sync command is executed, it will retain its existing ownership and mode.

    -O, --output

    Record some sub-command output to the logfile. When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference.

    -d, --debug

    Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the --output option, as well.

    -s, --stack

    Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report.

    -D, --diagnostics

    Display runtime diagnostic information and then exit. This diagnostic information is often useful when filing a bug report.

    -v, --verifyOnly

    Only verify the S3 bucket contents against the directory on disk. Do not make any changes to the S3 bucket or transfer any files. This is intended as a quick check to see whether the sync is up-to-date.

    Although no files are transferred, the tool will still execute the source filename encoding check, discussed below along with --ignoreWarnings.

    -w, --ignoreWarnings

    The AWS CLI S3 sync process is very picky about filename encoding. Files that the Linux filesystem handles without complaint can cause problems in S3 if the filename cannot be encoded properly in your configured locale. As of this writing, such filenames will cause the sync process to abort without transferring all files as expected.

    To avoid confusion, the cback3-amazons3-sync tool tries to guess which files in the source directory will cause problems, and refuses to execute the AWS CLI S3 sync if any problematic files exist. If you'd rather proceed anyway, use --ignoreWarnings.

    If problematic files are found, then you have basically two options: either correct your locale (for instance, if you have set LANG=C) or rename the file so it can be encoded properly in your locale. The error messages will tell you the expected encoding (from your locale) and the actual detected encoding for the filename.
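    The underlying check is essentially whether each filename can be represented in the locale's encoding; a minimal sketch:

    ```python
    def encodable(filename, encoding):
        """True if the filename can be represented in the given encoding."""
        try:
            filename.encode(encoding)
            return True
        except UnicodeEncodeError:
            return False

    # An accented name fails under LANG=C (ASCII) but is fine under UTF-8:
    print(encodable("r\u00e9sum\u00e9.txt", "ascii"))  # False
    print(encodable("r\u00e9sum\u00e9.txt", "utf-8"))  # True
    ```

    Renaming the file to plain ASCII, or switching to a UTF-8 locale, makes the check pass.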

    The cback3-span command

    Introduction

    Cedar Backup was designed — and is still primarily focused — around weekly backups to a single CD or DVD. Most users who back up more data than fits on a single disc seem to stop their backup process at the stage step, using Cedar Backup as an easy way to collect data.

    However, some users have expressed a need to write these large kinds of backups to disc — if not every day, then at least occasionally. The cback3-span tool was written to meet those needs. If you have staged more data than fits on a single CD or DVD, you can use cback3-span to split that data between multiple discs.

    cback3-span is not a general-purpose disc-splitting tool. It is a specialized program that requires Cedar Backup configuration to run. All it can do is read Cedar Backup configuration, find any staging directories that have not yet been written to disc, and split the files in those directories between discs.

    cback3-span accepts many of the same command-line options as cback3, but must be run interactively. It cannot be run from cron. This is intentional. It is intended to be a useful tool, not a new part of the backup process (that is the purpose of an extension).

    In order to use cback3-span, you must configure your backup such that the largest individual backup file can fit on a single disc. The command will not split a single file onto more than one disc. All it can do is split large directories onto multiple discs. Files in those directories will be distributed among the discs arbitrarily, so that space is utilized as efficiently as possible.

    Syntax

    The cback3-span command has the following syntax:

     Usage: cback3-span [switches]
    
     Cedar Backup 'span' tool.
    
     This Cedar Backup utility spans staged data between multiple discs.
     It is a utility, not an extension, and requires user interaction.
    
     The following switches are accepted, mostly to set up underlying
     Cedar Backup functionality:
    
       -h, --help     Display this usage/help listing
       -V, --version  Display version information
       -b, --verbose  Print verbose output as well as logging to disk
       -c, --config   Path to config file (default: /etc/cback3.conf)
       -l, --logfile  Path to logfile (default: /var/log/cback3.log)
       -o, --owner    Logfile ownership, user:group (default: root:adm)
       -m, --mode     Octal logfile permissions mode (default: 640)
       -O, --output   Record some sub-command (i.e. cdrecord) output to the log
       -d, --debug    Write debugging information to the log (implies --output)
       -s, --stack    Dump a Python stack trace instead of swallowing exceptions
             

    Switches

    -h, --help

    Display usage/help listing.

    -V, --version

    Display version information.

    -b, --verbose

    Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen.

    -c, --config

    Specify the path to an alternate configuration file. The default configuration file is /etc/cback3.conf.

    -l, --logfile

    Specify the path to an alternate logfile. The default logfile is /var/log/cback3.log.

    -o, --owner

    Specify the ownership of the logfile, in the form user:group. The default ownership is root:adm, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback3-span command is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values.

    -m, --mode

    Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is 0640 (-rw-r-----). This value will only be used when creating a new logfile. If the logfile already exists when the cback3-span command is executed, it will retain its existing ownership and mode.

    -O, --output

    Record some sub-command output to the logfile. When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference. Cedar Backup uses system commands mostly for dealing with the CD/DVD recorder and its media.

    -d, --debug

    Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the --output option, as well.

    -s, --stack

    Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report.

    Using cback3-span

    As discussed above, cback3-span is an interactive command. It cannot be run from cron.

    You can typically use the default answer for most questions. The only two questions that you may not want the default answer for are the fit algorithm and the cushion percentage.

    The cushion percentage is used by cback3-span to determine what capacity to shoot for when splitting up your staging directories. A 650 MB disc does not actually hold a full 650 MB of data; it's usually more like 627 MB. The cushion percentage tells cback3-span how much overhead to reserve for the filesystem. The default of 4% is usually OK, but if you have problems you may need to increase it slightly.
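    The arithmetic behind the cushion is simple, as this sketch shows. (The tool derives its raw capacity figure from its media model, so the numbers in the sample run later in this section differ slightly from this naive calculation.)

    ```python
    def effective_capacity(capacity_mb, cushion_percent):
        """Reduce raw media capacity by the cushion percentage."""
        return capacity_mb * (1.0 - cushion_percent / 100.0)

    # With the default 4% cushion, a nominal 650 MB disc is treated
    # as holding 624 MB:
    print(effective_capacity(650.0, 4.0))  # 624.0
    ```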

    The fit algorithm tells cback3-span how it should determine which items should be placed on each disc. If you don't like the result from one algorithm, you can reject that solution and choose a different algorithm.

    The four available fit algorithms are:

    worst

    The worst-fit algorithm.

    The worst-fit algorithm proceeds through a sorted list of items (sorted from smallest to largest) until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the maximum number of items possible in its search for optimal capacity utilization. It tends to be somewhat slower than either the best-fit or alternate-fit algorithm, probably because on average it has to look at more items before completing.

    best

    The best-fit algorithm.

    The best-fit algorithm proceeds through a sorted list of items (sorted from largest to smallest) until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the minimum number of items possible in its search for optimal capacity utilization. For large lists of mixed-size items, it's not unusual to see the algorithm achieve 100% capacity utilization by including fewer than 1% of the items. Probably because it often has to look at fewer of the items before completing, it tends to be a little faster than the worst-fit or alternate-fit algorithms.

    first

    The first-fit algorithm.

    The first-fit algorithm proceeds through an unsorted list of items until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. This algorithm generally performs more poorly than the other algorithms both in terms of capacity utilization and item utilization, but can be as much as an order of magnitude faster on large lists of items because it doesn't require any sorting.

    alternate

    A hybrid algorithm that I call alternate-fit.

    This algorithm tries to balance small and large items to achieve better end-of-disk performance. Instead of just working one direction through a list, it alternately works from the start and end of a sorted list (sorted from smallest to largest), throwing away any item which causes capacity to be exceeded. The algorithm tends to be slower than the best-fit and first-fit algorithms, and slightly faster than the worst-fit algorithm, probably because of the number of items it considers on average before completing. It often achieves slightly better capacity utilization than the worst-fit algorithm, while including slightly fewer items.
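    As a concrete sketch of how these algorithms operate, here is the best-fit strategy described above, assuming each item is a simple (name, size) pair (this is an illustration, not cback3-span's actual implementation):

    ```python
    def best_fit(items, capacity):
        """Walk items sorted from largest to smallest, keeping any item
        that still fits and discarding any item that would exceed the
        remaining capacity; stop early if capacity is met exactly."""
        chosen, used = [], 0
        for name, size in sorted(items, key=lambda item: item[1], reverse=True):
            if used + size <= capacity:
                chosen.append((name, size))
                used += size
                if used == capacity:
                    break
        return chosen

    items = [("a", 500), ("b", 300), ("c", 200), ("d", 100)]
    print(best_fit(items, 650))  # [('a', 500), ('d', 100)]
    ```

    The worst-fit variant simply sorts in the other direction (smallest to largest), while first-fit skips the sort entirely, which is why it can be much faster on large lists.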

    Sample run

    Below is a log showing a sample cback3-span run.

    ================================================
               Cedar Backup 'span' tool
    ================================================
    
    This the Cedar Backup span tool.  It is used to split up staging
    data when that staging data does not fit onto a single disc.
    
    This utility operates using Cedar Backup configuration.  Configuration
    specifies which staging directory to look at and which writer device
    and media type to use.
    
    Continue? [Y/n]: 
    ===
    
    Cedar Backup store configuration looks like this:
    
       Source Directory...: /tmp/staging
       Media Type.........: cdrw-74
       Device Type........: cdwriter
       Device Path........: /dev/cdrom
       Device SCSI ID.....: None
       Drive Speed........: None
       Check Data Flag....: True
       No Eject Flag......: False
    
    Is this OK? [Y/n]: 
    ===
    
    Please wait, indexing the source directory (this may take a while)...
    ===
    
    The following daily staging directories have not yet been written to disc:
    
       /tmp/staging/2007/02/07
       /tmp/staging/2007/02/08
       /tmp/staging/2007/02/09
       /tmp/staging/2007/02/10
       /tmp/staging/2007/02/11
       /tmp/staging/2007/02/12
       /tmp/staging/2007/02/13
       /tmp/staging/2007/02/14
    
    The total size of the data in these directories is 1.00 GB.
    
    Continue? [Y/n]: 
    ===
    
    Based on configuration, the capacity of your media is 650.00 MB.
    
    Since estimates are not perfect and there is some uncertainly in
    media capacity calculations, it is good to have a "cushion",
    a percentage of capacity to set aside.  The cushion reduces the
    capacity of your media, so a 1.5% cushion leaves 98.5% remaining.
    
    What cushion percentage? [4.00]: 
    ===
    
    The real capacity, taking into account the 4.00% cushion, is 627.25 MB.
    It will take at least 2 disc(s) to store your 1.00 GB of data.
    
    Continue? [Y/n]: 
    ===
    
    Which algorithm do you want to use to span your data across
    multiple discs?
    
    The following algorithms are available:
    
       first....: The "first-fit" algorithm
       best.....: The "best-fit" algorithm
       worst....: The "worst-fit" algorithm
       alternate: The "alternate-fit" algorithm
    
    If you don't like the results you will have a chance to try a
    different one later.
    
    Which algorithm? [worst]: 
    ===
    
    Please wait, generating file lists (this may take a while)...
    ===
    
    Using the "worst-fit" algorithm, Cedar Backup can split your data
    into 2 discs.
    
    Disc 1: 246 files, 615.97 MB, 98.20% utilization
    Disc 2: 8 files, 412.96 MB, 65.84% utilization
    
    Accept this solution? [Y/n]: n
    ===
    
    Which algorithm do you want to use to span your data across
    multiple discs?
    
    The following algorithms are available:
    
       first....: The "first-fit" algorithm
       best.....: The "best-fit" algorithm
       worst....: The "worst-fit" algorithm
       alternate: The "alternate-fit" algorithm
    
    If you don't like the results you will have a chance to try a
    different one later.
    
    Which algorithm? [worst]: alternate
    ===
    
    Please wait, generating file lists (this may take a while)...
    ===
    
    Using the "alternate-fit" algorithm, Cedar Backup can split your data
    into 2 discs.
    
    Disc 1: 73 files, 627.25 MB, 100.00% utilization
    Disc 2: 181 files, 401.68 MB, 64.04% utilization
    
    Accept this solution? [Y/n]: y
    ===
    
    Please place the first disc in your backup device.
    Press return when ready.
    ===
    
    Initializing image...
    Writing image to disc...
             


    [18] Some users find this surprising, because extensions are configured with sequence numbers. I did it this way because I felt that running extensions as part of the all action would sometimes result in surprising behavior. Better to be definitive than confusing.

    Chapter5.Configuration

    Table of Contents

    Overview
    Configuration File Format
    Sample Configuration File
    Reference Configuration
    Options Configuration
    Peers Configuration
    Collect Configuration
    Stage Configuration
    Store Configuration
    Purge Configuration
    Extensions Configuration
    Setting up a Pool of One
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure your writer device.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test your backup.
    Step 9: Modify the backup cron jobs.
    Setting up a Client Peer Node
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure the master in your backup pool.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test your backup.
    Step 9: Modify the backup cron jobs.
    Setting up a Master Peer Node
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure your writer device.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test connectivity to client machines.
    Step 9: Test your backup.
    Step 10: Modify the backup cron jobs.
    Configuring your Writer Device
    Device Types
    Devices identified by device name
    Devices identified by SCSI id
    Linux Notes
    Finding your Linux CD Writer
    Mac OS X Notes
    Optimized Blanking Strategy

    Overview

    Configuring Cedar Backup is unfortunately somewhat complicated. The good news is that once you get through the initial configuration process, you'll hardly ever have to change anything. Even better, the most typical changes (i.e. adding and removing directories from a backup) are easy.

    First, familiarize yourself with the concepts in Chapter 2, Basic Concepts. In particular, be sure that you understand the differences between a master and a client. (If you only have one machine, then your machine will act as both a master and a client, and we'll refer to your setup as a pool of one.) Then, install Cedar Backup per the instructions in Chapter 3, Installation.

    Once everything has been installed, you are ready to begin configuring Cedar Backup. Look over the section called “The cback3 command” (in Chapter 4, Command Line Tools) to become familiar with the command line interface. Then, look over the section called “Configuration File Format” (below) and create a configuration file for each peer in your backup pool. To start with, create a very simple configuration file, then expand it later. Decide now whether you will store the configuration file in the standard place (/etc/cback3.conf) or in some other location.

    After you have all of the configuration files in place, configure each of your machines, following the instructions in the appropriate section below (for master, client or pool of one). Since the master and client(s) must communicate over the network, you won't be able to fully configure the master without configuring each client and vice-versa. The instructions are clear on what needs to be done.

    Configuration File Format

    Cedar Backup is configured through an XML [19] configuration file, usually called /etc/cback3.conf. The configuration file contains the following sections: reference, options, collect, stage, store, purge and extensions.

    All configuration files must contain the two general configuration sections, the reference section and the options section. Besides that, administrators need only configure actions they intend to use. For instance, on a client machine, administrators will generally only configure the collect and purge sections, while on a master machine they will have to configure all four action-related sections. [20] The extensions section is always optional and can be omitted unless extensions are in use.

    Note

    Even though the Mac OS X (darwin) filesystem is not case-sensitive, Cedar Backup configuration is generally case-sensitive on that platform, just like on all other platforms. For instance, even though the files Ken and ken might be the same on the Mac OS X filesystem, an exclusion in Cedar Backup configuration for ken will only match the file if it is actually on the filesystem with a lower-case k as its first letter. This won't surprise the typical UNIX user, but might surprise someone who's gotten into the Mac Mindset.

    Sample Configuration File

    Both the Python source distribution and the Debian package come with a sample configuration file. The Debian package includes its sample in /usr/share/doc/cedar-backup3/examples/cback3.conf.sample.

    This is a sample configuration file similar to the one provided in the source package. Documentation below provides more information about each of the individual configuration sections.

    <?xml version="1.0"?>
    <cb_config>
       <reference>
          <author>Kenneth J. Pronovici</author>
          <revision>1.3</revision>
          <description>Sample</description>
       </reference>
       <options>
          <starting_day>tuesday</starting_day>
          <working_dir>/opt/backup/tmp</working_dir>
          <backup_user>backup</backup_user>
          <backup_group>group</backup_group>
          <rcp_command>/usr/bin/scp -B</rcp_command>
       </options>
       <peers>
          <peer>
             <name>debian</name>
             <type>local</type>
             <collect_dir>/opt/backup/collect</collect_dir>
          </peer>
       </peers>
       <collect>
          <collect_dir>/opt/backup/collect</collect_dir>
          <collect_mode>daily</collect_mode>
          <archive_mode>targz</archive_mode>
          <ignore_file>.cbignore</ignore_file>
          <dir>
             <abs_path>/etc</abs_path>
             <collect_mode>incr</collect_mode>
          </dir>
          <file>
             <abs_path>/home/root/.profile</abs_path>
             <collect_mode>weekly</collect_mode>
          </file>
       </collect>
       <stage>
          <staging_dir>/opt/backup/staging</staging_dir>
       </stage>
       <store>
          <source_dir>/opt/backup/staging</source_dir>
          <media_type>cdrw-74</media_type>
          <device_type>cdwriter</device_type>
          <target_device>/dev/cdrw</target_device>
          <target_scsi_id>0,0,0</target_scsi_id>
          <drive_speed>4</drive_speed>
          <check_data>Y</check_data>
          <check_media>Y</check_media>
          <warn_midnite>Y</warn_midnite>
       </store>
       <purge>
          <dir>
          <abs_path>/opt/backup/staging</abs_path>
             <retain_days>7</retain_days>
          </dir>
          <dir>
             <abs_path>/opt/backup/collect</abs_path>
             <retain_days>0</retain_days>
          </dir>
       </purge>
    </cb_config>
             
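    Because the configuration is plain XML, it can be inspected with any standard XML parser. The sketch below (not part of Cedar Backup itself; the variable names are illustrative) reads a few values from a fragment of the sample above using Python's xml.etree.ElementTree:

```python
import xml.etree.ElementTree as ET

# A fragment of the sample configuration file shown above.
SAMPLE = """<?xml version="1.0"?>
<cb_config>
   <options>
      <starting_day>tuesday</starting_day>
      <working_dir>/opt/backup/tmp</working_dir>
      <backup_user>backup</backup_user>
   </options>
</cb_config>"""

root = ET.fromstring(SAMPLE)
starting_day = root.findtext("options/starting_day")
working_dir = root.findtext("options/working_dir")
print(starting_day, working_dir)
```

    This is only a convenience for exploring a configuration file by hand; Cedar Backup performs its own parsing and validation internally.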

    Reference Configuration

    The reference configuration section contains free-text elements that exist only for reference. The section itself is required, but the individual elements may be left blank if desired.

    This is an example reference configuration section:

    <reference>
       <author>Kenneth J. Pronovici</author>
       <revision>Revision 1.3</revision>
       <description>Sample</description>
       <generator>Yet to be Written Config Tool (tm)</generator>
    </reference>
             

    The following elements are part of the reference configuration section:

    author

    Author of the configuration file.

    Restrictions: None

    revision

    Revision of the configuration file.

    Restrictions: None

    description

    Description of the configuration file.

    Restrictions: None

    generator

    Tool that generated the configuration file, if any.

    Restrictions: None

    Options Configuration

    The options configuration section contains configuration options that are not specific to any one action.

    This is an example options configuration section:

    <options>
       <starting_day>tuesday</starting_day>
       <working_dir>/opt/backup/tmp</working_dir>
       <backup_user>backup</backup_user>
       <backup_group>backup</backup_group>
       <rcp_command>/usr/bin/scp -B</rcp_command>
       <rsh_command>/usr/bin/ssh</rsh_command>
       <cback_command>/usr/bin/cback</cback_command>
       <managed_actions>collect, purge</managed_actions>
       <override>
          <command>cdrecord</command>
          <abs_path>/opt/local/bin/cdrecord</abs_path>
       </override>
       <override>
          <command>mkisofs</command>
          <abs_path>/opt/local/bin/mkisofs</abs_path>
       </override>
       <pre_action_hook>
          <action>collect</action>
          <command>echo "I AM A PRE-ACTION HOOK RELATED TO COLLECT"</command>
       </pre_action_hook>
       <post_action_hook>
          <action>collect</action>
          <command>echo "I AM A POST-ACTION HOOK RELATED TO COLLECT"</command>
       </post_action_hook>
    </options>
             

    The following elements are part of the options configuration section:

    starting_day

    Day that starts the week.

    Cedar Backup is built around the idea of weekly backups. The starting day of week is the day that media will be rebuilt from scratch and that incremental backup information will be cleared.

    Restrictions: Must be a day of the week in English, i.e. monday, tuesday, etc. The validation is case-sensitive.

    working_dir

    Working (temporary) directory to use for backups.

    This directory is used for writing temporary files, such as tar file or ISO filesystem images as they are being built. It is also used to store day-to-day information about incremental backups.

    The working directory should contain enough free space to hold temporary tar files (on a client) or to build an ISO filesystem image (on a master).

    Restrictions: Must be an absolute path

    backup_user

    Effective user that backups should run as.

    This user must exist on the machine which is being configured and should not be root (although that restriction is not enforced).

    This value is also used as the default remote backup user for remote peers.

    Restrictions: Must be non-empty

    backup_group

    Effective group that backups should run as.

    This group must exist on the machine which is being configured, and should not be root or some other powerful group (although that restriction is not enforced).

    Restrictions: Must be non-empty

    rcp_command

    Default rcp-compatible copy command for staging.

    The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it the -B option, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B.

    This value is used as the default value for all remote peers. Technically, this value is not needed by clients, but we require it for all config files anyway.

    Restrictions: Must be non-empty

    rsh_command

    Default rsh-compatible command to use for remote shells.

    The rsh command should be the exact command used for remote shells, including any required options.

    This value is used as the default value for all managed clients. It is optional, because it is only used when executing actions on managed clients. However, each managed client must either be able to read the value from options configuration or must set the value explicitly.

    Restrictions: Must be non-empty

    cback_command

    Default cback-compatible command to use on managed remote clients.

    The cback command should be the exact command used for executing cback on a remote managed client, including any required command-line options. Do not list any actions in the command line, and do not include the --full command-line option.

    This value is used as the default value for all managed clients. It is optional, because it is only used when executing actions on managed clients. However, each managed client must either be able to read the value from options configuration or must set the value explicitly.

    Note: if this command-line is complicated, it is often better to create a simple shell script on the remote host to encapsulate all of the options. Then, just reference the shell script in configuration.

    Restrictions: Must be non-empty

    managed_actions

    Default set of actions that are managed on remote clients.

    This is a comma-separated list of actions that the master will manage on behalf of remote clients. Typically, it would include only collect-like actions and purge.

    This value is used as the default value for all managed clients. It is optional, because it is only used when executing actions on managed clients. However, each managed client must either be able to read the value from options configuration or must set the value explicitly.

    Restrictions: Must be non-empty.

    override

    Command to override with a customized path.

    This is a subsection which contains a command to override with a customized path. This functionality would be used if root's $PATH does not include a particular required command, or if there is a need to use a version of a command that is different than the one listed on the $PATH. Most users will only use this section when directed to, in order to fix a problem.

    This section is optional, and can be repeated as many times as necessary.

    This subsection must contain the following two fields:

    command

    Name of the command to be overridden, i.e. cdrecord.

    Restrictions: Must be a non-empty string.

    abs_path

    The absolute path where the overridden command can be found.

    Restrictions: Must be an absolute path.

    pre_action_hook

    Hook configuring a command to be executed before an action.

    This is a subsection which configures a command to be executed immediately before a named action. It provides a way for administrators to associate their own custom functionality with standard Cedar Backup actions or with arbitrary extensions.

    This section is optional, and can be repeated as many times as necessary.

    This subsection must contain the following two fields:

    action

    Name of the Cedar Backup action that the hook is associated with. The action can be a standard backup action (collect, stage, etc.) or can be an extension action. No validation is done to ensure that the configured action actually exists.

    Restrictions: Must be a non-empty string.

    command

    Name of the command to be executed. This item can either specify the path to a shell script of some sort (the recommended approach) or can include a complete shell command.

    Note: if you choose to provide a complete shell command rather than the path to a script, you need to be aware of some limitations of Cedar Backup's command-line parser. You cannot use a subshell (via the `command` or $(command) syntaxes) or any shell variable in your command line. Additionally, the command-line parser only recognizes the double-quote character (") to delimit groupings or strings on the command-line. The bottom line is, you are probably best off writing a shell script of some sort for anything more sophisticated than very simple shell commands.

    Restrictions: Must be a non-empty string.
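    The double-quote-only behavior described in the note above can be approximated with Python's standard shlex module by restricting the recognized quote characters. This is an illustrative sketch of the limitation, not Cedar Backup's actual parser:

```python
import shlex

def split_hook_command(command: str) -> list[str]:
    # Approximate the described parser: split on whitespace, and
    # recognize only the double-quote character as a delimiter.
    lexer = shlex.shlex(command, posix=True)
    lexer.whitespace_split = True
    lexer.quotes = '"'   # single quotes are treated as ordinary characters
    return list(lexer)

print(split_hook_command('echo "I AM A PRE-ACTION HOOK"'))
```

    Note how single-quoted strings are not grouped, which is one reason a wrapper shell script is the recommended approach for anything non-trivial.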

    post_action_hook

    Hook configuring a command to be executed after an action.

    This is a subsection which configures a command to be executed immediately after a named action. It provides a way for administrators to associate their own custom functionality with standard Cedar Backup actions or with arbitrary extensions.

    This section is optional, and can be repeated as many times as necessary.

    This subsection must contain the following two fields:

    action

    Name of the Cedar Backup action that the hook is associated with. The action can be a standard backup action (collect, stage, etc.) or can be an extension action. No validation is done to ensure that the configured action actually exists.

    Restrictions: Must be a non-empty string.

    command

    Name of the command to be executed. This item can either specify the path to a shell script of some sort (the recommended approach) or can include a complete shell command.

    Note: if you choose to provide a complete shell command rather than the path to a script, you need to be aware of some limitations of Cedar Backup's command-line parser. You cannot use a subshell (via the `command` or $(command) syntaxes) or any shell variable in your command line. Additionally, the command-line parser only recognizes the double-quote character (") to delimit groupings or strings on the command-line. The bottom line is, you are probably best off writing a shell script of some sort for anything more sophisticated than very simple shell commands.

    Restrictions: Must be a non-empty string.

    Peers Configuration

    The peers configuration section contains a list of the peers managed by a master. This section is only required on a master.

    This is an example peers configuration section:

    <peers>
       <peer>
          <name>machine1</name>
          <type>local</type>
          <collect_dir>/opt/backup/collect</collect_dir>
       </peer>
       <peer>
          <name>machine2</name>
          <type>remote</type>
          <backup_user>backup</backup_user>
          <collect_dir>/opt/backup/collect</collect_dir>
          <ignore_failures>all</ignore_failures>
       </peer>
       <peer>
          <name>machine3</name>
          <type>remote</type>
          <managed>Y</managed>
          <backup_user>backup</backup_user>
          <collect_dir>/opt/backup/collect</collect_dir>
          <rcp_command>/usr/bin/scp</rcp_command>
          <rsh_command>/usr/bin/ssh</rsh_command>
          <cback_command>/usr/bin/cback</cback_command>
          <managed_actions>collect, purge</managed_actions>
       </peer>
    </peers>
             

    The following elements are part of the peers configuration section:

    peer (local version)

    Local client peer in a backup pool.

    This is a subsection which contains information about a specific local client peer managed by a master.

    This section can be repeated as many times as is necessary. At least one remote or local peer must be configured.

    The local peer subsection must contain the following fields:

    name

    Name of the peer, typically a valid hostname.

    For local peers, this value is only used for reference. However, it is good practice to list the peer's hostname here, for consistency with remote peers.

    Restrictions: Must be non-empty, and unique among all peers.

    type

    Type of this peer.

    This value identifies the type of the peer. For a local peer, it must always be local.

    Restrictions: Must be local.

    collect_dir

    Collect directory to stage from for this peer.

    The master will copy all files in this directory into the appropriate staging directory. Since this is a local peer, the directory is assumed to be reachable via normal filesystem operations (i.e. cp).

    Restrictions: Must be an absolute path.

    ignore_failures

    Ignore failure mode for this peer

    The ignore failure mode indicates whether "not ready to be staged" errors should be ignored for this peer. This option is intended to be used for peers that are up only intermittently, to cut down on the number of error emails received by the Cedar Backup administrator.

    The "none" mode means that all errors will be reported. This is the default behavior. The "all" mode means to ignore all failures. The "weekly" mode means to ignore failures for a start-of-week or full backup. The "daily" mode means to ignore failures for any backup that is not either a full backup or a start-of-week backup.

    Restrictions: If set, must be one of "none", "all", "daily", or "weekly".
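    The four modes reduce to a simple decision: given the mode and whether the current backup is a full or start-of-week backup, should a failure be reported? The function below is a hedged sketch of that logic (the name and signature are illustrative, not Cedar Backup's actual code):

```python
def should_report_failure(mode: str, full_or_start_of_week: bool) -> bool:
    """Return True if a 'not ready to be staged' error should be reported."""
    if mode == "none":      # default: report every failure
        return True
    if mode == "all":       # ignore every failure
        return False
    if mode == "weekly":    # ignore failures on full/start-of-week backups
        return not full_or_start_of_week
    if mode == "daily":     # ignore failures on ordinary daily backups
        return full_or_start_of_week
    raise ValueError('mode must be "none", "all", "daily", or "weekly"')
```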

    peer (remote version)

    Remote client peer in a backup pool.

    This is a subsection which contains information about a specific remote client peer managed by a master. A remote peer is one which can be reached via an rsh-based network call.

    This section can be repeated as many times as is necessary. At least one remote or local peer must be configured.

    The remote peer subsection must contain the following fields:

    name

    Hostname of the peer.

    For remote peers, this must be a valid DNS hostname or IP address which can be resolved during an rsh-based network call.

    Restrictions: Must be non-empty, and unique among all peers.

    type

    Type of this peer.

    This value identifies the type of the peer. For a remote peer, it must always be remote.

    Restrictions: Must be remote.

    managed

    Indicates whether this peer is managed.

    A managed peer (or managed client) is a peer for which the master manages all of the backup activities via a remote shell.

    This field is optional. If it doesn't exist, then N will be assumed.

    Restrictions: Must be a boolean (Y or N).

    collect_dir

    Collect directory to stage from for this peer.

    The master will copy all files in this directory into the appropriate staging directory. Since this is a remote peer, the directory is assumed to be reachable via rsh-based network operations (i.e. scp or the configured rcp command).

    Restrictions: Must be an absolute path.

    ignore_failures

    Ignore failure mode for this peer

    The ignore failure mode indicates whether "not ready to be staged" errors should be ignored for this peer. This option is intended to be used for peers that are up only intermittently, to cut down on the number of error emails received by the Cedar Backup administrator.

    The "none" mode means that all errors will be reported. This is the default behavior. The "all" mode means to ignore all failures. The "weekly" mode means to ignore failures for a start-of-week or full backup. The "daily" mode means to ignore failures for any backup that is not either a full backup or a start-of-week backup.

    Restrictions: If set, must be one of "none", "all", "daily", or "weekly".

    backup_user

    Name of backup user on the remote peer.

    This username will be used when copying files from the remote peer via an rsh-based network connection.

    This field is optional. If it doesn't exist, the backup will use the default backup user from the options section.

    Restrictions: Must be non-empty.

    rcp_command

    The rcp-compatible copy command for this peer.

    The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it the -B option, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B.

    This field is optional. If it doesn't exist, the backup will use the default rcp command from the options section.

    Restrictions: Must be non-empty.

    rsh_command

    The rsh-compatible command for this peer.

    The rsh command should be the exact command used for remote shells, including any required options.

    This value only applies if the peer is managed.

    This field is optional. If it doesn't exist, the backup will use the default rsh command from the options section.

    Restrictions: Must be non-empty

    cback_command

    The cback-compatible command for this peer.

    The cback command should be the exact command used for executing cback on the peer as part of a managed backup. This value must include any required command-line options. Do not list any actions in the command line, and do not include the --full command-line option.

    This value only applies if the peer is managed.

    This field is optional. If it doesn't exist, the backup will use the default cback command from the options section.

    Note: if this command-line is complicated, it is often better to create a simple shell script on the remote host to encapsulate all of the options. Then, just reference the shell script in configuration.

    Restrictions: Must be non-empty

    managed_actions

    Set of actions that are managed for this peer.

    This is a comma-separated list of actions that the master will manage on behalf of this peer. Typically, it would include only collect-like actions and purge.

    This value only applies if the peer is managed.

    This field is optional. If it doesn't exist, the backup will use the default list of managed actions from the options section.

    Restrictions: Must be non-empty.

    Collect Configuration

    The collect configuration section contains configuration options related to the collect action. This section contains a variable number of elements, including an optional exclusion section and a repeating subsection used to specify which directories and/or files to collect. You can also configure an ignore indicator file, which lets users mark their own directories as not backed up.

    In order to actually execute the collect action, you must have configured at least one collect directory or one collect file. However, if you are only including collect configuration for use by an extension, then it's OK to leave out these sections. The validation will take place only when the collect action is executed.

    This is an example collect configuration section:

    <collect>
       <collect_dir>/opt/backup/collect</collect_dir>
       <collect_mode>daily</collect_mode>
       <archive_mode>targz</archive_mode>
       <ignore_file>.cbignore</ignore_file>
       <exclude>
          <abs_path>/etc</abs_path>
          <pattern>.*\.conf</pattern>
       </exclude>
       <file>
          <abs_path>/home/root/.profile</abs_path>
       </file>
       <dir>
          <abs_path>/etc</abs_path>
       </dir>
       <dir>
          <abs_path>/var/log</abs_path>
          <collect_mode>incr</collect_mode>
       </dir>
       <dir>
          <abs_path>/opt</abs_path>
          <collect_mode>weekly</collect_mode>
          <exclude>
             <abs_path>/opt/large</abs_path>
             <rel_path>backup</rel_path>
             <pattern>.*tmp</pattern>
          </exclude>
       </dir>
    </collect>
             

    The following elements are part of the collect configuration section:

    collect_dir

    Directory to collect files into.

    On a client, this is the directory which tarfiles for individual collect directories are written into. The master then stages files from this directory into its own staging directory.

    This field is always required. It must contain enough free space to collect all of the backed-up files on the machine in a compressed form.

    Restrictions: Must be an absolute path

    collect_mode

    Default collect mode.

    The collect mode describes how frequently a directory is backed up. See the section called “The Collect Action” (in Chapter 2, Basic Concepts) for more information.

    This value is the collect mode that will be used by default during the collect process. Individual collect directories (below) may override this value. If all individual directories provide their own value, then this default value may be omitted from configuration.

    Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data.

    Restrictions: Must be one of daily, weekly or incr.

    archive_mode

    Default archive mode for collect files.

    The archive mode maps to the way that a backup file is stored. A value of tar means just a tarfile (file.tar); a value of targz means a gzipped tarfile (file.tar.gz); and a value of tarbz2 means a bzipped tarfile (file.tar.bz2).

    This value is the archive mode that will be used by default during the collect process. Individual collect directories (below) may override this value. If all individual directories provide their own value, then this default value may be omitted from configuration.

    Restrictions: Must be one of tar, targz or tarbz2.
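    The three archive modes correspond directly to the mode strings understood by Python's standard tarfile module. The mapping and helper below are an illustrative sketch, not Cedar Backup's internal code:

```python
import os
import tarfile
import tempfile

# Hypothetical mapping from archive_mode values to file suffixes and
# the corresponding write-mode strings used by Python's tarfile module.
ARCHIVE_MODES = {
    "tar":    (".tar",     "w:"),
    "targz":  (".tar.gz",  "w:gz"),
    "tarbz2": (".tar.bz2", "w:bz2"),
}

def make_archive(source_dir: str, target_base: str, archive_mode: str) -> str:
    """Write source_dir into an archive named target_base plus a suffix."""
    suffix, mode = ARCHIVE_MODES[archive_mode]
    target = target_base + suffix
    with tarfile.open(target, mode) as tar:
        tar.add(source_dir, arcname=os.path.basename(source_dir))
    return target

# Demonstration: archive a small scratch directory in targz mode.
work = tempfile.mkdtemp()
src = os.path.join(work, "etc")
os.makedirs(src)
open(os.path.join(src, "hosts"), "w").close()
archive = make_archive(src, os.path.join(work, "etc"), "targz")
```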

    ignore_file

    Default ignore file name.

    The ignore file is an indicator file. If it exists in a given directory, then that directory will be recursively excluded from the backup as if it were explicitly excluded in configuration.

    The ignore file provides a way for individual users (who might not have access to Cedar Backup configuration) to control which of their own directories get backed up. For instance, users with a ~/tmp directory might not want it backed up. If they create an ignore file in their directory (e.g. ~/tmp/.cbignore), then Cedar Backup will ignore it.

    This value is the ignore file name that will be used by default during the collect process. Individual collect directories (below) may override this value. If all individual directories provide their own value, then this default value may be omitted from configuration.

    Restrictions: Must be non-empty
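    The indicator-file behavior can be sketched with a short directory walk. This is illustrative only (the function name is hypothetical; Cedar Backup's real traversal is implemented elsewhere):

```python
import os
import tempfile

def collectible_dirs(root: str, ignore_file: str = ".cbignore") -> list[str]:
    """Return directories under root that are not pruned by an ignore file."""
    result = []
    for dirpath, dirnames, filenames in os.walk(root):
        if ignore_file in filenames:
            dirnames[:] = []   # prune this directory and everything below it
            continue
        result.append(dirpath)
    return result

# Demonstration: a user's tmp directory contains .cbignore, so it is skipped.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "docs"))
os.makedirs(os.path.join(root, "tmp"))
open(os.path.join(root, "tmp", ".cbignore"), "w").close()
kept = collectible_dirs(root)
```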

    recursion_level

    Recursion level to use when collecting directories.

    This is an integer value that Cedar Backup will consider when generating archive files for a configured collect directory.

    Normally, Cedar Backup generates one archive file per collect directory. So, if you collect /etc you get etc.tar.gz. Most of the time, this is what you want. However, you may sometimes wish to generate multiple archive files for a single collect directory.

    The most obvious example is for /home. By default, Cedar Backup will generate home.tar.gz. If instead, you want one archive file per home directory you can set a recursion level of 1. Cedar Backup will generate home-user1.tar.gz, home-user2.tar.gz, etc.

    Higher recursion levels (2, 3, etc.) are legal, and it doesn't matter if the configured recursion level is deeper than the directory tree that is being collected. You can use a negative recursion level (like -1) to specify an infinite level of recursion. This will exhaust the tree in the same way as if the recursion level is set too high.

    This field is optional. If it doesn't exist, the backup will use the default recursion level of zero.

    Restrictions: Must be an integer.
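    The /home example above can be sketched as follows; this is an illustrative approximation of how a recursion level selects archive roots, not the production algorithm:

```python
import os
import tempfile

def archive_roots(path: str, level: int) -> list[str]:
    """Return the paths that would each become a separate archive file."""
    if level == 0:
        return [path]            # one archive for the whole directory
    roots = []
    for entry in sorted(os.listdir(path)):
        child = os.path.join(path, entry)
        if os.path.isdir(child):
            # A negative level never reaches zero, so it recurses until
            # the tree is exhausted (the "infinite recursion" case).
            roots.extend(archive_roots(child, level - 1))
        else:
            roots.append(child)  # plain files at this depth get their own archive
    return roots

# Demonstration: a /home-like tree with two user directories.
home = tempfile.mkdtemp()
for user in ("user1", "user2"):
    os.makedirs(os.path.join(home, user))
    open(os.path.join(home, user, ".profile"), "w").close()
```

    With level 0 the whole tree becomes one archive; with level 1 each user directory becomes its own archive, matching the home-user1.tar.gz example above.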

    exclude

    List of paths or patterns to exclude from the backup.

    This is a subsection which contains a set of absolute paths and patterns to be excluded across all configured directories. For a given directory, the set of absolute paths and patterns to exclude is built from this list and any list that exists on the directory itself. Directories cannot override or remove entries that are in this list, however.

    This section is optional, and if it exists can also be empty.

    The exclude subsection can contain one or more of each of the following fields:

    abs_path

    An absolute path to be recursively excluded from the backup.

    If a directory is excluded, then all of its children are also recursively excluded. For instance, a value /var/log/apache would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be an absolute path.

    pattern

    A pattern to be recursively excluded from the backup.

    The pattern must be a Python regular expression. [21] It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $).

    If the pattern causes a directory to be excluded, then all of the children of that directory are also recursively excluded. For instance, a value .*apache.* might match the /var/log/apache directory. This would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty
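    Because each pattern is implicitly anchored at both ends, it behaves like Python's re.fullmatch rather than re.search. A quick illustration, using the pattern from the example exclude section above:

```python
import re

pattern = r".*\.conf"   # pattern from the example exclude section

# Anchored matching: the entire path must match the pattern.
assert re.fullmatch(pattern, "/etc/apt/apt.conf") is not None
assert re.fullmatch(pattern, "/etc/apt.conf.d") is None

# Unanchored matching would behave differently for the second path.
assert re.search(pattern, "/etc/apt.conf.d") is not None
```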

    file

    A file to be collected.

    This is a subsection which contains information about a specific file to be collected (backed up).

    This section can be repeated as many times as is necessary. At least one collect directory or collect file must be configured when the collect action is executed.

    The collect file subsection contains the following fields:

    abs_path

    Absolute path of the file to collect.

    Restrictions: Must be an absolute path.

    collect_mode

    Collect mode for this file

    The collect mode describes how frequently a file is backed up. See the section called “The Collect Action” (in Chapter 2, Basic Concepts) for more information.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Note: If your backup device does not support multisession discs, then you should probably confine yourself to the daily collect mode, to avoid losing data.

    Restrictions: Must be one of daily, weekly or incr.

    archive_mode

    Archive mode for this file.

    The archive mode maps to the way that a backup file is stored. A value tar means just a tarfile (file.tar); a value targz means a gzipped tarfile (file.tar.gz); and a value tarbz2 means a bzipped tarfile (file.tar.bz2).

    This field is optional. If it doesn't exist, the backup will use the default archive mode.

    Restrictions: Must be one of tar, targz or tarbz2.
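    Combining the fields above, a hypothetical collect file subsection might look like this (the path and mode values are illustrative only):

```xml
<file>
   <abs_path>/etc/fstab</abs_path>
   <collect_mode>daily</collect_mode>
   <archive_mode>targz</archive_mode>
</file>
```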

    dir

    A directory to be collected.

    This is a subsection which contains information about a specific directory to be collected (backed up).

    This section can be repeated as many times as is necessary. At least one collect directory or collect file must be configured when the collect action is executed.

    The collect directory subsection contains the following fields:

    abs_path

    Absolute path of the directory to collect.

    The path may be either a directory, a soft link to a directory, or a hard link to a directory. All three are treated the same at this level.

    The contents of the directory will be recursively collected. The backup will contain all of the files in the directory, as well as the contents of all of the subdirectories within the directory, etc.

    Soft links within the directory are treated as files, i.e. they are copied verbatim (as a link) and their contents are not backed up.

    Restrictions: Must be an absolute path.

    collect_mode

    Collect mode for this directory

    The collect mode describes how frequently a directory is backed up. See the section called “The Collect Action” (in Chapter 2, Basic Concepts) for more information.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Note: If your backup device does not support multisession discs, then you should probably confine yourself to the daily collect mode, to avoid losing data.

    Restrictions: Must be one of daily, weekly or incr.

    archive_mode

    Archive mode for this directory.

    The archive mode maps to the way that a backup file is stored. A value tar means just a tarfile (file.tar); a value targz means a gzipped tarfile (file.tar.gz); and a value tarbz2 means a bzipped tarfile (file.tar.bz2).

    This field is optional. If it doesn't exist, the backup will use the default archive mode.

    Restrictions: Must be one of tar, targz or tarbz2.

    ignore_file

    Ignore file name for this directory.

    The ignore file is an indicator file. If it exists in a given directory, then that directory will be recursively excluded from the backup as if it were explicitly excluded in configuration.

    The ignore file provides a way for individual users (who might not have access to Cedar Backup configuration) to control which of their own directories get backed up. For instance, users with a ~/tmp directory might not want it backed up. If they create an ignore file in their directory (e.g. ~/tmp/.cbignore), then Cedar Backup will ignore the directory.

    This field is optional. If it doesn't exist, the backup will use the default ignore file name.

    Restrictions: Must be non-empty.

    link_depth

    Link depth value to use for this directory.

    The link depth is the maximum depth of the tree at which soft links should be followed. So, a depth of 0 does not follow any soft links within the collect directory, a depth of 1 follows only links immediately within the collect directory, a depth of 2 follows the links at the next level down, etc.

    This field is optional. If it doesn't exist, the backup will assume a value of zero, meaning that soft links within the collect directory will never be followed.

    Restrictions: If set, must be an integer ≥ 0.

    dereference

    Whether to dereference soft links.

    If this flag is set, links that are being followed will be dereferenced before being added to the backup. The link will be added (as a link), and then the directory or file that the link points at will be added as well.

    This value only applies to a directory where soft links are being followed (per the link_depth configuration option). It never applies to a configured collect directory itself, only to other directories within the collect directory.

    This field is optional. If it doesn't exist, the backup will assume that links should never be dereferenced.

    Restrictions: Must be a boolean (Y or N).

    exclude

    List of paths or patterns to exclude from the backup.

    This is a subsection which contains a set of paths and patterns to be excluded within this collect directory. This list is combined with the program-wide list to build a complete list for the directory.

    This section is entirely optional, and if it exists can also be empty.

    The exclude subsection can contain one or more of each of the following fields:

    abs_path

    An absolute path to be recursively excluded from the backup.

    If a directory is excluded, then all of its children are also recursively excluded. For instance, a value /var/log/apache would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be an absolute path.

    rel_path

    A relative path to be recursively excluded from the backup.

    The path is assumed to be relative to the collect directory itself. For instance, if the configured directory is /opt/web a configured relative path of something/else would exclude the path /opt/web/something/else.

    If a directory is excluded, then all of its children are also recursively excluded. For instance, a value something/else would exclude any files within something/else as well as files within other directories under something/else.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty.

    pattern

    A pattern to be excluded from the backup.

    The pattern must be a Python regular expression. [21] It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $).

    If the pattern causes a directory to be excluded, then all of the children of that directory are also recursively excluded. For instance, a value .*apache.* might match the /var/log/apache directory. This would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty.
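    Putting the directory fields together, a hypothetical collect directory subsection might look like this (paths, modes and patterns are illustrative only):

```xml
<dir>
   <abs_path>/opt/web</abs_path>
   <collect_mode>incr</collect_mode>
   <archive_mode>tarbz2</archive_mode>
   <ignore_file>.cbignore</ignore_file>
   <link_depth>1</link_depth>
   <dereference>N</dereference>
   <exclude>
      <rel_path>something/else</rel_path>
      <pattern>.*\.tmp</pattern>
   </exclude>
</dir>
```

    In this sketch, /opt/web is collected incrementally into bzipped tarfiles, soft links directly within /opt/web are followed (but not dereferenced), and /opt/web/something/else plus anything ending in .tmp is excluded.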

    Stage Configuration

    The stage configuration section contains configuration options related to the stage action. The section indicates where data from peers can be staged.

    This section can also (optionally) override the list of peers so that not all peers are staged. If you provide any peers in this section, then the list of peers here completely replaces the list of peers in the peers configuration section for the purposes of staging.

    This is an example stage configuration section for the simple case where the list of peers is taken from peers configuration:

    <stage>
       <staging_dir>/opt/backup/stage</staging_dir>
    </stage>

    This is an example stage configuration section that overrides the default list of peers:

    <stage>
       <staging_dir>/opt/backup/stage</staging_dir>
       <peer>
          <name>machine1</name>
          <type>local</type>
          <collect_dir>/opt/backup/collect</collect_dir>
       </peer>
       <peer>
          <name>machine2</name>
          <type>remote</type>
          <backup_user>backup</backup_user>
          <collect_dir>/opt/backup/collect</collect_dir>
       </peer>
    </stage>

    The following elements are part of the stage configuration section:

    staging_dir

    Directory to stage files into.

    This is the directory into which the master stages collected data from each of the clients. Within the staging directory, data is staged into date-based directories by peer name. For instance, peer daystrom backed up on 19 Feb 2005 would be staged into something like 2005/02/19/daystrom relative to the staging directory itself.

    This field is always required. The directory must contain enough free space to stage all of the files collected from all of the various machines in a backup pool. Many administrators set up purging to keep staging directories around for a week or more, which requires even more space.

    Restrictions: Must be an absolute path.

    peer (local version)

    Local client peer in a backup pool.

    This is a subsection which contains information about a specific local client peer to be staged (backed up). A local peer is one whose collect directory can be reached without requiring any rsh-based network calls. It is possible that a remote peer might be staged as a local peer if its collect directory is mounted to the master via NFS, AFS or some other method.

    This section can be repeated as many times as is necessary. At least one remote or local peer must be configured.

    Remember, if you provide any local or remote peer in staging configuration, the global peer configuration is completely replaced by the staging peer configuration.

    The local peer subsection must contain the following fields:

    name

    Name of the peer, typically a valid hostname.

    For local peers, this value is only used for reference. However, it is good practice to list the peer's hostname here, for consistency with remote peers.

    Restrictions: Must be non-empty, and unique among all peers.

    type

    Type of this peer.

    This value identifies the type of the peer. For a local peer, it must always be local.

    Restrictions: Must be local.

    collect_dir

    Collect directory to stage from for this peer.

    The master will copy all files in this directory into the appropriate staging directory. Since this is a local peer, the directory is assumed to be reachable via normal filesystem operations (i.e. cp).

    Restrictions: Must be an absolute path.

    peer (remote version)

    Remote client peer in a backup pool.

    This is a subsection which contains information about a specific remote client peer to be staged (backed up). A remote peer is one whose collect directory can only be reached via an rsh-based network call.

    This section can be repeated as many times as is necessary. At least one remote or local peer must be configured.

    Remember, if you provide any local or remote peer in staging configuration, the global peer configuration is completely replaced by the staging peer configuration.

    The remote peer subsection must contain the following fields:

    name

    Hostname of the peer.

    For remote peers, this must be a valid DNS hostname or IP address which can be resolved during an rsh-based network call.

    Restrictions: Must be non-empty, and unique among all peers.

    type

    Type of this peer.

    This value identifies the type of the peer. For a remote peer, it must always be remote.

    Restrictions: Must be remote.

    collect_dir

    Collect directory to stage from for this peer.

    The master will copy all files in this directory into the appropriate staging directory. Since this is a remote peer, the directory is assumed to be reachable via rsh-based network operations (i.e. scp or the configured rcp command).

    Restrictions: Must be an absolute path.

    backup_user

    Name of backup user on the remote peer.

    This username will be used when copying files from the remote peer via an rsh-based network connection.

    This field is optional. If it doesn't exist, the backup will use the default backup user from the options section.

    Restrictions: Must be non-empty.

    rcp_command

    The rcp-compatible copy command for this peer.

    The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it the -B option, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B.

    This field is optional. If it doesn't exist, the backup will use the default rcp command from the options section.

    Restrictions: Must be non-empty.
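    As a sketch, a remote peer that overrides both the backup user and the rcp command might be configured like this (the hostname and paths are illustrative only):

```xml
<peer>
   <name>machine2</name>
   <type>remote</type>
   <backup_user>backup</backup_user>
   <rcp_command>/usr/bin/scp -B</rcp_command>
   <collect_dir>/opt/backup/collect</collect_dir>
</peer>
```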

    Store Configuration

    The store configuration section contains configuration options related to the store action. This section contains several optional fields. Most fields control the way media is written using the writer device.

    This is an example store configuration section:

    <store>
       <source_dir>/opt/backup/stage</source_dir>
       <media_type>cdrw-74</media_type>
       <device_type>cdwriter</device_type>
       <target_device>/dev/cdrw</target_device>
       <target_scsi_id>0,0,0</target_scsi_id>
       <drive_speed>4</drive_speed>
       <check_data>Y</check_data>
       <check_media>Y</check_media>
       <warn_midnite>Y</warn_midnite>
       <no_eject>N</no_eject>
       <refresh_media_delay>15</refresh_media_delay>
       <eject_delay>2</eject_delay>
       <blank_behavior>
          <mode>weekly</mode>
          <factor>1.3</factor>
       </blank_behavior>
    </store>

    The following elements are part of the store configuration section:

    source_dir

    Directory whose contents should be written to media.

    This directory must be a Cedar Backup staging directory, as configured in the staging configuration section. Only certain data from that directory (typically, data from the current day) will be written to disc.

    Restrictions: Must be an absolute path.

    device_type

    Type of the device used to write the media.

    This field controls which type of writer device will be used by Cedar Backup. Currently, Cedar Backup supports CD writers (cdwriter) and DVD writers (dvdwriter).

    This field is optional. If it doesn't exist, the cdwriter device type is assumed.

    Restrictions: If set, must be either cdwriter or dvdwriter.

    media_type

    Type of the media in the device.

    Unless you want to throw away a backup disc every week, you are probably best off using rewritable media.

    You must choose a media type that is appropriate for the device type you chose above. For more information on media types, see the section called “Media and Device Types” (in Chapter 2, Basic Concepts).

    Restrictions: Must be one of cdr-74, cdrw-74, cdr-80 or cdrw-80 if device type is cdwriter; or one of dvd+r or dvd+rw if device type is dvdwriter.

    target_device

    Filesystem device name for writer device.

    This value is required for both CD writers and DVD writers.

    This is the UNIX device name for the writer drive, for instance /dev/scd0 or a symlink like /dev/cdrw.

    In some cases, this device name is used to directly write to media. This is true all of the time for DVD writers, and is true for CD writers when a SCSI id (see below) has not been specified.

    Besides this, the device name is also needed in order to do several pre-write checks (such as whether the device might already be mounted) as well as the post-write consistency check, if enabled.

    Note: some users have reported intermittent problems when using a symlink as the target device on Linux, especially with DVD media. If you experience problems, try using the real device name rather than the symlink.

    Restrictions: Must be an absolute path.

    target_scsi_id

    SCSI id for the writer device.

    This value is optional for CD writers and is ignored for DVD writers.

    If you have configured your CD writer hardware to work through the normal filesystem device path, then you can leave this parameter unset. Cedar Backup will just use the target device (above) when talking to cdrecord.

    Otherwise, if you have SCSI CD writer hardware or you have configured your non-SCSI hardware to operate like a SCSI device, then you need to provide Cedar Backup with a SCSI id it can use when talking with cdrecord.

    For the purposes of Cedar Backup, a valid SCSI identifier must either be in the standard SCSI identifier form scsibus,target,lun or in the specialized-method form <method>:scsibus,target,lun.

    An example of a standard SCSI identifier is 1,6,2. Today, the two most common examples of the specialized-method form are ATA:scsibus,target,lun and ATAPI:scsibus,target,lun, but you may occasionally see other values (like OLDATAPI in some forks of cdrecord).

    See the section called “Configuring your Writer Device” for more information on writer devices and how they are configured.

    Restrictions: If set, must be a valid SCSI identifier.

    drive_speed

    Speed of the drive, i.e. 2 for a 2x device.

    This field is optional. If it doesn't exist, the underlying device-related functionality will use the default drive speed.

    For DVD writers, it is best to leave this value unset, so growisofs can pick an appropriate speed. For CD writers, since media can be speed-sensitive, it is probably best to set a sensible value based on your specific writer and media.

    Restrictions: If set, must be an integer ≥ 1.

    check_data

    Whether the media should be validated.

    This field indicates whether a resulting image on the media should be validated after the write completes, by running a consistency check against it. If this check is enabled, the contents of the staging directory are directly compared to the media, and an error is reported if there is a mismatch.

    Practice shows that some drives can encounter an error when writing a multisession disc, but not report any problems. This consistency check allows us to catch the problem. By default, the consistency check is disabled, but most users should choose to enable it unless they have a good reason not to.

    This field is optional. If it doesn't exist, then N will be assumed.

    Restrictions: Must be a boolean (Y or N).

    check_media

    Whether the media should be checked before writing to it.

    By default, Cedar Backup does not check its media before writing to it. It will write to any media in the backup device. If you set this flag to Y, Cedar Backup will make sure that the media has been initialized before writing to it. (Rewritable media is initialized using the initialize action.)

    If the configured media is not rewritable (like CD-R), then this behavior is modified slightly. For this kind of media, the check passes either if the media has been initialized or if the media appears unused.

    This field is optional. If it doesn't exist, then N will be assumed.

    Restrictions: Must be a boolean (Y or N).

    warn_midnite

    Whether to generate warnings for crossing midnite.

    This field indicates whether warnings should be generated if the store operation has to cross a midnite boundary in order to find data to write to disc. For instance, a warning would be generated if valid store data was only found in the day before or day after the current day.

    Configuration for some users is such that the store operation will always cross a midnite boundary, so they will not care about this warning. Other users will expect to never cross a boundary, and want to be notified that something strange might have happened.

    This field is optional. If it doesn't exist, then N will be assumed.

    Restrictions: Must be a boolean (Y or N).

    no_eject

    Indicates that the writer device should not be ejected.

    Under some circumstances, Cedar Backup ejects (opens and closes) the writer device. This is done because some writer devices need to re-load the media before noticing a media state change (like a new session).

    For most writer devices this is safe, because they have a tray that can be opened and closed. If your writer device does not have a tray and Cedar Backup does not properly detect this, then set this flag. Cedar Backup will not ever issue an eject command to your writer.

    Note: this could cause problems with your backup. For instance, with many writers, the check data step may fail if the media is not reloaded first. If this happens to you, you may need to get a different writer device.

    This field is optional. If it doesn't exist, then N will be assumed.

    Restrictions: Must be a boolean (Y or N).

    refresh_media_delay

    Number of seconds to delay after refreshing media

    This field is optional. If it doesn't exist, no delay will occur.

    Some devices seem to take a little while to stabilize after refreshing the media (i.e. closing and opening the tray). During this period, operations on the media may fail. If your device behaves like this, you can try setting a delay of 10-15 seconds.

    Restrictions: If set, must be an integer ≥ 1.

    eject_delay

    Number of seconds to delay after ejecting the tray

    This field is optional. If it doesn't exist, no delay will occur.

    If your system seems to have problems opening and closing the tray, one possibility is that the open/close sequence is happening too quickly — either the tray isn't fully open when Cedar Backup tries to close it, or it doesn't report being open. To work around that problem, set an eject delay of a few seconds.

    Restrictions: If set, must be an integer ≥ 1.

    blank_behavior

    Optimized blanking strategy.

    For more information about Cedar Backup's optimized blanking strategy, see the section called “Optimized Blanking Strategy”.

    This entire configuration section is optional. However, if you choose to provide it, you must configure both a blanking mode and a blanking factor.

    blank_mode

    Blanking mode.

    Restrictions: Must be one of daily or weekly.

    blank_factor

    Blanking factor.

    Restrictions: Must be a floating point number ≥ 0.

    Purge Configuration

    The purge configuration section contains configuration options related to the purge action. This section contains a set of directories to be purged, along with information about the schedule at which they should be purged.

    Typically, Cedar Backup should be configured to purge collect directories daily (retain days of 0).

    If you are tight on space, staging directories can also be purged daily. However, if you have space to spare, you should consider purging about once per week. That way, if your backup media is damaged, you will be able to recreate the week's backup using the rebuild action.

    You should also purge the working directory periodically, once every few weeks or once per month. This way, if any unneeded files are left around, perhaps because a backup was interrupted or because configuration changed, they will eventually be removed. The working directory should not be purged any more frequently than once per week, otherwise you will risk destroying data used for incremental backups.

    This is an example purge configuration section:

    <purge>
       <dir>
          <abs_path>/opt/backup/stage</abs_path>
          <retain_days>7</retain_days>
       </dir>
       <dir>
          <abs_path>/opt/backup/collect</abs_path>
          <retain_days>0</retain_days>
       </dir>
    </purge>

    The following elements are part of the purge configuration section:

    dir

    A directory to purge within.

    This is a subsection which contains information about a specific directory to purge within.

    This section can be repeated as many times as is necessary. At least one purge directory must be configured.

    The purge directory subsection contains the following fields:

    abs_path

    Absolute path of the directory to purge within.

    The contents of the directory will be purged based on age. The purge will remove any files that were last modified more than the configured number of retain days ago. Empty directories will also eventually be removed. The purge directory itself will never be removed.

    The path may be either a directory, a soft link to a directory, or a hard link to a directory. Soft links within the directory (if any) are treated as files.

    Restrictions: Must be an absolute path.

    retain_days

    Number of days to retain old files.

    Once it has been more than this many days since a file was last modified, it is a candidate for removal.

    Restrictions: Must be an integer ≥ 0.

    Extensions Configuration

    The extensions configuration section is used to configure third-party extensions to Cedar Backup. If you don't intend to use any extensions, or don't know what extensions are, then you can safely leave this section out of your configuration file. It is optional.

    Extensions configuration is used to specify extended actions implemented by code external to Cedar Backup. An administrator can use this section to map command-line Cedar Backup actions to third-party extension functions.

    Each extended action has a name, which is mapped to a Python function within a particular module. Each action also has an index associated with it. This index is used to properly order execution when more than one action is specified on the command line. The standard actions have predefined indexes, and extended actions are interleaved into the normal order of execution using those indexes. The collect action has index 100, the stage action has index 200, the store action has index 300 and the purge action has index 400.

    Warning

    Extended actions should always be configured to run before the standard action they are associated with. This is because of the way indicator files are used in Cedar Backup. For instance, the staging process considers the collect action to be complete for a peer if the file cback.collect can be found in that peer's collect directory.

    If you were to run the standard collect action before your other collect-like actions, the indicator file would be written after the collect action completes but before all of the other actions even run. Because of this, there's a chance the stage process might back up the collect directory before the entire set of collect-like actions have completed — and you would get no warning about this in your email!

    So, imagine that a third-party developer provided a Cedar Backup extension to back up a certain kind of database repository, and you wanted to map that extension to the database command-line action. You have been told that this function is called foo.bar(). You think of this backup as a collect kind of action, so you want it to be performed immediately before the collect action.

    To configure this extension, you would list an action with a name database, a module foo, a function name bar and an index of 99.

    This is how the hypothetical action would be configured:

    <extensions>
       <action>
          <name>database</name>
          <module>foo</module>
          <function>bar</function>
          <index>99</index>
       </action>
    </extensions>

    The following elements are part of the extensions configuration section:

    action

    This is a subsection that contains configuration related to a single extended action.

    This section can be repeated as many times as is necessary.

    The action subsection contains the following fields:

    name

    Name of the extended action.

    Restrictions: Must be a non-empty string consisting of only lower-case letters and digits.

    module

    Name of the Python module associated with the extension function.

    Restrictions: Must be a non-empty string and a valid Python identifier.

    function

    Name of the Python extension function within the module.

    Restrictions: Must be a non-empty string and a valid Python identifier.

    index

    Index of action, for execution ordering.

    Restrictions: Must be an integer ≥ 0.

    Setting up a Pool of One

    Cedar Backup has been designed primarily for situations where there is a single master and a set of other clients that the master interacts with. However, it will just as easily work for a single machine (a backup pool of one).

    Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or the user that receives root's email). If you don't receive any emails, then you know your backup worked.

    Note: all of these configuration steps should be run as the root user, unless otherwise indicated.

    Tip

    This setup procedure discusses how to set up Cedar Backup in the normal case for a pool of one. If you would like to modify the way Cedar Backup works (for instance, by ignoring the store stage and just letting your backup sit in a staging directory), you can do that. You'll just have to modify the procedure below based on information in the remainder of the manual.

    Step 1: Decide when you will run your backup.

    There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of setting off these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later.

    Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly. Choose a backup time that will not interfere with normal use of your computer. Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc.

    Warning

    Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week. This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week. If you skip running Cedar Backup on the first day of the week, your backups will likely be confused until the next week begins, or until you re-run the backup using the --full flag.

    Step 2: Make sure email works.

    Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job. Since by default Cedar Backup only writes output to the terminal if errors occur, this ensures that notification emails will only be sent out if errors occur.

    In order to receive problem notifications, you must make sure that email works for the user which is running the Cedar Backup cron jobs (typically root). Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors.

    Step 3: Configure your writer device.

    Before using Cedar Backup, your writer device must be properly configured. If you have configured your CD/DVD writer hardware to work through the normal filesystem device path, then you just need to know the path to the device on disk (something like /dev/cdrw). Cedar Backup will use this device path both when talking to a command like cdrecord and when doing filesystem operations like running media validation.

    Your other option is to configure your CD writer hardware like a SCSI device (either because it is a SCSI device or because you are using some sort of interface that makes it look like one). In this case, Cedar Backup will use the SCSI id when talking to cdrecord and the device path when running filesystem operations.

    See the section called “Configuring your Writer Device” for more information on writer devices and how they are configured.

    Note

    There is no need to set up your CD/DVD device if you have decided not to execute the store action.

    Due to the underlying utilities that Cedar Backup uses, the SCSI id may only be used for CD writers, not DVD writers.

    Step 4: Configure your backup user.

    Choose a user to be used for backups. Some platforms may come with a ready-made backup user. For other platforms, you may have to create a user yourself. You may choose any id you like, but a descriptive name such as backup or cback is a good choice. See your distribution's documentation for information on how to add a user.

    Note

    Standard Debian systems come with a user named backup. You may choose to stay with this user or create another one.

    Step 5: Create your backup tree.

    Cedar Backup requires a backup directory tree on disk. This directory tree must be roughly three times as big as the amount of data that will be backed up on a nightly basis, to allow for the data to be collected, staged, and then placed into an ISO filesystem image on disk. (This is one disadvantage to using Cedar Backup in single-machine pools, but in this day of really large hard drives, it might not be an issue.) Note that if you elect not to purge the staging directory every night, you will need even more space.

    You should create a collect directory, a staging directory and a working (temporary) directory. One recommended layout is this:

    /opt/
         backup/
                collect/
                stage/
                tmp/
             

    If you will be backing up sensitive information (i.e. password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700.
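    The layout and permissions above can be set up with a few commands. This is a minimal sketch: ROOT defaults to a scratch directory so it can be tried safely, and the backup user name is an example; for real use, set ROOT=/opt and run as root.

```shell
# Sketch: create the recommended backup tree with mode 700.
# ROOT defaults to a scratch directory so this can be tried safely;
# set ROOT=/opt (and run as root) to create the real tree.
ROOT="${ROOT:-$(mktemp -d)}"
mkdir -p "$ROOT/backup/collect" "$ROOT/backup/stage" "$ROOT/backup/tmp"
chmod 700 "$ROOT/backup" "$ROOT/backup/collect" "$ROOT/backup/stage" "$ROOT/backup/tmp"
# For real use, also give the tree to your backup user (name is an example):
# chown -R backup:backup "$ROOT/backup"
```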

    Note

    You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my dumping ground for filesystems that Debian does not manage.

    Some users have requested that the Debian packages set up a more standard location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package. If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp.

    Step 6: Create the Cedar Backup configuration file.

    Following the instructions in the section called “Configuration File Format” (above) create a configuration file for your machine. Since you are working with a pool of one, you must configure all four action-specific sections: collect, stage, store and purge.

    The usual location for the Cedar Backup config file is /etc/cback3.conf. If you change the location, make sure you edit your cronjobs (below) to point the cback3 script at the correct config file (using the --config option).

    Warning

    Configuration files should always be writable only by root (or by the file owner, if the owner is not root).

    If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately. For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root).
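    Locking down the file comes to a single chmod. In this sketch, CONF defaults to a scratch file so the commands can be tried safely; point it at /etc/cback3.conf (and run as root) for real use.

```shell
# Sketch: restrict a configuration file containing passwords to its owner.
# CONF defaults to a scratch file; use CONF=/etc/cback3.conf for real use.
CONF="${CONF:-$(mktemp)}"
chmod 600 "$CONF"
ls -l "$CONF"
```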

    Step 7: Validate the Cedar Backup configuration file.

    Use the command cback3 validate to validate your configuration file. This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems, such as invalid CD/DVD device entries.

    Note: the most common cause of configuration problems is not closing XML tags properly. Any XML tag that is opened must be closed appropriately.
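    Since unclosed tags are the usual culprit, a quick well-formedness check before running cback3 validate can save time. This sketch assumes xmllint (from libxml2) is installed; CONF defaults to a scratch file with demo content, so point it at your real configuration file instead.

```shell
# Sketch: check the config file for unclosed tags before cback3 validate.
# CONF defaults to a scratch demo file; use CONF=/etc/cback3.conf for real.
CONF="${CONF:-$(mktemp --suffix=.xml)}"
[ -s "$CONF" ] || printf '<cb_config></cb_config>\n' > "$CONF"  # demo content only
if command -v xmllint >/dev/null 2>&1; then
    xmllint --noout "$CONF" && echo "well-formed: $CONF"
else
    echo "xmllint not installed; skipping check"
fi
```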

    Step 8: Test your backup.

    Place a valid CD/DVD disc in your drive, and then use the command cback3 --full all. You should execute this command as root. If the command completes with no output, then the backup was run successfully.

    Just to be sure that everything worked properly, check the logfile (/var/log/cback3.log) for errors and also mount the CD/DVD disc to be sure it can be read.

    If Cedar Backup ever completes normally but the disc that is created is not usable, please report this as a bug. [22] To be safe, always enable the consistency check option in the store configuration section.

    Step 9: Modify the backup cron jobs.

    Since Cedar Backup should be run as root, one way to configure the cron job is to add a line like this to your /etc/crontab file:

    30 00 * * * root  cback3 all
             

    Or, you can create an executable script containing just these lines and place that file in the /etc/cron.daily directory:

    #!/bin/sh
    cback3 all
             

    You should consider adding the --output or -O switch to your cback3 command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously.

    Note

    For general information about using cron, see the manpage for crontab(5).

    On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup3. As installed, this file contains several different settings, all commented out. Uncomment the Single machine (pool of one) entry in the file, and change the line so that the backup goes off when you want it to.

    Setting up a Client Peer Node

    Cedar Backup has been designed to back up entire pools of machines. In any given pool, there is one master and some number of clients. Most of the work takes place on the master, so configuring a client is a little simpler than configuring a master.

    Backups are designed to take place over an RSH or SSH connection. Because RSH is generally considered insecure, you are encouraged to use SSH rather than RSH. This document will only describe how to configure Cedar Backup to use SSH; if you want to use RSH, you're on your own.

    Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or the user that receives root's email). If you don't receive any emails, then you know your backup worked.

    Note: all of these configuration steps should be run as the root user, unless otherwise indicated.

    Note

    See Appendix D, Securing Password-less SSH Connections for some important notes on how to optionally further secure password-less SSH connections to your clients.

    Step 1: Decide when you will run your backup.

    There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of setting off these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later.

    Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly. Choose a backup time that will not interfere with normal use of your computer. Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc.

    Warning

    Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week. This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week. If you skip running Cedar Backup on the first day of the week, your backups will likely be confused until the next week begins, or until you re-run the backup using the --full flag.

    Step 2: Make sure email works.

    Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job. Since by default Cedar Backup only writes output to the terminal if errors occur, this neatly ensures that notification emails will only be sent out if errors occur.

    In order to receive problem notifications, you must make sure that email works for the user which is running the Cedar Backup cron jobs (typically root). Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors.

    Step 3: Configure the master in your backup pool.

    You will not be able to complete the client configuration until at least step 4 of the master's configuration has been completed. In particular, you will need to know the master's public SSH identity to fully configure a client.

    To find the master's public SSH identity, log in as the backup user on the master and cat the public identity file ~/.ssh/id_rsa.pub:

    user@machine> cat ~/.ssh/id_rsa.pub
    ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEA0vOKjlfwohPg1oPRdrmwHk75l3mI9Tb/WRZfVnu2Pw69
    uyphM9wBLRo6QfOC2T8vZCB8o/ZIgtAM3tkM0UgQHxKBXAZ+H36TOgg7BcI20I93iGtzpsMA/uXQy8kH
    HgZooYqQ9pw+ZduXgmPcAAv2b5eTm07wRqFt/U84k6bhTzs= user@machine
             

    Step 4: Configure your backup user.

    Choose a user to be used for backups. Some platforms may come with a ready-made backup user. For other platforms, you may have to create a user yourself. You may choose any id you like, but a descriptive name such as backup or cback is a good choice. See your distribution's documentation for information on how to add a user.

    Note

    Standard Debian systems come with a user named backup. You may choose to stay with this user or create another one.

    Once you have created your backup user, you must create an SSH keypair for it. Log in as your backup user, and then run the command ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa:

    user@machine> ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    Generating public/private rsa key pair.
    Created directory '/home/user/.ssh'.
    Your identification has been saved in /home/user/.ssh/id_rsa.
    Your public key has been saved in /home/user/.ssh/id_rsa.pub.
    The key fingerprint is:
    11:3e:ad:72:95:fe:96:dc:1e:3b:f4:cc:2c:ff:15:9e user@machine
             

    The default permissions for this directory should be fine. However, if the directory existed before you ran ssh-keygen, then you may need to modify the permissions. Make sure that the ~/.ssh directory is readable only by the backup user (i.e. mode 700), that the ~/.ssh/id_rsa file is readable and writable only by the backup user (i.e. mode 600) and that the ~/.ssh/id_rsa.pub file is writable only by the backup user (i.e. mode 600 or mode 644).

    Finally, take the master's public SSH identity (which you found in step 3) and cut-and-paste it into the file ~/.ssh/authorized_keys. Make sure the identity value is pasted into the file all on one line, and that the authorized_keys file is owned by your backup user and has permissions 600.
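    The paste-and-lock-down procedure can be sketched as follows. MASTER_KEY is a placeholder for the real single-line key copied from the master, and BACKUP_HOME defaults to a scratch directory so the commands can be tried safely; for real use it would be the backup user's home directory.

```shell
# Sketch: install the master's public identity for the client backup user.
# MASTER_KEY is a placeholder; paste the real single-line key from the master.
# BACKUP_HOME defaults to a scratch directory so this can be tried safely.
BACKUP_HOME="${BACKUP_HOME:-$(mktemp -d)}"
MASTER_KEY="ssh-rsa AAAA...placeholder... user@machine"
mkdir -p "$BACKUP_HOME/.ssh"
chmod 700 "$BACKUP_HOME/.ssh"
printf '%s\n' "$MASTER_KEY" >> "$BACKUP_HOME/.ssh/authorized_keys"
chmod 600 "$BACKUP_HOME/.ssh/authorized_keys"
```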

    If you have other preferences or standard ways of setting up your users' SSH configuration (i.e. different key type, etc.), feel free to do things your way. The important part is that the master must be able to SSH into a client with no password entry required.

    Step 5: Create your backup tree.

    Cedar Backup requires a backup directory tree on disk. This directory tree must be roughly as big as the amount of data that will be backed up on a nightly basis (more if you elect not to purge it all every night).

    You should create a collect directory and a working (temporary) directory. One recommended layout is this:

    /opt/
         backup/
                collect/
                tmp/
             

    If you will be backing up sensitive information (i.e. password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700.

    Note

    You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my dumping ground for filesystems that Debian does not manage.

    Some users have requested that the Debian packages set up a more "standard" location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package. If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp.

    Step 6: Create the Cedar Backup configuration file.

    Following the instructions in the section called “Configuration File Format” (above), create a configuration file for your machine. Since you are working with a client, you need to configure only the action-specific sections for the collect and purge actions.

    The usual location for the Cedar Backup config file is /etc/cback3.conf. If you change the location, make sure you edit your cronjobs (below) to point the cback3 script at the correct config file (using the --config option).

    Warning

    Configuration files should always be writable only by root (or by the file owner, if the owner is not root).

    If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately. For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root).

    Step 7: Validate the Cedar Backup configuration file.

    Use the command cback3 validate to validate your configuration file. This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems. This command only validates configuration on the one client, not the master or any other clients in a pool.

    Note: the most common cause of configuration problems is not closing XML tags properly. Any XML tag that is opened must be closed appropriately.

    Step 8: Test your backup.

    Use the command cback3 --full collect purge. If the command completes with no output, then the backup was run successfully. Just to be sure that everything worked properly, check the logfile (/var/log/cback3.log) for errors.

    Step 9: Modify the backup cron jobs.

    Since Cedar Backup should be run as root, you should add a set of lines like this to your /etc/crontab file:

    30 00 * * * root  cback3 collect
    30 06 * * * root  cback3 purge
             

    You should consider adding the --output or -O switch to your cback3 command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously.

    You will need to coordinate the collect and purge actions on the client so that the collect action completes before the master attempts to stage, and so that the purge action does not begin until after the master has completed staging. Usually, allowing an hour or two between steps should be sufficient. [23]

    Note

    For general information about using cron, see the manpage for crontab(5).

    On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup3. As installed, this file contains several different settings, all commented out. Uncomment the Client machine entries in the file, and change the lines so that the backup goes off when you want it to.

    Setting up a Master Peer Node

    Cedar Backup has been designed to back up entire pools of machines. In any given pool, there is one master and some number of clients. Most of the work takes place on the master, so configuring a master is somewhat more complicated than configuring a client.

    Backups are designed to take place over an RSH or SSH connection. Because RSH is generally considered insecure, you are encouraged to use SSH rather than RSH. This document will only describe how to configure Cedar Backup to use SSH; if you want to use RSH, you're on your own.

    Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or whichever other user receives root's email). If you don't receive any emails, then you know your backup worked.

    Note: all of these configuration steps should be run as the root user, unless otherwise indicated.

    Tip

    This setup procedure discusses how to set up Cedar Backup in the normal case for a master. If you would like to modify the way Cedar Backup works (for instance, by ignoring the store stage and just letting your backup sit in a staging directory), you can do that. You'll just have to modify the procedure below based on information in the remainder of the manual.

    Step 1: Decide when you will run your backup.

    There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of setting off these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later.

    Keep in mind that you do not necessarily have to run the collect action on the master. See notes further below for more information.

    Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly. Choose a backup time that will not interfere with normal use of your computer. Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc.

    Warning

    Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week. This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week. If you skip running Cedar Backup on the first day of the week, your backups will likely be confused until the next week begins, or until you re-run the backup using the --full flag.

    Step 2: Make sure email works.

    Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job. Since by default Cedar Backup only writes output to the terminal if errors occur, this neatly ensures that notification emails will only be sent out if errors occur.

    In order to receive problem notifications, you must make sure that email works for the user which is running the Cedar Backup cron jobs (typically root). Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors.

    Step 3: Configure your writer device.

    Before using Cedar Backup, your writer device must be properly configured. If you have configured your CD/DVD writer hardware to work through the normal filesystem device path, then you just need to know the path to the device on disk (something like /dev/cdrw). Cedar Backup will use this device path both when talking to a command like cdrecord and when doing filesystem operations like running media validation.

    Your other option is to configure your CD writer hardware like a SCSI device (either because it is a SCSI device or because you are using some sort of interface that makes it look like one). In this case, Cedar Backup will use the SCSI id when talking to cdrecord and the device path when running filesystem operations.

    See the section called “Configuring your Writer Device” for more information on writer devices and how they are configured.

    Note

    There is no need to set up your CD/DVD device if you have decided not to execute the store action.

    Due to the underlying utilities that Cedar Backup uses, the SCSI id may only be used for CD writers, not DVD writers.

    Step 4: Configure your backup user.

    Choose a user to be used for backups. Some platforms may come with a ready-made backup user. For other platforms, you may have to create a user yourself. You may choose any id you like, but a descriptive name such as backup or cback is a good choice. See your distribution's documentation for information on how to add a user.

    Note

    Standard Debian systems come with a user named backup. You may choose to stay with this user or create another one.

    Once you have created your backup user, you must create an SSH keypair for it. Log in as your backup user, and then run the command ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa:

    user@machine> ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    Generating public/private rsa key pair.
    Created directory '/home/user/.ssh'.
    Your identification has been saved in /home/user/.ssh/id_rsa.
    Your public key has been saved in /home/user/.ssh/id_rsa.pub.
    The key fingerprint is:
    11:3e:ad:72:95:fe:96:dc:1e:3b:f4:cc:2c:ff:15:9e user@machine
             

    The default permissions for this directory should be fine. However, if the directory existed before you ran ssh-keygen, then you may need to modify the permissions. Make sure that the ~/.ssh directory is readable only by the backup user (i.e. mode 700), that the ~/.ssh/id_rsa file is readable and writable only by the backup user (i.e. mode 600) and that the ~/.ssh/id_rsa.pub file is writable only by the backup user (i.e. mode 600 or mode 644).

    If you have other preferences or standard ways of setting up your users' SSH configuration (i.e. different key type, etc.), feel free to do things your way. The important part is that the master must be able to SSH into a client with no password entry required.

    Step 5: Create your backup tree.

    Cedar Backup requires a backup directory tree on disk. This directory tree must be large enough to hold roughly twice as much data as will be backed up from the entire pool on a given night, plus space for whatever is collected on the master itself. This allows all three operations - collect, stage and store - enough space to complete. Note that if you elect not to purge the staging directory every night, you will need even more space.

    You should create a collect directory, a staging directory and a working (temporary) directory. One recommended layout is this:

    /opt/
         backup/
                collect/
                stage/
                tmp/
             

    If you will be backing up sensitive information (i.e. password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700.

    Note

    You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my dumping ground for filesystems that Debian does not manage.

    Some users have requested that the Debian packages set up a more standard location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package. If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp.

    Step 6: Create the Cedar Backup configuration file.

    Following the instructions in the section called “Configuration File Format” (above), create a configuration file for your machine. Since you are working with a master machine, you would typically configure all four action-specific sections: collect, stage, store and purge.

    Note

    The master can treat itself as a client peer for certain actions. As an example, if you run the collect action on the master, then you will stage that data by configuring a local peer representing the master.

    Something else to keep in mind is that you do not really have to run the collect action on the master. For instance, you may prefer to just use your master machine as a consolidation point machine that just collects data from the other client machines in a backup pool. In that case, there is no need to collect data on the master itself.

    The usual location for the Cedar Backup config file is /etc/cback3.conf. If you change the location, make sure you edit your cronjobs (below) to point the cback3 script at the correct config file (using the --config option).

    Warning

    Configuration files should always be writable only by root (or by the file owner, if the owner is not root).

    If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately. For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root).

    Step 7: Validate the Cedar Backup configuration file.

    Use the command cback3 validate to validate your configuration file. This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems, such as invalid CD/DVD device entries. This command only validates configuration on the master, not any clients that the master might be configured to connect to.

    Note: the most common cause of configuration problems is not closing XML tags properly. Any XML tag that is opened must be closed appropriately.

    Step 8: Test connectivity to client machines.

    This step must wait until after your client machines have been at least partially configured. Once the backup user(s) have been configured on the client machine(s) in a pool, attempt an SSH connection to each client.

    Log in as the backup user on the master, and then use the command ssh user@machine where user is the name of the backup user on the client machine, and machine is the name of the client machine.

    If you are able to log in successfully to each client without entering a password, then things have been configured properly. Otherwise, double-check that you followed the user setup instructions for the master and the clients.
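    Checking the whole pool can be scripted. In this sketch, the client names in CLIENTS and the user name backup are placeholders for your own machines and user; the BatchMode option makes ssh fail immediately rather than prompting for a password.

```shell
# Sketch: check password-less SSH to each client in the pool.
# CLIENTS and the user name "backup" are placeholders for your own names;
# BatchMode makes ssh fail instead of prompting for a password.
CLIENTS="${CLIENTS:-client1 client2}"
RESULTS=""
for CLIENT in $CLIENTS; do
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "backup@$CLIENT" true 2>/dev/null; then
        RESULTS="$RESULTS $CLIENT:ok"
    else
        RESULTS="$RESULTS $CLIENT:failed"
    fi
done
echo "results:$RESULTS"
```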

    Step 9: Test your backup.

    Make sure that you have configured all of the clients in your backup pool. On all of the clients, execute cback3 --full collect. (You will probably have already tested this command on each of the clients, so it should succeed.)

    When all of the client backups have completed, place a valid CD/DVD disc in your drive, and then use the command cback3 --full all. You should execute this command as root. If the command completes with no output, then the backup was run successfully.

    Just to be sure that everything worked properly, check the logfile (/var/log/cback3.log) on the master and each of the clients, and also mount the CD/DVD disc on the master to be sure it can be read.

    You may also want to run cback3 purge on the master and each client once you have finished validating that everything worked.

    If Cedar Backup ever completes normally but the disc that is created is not usable, please report this as a bug. [22] To be safe, always enable the consistency check option in the store configuration section.

    Step 10: Modify the backup cron jobs.

    Since Cedar Backup should be run as root, you should add a set of lines like this to your /etc/crontab file:

    30 00 * * * root  cback3 collect
    30 02 * * * root  cback3 stage
    30 04 * * * root  cback3 store
    30 06 * * * root  cback3 purge
             

    You should consider adding the --output or -O switch to your cback3 command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously.

    You will need to coordinate the collect and purge actions on clients so that their collect actions complete before the master attempts to stage, and so that their purge actions do not begin until after the master has completed staging. Usually, allowing an hour or two between steps should be sufficient. [23]

    Note

    For general information about using cron, see the manpage for crontab(5).

    On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup3. As installed, this file contains several different settings, all commented out. Uncomment the Master machine entries in the file, and change the lines so that the backup goes off when you want it to.

    Configuring your Writer Device

    Device Types

    In order to execute the store action, you need to know how to identify your writer device. Cedar Backup supports two kinds of device types: CD writers and DVD writers. DVD writers are always referenced through a filesystem device name (i.e. /dev/dvd). CD writers can be referenced either through a SCSI id, or through a filesystem device name. Which you use depends on your operating system and hardware.

    Devices identified by device name

    For all DVD writers, and for CD writers on certain platforms, you will configure your writer device using only a device name. If your writer device works this way, you should just specify <target_device> in configuration. You can either leave <target_scsi_id> blank or remove it completely. The writer device will be used both to write to the device and for filesystem operations — for instance, when the media needs to be mounted to run the consistency check.

    Devices identified by SCSI id

    Cedar Backup can use devices identified by SCSI id only when configured to use the cdwriter device type.

    In order to use a SCSI device with Cedar Backup, you must know both the SCSI id <target_scsi_id> and the device name <target_device>. The SCSI id will be used to write to media using cdrecord; and the device name will be used for other filesystem operations.

    A true SCSI device will always have an address scsibus,target,lun (i.e. 1,6,2). This should hold true on most UNIX-like systems including Linux and the various BSDs (although I do not have a BSD system to test with currently). The SCSI address represents the location of your writer device on the one or more SCSI buses that you have available on your system.

    On some platforms, it is possible to reference non-SCSI writer devices (i.e. an IDE CD writer) using an emulated SCSI id. If you have configured your non-SCSI writer device to have an emulated SCSI id, provide the filesystem device path in <target_device> and the SCSI id in <target_scsi_id>, just like for a real SCSI device.

    You should note that in some cases, an emulated SCSI id takes the same form as a normal SCSI id, while in other cases you might see a method name prepended to the normal SCSI id (i.e. ATA:1,1,1).
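    Putting the two identifiers together, the writer settings in store configuration look roughly like this. The element names <target_device> and <target_scsi_id> come from the discussion above; the values shown are examples only.

```xml
<!-- Device identified by name only (any DVD writer, or a CD writer on
     platforms that support it): leave target_scsi_id out or empty. -->
<target_device>/dev/cdrw</target_device>

<!-- CD writer addressed by an (emulated) SCSI id: supply both values. -->
<target_device>/dev/cdrom</target_device>
<target_scsi_id>ATA:1,0,0</target_scsi_id>
```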

    Linux Notes

    On a Linux system, IDE writer devices often have an emulated SCSI address, which allows SCSI-based software to access the device through an IDE-to-SCSI interface. Under these circumstances, the first IDE writer device typically has an address 0,0,0. However, support for the IDE-to-SCSI interface has been deprecated and is not well-supported in newer kernels (kernel 2.6.x and later).

    Newer Linux kernels can address ATA or ATAPI drives without SCSI emulation by prepending a method indicator to the emulated device address. For instance, ATA:0,0,0 or ATAPI:0,0,0 are typical values.

    However, even this interface is deprecated as of late 2006, so with relatively new kernels you may be better off using the filesystem device path directly rather than relying on any SCSI emulation.

    Finding your Linux CD Writer

    Here are some hints about how to find your Linux CD writer hardware. First, try to reference your device using the filesystem device path:

    cdrecord -prcap dev=/dev/cdrom
             

    Running this command on my hardware gives output that looks like this (just the top few lines):

    Device type    : Removable CD-ROM
    Version        : 0
    Response Format: 2
    Capabilities   : 
    Vendor_info    : 'LITE-ON '
    Identification : 'DVDRW SOHW-1673S'
    Revision       : 'JS02'
    Device seems to be: Generic mmc2 DVD-R/DVD-RW.
    
    Drive capabilities, per MMC-3 page 2A:
             

    If this works, and the identifying information at the top of the output looks like your CD writer device, you've probably found a working configuration. Place the device path into <target_device> and leave <target_scsi_id> blank.

    If this doesn't work, you should try to find an ATA or ATAPI device:

    cdrecord -scanbus dev=ATA
    cdrecord -scanbus dev=ATAPI
             

    On my development system, I get a result that looks something like this for ATA:

    scsibus1:
            1,0,0   100) 'LITE-ON ' 'DVDRW SOHW-1673S' 'JS02' Removable CD-ROM
            1,1,0   101) *
            1,2,0   102) *
            1,3,0   103) *
            1,4,0   104) *
            1,5,0   105) *
            1,6,0   106) *
            1,7,0   107) *
             

    Again, if you get a result that you recognize, you have probably found a working configuration. Place the associated device path (in my case, /dev/cdrom) into <target_device> and put the emulated SCSI id (in this case, ATA:1,0,0) into <target_scsi_id>.
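
    Putting the pieces together, the resulting configuration fragment might look like this (a sketch based on the example above; other required store options are omitted):

```xml
<store>
   <device_type>cdwriter</device_type>
   <target_device>/dev/cdrom</target_device>
   <target_scsi_id>ATA:1,0,0</target_scsi_id>
</store>
```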

    Any further discussion of how to configure your CD writer hardware is outside the scope of this document. If you have tried the hints above and still can't get things working, you may want to reference the Linux CDROM HOWTO (http://www.tldp.org/HOWTO/CDROM-HOWTO) or the ATA RAID HOWTO (http://www.tldp.org/HOWTO/ATA-RAID-HOWTO/index.html) for more information.

    Mac OS X Notes

    On a Mac OS X (darwin) system, things get strange. Apple has abandoned traditional SCSI device identifiers in favor of a system-wide resource id. So, on a Mac, your writer device will have a name something like IOCompactDiscServices (for a CD writer) or IODVDServices (for a DVD writer). If you have multiple drives, the second drive probably has a number appended, i.e. IODVDServices/2 for the second DVD writer. You can try to figure out what the name of your device is by grepping through the output of the command ioreg -l.[24]

    Unfortunately, even if you can figure out what device to use, I can't really support the store action on this platform. In OS X, the automount function of the Finder interferes significantly with Cedar Backup's ability to mount and unmount media and write to the CD or DVD hardware. The Cedar Backup writer and image functionality does work on this platform, but the effort required to fight the operating system about who owns the media and the device makes it nearly impossible to execute the store action successfully.

    Optimized Blanking Strategy

    When the optimized blanking strategy has not been configured, Cedar Backup uses a simplistic approach: rewritable media is blanked at the beginning of every week, period.

    Since rewritable media can be blanked only a finite number of times before becoming unusable, some users — especially users of rewritable DVD media with its large capacity — may prefer to blank the media less often.

    If the optimized blanking strategy is configured, Cedar Backup will use a blanking factor and attempt to determine whether future backups will fit on the current media. If it looks like backups will fit, then the media will not be blanked.

    This feature will only be useful (assuming a single disc is used for the whole week's backups) if the estimated total size of the weekly backup is considerably smaller than the capacity of the media (no more than 50% of the total media capacity), and only if the size of the backup can be expected to remain fairly constant over time (no frequent rapid growth expected).

    There are two blanking modes: daily and weekly. If the weekly blanking mode is set, Cedar Backup will only estimate future capacity (and potentially blank the disc) once per week, on the starting day of the week. If the daily blanking mode is set, Cedar Backup will estimate future capacity (and potentially blank the disc) every time it is run. You should only use the daily blanking mode in conjunction with daily collect configuration, otherwise you will risk losing data.

    If you are using the daily blanking mode, you can typically set the blanking value to 1.0. This will cause Cedar Backup to blank the media whenever there is not enough space to store the current day's backup.

    If you are using the weekly blanking mode, then finding the correct blanking factor will require some experimentation. Cedar Backup estimates future capacity based on the configured blanking factor. The disc will be blanked if the following relationship is true:

    bytes available / (1 + bytes required) ≤ blanking factor
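
    Expressed as a minimal Python sketch (illustrative only, not the actual Cedar Backup implementation):

```python
def should_blank(bytes_available, bytes_required, blanking_factor):
    """Sketch of the optimized blanking decision: blank the disc when
    available capacity divided by (1 + bytes required) falls at or
    below the configured blanking factor."""
    return bytes_available / (1 + bytes_required) <= blanking_factor
```

    With the daily blanking mode and a factor of 1.0, this blanks the media only when remaining capacity is no larger than the space the current backup requires.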
          

    Another way to look at this is to consider the blanking factor as a sort of (upper) backup growth estimate:

    Total size of weekly backup / Full backup size at the start of the week
          

    This ratio can be estimated using a week or two of previous backups. For instance, take this example, where March 10 is the start of the week and March 4 through March 9 represent the incremental backups from the previous week:

    /opt/backup/staging# du -s 2007/03/*
    3040    2007/03/01
    3044    2007/03/02
    6812    2007/03/03
    3044    2007/03/04
    3152    2007/03/05
    3056    2007/03/06
    3060    2007/03/07
    3056    2007/03/08
    4776    2007/03/09
    6812    2007/03/10
    11824   2007/03/11
          

    In this case, the ratio is approximately 4:

    (6812 + 3044 + 3152 + 3056 + 3060 + 3056 + 4776) / 6812 = 3.9571
          

    To be safe, you might choose to configure a factor of 5.0.
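
    The arithmetic behind that choice can be checked directly (a sketch using the du figures from the example, in kilobytes):

```python
# du figures from the staging example above (kilobytes)
full_backup = 6812                                   # 2007/03/10, start-of-week full backup
incrementals = [3044, 3152, 3056, 3060, 3056, 4776]  # 2007/03/04 through 2007/03/09

factor = (full_backup + sum(incrementals)) / full_backup
print(round(factor, 4))  # prints 3.9571, so a configured factor of 5.0 leaves headroom
```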

    Setting a higher value reduces the risk of exceeding media capacity mid-week but might result in blanking the media more often than is necessary.

    If you run out of space mid-week, then the solution is to run the rebuild action. If this happens frequently, a higher blanking factor value should be used.

    Chapter 6. Official Extensions

    System Information Extension

    The System Information Extension is a simple Cedar Backup extension used to save off important system recovery information that might be useful when reconstructing a broken system. It is intended to be run either immediately before or immediately after the standard collect action.

    This extension saves off the following information to the configured Cedar Backup collect directory. The saved data is always compressed using bzip2.

    • Currently-installed Debian packages via dpkg --get-selections

    • Disk partition information via fdisk -l

    • System-wide mounted filesystem contents, via ls -laR

    The Debian-specific information is only collected on systems where /usr/bin/dpkg exists.

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>sysinfo</name>
          <module>CedarBackup3.extend.sysinfo</module>
          <function>executeAction</function>
          <index>99</index>
       </action>
    </extensions>
          

    This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, but requires no new configuration of its own.

    Amazon S3 Extension

    The Amazon S3 extension writes data to Amazon S3 cloud storage rather than to physical media. It is intended to replace the store action, but you can also use it alongside the store action if you'd prefer to back up your data in more than one place. This extension must be run after the stage action.

    The underlying functionality relies on the AWS CLI toolset. Before you use this extension, you need to set up your Amazon S3 account and configure AWS CLI as detailed in Amazon's setup guide. The extension assumes that the backup is being executed as root, and switches over to the configured backup user to run the aws program. So, make sure you configure the AWS CLI tools as the backup user and not root. (This is different from the amazons3 sync tool, which executes AWS CLI commands as the same user that is running the tool.)

    When using physical media via the standard store action, there is an implicit limit to the size of a backup, since a backup must fit on a single disc. Since there is no physical media, no such limit exists for Amazon S3 backups. This leaves open the possibility that Cedar Backup might construct an unexpectedly-large backup that the administrator is not aware of. Over time, this might become expensive, either in terms of network bandwidth or in terms of Amazon S3 storage and I/O charges. To mitigate this risk, set a reasonable maximum size using the configuration elements shown below. If the backup fails, you have a chance to review what made the backup larger than you expected, and you can either correct the problem (i.e. remove a large temporary directory that got inadvertently included in the backup) or change configuration to take into account the new "normal" maximum size.

    You can optionally configure Cedar Backup to encrypt data before sending it to S3. To do that, provide a complete command line using the ${input} and ${output} variables to represent the original input file and the encrypted output file. This command will be executed as the backup user.

    For instance, you can use something like this with GPG:

    /usr/bin/gpg -c --no-use-agent --batch --yes --passphrase-file /home/backup/.passphrase -o ${output} ${input}
          

    The GPG mechanism depends on a strong passphrase for security. One way to generate a strong passphrase is using your system random number generator, i.e.:

    dd if=/dev/urandom count=20 bs=1 | xxd -ps
          

    (See StackExchange for more details about that advice.) If you decide to use encryption, make sure you save off the passphrase in a safe place, so you can get at your backup data later if you need to. And obviously, make sure to set permissions on the passphrase file so it can only be read by the backup user.

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>amazons3</name>
          <module>CedarBackup3.extend.amazons3</module>
          <function>executeAction</function>
          <index>201</index> <!-- just after stage -->
       </action>
    </extensions>
          

    This extension relies on the options and staging configuration sections in the standard Cedar Backup configuration file, and then also requires its own amazons3 configuration section. This is an example configuration section with encryption disabled:

    <amazons3>
          <s3_bucket>example.com-backup/staging</s3_bucket>
    </amazons3>
          

    The following elements are part of the Amazon S3 configuration section:

    warn_midnite

    Whether to generate warnings for crossing midnite.

    This field indicates whether warnings should be generated if the Amazon S3 operation has to cross a midnite boundary in order to find data to write to the cloud. For instance, a warning would be generated if valid data was only found in the day before or day after the current day.

    Configuration for some users is such that the amazons3 operation will always cross a midnite boundary, so they will not care about this warning. Other users will expect to never cross a boundary, and want to be notified that something strange might have happened.

    This field is optional. If it doesn't exist, then N will be assumed.

    Restrictions: Must be a boolean (Y or N).

    s3_bucket

    The name of the Amazon S3 bucket that data will be written to.

    This field configures the S3 bucket that your data will be written to. In S3, buckets are named globally. For uniqueness, you would typically use the name of your domain followed by some suffix, such as example.com-backup. If you want, you can specify a subdirectory within the bucket, such as example.com-backup/staging.

    Restrictions: Must be non-empty.

    encrypt

    Command used to encrypt backup data before upload to S3

    If this field is provided, then data will be encrypted before it is uploaded to Amazon S3. You must provide the entire command used to encrypt a file, including the ${input} and ${output} variables. An example GPG command is shown above, but you can use any mechanism you choose. The command will be run as the configured backup user.

    Restrictions: If provided, must be non-empty.

    full_size_limit

    Maximum size of a full backup

    If this field is provided, then a size limit will be applied to full backups. If the total size of the selected staging directory is greater than the limit, then the backup will fail.

    You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB).

    Valid examples are 10240, 250 MB or 1.1 GB.
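
    To illustrate how the two forms relate, here is a hypothetical parser (parse_quantity and its 1024-based unit multipliers are illustrative assumptions, not the actual Cedar Backup code):

```python
def parse_quantity(value):
    """Parse a size-limit string: either a bare byte count ("10240")
    or a number followed by a unit ("250 MB", "1.1 GB")."""
    units = {"KB": 1024, "MB": 1024 ** 2, "GB": 1024 ** 3}
    parts = value.strip().split()
    if len(parts) == 2:
        return float(parts[0]) * units[parts[1]]
    return float(value)
```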

    Restrictions: Must be a value as described above, greater than zero.

    incr_size_limit

    Maximum size of an incremental backup

    If this field is provided, then a size limit will be applied to incremental backups. If the total size of the selected staging directory is greater than the limit, then the backup will fail.

    You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB).

    Valid examples are 10240, 250 MB or 1.1 GB.

    Restrictions: Must be a value as described above, greater than zero.

    Subversion Extension

    The Subversion Extension is a Cedar Backup extension used to back up Subversion [25] version control repositories via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action.

    Each configured Subversion repository can be backed up using the same collect modes allowed for filesystems in the standard Cedar Backup collect action (weekly, daily, incremental), and the output can be compressed using either gzip or bzip2.

    There are two different kinds of Subversion repositories at this writing: BDB (Berkeley Database) and FSFS (a "filesystem within a filesystem"). This extension backs up both kinds of repositories in the same way, using svnadmin dump in an incremental mode.

    It turns out that FSFS repositories can also be backed up just like any other filesystem directory. If you would rather do the backup that way, then use the normal collect action rather than this extension. If you decide to do that, be sure to consult the Subversion documentation and make sure you understand the limitations of this kind of backup.

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>subversion</name>
          <module>CedarBackup3.extend.subversion</module>
          <function>executeAction</function>
          <index>99</index>
       </action>
    </extensions>
          

    This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own subversion configuration section. This is an example Subversion configuration section:

    <subversion>
       <collect_mode>incr</collect_mode>
       <compress_mode>bzip2</compress_mode>
       <repository>
          <abs_path>/opt/public/svn/docs</abs_path>
       </repository>
       <repository>
          <abs_path>/opt/public/svn/web</abs_path>
          <compress_mode>gzip</compress_mode>
       </repository>
       <repository_dir>
          <abs_path>/opt/private/svn</abs_path>
          <collect_mode>daily</collect_mode>
       </repository_dir>
    </subversion>
          

    The following elements are part of the Subversion configuration section:

    collect_mode

    Default collect mode.

    The collect mode describes how frequently a Subversion repository is backed up. The Subversion extension recognizes the same collect modes as the standard Cedar Backup collect action (see Chapter2, Basic Concepts).

    This value is the collect mode that will be used by default during the backup process. Individual repositories (below) may override this value. If all individual repositories provide their own value, then this default value may be omitted from configuration.

    Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Default compress mode.

    Subversion repository backups are just specially-formatted text files, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all.

    This value is the compress mode that will be used by default during the backup process. Individual repositories (below) may override this value. If all individual repositories provide their own value, then this default value may be omitted from configuration.

    Restrictions: Must be one of none, gzip or bzip2.

    repository

    A Subversion repository to be collected.

    This is a subsection which contains information about a specific Subversion repository to be backed up.

    This section can be repeated as many times as is necessary. At least one repository or repository directory must be configured.

    The repository subsection contains the following fields:

    collect_mode

    Collect mode for this repository.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Compress mode for this repository.

    This field is optional. If it doesn't exist, the backup will use the default compress mode.

    Restrictions: Must be one of none, gzip or bzip2.

    abs_path

    Absolute path of the Subversion repository to back up.

    Restrictions: Must be an absolute path.

    repository_dir

    A Subversion parent repository directory to be collected.

    This is a subsection which contains information about a Subversion parent repository directory to be backed up. Any subdirectory immediately within this directory is assumed to be a Subversion repository, and will be backed up.

    This section can be repeated as many times as is necessary. At least one repository or repository directory must be configured.

    The repository_dir subsection contains the following fields:

    collect_mode

    Collect mode for this repository.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Compress mode for this repository.

    This field is optional. If it doesn't exist, the backup will use the default compress mode.

    Restrictions: Must be one of none, gzip or bzip2.

    abs_path

    Absolute path of the Subversion repository to back up.

    Restrictions: Must be an absolute path.

    exclude

    List of paths or patterns to exclude from the backup.

    This is a subsection which contains a set of paths and patterns to be excluded within this subversion parent directory.

    This section is entirely optional, and if it exists can also be empty.

    The exclude subsection can contain one or more of each of the following fields:

    rel_path

    A relative path to be excluded from the backup.

    The path is assumed to be relative to the subversion parent directory itself. For instance, if the configured subversion parent directory is /opt/svn, a configured relative path of software would exclude the path /opt/svn/software.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty.

    pattern

    A pattern to be excluded from the backup.

    The pattern must be a Python regular expression. [21] It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $).

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty
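
    Because the pattern is bounded at both ends, matching behaves like Python's re.fullmatch(). This can be sketched as follows (excluded is an illustrative helper, not part of Cedar Backup):

```python
import re

def excluded(path, pattern):
    # The pattern is treated as if it begins with ^ and ends with $,
    # which is exactly what re.fullmatch() provides.
    return re.fullmatch(pattern, path) is not None
```

    So a pattern like .*debian.* excludes lists.debian.user, while a bare debian matches only a path named exactly debian.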

    MySQL Extension

    The MySQL Extension is a Cedar Backup extension used to back up MySQL [26] databases via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action.

    Note

    This extension always produces a full backup. There is currently no facility for making incremental backups. If/when someone has a need for this and can describe how to do it, I will update this extension or provide another.

    The backup is done via the mysqldump command included with the MySQL product. Output can be compressed using gzip or bzip2. Administrators can configure the extension either to back up all databases or to back up only specific databases.

    The extension assumes that all configured databases can be backed up by a single user. Often, the root database user will be used. An alternative is to create a separate MySQL backup user and grant that user rights to read (but not write) various databases as needed. This second option is probably your best choice.

    Warning

    The extension accepts a username and password in configuration. However, you probably do not want to list those values in Cedar Backup configuration. This is because Cedar Backup will provide these values to mysqldump via the command-line --user and --password switches, which will be visible to other users in the process listing.

    Instead, you should configure the username and password in one of MySQL's configuration files. Typically, that would be done by putting a stanza like this in /root/.my.cnf:

    [mysqldump]
    user     = root
    password = <secret>
             

    Of course, if you are executing the backup as a user other than root, then you would create the file in that user's home directory instead.

    As a side note, it is also possible to configure .my.cnf such that Cedar Backup can back up a remote database server:

    [mysqldump]
    host = remote.host
             

    For this to work, you will also need to grant privileges properly for the user which is executing the backup. See your MySQL documentation for more information about how this can be done.

    Regardless of whether you are using ~/.my.cnf or /etc/cback3.conf to store database login and password information, you should be careful about who is allowed to view that information. Typically, this means locking down permissions so that only the file owner can read the file contents (i.e. use mode 0600).

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>mysql</name>
          <module>CedarBackup3.extend.mysql</module>
          <function>executeAction</function>
          <index>99</index>
       </action>
    </extensions>
          

    This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own mysql configuration section. This is an example MySQL configuration section:

    <mysql>
       <compress_mode>bzip2</compress_mode>
       <all>Y</all>
    </mysql>
          

    If you have decided to configure login information in Cedar Backup rather than using MySQL configuration, then you would add the username and password fields to configuration:

    <mysql>
       <user>root</user>
       <password>password</password>
       <compress_mode>bzip2</compress_mode>
       <all>Y</all>
    </mysql>
          

    The following elements are part of the MySQL configuration section:

    user

    Database user.

    The database user that the backup should be executed as. Even if you list more than one database (below) all backups must be done as the same user. Typically, this would be root (i.e. the database root user, not the system root user).

    This value is optional. You should probably configure the username and password in MySQL configuration instead, as discussed above.

    Restrictions: If provided, must be non-empty.

    password

    Password associated with the database user.

    This value is optional. You should probably configure the username and password in MySQL configuration instead, as discussed above.

    Restrictions: If provided, must be non-empty.

    compress_mode

    Compress mode.

    MySQL database dumps are just specially-formatted text files, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all.

    Restrictions: Must be one of none, gzip or bzip2.

    all

    Indicates whether to back up all databases.

    If this value is Y, then all MySQL databases will be backed up. If this value is N, then one or more specific databases must be specified (see below).

    If you choose this option, the entire database backup will go into one big dump file.

    Restrictions: Must be a boolean (Y or N).

    database

    Named database to be backed up.

    If you choose to specify individual databases rather than all databases, then each database will be backed up into its own dump file.

    This field can be repeated as many times as is necessary. At least one database must be configured if the all option (above) is set to N. You may not configure any individual databases if the all option is set to Y.

    Restrictions: Must be non-empty.
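
    For example, to back up two specific databases rather than all of them, configuration would look something like this (the database names are placeholders):

```xml
<mysql>
   <compress_mode>bzip2</compress_mode>
   <all>N</all>
   <database>db1</database>
   <database>db2</database>
</mysql>
```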

    PostgreSQL Extension

    The PostgreSQL Extension is a Cedar Backup extension used to back up PostgreSQL [27] databases via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action.

    The backup is done via the pg_dump or pg_dumpall commands included with the PostgreSQL product. Output can be compressed using gzip or bzip2. Administrators can configure the extension either to back up all databases or to back up only specific databases.

    The extension assumes that the current user has passwordless access to the database, since there is no easy way to pass a password to the pg_dump client. This can be accomplished using appropriate configuration in the pg_hba.conf file.

    This extension always produces a full backup. There is currently no facility for making incremental backups.

    Warning

    Once you place PostgreSQL configuration into the Cedar Backup configuration file, you should be careful about who is allowed to see that information. This is because PostgreSQL configuration will contain information about available PostgreSQL databases and usernames. Typically, you might want to lock down permissions so that only the file owner can read the file contents (i.e. use mode 0600).

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>postgresql</name>
          <module>CedarBackup3.extend.postgresql</module>
          <function>executeAction</function>
          <index>99</index>
       </action>
    </extensions>
          

    This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own postgresql configuration section. This is an example PostgreSQL configuration section:

    <postgresql>
       <compress_mode>bzip2</compress_mode>
       <user>username</user>
       <all>Y</all>
    </postgresql>
          

    If you decide to back up specific databases, then you would list them individually, like this:

    <postgresql>
       <compress_mode>bzip2</compress_mode>
       <user>username</user>
       <all>N</all>
       <database>db1</database>
       <database>db2</database>
    </postgresql>
          

    The following elements are part of the PostgreSQL configuration section:

    user

    Database user.

    The database user that the backup should be executed as. Even if you list more than one database (below) all backups must be done as the same user.

    This value is optional.

    Consult your PostgreSQL documentation for information on how to configure a default database user outside of Cedar Backup, and for information on how to specify a database password when you configure a user within Cedar Backup. You will probably want to modify pg_hba.conf.

    Restrictions: If provided, must be non-empty.

    compress_mode

    Compress mode.

    PostgreSQL database dumps are just specially-formatted text files, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all.

    Restrictions: Must be one of none, gzip or bzip2.

    all

    Indicates whether to back up all databases.

    If this value is Y, then all PostgreSQL databases will be backed up. If this value is N, then one or more specific databases must be specified (see below).

    If you choose this option, the entire database backup will go into one big dump file.

    Restrictions: Must be a boolean (Y or N).

    database

    Named database to be backed up.

    If you choose to specify individual databases rather than all databases, then each database will be backed up into its own dump file.

    This field can be repeated as many times as is necessary. At least one database must be configured if the all option (above) is set to N. You may not configure any individual databases if the all option is set to Y.

    Restrictions: Must be non-empty.

    Mbox Extension

    The Mbox Extension is a Cedar Backup extension used to incrementally back up UNIX-style mbox mail folders via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action.

    Mbox mail folders are not well-suited to being backed up by the normal Cedar Backup incremental backup process. This is because active folders are typically appended to on a daily basis. This forces the incremental backup process to back them up every day in order to avoid losing data. This can result in quite a bit of wasted space when backing up large mail folders.

    What the mbox extension does is leverage the grepmail utility to back up only email messages which have been received since the last incremental backup. This way, even if a folder is added to every day, only the recently-added messages are backed up. This can potentially save a lot of space.

    Each configured mbox file or directory can be backed up using the same collect modes allowed for filesystems in the standard Cedar Backup collect action (weekly, daily, incremental), and the output can be compressed using either gzip or bzip2.

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>mbox</name>
          <module>CedarBackup3.extend.mbox</module>
          <function>executeAction</function>
          <index>99</index>
       </action>
    </extensions>
          

    This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own mbox configuration section. This is an example mbox configuration section:

    <mbox>
       <collect_mode>incr</collect_mode>
       <compress_mode>gzip</compress_mode>
       <file>
          <abs_path>/home/user1/mail/greylist</abs_path>
          <collect_mode>daily</collect_mode>
       </file>
       <dir>
          <abs_path>/home/user2/mail</abs_path>
       </dir>
       <dir>
          <abs_path>/home/user3/mail</abs_path>
          <exclude>
             <rel_path>spam</rel_path>
             <pattern>.*debian.*</pattern>
          </exclude>
       </dir>
    </mbox>
          

    Configuration is much like the standard collect action. Differences come from the fact that mbox directories are not collected recursively.

    Unlike collect configuration, exclusion information can only be configured at the mbox directory level (there are no global exclusions). Another difference is that absolute exclusion paths are not allowed; only relative path exclusions and patterns are supported.

    The following elements are part of the mbox configuration section:

    collect_mode

    Default collect mode.

    The collect mode describes how frequently an mbox file or directory is backed up. The mbox extension recognizes the same collect modes as the standard Cedar Backup collect action (see Chapter 2, Basic Concepts).

    This value is the collect mode that will be used by default during the backup process. Individual files or directories (below) may override this value. If all individual files or directories provide their own value, then this default value may be omitted from configuration.

    Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Default compress mode.

    Mbox file or directory backups are just text, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all.

    This value is the compress mode that will be used by default during the backup process. Individual files or directories (below) may override this value. If all individual files or directories provide their own value, then this default value may be omitted from configuration.

    Restrictions: Must be one of none, gzip or bzip2.

    file

    An individual mbox file to be collected.

    This is a subsection which contains information about an individual mbox file to be backed up.

    This section can be repeated as many times as is necessary. At least one mbox file or directory must be configured.

    The file subsection contains the following fields:

    collect_mode

    Collect mode for this file.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Compress mode for this file.

    This field is optional. If it doesn't exist, the backup will use the default compress mode.

    Restrictions: Must be one of none, gzip or bzip2.

    abs_path

    Absolute path of the mbox file to back up.

    Restrictions: Must be an absolute path.

    dir

    An mbox directory to be collected.

    This is a subsection which contains information about an mbox directory to be backed up. An mbox directory is a directory containing mbox files. Every file in an mbox directory is assumed to be an mbox file. Mbox directories are not collected recursively. Only the files immediately within the configured directory will be backed up, and any subdirectories will be ignored.

    This section can be repeated as many times as is necessary. At least one mbox file or directory must be configured.

    The dir subsection contains the following fields:

    collect_mode

    Collect mode for this directory.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Compress mode for this directory.

    This field is optional. If it doesn't exist, the backup will use the default compress mode.

    Restrictions: Must be one of none, gzip or bzip2.

    abs_path

    Absolute path of the mbox directory to back up.

    Restrictions: Must be an absolute path.

    exclude

    List of paths or patterns to exclude from the backup.

    This is a subsection which contains a set of paths and patterns to be excluded within this mbox directory.

    This section is entirely optional, and if it exists can also be empty.

    The exclude subsection can contain one or more of each of the following fields:

    rel_path

    A relative path to be excluded from the backup.

    The path is assumed to be relative to the mbox directory itself. For instance, if the configured mbox directory is /home/user2/mail, a configured relative path of SPAM would exclude the path /home/user2/mail/SPAM.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty.

    pattern

    A pattern to be excluded from the backup.

    The pattern must be a Python regular expression. [21] It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $).

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty.
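    Because each pattern is implicitly anchored at both ends, it behaves like Python's re.fullmatch. A quick sketch of the matching rule (the helper function name is ours, not part of Cedar Backup):

    ```python
    import re

    def patternExcludes(pattern, name):
        """Return True if name is excluded, treating pattern as if it were ^pattern$."""
        return re.fullmatch(pattern, name) is not None

    patternExcludes(".*debian.*", "lists.debian.org")  # True: the pattern spans the whole name
    patternExcludes("debian", "debian-announce")       # False: a bare substring does not match
    ```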

    Encrypt Extension

    The Encrypt Extension is a Cedar Backup extension used to encrypt backups. It does this by encrypting the contents of a master's staging directory each day after the stage action is run. This way, backed-up data is encrypted both when sitting on the master and when written to disc. This extension must be run before the standard store action, otherwise unencrypted data will be written to disc.

    There are several different ways encryption could have been built into or layered onto Cedar Backup. I asked the mailing list for opinions on the subject in January 2007 and did not get a lot of feedback, so I chose the option that was simplest to understand and simplest to implement. If other encryption use cases make themselves known in the future, this extension can be enhanced or replaced.

    Currently, this extension supports only GPG. However, it would be straightforward to support other public-key encryption mechanisms, such as OpenSSL.

    Warning

    If you decide to encrypt your backups, be absolutely sure that you have your GPG secret key saved off someplace safe — someplace other than on your backup disc. If you lose your secret key, your backup will be useless.

    I suggest that before you rely on this extension, you should execute a dry run and make sure you can successfully decrypt the backup that is written to disc.

    Before configuring the Encrypt extension, you must configure GPG. Either create a new keypair or use an existing one. Determine which user will execute your backup (typically root) and have that user import and lsign the public half of the keypair. Then, save off the secret half of the keypair someplace safe, apart from your backup (e.g. on a floppy disk or USB drive). Make sure you know the recipient name associated with the public key because you'll need it to configure Cedar Backup. (If you can run gpg -e -r "Recipient Name" file.txt and it executes cleanly with no user interaction required, you should be OK.)

    An encrypted backup has the same file structure as a normal backup, so all of the instructions in Appendix C, Data Recovery apply. The only difference is that encrypted files will have an additional .gpg extension (so for instance file.tar.gz becomes file.tar.gz.gpg). To recover decrypted data, simply log on as a user which has access to the secret key and decrypt the .gpg file that you are interested in. Then, recover the data as usual.

    Note: I am being intentionally vague about how to configure and use GPG, because I do not want to encourage neophytes to blindly use this extension. If you do not already understand GPG well enough to follow the two paragraphs above, do not use this extension. Instead, before encrypting your backups, check out the excellent GNU Privacy Handbook at http://www.gnupg.org/gph/en/manual.html and gain an understanding of how encryption can help you or hurt you.

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>encrypt</name>
          <module>CedarBackup3.extend.encrypt</module>
          <function>executeAction</function>
          <index>301</index>
       </action>
    </extensions>
          

    This extension relies on the options and staging configuration sections in the standard Cedar Backup configuration file, and then also requires its own encrypt configuration section. This is an example Encrypt configuration section:

    <encrypt>
       <encrypt_mode>gpg</encrypt_mode>
       <encrypt_target>Backup User</encrypt_target>
    </encrypt>
          

    The following elements are part of the Encrypt configuration section:

    encrypt_mode

    Encryption mode.

    This value specifies which encryption mechanism will be used by the extension.

    Currently, only the GPG public-key encryption mechanism is supported.

    Restrictions: Must be gpg.

    encrypt_target

    Encryption target.

    The value in this field is dependent on the encryption mode. For the gpg mode, this is the name of the recipient whose public key will be used to encrypt the backup data, i.e. the value accepted by gpg -r.
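    For the gpg mode, encryption amounts to invoking gpg with the configured recipient, as in the gpg -e -r "Recipient Name" check described earlier. A minimal sketch of the kind of command involved (the --batch and --yes flags are assumptions chosen so the command needs no user interaction; the exact flags Cedar Backup passes may differ):

    ```python
    def buildGpgEncryptCommand(recipient, sourcePath):
        """Build a gpg public-key encryption command for one staged file.

        The output path is the source path plus a .gpg extension, matching
        the naming convention described in this section.
        """
        return ["gpg", "--batch", "--yes", "-e", "-r", recipient,
                "-o", sourcePath + ".gpg", sourcePath]

    command = buildGpgEncryptCommand("Backup User", "/var/backup/staging/etc.tar.gz")
    ```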

    Split Extension

    The Split Extension is a Cedar Backup extension used to split up large files within staging directories. It is probably only useful in combination with the cback3-span command, which requires individual files within staging directories to each be smaller than a single disc.

    You would normally run this action immediately after the standard stage action, but you could also choose to run it by hand immediately before running cback3-span.

    The split extension uses the standard UNIX split tool to split the large files up. This tool simply splits the files at fixed byte offsets. It has no knowledge of file formats.

    Note: this means that in order to recover the data in your original large file, you must have every file that the original file was split into. Think carefully about whether this is what you want. It might not sound like a huge limitation, but cback3-span might put an individual file on any disc in a set; the files split from one larger file will not necessarily be together. That means you will probably need every disc in your backup set in order to recover any data from the backup set.

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions> 
       <action>
          <name>split</name>
          <module>CedarBackup3.extend.split</module>
          <function>executeAction</function>
          <index>299</index>
       </action>
    </extensions>
          

    This extension relies on the options and staging configuration sections in the standard Cedar Backup configuration file, and then also requires its own split configuration section. This is an example Split configuration section:

    <split>
       <size_limit>250 MB</size_limit>
       <split_size>100 MB</split_size>
    </split>
          

    The following elements are part of the Split configuration section:

    size_limit

    Size limit.

    Files with a size strictly larger than this limit will be split by the extension.

    You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB).

    Valid examples are 10240, 250 MB or 1.1 GB.

    Restrictions: Must be a size as described above.
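    The size forms described above can be parsed with a few lines of Python. This is an illustrative sketch, not the ByteQuantity class Cedar Backup actually uses, and the assumption that units are powers of two (KB = 1024 bytes) is ours:

    ```python
    import re

    UNITS = {"KB": 1024, "MB": 1024 ** 2, "GB": 1024 ** 3}

    def parseByteQuantity(value):
        """Convert "10240", "250 MB" or "1.1 GB" into a number of bytes."""
        match = re.fullmatch(r"\s*([0-9.]+)\s*(KB|MB|GB)?\s*", value)
        if match is None:
            raise ValueError("Not a valid size: %s" % value)
        number, unit = match.groups()
        return float(number) * UNITS.get(unit, 1)

    parseByteQuantity("10240")   # 10240.0 (bare numbers are bytes)
    parseByteQuantity("250 MB")  # 262144000.0
    ```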

    split_size

    Split size.

    This is the size of the chunks that a large file will be split into. The final chunk may be smaller if the split size doesn't divide evenly into the file size.

    You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB).

    Valid examples are 10240, 250 MB or 1.1 GB.

    Restrictions: Must be a size as described above.
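    To see how the final chunk can come out smaller, here is the arithmetic as a short sketch (the helper name is ours):

    ```python
    import math

    def chunkSizes(fileSize, splitSize):
        """Return the size of each chunk that a file of fileSize would be split into."""
        count = math.ceil(fileSize / splitSize)           # number of chunks needed
        sizes = [splitSize] * (count - 1)                 # all but the last chunk are full-sized
        sizes.append(fileSize - splitSize * (count - 1))  # the remainder
        return sizes

    chunkSizes(250, 100)  # [100, 100, 50]: the last chunk is smaller
    ```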

    Capacity Extension

    The capacity extension checks the current capacity of the media in the writer and prints a warning if the media exceeds an indicated capacity. The capacity is indicated either by a maximum percentage utilized or by a minimum number of bytes that must remain unused.

    This action can be run at any time, but is probably best run as the last action on any given day, so you get as much notice as possible that your media is full and needs to be replaced.

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>capacity</name>
          <module>CedarBackup3.extend.capacity</module>
          <function>executeAction</function>
          <index>299</index>
       </action>
    </extensions>
          

    This extension relies on the options and store configuration sections in the standard Cedar Backup configuration file, and then also requires its own capacity configuration section. This is an example Capacity configuration section that configures the extension to warn if the media is more than 95.5% full:

    <capacity>
       <max_percentage>95.5</max_percentage>
    </capacity>
          

    This example configures the extension to warn if the media has fewer than 16 MB free:

    <capacity>
       <min_bytes>16 MB</min_bytes>
    </capacity>
          

    The following elements are part of the Capacity configuration section:

    max_percentage

    Maximum percentage of the media that may be utilized.

    You must provide either this value or the min_bytes value.

    Restrictions: Must be a floating point number between 0.0 and 100.0

    min_bytes

    Minimum number of free bytes that must be available.

    You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB).

    Valid examples are 10240, 250 MB or 1.1 GB.

    You must provide either this value or the max_percentage value.

    Restrictions: Must be a byte quantity as described above.
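    The warning decision reduces to a simple comparison. A sketch, assuming capacity is measured in bytes (the function name and signature are ours, not the extension's internals):

    ```python
    def mediaIsFull(bytesUsed, bytesTotal, maxPercentage=None, minBytes=None):
        """Return True if media utilization exceeds the configured threshold.

        Exactly one of maxPercentage or minBytes should be provided,
        mirroring the configuration rules above.
        """
        if maxPercentage is not None:
            return (bytesUsed / bytesTotal) * 100.0 > maxPercentage
        if minBytes is not None:
            return (bytesTotal - bytesUsed) < minBytes
        raise ValueError("Either maxPercentage or minBytes is required.")

    # A 730 MB disc that is 700 MB full is about 95.9% utilized
    mediaIsFull(700 * 1024 ** 2, 730 * 1024 ** 2, maxPercentage=95.5)  # True
    ```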

    Appendix A. Extension Architecture Interface

    The Cedar Backup Extension Architecture Interface is the application programming interface used by third-party developers to write Cedar Backup extensions. This appendix briefly specifies the interface in enough detail for someone to successfully implement an extension.

    You will recall that Cedar Backup extensions are third-party pieces of code which extend Cedar Backup's functionality. Extensions can be invoked from the Cedar Backup command line and are allowed to place their configuration in Cedar Backup's configuration file.

    There is a one-to-one mapping between a command-line extended action and an extension function. The mapping is configured in the Cedar Backup configuration file using a section something like this:

    <extensions>
       <action>
          <name>database</name>
          <module>foo</module>
          <function>bar</function>
          <index>101</index>
       </action> 
    </extensions>
          

    In this case, the action database has been mapped to the extension function foo.bar().

    Extension functions can take any actions they would like to once they have been invoked, but must abide by these rules:

    1. Extensions may not write to stdout or stderr using functions such as print or sys.stdout.write.

    2. All logging must take place using the Python logging facility. Flow-of-control logging should happen on the CedarBackup3.log topic. Authors can assume that ERROR will always go to the terminal, that INFO and WARN will always be logged, and that DEBUG will be ignored unless debugging is enabled.

    3. Any time an extension invokes a command-line utility, it must be done through the CedarBackup3.util.executeCommand function. This will help keep Cedar Backup safer from format-string attacks, and will make it easier to consistently log command-line process output.

    4. Extensions may not return any value.

    5. Extensions must throw a Python exception containing a descriptive message if processing fails. Extension authors can use their judgement as to what constitutes failure; however, any problems during execution should result in either a thrown exception or a logged message.

    6. Extensions may rely only on Cedar Backup functionality that is advertised as being part of the public interface. This means that extensions cannot directly make use of methods, functions or values starting with the _ character. Furthermore, extensions should only rely on parts of the public interface that are documented in the online Epydoc documentation.

    7. Extension authors are encouraged to extend the Cedar Backup public interface through normal methods of inheritance. However, no extension is allowed to directly change Cedar Backup code in a way that would affect how Cedar Backup itself executes when the extension has not been invoked. For instance, extensions would not be allowed to add new command-line options or new writer types.

    8. Extensions must be written to assume an empty locale set (no $LC_* settings) and $LANG=C. For the typical open-source software project, this would imply writing output-parsing code against the English localization (if any). The executeCommand function does sanitize the environment to enforce this configuration.

    Extension functions take three arguments: the path to configuration on disk, a CedarBackup3.cli.Options object representing the command-line options in effect, and a CedarBackup3.config.Config object representing parsed standard configuration.

    def function(configPath, options, config):
       """Sample extension function."""
       pass
          

    This interface is structured so that simple extensions can use standard configuration without having to parse it for themselves, but more complicated extensions can get at the configuration file on disk and parse it again as needed.

    The interface to the CedarBackup3.cli.Options and CedarBackup3.config.Config classes has been thoroughly documented using Epydoc, and the documentation is available on the Cedar Backup website. The interface is guaranteed to change only in backwards-compatible ways unless the Cedar Backup major version number is bumped (i.e. from 2 to 3).

    If an extension needs to add its own configuration information to the Cedar Backup configuration file, this extra configuration must be added in a new configuration section using a name that does not conflict with standard configuration or other known extensions.

    For instance, our hypothetical database extension might require configuration indicating the path to some repositories to back up. This information might go into a section something like this:

    <database>
       <repository>/path/to/repo1</repository>
       <repository>/path/to/repo2</repository>
    </database>
          

    In order to read this new configuration, the extension code can either inherit from the Config object and create a subclass that knows how to parse the new database config section, or can write its own code to parse whatever it needs out of the file. Either way, the resulting code is completely independent of the standard Cedar Backup functionality.
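    For the second approach (parsing the file independently), the standard library is enough. A sketch against the hypothetical database section above; the cb_config root element is an assumption about how the enclosing configuration file is structured:

    ```python
    import xml.etree.ElementTree as ET

    def parseRepositories(configText):
        """Pull repository paths out of the hypothetical database section."""
        root = ET.fromstring(configText)
        section = root.find("database")
        if section is None:
            return []
        return [node.text for node in section.findall("repository")]

    CONFIG = """<cb_config>
       <database>
          <repository>/path/to/repo1</repository>
          <repository>/path/to/repo2</repository>
       </database>
    </cb_config>"""

    repositories = parseRepositories(CONFIG)  # ['/path/to/repo1', '/path/to/repo2']
    ```

    A real extension would use ET.parse(configPath) against the file on disk rather than a string.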

    Appendix B. Dependencies

    Python 3.4 (or later)

    If you can't find a package for your system, install from the package source, using the upstream link.

    RSH Server and Client

    Although Cedar Backup will technically work with any RSH-compatible server and client pair (such as the classic rsh client), most users should only use an SSH (secure shell) server and client.

    The de facto standard today is OpenSSH. Some systems package the server and the client together, and others package the server and the client separately. Note that master nodes need an SSH client, and client nodes need to run an SSH server.

    If you can't find SSH client or server packages for your system, install from the package source, using the upstream link.

    mkisofs

    The mkisofs command is used to create ISO filesystem images that can later be written to backup media.

    On Debian platforms, mkisofs is not distributed and genisoimage is used instead. The Debian package takes care of this for you.

    If you can't find a package for your system, install from the package source, using the upstream link.

    cdrecord

    The cdrecord command is used to write ISO images to CD media in a backup device.

    On Debian platforms, cdrecord is not distributed and wodim is used instead. The Debian package takes care of this for you.

    If you can't find a package for your system, install from the package source, using the upstream link.

    dvd+rw-tools

    The dvd+rw-tools package provides the growisofs utility, which is used to write ISO images to DVD media in a backup device.

    If you can't find a package for your system, install from the package source, using the upstream link.

    eject and volname

    The eject command is used to open and close the tray on a backup device (if the backup device has a tray). Sometimes, the tray must be opened and closed in order to "reset" the device so it notices recent changes to a disc.

    The volname command is used to determine the volume name of media in a backup device.

    If you can't find a package for your system, install from the package source, using the upstream link.

    mount and umount

    The mount and umount commands are used to mount and unmount CD/DVD media after it has been written, in order to run a consistency check.

    If you can't find a package for your system, install from the package source, using the upstream link.

    grepmail

    The grepmail command is used by the mbox extension to pull out only recent messages from mbox mail folders.

    If you can't find a package for your system, install from the package source, using the upstream link.

    gpg

    The gpg command is used by the encrypt extension to encrypt files.

    If you can't find a package for your system, install from the package source, using the upstream link.

    split

    The split command is used by the split extension to split up large files.

    This command is typically part of the core operating system install and is not distributed in a separate package.

    AWS CLI

    AWS CLI is Amazon's official command-line tool for interacting with the Amazon Web Services infrastructure. Cedar Backup uses AWS CLI to copy backup data up to Amazon S3 cloud storage.

    After you install AWS CLI, you need to configure your connection to AWS with an appropriate access id and access key. Amazon provides a good setup guide.

    The initial implementation of the amazons3 extension was written using AWS CLI 1.4. As of this writing, not all Linux distributions include a package for this version. On these platforms, the easiest way to install it is via pip: apt-get install python3-pip, and then pip3 install awscli. The Debian package includes an appropriate dependency starting with the jessie release.

    Chardet

    The cback3-amazons3-sync command relies on the Chardet Python package to check filename encoding. You only need this package if you are going to use the sync tool.

    Appendix C. Data Recovery

    Finding your Data

    The first step in data recovery is finding the data that you want to recover. You need to decide whether you are going to restore from backup media or from existing staging data that has not yet been purged. The only difference is that if you purge staging data less frequently than once per week, you might have some data available in the staging directories which would not be found on your backup media, depending on how you rotate your media. (And of course, if your system is trashed or stolen, you probably will not have access to your old staging data in any case.)

    Regardless of the data source you choose, you will find the data organized in the same way. The remainder of these examples will work off an example backup disc, but the contents of the staging directory will look pretty much like the contents of the disc, with data organized first by date and then by backup peer name.

    This is the root directory of my example disc:

    root:/mnt/cdrw# ls -l
    total 4
    drwxr-x---  3 backup backup 4096 Sep 01 06:30 2005/
          

    In this root directory is one subdirectory for each year represented in the backup. In this example, the backup represents data entirely from the year 2005. If your configured backup week happens to span a year boundary, there would be two subdirectories here (for example, one for 2005 and one for 2006).

    Within each year directory is one subdirectory for each month represented in the backup.

    root:/mnt/cdrw/2005# ls -l
    total 2
    dr-xr-xr-x  6 root root 2048 Sep 11 05:30 09/
          

    In this example, the backup represents data entirely from the month of September, 2005. If your configured backup week happens to span a month boundary, there would be two subdirectories here (for example, one for August 2005 and one for September 2005).

    Within each month directory is one subdirectory for each day represented in the backup.

    root:/mnt/cdrw/2005/09# ls -l
    total 8
    dr-xr-xr-x  5 root root 2048 Sep  7 05:30 07/
    dr-xr-xr-x  5 root root 2048 Sep  8 05:30 08/
    dr-xr-xr-x  5 root root 2048 Sep  9 05:30 09/
    dr-xr-xr-x  5 root root 2048 Sep 11 05:30 11/
          

    Depending on how far into the week your backup is, you might have as few as one daily directory in here, or as many as seven.

    Within each daily directory is a stage indicator (indicating when the directory was staged) and one directory for each peer configured in the backup:

    root:/mnt/cdrw/2005/09/07# ls -l
    total 10
    dr-xr-xr-x  2 root root 2048 Sep  7 02:31 host1/
    -r--r--r--  1 root root    0 Sep  7 03:27 cback.stage
    dr-xr-xr-x  2 root root 4096 Sep  7 02:30 host2/
    dr-xr-xr-x  2 root root 4096 Sep  7 03:23 host3/
          

    In this case, you can see that my backup includes three machines, and that the backup data was staged on September 7, 2005 at 03:27.

    Within the directory for a given host are all of the files collected on that host. This might just include tarfiles from a normal Cedar Backup collect run, and might also include files collected from Cedar Backup extensions or by other third-party processes on your system.

    root:/mnt/cdrw/2005/09/07/host1# ls -l
    total 157976
    -r--r--r--  1 root root 11206159 Sep  7 02:30 boot.tar.bz2
    -r--r--r--  1 root root        0 Sep  7 02:30 cback.collect
    -r--r--r--  1 root root     3199 Sep  7 02:30 dpkg-selections.txt.bz2
    -r--r--r--  1 root root   908325 Sep  7 02:30 etc.tar.bz2
    -r--r--r--  1 root root      389 Sep  7 02:30 fdisk-l.txt.bz2
    -r--r--r--  1 root root  1003100 Sep  7 02:30 ls-laR.txt.bz2
    -r--r--r--  1 root root    19800 Sep  7 02:30 mysqldump.txt.bz2
    -r--r--r--  1 root root  4133372 Sep  7 02:30 opt-local.tar.bz2
    -r--r--r--  1 root root 44794124 Sep  8 23:34 opt-public.tar.bz2
    -r--r--r--  1 root root 30028057 Sep  7 02:30 root.tar.bz2
    -r--r--r--  1 root root  4747070 Sep  7 02:30 svndump-0:782-opt-svn-repo1.txt.bz2
    -r--r--r--  1 root root   603863 Sep  7 02:30 svndump-0:136-opt-svn-repo2.txt.bz2
    -r--r--r--  1 root root   113484 Sep  7 02:30 var-lib-jspwiki.tar.bz2
    -r--r--r--  1 root root 19556660 Sep  7 02:30 var-log.tar.bz2
    -r--r--r--  1 root root 14753855 Sep  7 02:30 var-mail.tar.bz2
             

    As you can see, I back up a variety of different things on host1. I run the normal collect action, as well as the sysinfo, mysql and subversion extensions. The resulting backup files are named in a way that makes it easy to determine what they represent.

    Files of the form *.tar.bz2 represent directories backed up by the collect action. The first part of the name (before .tar.bz2) represents the path to the directory. For example, boot.tar.bz2 contains data from /boot, and var-lib-jspwiki.tar.bz2 contains data from /var/lib/jspwiki.

    The fdisk-l.txt.bz2, ls-laR.txt.bz2 and dpkg-selections.txt.bz2 files are produced by the sysinfo extension.

    The mysqldump.txt.bz2 file is produced by the mysql extension. It represents a system-wide database dump, because I use the all flag in configuration. If I were to configure Cedar Backup to dump individual databases, then the filename would contain the database name (something like mysqldump-bugs.txt.bz2).

    Finally, the files of the form svndump-*.txt.bz2 are produced by the subversion extension. There is one dump file for each configured repository, and the dump file name represents the name of the repository and the revisions in that dump. So, the file svndump-0:782-opt-svn-repo1.txt.bz2 represents revisions 0-782 of the repository at /opt/svn/repo1. You can tell that this file contains a full backup of the repository to this point, because the starting revision is zero. Later incremental backups would have a non-zero starting revision, i.e. perhaps 783-785, followed by 786-800, etc.
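    The revision range and repository path can be recovered from the dump file name mechanically. A quick sketch (the helper is ours, and note that the reverse mapping is ambiguous if a path component itself contains a hyphen):

    ```python
    import re

    def parseSvndumpName(fileName):
        """Extract (startRevision, endRevision, repositoryPath) from a dump file name."""
        match = re.fullmatch(r"svndump-(\d+):(\d+)-(.+)\.txt\.bz2", fileName)
        if match is None:
            raise ValueError("Not an svndump file: %s" % fileName)
        start, end, mangled = match.groups()
        return int(start), int(end), "/" + mangled.replace("-", "/")

    parseSvndumpName("svndump-0:782-opt-svn-repo1.txt.bz2")  # (0, 782, '/opt/svn/repo1')
    ```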

    Recovering Filesystem Data

    Filesystem data is gathered by the standard Cedar Backup collect action. This data is placed into files of the form *.tar. The first part of the name (before .tar) represents the path to the directory. For example, boot.tar would contain data from /boot, and var-lib-jspwiki.tar would contain data from /var/lib/jspwiki. (As a special case, data from the root directory would be placed in -.tar.) Remember that your tarfile might have a bzip2 (.bz2) or gzip (.gz) extension, depending on what compression you specified in configuration.

    If you are using full backups every day, the latest backup data is always within the latest daily directory stored on your backup media or within your staging directory. If you have some or all of your directories configured to do incremental backups, then the first day of the week holds the full backups and the other days represent incremental differences relative to that first day of the week.

    Full Restore

    To do a full system restore, find the newest applicable full backup and extract it. If you have some incremental backups, extract them into the same place as the full backup, one by one starting from oldest to newest. (This way, if a file changed every day you will always get the latest one.)

    All of the backed-up files are stored in the tar file in a relative fashion, so you can extract from the tar file either directly into the filesystem, or into a temporary location.
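    The extract-oldest-to-newest rule can also be expressed with Python's tarfile module instead of the command line. A sketch, assuming the archives are passed in already sorted oldest-first (for example, by daily directory name):

    ```python
    import tarfile

    def restoreInOrder(tarPaths, destination):
        """Extract a full backup followed by incrementals, oldest to newest.

        Later archives overwrite earlier copies of the same member, so the
        newest version of each file wins.
        """
        for path in tarPaths:
            with tarfile.open(path, "r:*") as archive:  # "r:*" autodetects gzip/bzip2
                archive.extractall(destination)
    ```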

    For example, to restore boot.tar.bz2 directly into /boot, execute tar from your root directory (/):

    root:/# bzcat boot.tar.bz2 | tar xvf -
             

    Of course, use zcat or just cat, depending on what kind of compression is in use.

    If you want to extract boot.tar.bz2 into a temporary location like /tmp/boot instead, just change directories first. In this case, you'd execute the tar command from within /tmp instead of /.

    root:/tmp# bzcat boot.tar.bz2 | tar xvf -
             

    Again, use zcat or just cat as appropriate.

    For more information, you might want to check out the manpage or GNU info documentation for the tar command.

    Partial Restore

    Most users will need to do a partial restore much more frequently than a full restore. Perhaps you accidentally removed your home directory, or forgot to check in some version of a file before deleting it. Or, perhaps the person who packaged Apache for your system blew away your web server configuration on upgrade (it happens). The solution to these and other kinds of problems is a partial restore (assuming you've backed up the proper things).

    The procedure is similar to a full restore. The specific steps depend on how much information you have about the file you are looking for. Whereas with a full restore you can confidently extract the full backup followed by each of the incremental backups, this might not be what you want when doing a partial restore. You may need to take more care in finding the right version of a file: the same file, if changed frequently, would appear in more than one backup.

    Start by finding the backup media that contains the file you are looking for. If you rotate your backup media, and your last known contact with the file was a while ago, you may need to look on older media to find it. This may take some effort if you are not sure when the change you are trying to correct took place.

    Once you have decided to look at a particular piece of backup media, find the correct peer (host), and look for the file in the full backup:

    root:/tmp# bzcat boot.tar.bz2 | tar tvf - path/to/file
             

    Of course, use zcat or just cat, depending on what kind of compression is in use.

    The tvf tells tar to search for the file in question and just list the results rather than extracting the file. Note that the filename is relative (with no starting /). Alternately, you can omit the path/to/file and search through the output using more or less.
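    When you don't remember the exact relative path, you can also grep the listing for a name fragment. This is a self-contained sketch: the fixture fabricates a tiny stand-in archive, whereas in practice you would run the pipeline against your real backup file.

```shell
set -e
work=$(mktemp -d)
cd "$work"

# fixture: a tiny archive standing in for a real backup file
mkdir -p etc/apache2
echo "ServerName example" > etc/apache2/httpd.conf
tar cjf backup.tar.bz2 etc

# list the archive and search for the file by name fragment
bzcat backup.tar.bz2 | tar tvf - | grep 'httpd.conf'
```

    The grep output gives you the exact relative path to pass to tar xvf when you extract.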

    If you haven't found what you are looking for, work your way through the incremental files for the directory in question. One of them may also have the file if it changed during the course of the backup. Or, move to older or newer media and see if you can find the file there.

    Once you have found your file, extract it using xvf:

    root:/tmp# bzcat boot.tar.bz2 | tar xvf - path/to/file
             

    Again, use zcat or just cat as appropriate.

    Inspect the file and make sure it's what you're looking for. Again, you may need to move to older or newer media to find the exact version of your file.

    For more information, you might want to check out the manpage or GNU info documentation for the tar command.

    Recovering MySQL Data

    MySQL data is gathered by the Cedar Backup mysql extension. This extension always creates a full backup each time it runs. This wastes some space, but makes it easy to restore database data. The following procedure describes how to restore your MySQL database from the backup.

    Warning

    I am not a MySQL expert. I am providing this information for reference. I have tested these procedures on my own MySQL installation; however, I only have a single database for use by Bugzilla, and I may have misunderstood something with regard to restoring individual databases as a user other than root. If you have any doubts, test the procedure below before relying on it!

    MySQL experts and/or knowledgeable Cedar Backup users: feel free to write me and correct any part of this procedure.

    First, find the backup you are interested in. If you have specified all databases in configuration, you will have a single backup file, called mysqldump.txt. If you have specified individual databases in configuration, then you will have files with names like mysqldump-database.txt instead. In either case, your file might have a .gz or .bz2 extension depending on what kind of compression you specified in configuration.

    If you are restoring an all databases backup, make sure that you have correctly created the root user and know its password. Then, execute:

    daystrom:/# bzcat mysqldump.txt.bz2 | mysql -p -u root
          

    Of course, use zcat or just cat, depending on what kind of compression is in use.

    Because the database backup includes CREATE DATABASE SQL statements, this command should take care of creating all of the databases within the backup, as well as populating them.

    If you are restoring a backup for a specific database, you have two choices. If you have a root login, you can use the same command as above:

    daystrom:/# bzcat mysqldump-database.txt.bz2 | mysql -p -u root
          

    Otherwise, you can create the database and its login first (or have someone create it) and then use a database-specific login to execute the restore:

    daystrom:/# bzcat mysqldump-database.txt.bz2 | mysql -p -u user database
          

    Again, use zcat or just cat as appropriate.
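    The "create the database and its login first" step might look something like this in a root mysql session. The database name, user name, and password below are placeholders; adjust the granted privileges to suit your own policy.

```sql
CREATE DATABASE bugs;
CREATE USER 'bugsuser'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON bugs.* TO 'bugsuser'@'localhost';
FLUSH PRIVILEGES;
```

    After this, the restore command shown above, run as the new user, can populate the database.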

    For more information on using MySQL, see the documentation on the MySQL web site, http://mysql.org/, or the manpages for the mysql and mysqldump commands.

    Recovering Subversion Data

    Subversion data is gathered by the Cedar Backup subversion extension. Cedar Backup will create either full or incremental backups, but the procedure for restoring is the same for both. Subversion backups are always taken on a per-repository basis. If you need to restore more than one repository, follow the procedures below for each repository you are interested in.

    First, find the backup or backups you are interested in. Typically, you will need the full backup from the first day of the week and each incremental backup from the other days of the week.

    The subversion extension creates files of the form svndump-*.txt. These files might have a .gz or .bz2 extension depending on what kind of compression you specified in configuration. There is one dump file for each configured repository, and the dump file name represents the name of the repository and the revisions in that dump. So, the file svndump-0:782-opt-svn-repo1.txt.bz2 represents revisions 0-782 of the repository at /opt/svn/repo1. You can tell that this file contains a full backup of the repository to this point, because the starting revision is zero. Later incremental backups would have a non-zero starting revision, e.g. 783-785, followed by 786-800, etc.

    Next, if you still have the old Subversion repository around, you might want to just move it off (rename the top-level directory) before executing the restore. Or, you can restore into a temporary directory and rename it later to its real name once you've checked it out. That is what my example below will show.

    Next, you need to create a new Subversion repository to hold the restored data. This example shows an FSFS repository, but that is an arbitrary choice. You can restore from an FSFS backup into an FSFS repository or a BDB repository. The Subversion dump format is backend-agnostic.

    root:/tmp# svnadmin create --fs-type=fsfs testrepo
          

    Next, load the full backup into the repository:

    root:/tmp# bzcat svndump-0:782-opt-svn-repo1.txt.bz2 | svnadmin load testrepo
          

    Of course, use zcat or just cat, depending on what kind of compression is in use.

    Follow that with loads for each of the incremental backups:

    root:/tmp# bzcat svndump-783:785-opt-svn-repo1.txt.bz2 | svnadmin load testrepo
    root:/tmp# bzcat svndump-786:800-opt-svn-repo1.txt.bz2 | svnadmin load testrepo
          

    Again, use zcat or just cat as appropriate.

    When this is done, your repository will be restored to the point of the last commit indicated in the svndump file (in this case, to revision 800).

    Note

    Don't be surprised if, when you test this, the restored directory doesn't have exactly the same contents as the original directory. I can't explain why this happens, but if you execute svnadmin dump on both the old and new repositories, the results are identical. This means that the repositories do contain the same content.

    For more information on using Subversion, see the book Version Control with Subversion (http://svnbook.red-bean.com/) or the Subversion FAQ (http://subversion.tigris.org/faq.html).

    Recovering Mailbox Data

    Mailbox data is gathered by the Cedar Backup mbox extension. Cedar Backup will create either full or incremental backups, but both kinds of backups are treated identically when restoring.

    Individual mbox files and mbox directories are treated a little differently, since individual files are just compressed, but directories are collected into a tar archive.

    First, find the backup or backups you are interested in. Typically, you will need the full backup from the first day of the week and each incremental backup from the other days of the week.

    The mbox extension creates files of the form mbox-*. Backup files for individual mbox files might have a .gz or .bz2 extension depending on what kind of compression you specified in configuration. Backup files for mbox directories will have a .tar, .tar.gz or .tar.bz2 extension, again depending on what kind of compression you specified in configuration.

    There is one backup file for each configured mbox file or directory. The backup file name represents the name of the file or directory and the date it was backed up. So, the file mbox-20060624-home-user-mail-greylist represents the backup for /home/user/mail/greylist run on 24 Jun 2006. Likewise, mbox-20060624-home-user-mail.tar represents the backup for the /home/user/mail directory run on that same date.

    Once you have found the files you are looking for, the restoration procedure is fairly simple. First, concatenate all of the backup files together. Then, use grepmail to eliminate duplicate messages (if any).

    Here is an example for a single backed-up file:

    root:/tmp# rm restore.mbox # make sure it's not left over
    root:/tmp# cat mbox-20060624-home-user-mail-greylist >> restore.mbox
    root:/tmp# cat mbox-20060625-home-user-mail-greylist >> restore.mbox
    root:/tmp# cat mbox-20060626-home-user-mail-greylist >> restore.mbox
    root:/tmp# grepmail -a -u restore.mbox > nodups.mbox
          

    At this point, nodups.mbox contains all of the backed-up messages from /home/user/mail/greylist.

    Of course, if your backups are compressed, you'll have to use zcat or bzcat rather than just cat.

    If you are backing up mbox directories rather than individual files, see the filesystem instructions for notes on how to extract the individual files from inside tar archives. Extract the files you are interested in, and then concatenate them together just as shown above for the individual case.

    Recovering Data split by the Split Extension

    The Split extension takes large files and splits them up into smaller files. Typically, it would be used in conjunction with the cback3-span command.

    The split-up files are not difficult to work with. Simply find all of the files — which could be split between multiple discs — and concatenate them together.

    root:/tmp# rm usr-src-software.tar.gz  # make sure it's not there
    root:/tmp# cat usr-src-software.tar.gz_00001 >> usr-src-software.tar.gz
    root:/tmp# cat usr-src-software.tar.gz_00002 >> usr-src-software.tar.gz
    root:/tmp# cat usr-src-software.tar.gz_00003 >> usr-src-software.tar.gz
          

    Then, use the resulting file like usual.

    Remember, you need to have all of the files that the original large file was split into before this will work. If you are missing a file, the result of the concatenation step will be either a corrupt file or a truncated file (depending on which chunks you did not include).
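    Because the numeric suffixes are zero-padded, shell glob order matches chunk order, so a single glob can replace the chunk-by-chunk cat lines, and cmp against a known-good copy (or a checksum) can confirm the reassembly. This self-contained sketch fabricates the chunks with split (note: split -d numbers from 00000, while Cedar Backup's suffixes start at _00001, which doesn't change the idea):

```shell
set -e
work=$(mktemp -d)
cd "$work"

# fixture: fabricate a file and split it into zero-padded chunks
head -c 100000 /dev/urandom > original.bin
split -b 40000 -d -a 5 original.bin original.bin_

# zero-padded suffixes mean glob order == chunk order
cat original.bin_* > rebuilt.bin

# verify: a missing or extra chunk would make this comparison fail
cmp original.bin rebuilt.bin && echo "reassembly verified"
```

    In a real recovery you won't have the original to compare against, so checking that the chunk numbers form an unbroken sequence is the practical substitute.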

    Appendix D. Securing Password-less SSH Connections

    Cedar Backup relies on password-less public key SSH connections to make various parts of its backup process work. Password-less scp is used to stage files from remote clients to the master, and password-less ssh is used to execute actions on managed clients.

    Normally, it is a good idea to avoid password-less SSH connections in favor of using an SSH agent. The SSH agent manages your SSH connections so that you don't need to type your passphrase over and over. You get most of the benefits of a password-less connection without the risk. Unfortunately, because Cedar Backup has to execute without human involvement (through a cron job), use of an agent really isn't feasible. We have to rely on true password-less public keys to give the master access to the client peers.

    Traditionally, Cedar Backup has relied on a segmenting strategy to minimize the risk. Although the backup typically runs as root — so that all parts of the filesystem can be backed up — we don't use the root user for network connections. Instead, we use a dedicated backup user on the master to initiate network connections, and dedicated users on each of the remote peers to accept network connections.

    With this strategy in place, an attacker with access to the backup user on the master (or even root access, really) can at best only get access to the backup user on the remote peers. We still concede a local attack vector, but at least that vector is restricted to an unprivileged user.

    Some Cedar Backup users may not be comfortable with this risk, and others may not be able to implement the segmentation strategy — they simply may not have a way to create a login which is only used for backups.

    So, what are these users to do? Fortunately there is a solution. The SSH authorized keys file supports a way to put a filter in place on an SSH connection. This excerpt is from the AUTHORIZED_KEYS FILE FORMAT section of man 8 sshd:

    command="command"
       Specifies that the command is executed whenever this key is used for
       authentication.  The command supplied by the user (if any) is ignored.  The
       command is run on a pty if the client requests a pty; otherwise it is run
       without a tty.  If an 8-bit clean channel is required, one must not request
       a pty or should specify no-pty.  A quote may be included in the command by
       quoting it with a backslash.  This option might be useful to restrict
       certain public keys to perform just a specific operation.  An example might
       be a key that permits remote backups but nothing else.  Note that the client
       may specify TCP and/or X11 forwarding unless they are explicitly prohibited.
       Note that this option applies to shell, command or subsystem execution.
          

    Essentially, this gives us a way to authenticate the commands that are being executed. We can either accept or reject commands, and we can even provide a readable error message for commands we reject. The filter is applied on the remote peer, to the key that provides the master access to the remote peer.

    So, let's imagine that we have two hosts: master mickey, and peer minnie. Here is the original ~/.ssh/authorized_keys file for the backup user on minnie (remember, this is all on one line in the file):

    ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAxw7EnqVULBFgPcut3WYp3MsSpVB9q9iZ+awek120391k;mm0c221=3=km
    =m=askdalkS82mlF7SusBTcXiCk1BGsg7axZ2sclgK+FfWV1Jm0/I9yo9FtAZ9U+MmpL901231asdkl;ai1-923ma9s=9=
    1-2341=-a0sd=-sa0=1z= backup@mickey
          

    This line is the public key that minnie can use to identify the backup user on mickey. Assuming that there is no passphrase on the private key back on mickey, the backup user on mickey can get direct access to minnie.

    To put the filter in place, we add a command option to the key, like this:

    command="/opt/backup/validate-backup" ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAxw7EnqVULBFgPcut3WYp
    3MsSpVB9q9iZ+awek120391k;mm0c221=3=km=m=askdalkS82mlF7SusBTcXiCk1BGsg7axZ2sclgK+FfWV1Jm0/I9yo9F
    tAZ9U+MmpL901231asdkl;ai1-923ma9s=9=1-2341=-a0sd=-sa0=1z= backup@mickey
          

    Basically, the command option says that whenever this key is used to successfully initiate a connection, the /opt/backup/validate-backup command will be run instead of the real command that came over the SSH connection. Fortunately, the interface gives the command access to certain shell variables that can be used to invoke the original command if you want to.

    A very basic validate-backup script might look something like this:

    #!/bin/bash
    if [[ "${SSH_ORIGINAL_COMMAND}" == "ls -l" ]] ; then
        ${SSH_ORIGINAL_COMMAND}
    else
        echo "Security policy does not allow command [${SSH_ORIGINAL_COMMAND}]."
        exit 1
    fi
          

    This script allows exactly ls -l and nothing else. If the user attempts some other command, they get a nice error message telling them that their command has been disallowed.

    For remote commands executed over ssh, the original command is exactly what the caller attempted to invoke. For remote copies, the commands are either scp -f file (copy from the peer to the master) or scp -t file (copy to the peer from the master).

    If you want, you can see what command SSH thinks it is executing by using ssh -v or scp -v. The command will be right at the top, something like this:

    Executing: program /usr/bin/ssh host mickey, user (unspecified), command scp -v -f .profile
    OpenSSH_4.3p2 Debian-9, OpenSSL 0.9.8c 05 Sep 2006
    debug1: Reading configuration data /home/backup/.ssh/config
    debug1: Applying options for daystrom
    debug1: Reading configuration data /etc/ssh/ssh_config
    debug1: Applying options for *
    debug2: ssh_connect: needpriv 0
          

    Omit the -v and you have your command: scp -f .profile.

    For a normal, non-managed setup, you need to allow the following commands, where /path/to/collect/ is replaced with the real path to the collect directory on the remote peer:

    scp -f /path/to/collect/cback.collect
    scp -f /path/to/collect/*
    scp -t /path/to/collect/cback.stage
          

    If you are configuring a managed client, then you also need to list the exact command lines that the master will be invoking on the managed client. You are guaranteed that the master will invoke one action at a time, so if you list two lines per action (full and non-full) you should be fine. Here's an example for the collect action:

    /usr/bin/cback3 --full collect
    /usr/bin/cback3 collect
          

    Of course, you would have to list the actual path to the cback3 executable — exactly the one listed in the <cback_command> configuration option for your managed peer.
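    Putting the pieces together, a validate-backup script covering the commands listed above might look something like this hypothetical sketch. The collect directory and cback3 path are placeholders for your own configuration, and the allowed-case here just returns success so the example can demonstrate the filtering; the real script would run the original command instead.

```shell
#!/bin/bash
# Hypothetical validate-backup sketch; COLLECT and CBACK are placeholders.
COLLECT="/path/to/collect"
CBACK="/usr/bin/cback3"

validate() {
    case "$1" in
        "scp -f ${COLLECT}/cback.collect" | \
        "scp -f ${COLLECT}/"* | \
        "scp -t ${COLLECT}/cback.stage" | \
        "${CBACK} --full collect" | \
        "${CBACK} collect" )
            return 0 ;;    # allowed; the real script would run "$1" here
        * )
            echo "Security policy does not allow command [$1]." >&2
            return 1 ;;
    esac
}

# examples of the filter's behavior:
validate "scp -f ${COLLECT}/daily.tar.gz" && echo "allowed"
validate "/usr/bin/cback3 collect"        && echo "allowed"
validate "rm -rf /"                       || echo "rejected"
```

    For a managed client, the case branches for each action would need to match the exact command lines the master invokes, as described above.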

    I hope that there is enough information here for interested users to implement something that makes them comfortable. I have resisted providing a complete example script, because I think everyone's setup will be different. However, feel free to write if you are working through this and you have questions.

    Appendix E. Copyright

    
    Copyright (c) 2004-2011,2013-2015
    Kenneth J. Pronovici
    
    This work is free; you can redistribute it and/or modify it under
    the terms of the GNU General Public License (the "GPL"), Version 2,
    as published by the Free Software Foundation.
    
    For the purposes of the GPL, the "preferred form of modification"
    for this work is the original Docbook XML text files.  If you
    choose to distribute this work in a compiled form (i.e. if you
    distribute HTML, PDF or Postscript documents based on the original
    Docbook XML text files), you must also consider image files to be
    "source code" if those images are required in order to construct a
    complete and readable compiled version of the work.
    
    This work is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
    
    Copies of the GNU General Public License are available from
    the Free Software Foundation website, http://www.gnu.org/.
    You may also write the Free Software Foundation, Inc., 
    51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA
    
    ====================================================================
    
    		    GNU GENERAL PUBLIC LICENSE
    		       Version 2, June 1991
    
     Copyright (C) 1989, 1991 Free Software Foundation, Inc.
         51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA
     Everyone is permitted to copy and distribute verbatim copies
     of this license document, but changing it is not allowed.
    
    			    Preamble
    
      The licenses for most software are designed to take away your
    freedom to share and change it.  By contrast, the GNU General Public
    License is intended to guarantee your freedom to share and change free
    software--to make sure the software is free for all its users.  This
    General Public License applies to most of the Free Software
    Foundation's software and to any other program whose authors commit to
    using it.  (Some other Free Software Foundation software is covered by
    the GNU Library General Public License instead.)  You can apply it to
    your programs, too.
    
      When we speak of free software, we are referring to freedom, not
    price.  Our General Public Licenses are designed to make sure that you
    have the freedom to distribute copies of free software (and charge for
    this service if you wish), that you receive source code or can get it
    if you want it, that you can change the software or use pieces of it
    in new free programs; and that you know you can do these things.
    
      To protect your rights, we need to make restrictions that forbid
    anyone to deny you these rights or to ask you to surrender the rights.
    These restrictions translate to certain responsibilities for you if you
    distribute copies of the software, or if you modify it.
    
      For example, if you distribute copies of such a program, whether
    gratis or for a fee, you must give the recipients all the rights that
    you have.  You must make sure that they, too, receive or can get the
    source code.  And you must show them these terms so they know their
    rights.
    
      We protect your rights with two steps: (1) copyright the software, and
    (2) offer you this license which gives you legal permission to copy,
    distribute and/or modify the software.
    
      Also, for each author's protection and ours, we want to make certain
    that everyone understands that there is no warranty for this free
    software.  If the software is modified by someone else and passed on, we
    want its recipients to know that what they have is not the original, so
    that any problems introduced by others will not reflect on the original
    authors' reputations.
    
      Finally, any free program is threatened constantly by software
    patents.  We wish to avoid the danger that redistributors of a free
    program will individually obtain patent licenses, in effect making the
    program proprietary.  To prevent this, we have made it clear that any
    patent must be licensed for everyone's free use or not licensed at all.
    
      The precise terms and conditions for copying, distribution and
    modification follow.
    
    		    GNU GENERAL PUBLIC LICENSE
       TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
    
      0. This License applies to any program or other work which contains
    a notice placed by the copyright holder saying it may be distributed
    under the terms of this General Public License.  The "Program", below,
    refers to any such program or work, and a "work based on the Program"
    means either the Program or any derivative work under copyright law:
    that is to say, a work containing the Program or a portion of it,
    either verbatim or with modifications and/or translated into another
    language.  (Hereinafter, translation is included without limitation in
    the term "modification".)  Each licensee is addressed as "you".
    
    Activities other than copying, distribution and modification are not
    covered by this License; they are outside its scope.  The act of
    running the Program is not restricted, and the output from the Program
    is covered only if its contents constitute a work based on the
    Program (independent of having been made by running the Program).
    Whether that is true depends on what the Program does.
    
      1. You may copy and distribute verbatim copies of the Program's
    source code as you receive it, in any medium, provided that you
    conspicuously and appropriately publish on each copy an appropriate
    copyright notice and disclaimer of warranty; keep intact all the
    notices that refer to this License and to the absence of any warranty;
    and give any other recipients of the Program a copy of this License
    along with the Program.
    
    You may charge a fee for the physical act of transferring a copy, and
    you may at your option offer warranty protection in exchange for a fee.
    
      2. You may modify your copy or copies of the Program or any portion
    of it, thus forming a work based on the Program, and copy and
    distribute such modifications or work under the terms of Section 1
    above, provided that you also meet all of these conditions:
    
        a) You must cause the modified files to carry prominent notices
        stating that you changed the files and the date of any change.
    
        b) You must cause any work that you distribute or publish, that in
        whole or in part contains or is derived from the Program or any
        part thereof, to be licensed as a whole at no charge to all third
        parties under the terms of this License.
    
        c) If the modified program normally reads commands interactively
        when run, you must cause it, when started running for such
        interactive use in the most ordinary way, to print or display an
        announcement including an appropriate copyright notice and a
        notice that there is no warranty (or else, saying that you provide
        a warranty) and that users may redistribute the program under
        these conditions, and telling the user how to view a copy of this
        License.  (Exception: if the Program itself is interactive but
        does not normally print such an announcement, your work based on
        the Program is not required to print an announcement.)
    
    These requirements apply to the modified work as a whole.  If
    identifiable sections of that work are not derived from the Program,
    and can be reasonably considered independent and separate works in
    themselves, then this License, and its terms, do not apply to those
    sections when you distribute them as separate works.  But when you
    distribute the same sections as part of a whole which is a work based
    on the Program, the distribution of the whole must be on the terms of
    this License, whose permissions for other licensees extend to the
    entire whole, and thus to each and every part regardless of who wrote it.
    
    Thus, it is not the intent of this section to claim rights or contest
    your rights to work written entirely by you; rather, the intent is to
    exercise the right to control the distribution of derivative or
    collective works based on the Program.
    
    In addition, mere aggregation of another work not based on the Program
    with the Program (or with a work based on the Program) on a volume of
    a storage or distribution medium does not bring the other work under
    the scope of this License.
    
      3. You may copy and distribute the Program (or a work based on it,
    under Section 2) in object code or executable form under the terms of
    Sections 1 and 2 above provided that you also do one of the following:
    
        a) Accompany it with the complete corresponding machine-readable
        source code, which must be distributed under the terms of Sections
        1 and 2 above on a medium customarily used for software interchange; or,
    
        b) Accompany it with a written offer, valid for at least three
        years, to give any third party, for a charge no more than your
        cost of physically performing source distribution, a complete
        machine-readable copy of the corresponding source code, to be
        distributed under the terms of Sections 1 and 2 above on a medium
        customarily used for software interchange; or,
    
        c) Accompany it with the information you received as to the offer
        to distribute corresponding source code.  (This alternative is
        allowed only for noncommercial distribution and only if you
        received the program in object code or executable form with such
        an offer, in accord with Subsection b above.)
    
    The source code for a work means the preferred form of the work for
    making modifications to it.  For an executable work, complete source
    code means all the source code for all modules it contains, plus any
    associated interface definition files, plus the scripts used to
    control compilation and installation of the executable.  However, as a
    special exception, the source code distributed need not include
    anything that is normally distributed (in either source or binary
    form) with the major components (compiler, kernel, and so on) of the
    operating system on which the executable runs, unless that component
    itself accompanies the executable.
    
    If distribution of executable or object code is made by offering
    access to copy from a designated place, then offering equivalent
    access to copy the source code from the same place counts as
    distribution of the source code, even though third parties are not
    compelled to copy the source along with the object code.
    
      4. You may not copy, modify, sublicense, or distribute the Program
    except as expressly provided under this License.  Any attempt
    otherwise to copy, modify, sublicense or distribute the Program is
    void, and will automatically terminate your rights under this License.
    However, parties who have received copies, or rights, from you under
    this License will not have their licenses terminated so long as such
    parties remain in full compliance.
    
      5. You are not required to accept this License, since you have not
    signed it.  However, nothing else grants you permission to modify or
    distribute the Program or its derivative works.  These actions are
    prohibited by law if you do not accept this License.  Therefore, by
    modifying or distributing the Program (or any work based on the
    Program), you indicate your acceptance of this License to do so, and
    all its terms and conditions for copying, distributing or modifying
    the Program or works based on it.
    
      6. Each time you redistribute the Program (or any work based on the
    Program), the recipient automatically receives a license from the
    original licensor to copy, distribute or modify the Program subject to
    these terms and conditions.  You may not impose any further
    restrictions on the recipients' exercise of the rights granted herein.
    You are not responsible for enforcing compliance by third parties to
    this License.
    
      7. If, as a consequence of a court judgment or allegation of patent
    infringement or for any other reason (not limited to patent issues),
    conditions are imposed on you (whether by court order, agreement or
    otherwise) that contradict the conditions of this License, they do not
    excuse you from the conditions of this License.  If you cannot
    distribute so as to satisfy simultaneously your obligations under this
    License and any other pertinent obligations, then as a consequence you
    may not distribute the Program at all.  For example, if a patent
    license would not permit royalty-free redistribution of the Program by
    all those who receive copies directly or indirectly through you, then
    the only way you could satisfy both it and this License would be to
    refrain entirely from distribution of the Program.
    
    If any portion of this section is held invalid or unenforceable under
    any particular circumstance, the balance of the section is intended to
    apply and the section as a whole is intended to apply in other
    circumstances.
    
    It is not the purpose of this section to induce you to infringe any
    patents or other property right claims or to contest validity of any
    such claims; this section has the sole purpose of protecting the
    integrity of the free software distribution system, which is
    implemented by public license practices.  Many people have made
    generous contributions to the wide range of software distributed
    through that system in reliance on consistent application of that
    system; it is up to the author/donor to decide if he or she is willing
    to distribute software through any other system and a licensee cannot
    impose that choice.
    
    This section is intended to make thoroughly clear what is believed to
    be a consequence of the rest of this License.
    
      8. If the distribution and/or use of the Program is restricted in
    certain countries either by patents or by copyrighted interfaces, the
    original copyright holder who places the Program under this License
    may add an explicit geographical distribution limitation excluding
    those countries, so that distribution is permitted only in or among
    countries not thus excluded.  In such case, this License incorporates
    the limitation as if written in the body of this License.
    
      9. The Free Software Foundation may publish revised and/or new versions
    of the General Public License from time to time.  Such new versions will
    be similar in spirit to the present version, but may differ in detail to
    address new problems or concerns.
    
    Each version is given a distinguishing version number.  If the Program
    specifies a version number of this License which applies to it and "any
    later version", you have the option of following the terms and conditions
    either of that version or of any later version published by the Free
    Software Foundation.  If the Program does not specify a version number of
    this License, you may choose any version ever published by the Free Software
    Foundation.
    
      10. If you wish to incorporate parts of the Program into other free
    programs whose distribution conditions are different, write to the author
    to ask for permission.  For software which is copyrighted by the Free
    Software Foundation, write to the Free Software Foundation; we sometimes
    make exceptions for this.  Our decision will be guided by the two goals
    of preserving the free status of all derivatives of our free software and
    of promoting the sharing and reuse of software generally.
    
    			    NO WARRANTY
    
      11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
    FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.  EXCEPT WHEN
    OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
    PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
    OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
    MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS
    TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.  SHOULD THE
    PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
    REPAIR OR CORRECTION.
    
      12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
    WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
    REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
    INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
    OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
    TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
    YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
    PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
    POSSIBILITY OF SUCH DAMAGES.
    
    		     END OF TERMS AND CONDITIONS
    
    ====================================================================
    
          
CedarBackup3-3.1.6/doc/manual/ch05s03.html

    Setting up a Pool of One

    Cedar Backup has been designed primarily for situations where there is a single master and a set of other clients that the master interacts with. However, it will just as easily work for a single machine (a backup pool of one).

    Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or the user that receives root's email). If you don't receive any emails, then you know your backup worked.

    Note: all of these configuration steps should be run as the root user, unless otherwise indicated.

    Tip

    This setup procedure discusses how to set up Cedar Backup in the normal case for a pool of one. If you would like to modify the way Cedar Backup works (for instance, by ignoring the store stage and just letting your backup sit in a staging directory), you can do that. You'll just have to modify the procedure below based on information in the remainder of the manual.

    Step 1: Decide when you will run your backup.

    There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of setting off these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later.

    Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly. Choose a backup time that will not interfere with normal use of your computer. Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc.
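Scheduling is ordinary cron configuration, so the choice reduces to picking the days and time. The entries below are illustrative only (the times and the use of /etc/crontab are assumptions; Step 9 covers the actual cron setup):

```
# /etc/crontab fragment -- times are illustrative, pick your own.
# Run the backup every day at 00:30:
30 00 * * *      root  cback3 all
# Or run it only three days per week (Mon/Wed/Fri), with Monday as
# the first day of your configured week:
#30 00 * * 1,3,5  root  cback3 all
```

If you back up fewer than seven days per week, remember the warning above: the first scheduled day must match the first day of your configured week.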

    Warning

    Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week. This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week. If you skip running Cedar Backup on the first day of the week, your backups will likely be confused until the next week begins, or until you re-run the backup using the --full flag.

    Step 2: Make sure email works.

    Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job. Since by default Cedar Backup only writes output to the terminal if errors occur, this ensures that notification emails will only be sent out if errors occur.

    In order to receive problem notifications, you must make sure that email works for the user which is running the Cedar Backup cron jobs (typically root). Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors.
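One common way to forward root's mail, assuming a sendmail-style /etc/aliases file (the target address is a hypothetical example):

```
# /etc/aliases fragment -- forward root's mail to a monitored mailbox.
# The address is an illustrative placeholder; after editing, run
# "newaliases" so the change takes effect.
root: admin@example.com
```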

    Step 3: Configure your writer device.

    Before using Cedar Backup, your writer device must be properly configured. If you have configured your CD/DVD writer hardware to work through the normal filesystem device path, then you just need to know the path to the device on disk (something like /dev/cdrw). Cedar Backup will use this device path both when talking to a command like cdrecord and when doing filesystem operations like running media validation.

    Your other option is to configure your CD writer hardware like a SCSI device (either because it is a SCSI device or because you are using some sort of interface that makes it look like one). In this case, Cedar Backup will use the SCSI id when talking to cdrecord and the device path when running filesystem operations.

    See the section called “Configuring your Writer Device” for more information on writer devices and how they are configured.

    Note

    There is no need to set up your CD/DVD device if you have decided not to execute the store action.

    Due to the underlying utilities that Cedar Backup uses, the SCSI id may only be used for CD writers, not DVD writers.

    Step 4: Configure your backup user.

    Choose a user to be used for backups. Some platforms may come with a ready-made backup user. For other platforms, you may have to create a user yourself. You may choose any name you like, but a descriptive name such as backup or cback is a good choice. See your distribution's documentation for information on how to add a user.

    Note

    Standard Debian systems come with a user named backup. You may choose to stay with this user or create another one.
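On platforms without a ready-made backup user, the account can be created with standard tools. A sketch, assuming a Linux system with useradd; the user name, home directory, and shell below are all assumptions to adjust for your system:

```
# Run as root. All names and paths here are illustrative.
# Create a dedicated system account with no interactive login:
useradd --system --home-dir /opt/backup --shell /usr/sbin/nologin backup
# Verify the account exists:
id backup
```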

    Step 5: Create your backup tree.

    Cedar Backup requires a backup directory tree on disk. This directory tree must be roughly three times as big as the amount of data that will be backed up on a nightly basis, to allow for the data to be collected, staged, and then placed into an ISO filesystem image on disk. (This is one disadvantage to using Cedar Backup in single-machine pools, but in this day of really large hard drives, it might not be an issue.) Note that if you elect not to purge the staging directory every night, you will need even more space.

    You should create a collect directory, a staging directory and a working (temporary) directory. One recommended layout is this:

    /opt/
         backup/
                collect/
                stage/
                tmp/
             

    If you will be backing up sensitive information (e.g. password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700.
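The layout and permissions above can be sketched as a small shell function. The mode 700 follows the recommendation above; the /opt/backup path and the backup:backup ownership in the usage comment are assumptions to adjust for your system:

```shell
# Sketch: create the recommended collect/stage/tmp tree with mode 700.
# The layout matches the example above; the root path is up to you.
create_backup_tree() {
    root="$1"
    mkdir -p "$root/collect" "$root/stage" "$root/tmp" || return 1
    chmod 700 "$root" "$root/collect" "$root/stage" "$root/tmp"
}

# Typical use (as root), then hand ownership to your backup user, e.g.:
#   create_backup_tree /opt/backup
#   chown -R backup:backup /opt/backup
```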

    Note

    You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my dumping ground for filesystems that Debian does not manage.

    Some users have requested that the Debian packages set up a more standard location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package. If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp.

    Step 6: Create the Cedar Backup configuration file.

    Following the instructions in the section called “Configuration File Format” (above) create a configuration file for your machine. Since you are working with a pool of one, you must configure all four action-specific sections: collect, stage, store and purge.

    The usual location for the Cedar Backup config file is /etc/cback3.conf. If you change the location, make sure you edit your cronjobs (below) to point the cback3 script at the correct config file (using the --config option).

    Warning

    Configuration files should always be writable only by root (or by the file owner, if the owner is not root).

    If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately. For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root).
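A minimal sketch of locking the file down so only its owner can read or write it, assuming the default /etc/cback3.conf path; adjust the path (and owner) for your installation:

```shell
# Make a configuration file readable and writable only by its owner.
restrict_config() {
    chmod 600 "$1"
}

# Typical use (as root, on the default config location):
#   restrict_config /etc/cback3.conf
```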

    Step 7: Validate the Cedar Backup configuration file.

    Use the command cback3 validate to validate your configuration file. This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems, such as invalid CD/DVD device entries.

    Note: the most common cause of configuration problems is in not closing XML tags properly. Any XML tag that is opened must be closed appropriately.

    Step 8: Test your backup.

    Place a valid CD/DVD disc in your drive, and then use the command cback3 --full all. You should execute this command as root. If the command completes with no output, then the backup was run successfully.

    Just to be sure that everything worked properly, check the logfile (/var/log/cback3.log) for errors and also mount the CD/DVD disc to be sure it can be read.

    If Cedar Backup ever completes normally but the disc that is created is not usable, please report this as a bug. [22] To be safe, always enable the consistency check option in the store configuration section.
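Checking the logfile can be scripted. A sketch assuming the default /var/log/cback3.log location; the case-insensitive "error" pattern is an assumption about the log format, not a documented Cedar Backup convention:

```shell
# Return success (0) only if the named log contains no lines mentioning
# "error" (case-insensitive). The grep pattern is a guess at the format.
log_is_clean() {
    ! grep -qi "error" "$1"
}

# Typical use:
#   log_is_clean /var/log/cback3.log && echo "no errors logged"
```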

    Step 9: Modify the backup cron jobs.

    Since Cedar Backup should be run as root, one way to configure the cron job is to add a line like this to your /etc/crontab file:

    30 00 * * * root  cback3 all
             

    Or, you can create an executable script containing just these lines and place that file in the /etc/cron.daily directory:

    #!/bin/sh
    cback3 all
             

    You should consider adding the --output or -O switch to your cback3 command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously.
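For example, the crontab entry from above with verbose output enabled (the time is illustrative):

```
30 00 * * * root  cback3 --output all
```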

    Note

    For general information about using cron, see the manpage for crontab(5).

    On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup3. As installed, this file contains several different settings, all commented out. Uncomment the Single machine (pool of one) entry in the file, and change the line so that the backup goes off when you want it to.

CedarBackup3-3.1.6/doc/manual/manual.txt

Cedar Backup 3 Software Manual

Kenneth J. Pronovici

Copyright 2005-2008,2013-2015 Kenneth J. Pronovici

This work is free; you can redistribute it and/or modify it under the terms of
the GNU General Public License (the "GPL"), Version 2, as published by the Free
Software Foundation. For the purposes of the GPL, the "preferred form of
modification" for this work is the original Docbook XML text files. If you
choose to distribute this work in a compiled form (i.e. if you distribute HTML,
PDF or Postscript documents based on the original Docbook XML text files), you
must also consider image files to be "source code" if those images are required
in order to construct a complete and readable compiled version of the work.

This work is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE.

Copies of the GNU General Public License are available from the Free Software
Foundation website, http://www.gnu.org/. You may also write the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA

-------------------------------------------------------------------------------

Table of Contents

Preface
    Purpose
    Audience
    Conventions Used in This Book
        Typographic Conventions
        Icons
    Organization of This Manual
    Acknowledgments
1. Introduction
    What is Cedar Backup?
    Migrating from Version 2 to Version 3
    How to Get Support
    History
2. Basic Concepts
    General Architecture
    Data Recovery
    Cedar Backup Pools
    The Backup Process
        The Collect Action
        The Stage Action
        The Store Action
        The Purge Action
        The All Action
        The Validate Action
        The Initialize Action
        The Rebuild Action
    Coordination between Master and Clients
    Managed Backups
    Media and Device Types
    Incremental Backups
    Extensions
3. Installation
    Background
    Installing on a Debian System
    Installing from Source
        Installing Dependencies
        Installing the Source Package
4. Command Line Tools
    Overview
    The cback3 command
        Introduction
        Syntax
        Switches
        Actions
    The cback3-amazons3-sync command
        Introduction
        Syntax
        Switches
    The cback3-span command
        Introduction
        Syntax
        Switches
        Using cback3-span
        Sample run
5. Configuration
    Overview
    Configuration File Format
    Sample Configuration File
    Reference Configuration
    Options Configuration
    Peers Configuration
    Collect Configuration
    Stage Configuration
    Store Configuration
    Purge Configuration
    Extensions Configuration
    Setting up a Pool of One
        Step 1: Decide when you will run your backup.
        Step 2: Make sure email works.
        Step 3: Configure your writer device.
        Step 4: Configure your backup user.
        Step 5: Create your backup tree.
        Step 6: Create the Cedar Backup configuration file.
        Step 7: Validate the Cedar Backup configuration file.
        Step 8: Test your backup.
        Step 9: Modify the backup cron jobs.
    Setting up a Client Peer Node
        Step 1: Decide when you will run your backup.
        Step 2: Make sure email works.
        Step 3: Configure the master in your backup pool.
        Step 4: Configure your backup user.
        Step 5: Create your backup tree.
        Step 6: Create the Cedar Backup configuration file.
        Step 7: Validate the Cedar Backup configuration file.
        Step 8: Test your backup.
        Step 9: Modify the backup cron jobs.
    Setting up a Master Peer Node
        Step 1: Decide when you will run your backup.
        Step 2: Make sure email works.
        Step 3: Configure your writer device.
        Step 4: Configure your backup user.
        Step 5: Create your backup tree.
        Step 6: Create the Cedar Backup configuration file.
        Step 7: Validate the Cedar Backup configuration file.
        Step 8: Test connectivity to client machines.
        Step 9: Test your backup.
        Step 10: Modify the backup cron jobs.
    Configuring your Writer Device
        Device Types
        Devices identified by device name
        Devices identified by SCSI id
        Linux Notes
        Finding your Linux CD Writer
        Mac OS X Notes
    Optimized Blanking Strategy
6. Official Extensions
    System Information Extension
    Amazon S3 Extension
    Subversion Extension
    MySQL Extension
    PostgreSQL Extension
    Mbox Extension
    Encrypt Extension
    Split Extension
    Capacity Extension
A. Extension Architecture Interface
B. Dependencies
C. Data Recovery
    Finding your Data
    Recovering Filesystem Data
        Full Restore
        Partial Restore
    Recovering MySQL Data
    Recovering Subversion Data
    Recovering Mailbox Data
    Recovering Data split by the Split Extension
D. Securing Password-less SSH Connections
E. Copyright

Preface

Table of Contents

Purpose
Audience
Conventions Used in This Book
    Typographic Conventions
    Icons
Organization of This Manual
Acknowledgments

Purpose

This software manual has been written to document version 2 of Cedar Backup,
originally released in early 2005.

Audience

This manual has been written for computer-literate administrators who need to
use and configure Cedar Backup on their Linux or UNIX-like system. The examples
in this manual assume the reader is relatively comfortable with UNIX and
command-line interfaces.

Conventions Used in This Book

This section covers the various conventions used in this manual.

Typographic Conventions

Term         Used for first use of important terms.
Command      Used for commands, command output, and switches
Replaceable  Used for replaceable items in code and text
Filenames    Used for file and directory names

Icons

Note     This icon designates a note relating to the surrounding text.
Tip      This icon designates a helpful tip relating to the surrounding text.
Warning  This icon designates a warning relating to the surrounding text.

Organization of This Manual

Chapter 1, Introduction

    Provides some general history about Cedar Backup, what needs it is intended
    to meet, how to get support, and how to migrate from version 2 to version 3.
Chapter 2, Basic Concepts

    Discusses the basic concepts of a Cedar Backup infrastructure, and
    specifies terms used throughout the rest of the manual.

Chapter 3, Installation

    Explains how to install the Cedar Backup package either from the Python
    source distribution or from the Debian package.

Chapter 4, Command Line Tools

    Discusses the various Cedar Backup command-line tools, including the
    primary cback3 command.

Chapter 5, Configuration

    Provides detailed information about how to configure Cedar Backup.

Chapter 6, Official Extensions

    Describes each of the officially-supported Cedar Backup extensions.

Appendix A, Extension Architecture Interface

    Specifies the Cedar Backup extension architecture interface, through which
    third party developers can write extensions to Cedar Backup.

Appendix B, Dependencies

    Provides some additional information about the packages which Cedar Backup
    relies on, including information about how to find documentation and
    packages on non-Debian systems.

Appendix C, Data Recovery

    Cedar Backup provides no facility for restoring backups, assuming the
    administrator can handle this infrequent task. This appendix provides some
    notes for administrators to work from.

Appendix D, Securing Password-less SSH Connections

    Password-less SSH connections are a necessary evil when remote backup
    processes need to execute without human interaction. This appendix
    describes some ways that you can reduce the risk to your backup pool should
    your master machine be compromised.

Acknowledgments

The structure of this manual and some of the basic boilerplate has been taken
from the book Version Control with Subversion. Thanks to the authors (and
O'Reilly) for making this excellent reference available under a free and open
license.

Chapter 1. Introduction

Table of Contents

What is Cedar Backup?
Migrating from Version 2 to Version 3
How to Get Support
History

"Only wimps use tape backup: real men just upload their important stuff on ftp,
and let the rest of the world mirror it."
Linus Torvalds, at the release of Linux 2.0.8 in July of 1996.

What is Cedar Backup?

Cedar Backup is a software package designed to manage system backups for a pool
of local and remote machines. Cedar Backup understands how to back up
filesystem data as well as MySQL and PostgreSQL databases and Subversion
repositories. It can also be easily extended to support other kinds of data
sources.

Cedar Backup is focused around weekly backups to a single CD or DVD disc, with
the expectation that the disc will be changed or overwritten at the beginning
of each week. If your hardware is new enough (and almost all hardware is
today), Cedar Backup can write multisession discs, allowing you to add
incremental data to a disc on a daily basis. Alternately, Cedar Backup can
write your backups to the Amazon S3 cloud rather than relying on physical
media.

Besides offering command-line utilities to manage the backup process, Cedar
Backup provides a well-organized library of backup-related functionality,
written in the Python 3 programming language.

There are many different backup software implementations out there in the open
source world. Cedar Backup aims to fill a niche: it aims to be a good fit for
people who need to back up a limited amount of important data on a regular
basis. Cedar Backup isn't for you if you want to back up your huge MP3
collection every night, or if you want to back up a few hundred machines.
However, if you administer a small set of machines and you want to run daily
incremental backups for things like system configuration, current email, small
web sites, Subversion or Mercurial repositories, or small MySQL databases,
then Cedar Backup is probably worth your time.

Cedar Backup has been developed on a Debian GNU/Linux system and is primarily
supported on Debian and other Linux systems. However, since it is written in
portable Python 3, it should run without problems on just about any UNIX-like
operating system.
In particular, full Cedar Backup functionality is known to work on Debian and
SuSE Linux systems, and client functionality is also known to work on FreeBSD
and Mac OS X systems.

To run a Cedar Backup client, you really just need a working Python 3
installation. To run a Cedar Backup master, you will also need a set of other
executables, most of which are related to building and writing CD/DVD images or
talking to the Amazon S3 infrastructure. A full list of dependencies is
provided in the section called "Installing Dependencies".

Migrating from Version 2 to Version 3

The main difference between Cedar Backup version 2 and Cedar Backup version 3
is the targeted Python interpreter. Cedar Backup version 2 was designed for
Python 2, while version 3 is a conversion of the original code to Python 3.
Other than that, both versions are functionally equivalent. The configuration
format is unchanged, and you can mix-and-match masters and clients of different
versions in the same backup pool. Both versions will be fully supported until
around the time of the Python 2 end-of-life in 2020, but you should plan to
migrate sooner than that if possible.

A major design goal for version 3 was to facilitate easy migration testing for
users, by making it possible to install version 3 on the same server where
version 2 was already in use. A side effect of this design choice is that all
of the executables, configuration files, and logs changed names in version 3.
Where version 2 used "cback", version 3 uses "cback3": cback3.conf instead of
cback.conf, cback3.log instead of cback.log, etc. So, while migrating from
version 2 to version 3 is relatively straightforward, you will have to make
some changes manually. You will need to create a new configuration file (or
soft link to the old one), modify your cron jobs to use the new executable
name, etc.
You can migrate one server at a time in your pool with no ill effects, or even
incrementally migrate a single server by using version 2 and version 3 on
different days of the week or for different parts of the backup.

How to Get Support

Cedar Backup is open source software that is provided to you at no cost. It is
provided with no warranty, not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE. However, that said, someone can usually help you solve
whatever problems you might see.

If you experience a problem, your best bet is to file an issue in the issue
tracker at BitBucket. ^[1] When the source code was hosted at SourceForge,
there was a mailing list. However, it was very lightly used in the last years
before I abandoned SourceForge, and I have decided not to replace it.

If you are not comfortable discussing your problem in public or listing it in a
public database, or if you need to send along information that you do not want
made public, then you can write . That mail will go directly to me. If you
write the support address about a bug, a "scrubbed" bug report will eventually
end up in the public bug database anyway, so if at all possible you should use
the public reporting mechanisms. One of the strengths of the open-source
software development model is its transparency.

Regardless of how you report your problem, please try to provide as much
information as possible about the behavior you observed and the environment in
which the problem behavior occurred. ^[2] In particular, you should provide:
the version of Cedar Backup that you are using; how you installed Cedar Backup
(i.e. Debian package, source package, etc.); the exact command line that you
executed; any error messages you received, including Python stack traces (if
any); and relevant sections of the Cedar Backup log.
It would be even better if you could describe exactly how to reproduce the
problem, for instance by including your entire configuration file and/or
specific information about your system that might relate to the problem.
However, please do not provide huge sections of debugging logs unless you are
sure they are relevant or unless someone asks for them.

Tip

Sometimes, the error that Cedar Backup displays can be rather cryptic. This is
because under internal error conditions, the text related to an exception might
get propagated all of the way up to the user interface. If the message you
receive doesn't make much sense, or if you suspect that it results from an
internal error, you might want to re-run Cedar Backup with the --stack option.
This forces Cedar Backup to dump the entire Python stack trace associated with
the error, rather than just printing the last message it received. This is good
information to include along with a bug report, as well.

History

Cedar Backup began life in late 2000 as a set of Perl scripts called kbackup.
These scripts met an immediate need (which was to back up skyjammer.com and
some personal machines) but proved to be unstable, overly verbose and rather
difficult to maintain.

In early 2002, work began on a rewrite of kbackup. The goal was to address many
of the shortcomings of the original application, as well as to clean up the
code and make it available to the general public. While doing research related
to code I could borrow or base the rewrite on, I discovered that there was
already an existing backup package with the name kbackup, so I decided to
change the name to Cedar Backup instead.

Because I had become fed up with the prospect of maintaining a large volume of
Perl code, I decided to abandon that language in favor of Python. ^[3] At the
time, I chose Python mostly because I was interested in learning it, but in
retrospect it turned out to be a very good decision.
From my perspective, Python has almost all of the strengths of Perl, but few of its inherent weaknesses (I feel that primarily, Python code often ends up being much more readable than Perl code). Around this same time, skyjammer.com and cedar-solutions.com were converted to run Debian GNU/Linux (potato) ^[4] and I entered the Debian new maintainer queue, so I also made it a goal to implement Debian packages along with a Python source distribution for the new release. Version 1.0 of Cedar Backup was released in June of 2002. We immediately began using it to back up skyjammer.com and cedar-solutions.com, where it proved to be much more stable than the original code. In the meantime, I continued to improve as a Python programmer and also started doing a significant amount of professional development in Java. It soon became obvious that the internal structure of Cedar Backup 1.0, while much better than kbackup, still left something to be desired. In November 2003, I began an attempt at cleaning up the codebase. I converted all of the internal documentation to use Epydoc, ^[5] and updated the code to use the newly-released Python logging package ^[6] after having a good experience with Java's log4j. However, I was still not satisfied with the code, which did not lend itself to the automated regression testing I had used when working with junit in my Java code. So, rather than releasing the cleaned-up code, I instead began another ground-up rewrite in May 2004. With this rewrite, I applied everything I had learned from other Java and Python projects I had undertaken over the last few years. I structured the code to take advantage of Python's unique ability to blend procedural code with object-oriented code, and I made automated unit testing a primary requirement. The result was the 2.0 release, which is cleaner, more compact, better focused, and better documented than any release before it. 
Utility code is less application-specific, and is now usable as a
general-purpose library. The 2.0 release also includes a complete regression
test suite of over 3000 tests, which will help to ensure that quality is
maintained as development continues into the future. ^[7]

The 3.0 release of Cedar Backup is a Python 3 conversion of the 2.0 release,
with minimal additional functionality. The conversion from Python 2 to Python 3
started in mid-2015, about 5 years before the anticipated deprecation of Python
2 in 2020. Most users should consider transitioning to the 3.0 release.

-------------------------------------------------------------------------------

^[1] See https://bitbucket.org/cedarsolutions/cedar-backup3/issues.

^[2] See Simon Tatham's excellent bug reporting tutorial:
http://www.chiark.greenend.org.uk/~sgtatham/bugs.html.

^[3] See http://www.python.org/.

^[4] Debian's stable releases are named after characters in the Toy Story
movie.

^[5] Epydoc is a Python code documentation tool. See
http://epydoc.sourceforge.net/.

^[6] See http://docs.python.org/lib/module-logging.html.

^[7] Tests are implemented using Python's unit test framework. See
http://docs.python.org/lib/module-unittest.html.

Chapter 2. Basic Concepts

Table of Contents

General Architecture
Data Recovery
Cedar Backup Pools
The Backup Process
    The Collect Action
    The Stage Action
    The Store Action
    The Purge Action
    The All Action
    The Validate Action
    The Initialize Action
    The Rebuild Action
Coordination between Master and Clients
Managed Backups
Media and Device Types
Incremental Backups
Extensions

General Architecture

Cedar Backup is architected as a Python package (library) and a single
executable (a Python script). The Python package provides both
application-specific code and general utilities that can be used by programs
other than Cedar Backup. It also includes modules that can be used by third
parties to extend Cedar Backup or provide related functionality.
The cback3 script is designed to run as root, since otherwise it's difficult to back up system directories or write to the CD/DVD device. However, pains are taken to use the backup user's effective user id (specified in configuration) when appropriate. Note: this does not mean that cback3 runs setuid^[8] or setgid. However, all files on disk will be owned by the backup user, and all rsh-based network connections will take place as the backup user.

The cback3 script is configured via command-line options and an XML configuration file on disk. The configuration file is normally stored in /etc/cback3.conf, but this path can be overridden at runtime. See Chapter 5, Configuration for more information on how Cedar Backup is configured.

Warning

You should be aware that backups to CD/DVD media can probably be read by any user who has permission to mount the CD/DVD writer. If you intend to leave the backup disc in the drive at all times, you may want to consider this when setting up device permissions on your machine. See also the section called "Encrypt Extension".

Data Recovery

Cedar Backup does not include any facility to restore backups. Instead, it assumes that the administrator (using the procedures and references in Appendix C, Data Recovery) can handle the task of restoring their own system, using the standard system tools at hand. If I were to maintain recovery code in Cedar Backup, I would almost certainly end up in one of two situations. Either Cedar Backup would only support simple recovery tasks, and those via an interface a lot like that of the underlying system tools; or Cedar Backup would have to include a hugely complicated interface to support more specialized (and hence useful) recovery tasks like restoring individual files as of a certain point in time. In either case, I would end up trying to maintain critical functionality that would be rarely used, and hence would also be rarely tested by end-users.
I am uncomfortable asking anyone to rely on functionality that falls into this category. My primary goal is to keep the Cedar Backup codebase as simple and focused as possible. I hope you can understand how the choice of providing documentation, but not code, seems to strike the best balance between managing code complexity and providing the functionality that end-users need.

Cedar Backup Pools

There are two kinds of machines in a Cedar Backup pool. One machine (the master) has a CD or DVD writer on it and writes the backup to disc. The others (clients) collect data to be written to disc by the master. Collectively, the master and client machines in a pool are called peer machines. Cedar Backup has been designed primarily for situations where there is a single master and a set of other clients that the master interacts with. However, it will just as easily work for a single machine (a backup pool of one), and in fact more users seem to use it this way than any other.

The Backup Process

The Cedar Backup backup process is structured in terms of a set of decoupled actions which execute independently (based on a schedule in cron) rather than through some highly coordinated flow of control. This design decision has both positive and negative consequences. On the one hand, the code is much simpler and can choose to simply abort or log an error if its expectations are not met. On the other hand, the administrator must coordinate the various actions during initial set-up. See the section called "Coordination between Master and Clients" (later in this chapter) for more information on this subject.

A standard backup run consists of four steps (actions), some of which execute on the master machine, and some of which execute on one or more client machines. These actions are: collect, stage, store and purge. In general, more than one action may be specified on the command-line.
If more than one action is specified, then actions will be taken in a sensible order (generally collect, stage, store, purge). A special all action is also allowed, which implies all of the standard actions in the same sensible order. The cback3 command also supports several actions that are not part of the standard backup run and cannot be executed along with any other actions. These actions are validate, initialize and rebuild. All of the various actions are discussed further below. See Chapter 5, Configuration for more information on how a backup run is configured.

Flexibility

Cedar Backup was designed to be flexible. It allows you to decide for yourself which backup steps you care about executing (and when you execute them), based on your own situation and your own priorities. As an example, I always back up every machine I own. I typically keep 7-10 days of staging directories around, but switch CD/DVD media roughly every week. That way, I can periodically take a disc off-site in case the machine gets stolen or damaged. If you're not worried about these risks, then there's no need to write to disc. In fact, some users prefer to use their master machine as a simple "consolidation point". They don't back up any data on the master, and don't write to disc at all. They just use Cedar Backup to handle the mechanics of moving backed-up data to a central location. This isn't quite what Cedar Backup was written to do, but it is flexible enough to meet their needs.

The Collect Action

The collect action is the first action in a standard backup run. It executes on both master and client nodes. Based on configuration, this action traverses the peer's filesystem and gathers files to be backed up. Each configured high-level directory is collected up into its own tar file in the collect directory. The tarfiles can either be uncompressed (.tar) or compressed with either gzip (.tar.gz) or bzip2 (.tar.bz2). There are three supported collect modes: daily, weekly and incremental.
Directories configured for daily backups are backed up every day. Directories configured for weekly backups are backed up on the first day of the week. Directories configured for incremental backups are traversed every day, but only the files which have changed (based on a saved-off SHA hash) are actually backed up.

Collect configuration also allows for a variety of ways to filter files and directories out of the backup. For instance, administrators can configure an ignore indicator file ^[9] or specify absolute paths or filename patterns ^[10] to be excluded. You can even configure a backup "link farm" rather than explicitly listing files and directories in configuration.

This action is optional on the master. You only need to configure and execute the collect action on the master if you have data to back up on that machine. If you plan to use the master only as a "consolidation point" to collect data from other machines, then there is no need to execute the collect action there. If you run the collect action on the master, it behaves the same there as anywhere else, and you have to stage the master's collected data just like any other client (typically by configuring a local peer in the stage action).

The Stage Action

The stage action is the second action in a standard backup run. It executes on the master peer node. The master works down the list of peers in its backup pool and stages (copies) the collected backup files from each of them into a daily staging directory by peer name. For the purposes of this action, the master node can be configured to treat itself as a client node. If you intend to back up data on the master, configure the master as a local peer. Otherwise, just configure each of the clients as a remote peer. Local and remote client peers are treated differently. Local peer collect directories are assumed to be accessible via normal copy commands (i.e.
on a mounted filesystem) while remote peer collect directories are accessed via an RSH-compatible command such as ssh. If a given peer is not ready to be staged, the stage process will log an error, abort the backup for that peer, and then move on to its other peers. This way, one broken peer cannot break a backup for other peers which are up and running. Keep in mind that Cedar Backup is flexible about what actions must be executed as part of a backup. If you would prefer, you can stop the backup process at this step, and skip the store step. In this case, the staged directories will represent your backup rather than a disc.

Note

Directories "collected" by another process can be staged by Cedar Backup. If the file cback.collect exists in a collect directory when the stage action is taken, then that directory will be staged.

The Store Action

The store action is the third action in a standard backup run. It executes on the master peer node. The master machine determines the location of the current staging directory, and then writes the contents of that staging directory to disc. After the contents of the directory have been written to disc, an optional validation step ensures that the write was successful. If the backup is running on the first day of the week, if the drive does not support multisession discs, or if the --full option is passed to the cback3 command, the disc will be rebuilt from scratch. Otherwise, a new ISO session will be added to the disc each day the backup runs. This action is entirely optional. If you would prefer to just stage backup data from a set of peers to a master machine, and have the staged directories represent your backup rather than a disc, this is fine.

Warning

The store action is not supported on the Mac OS X (darwin) platform. On that platform, the "automount" function of the Finder interferes significantly with Cedar Backup's ability to mount and unmount media and write to the CD or DVD hardware.
The Cedar Backup writer and image functionality works on this platform, but the effort required to fight the operating system about who owns the media and the device makes it nearly impossible to execute the store action successfully.

Current Staging Directory

The store action tries to be smart about finding the current staging directory. It first checks the current day's staging directory. If that directory exists, and it has not yet been written to disc (i.e. there is no store indicator), then it will be used. Otherwise, the store action will look for an unused staging directory for either the previous day or the next day, in that order. A warning will be written to the log under these circumstances (controlled by configuration). This behavior varies slightly when the --full option is in effect. Under these circumstances, any existing store indicator will be ignored. Also, the store action will always attempt to use the current day's staging directory, ignoring any staging directories for the previous day or the next day. This way, running a full store action more than once consecutively will always produce the same results. (You might imagine a use case where a person wants to make several copies of the same full backup.)

The Purge Action

The purge action is the fourth and final action in a standard backup run. It executes on both the master and client peer nodes. Configuration specifies how long to retain files in certain directories, and older files and empty directories are purged. Typically, collect directories are purged daily, and stage directories are purged weekly or slightly less often (if a disc gets corrupted, older backups may still be available on the master). Some users also choose to purge the configured working directory (which is used for temporary files) to eliminate any leftover files which might have resulted from changes to configuration.
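The retention behavior described for the purge action (remove files older than a configured age, then prune empty directories) can be sketched in Python. This is a simplified illustration under assumed semantics, not Cedar Backup's actual implementation, which is driven by its XML configuration:

```python
import os
import time

def purge_old_files(directory, retain_days):
    """Remove files older than retain_days, then prune empty directories.

    Hypothetical helper illustrating age-based purging; the real purge
    action reads its retention settings from configuration.
    """
    cutoff = time.time() - retain_days * 24 * 60 * 60
    # Walk bottom-up so subdirectories emptied by file removal
    # can themselves be removed on the same pass.
    for root, dirs, files in os.walk(directory, topdown=False):
        for name in files:
            path = os.path.join(root, name)
            if os.path.getmtime(path) < cutoff:
                os.remove(path)
        for name in dirs:
            path = os.path.join(root, name)
            if not os.listdir(path):  # prune directories left empty
                os.rmdir(path)
```

For example, `purge_old_files("/var/backup/collect", 7)` would mimic a typical weekly retention policy for a collect directory.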
The All Action

The all action is a pseudo-action which causes all of the actions in a standard backup run to be executed together in order. It cannot be combined with any other actions on the command line. Extensions cannot be executed as part of the all action. If you need to execute an extended action, you must specify the other actions you want to run individually on the command line. ^[11] The all action does not have its own configuration. Instead, it relies on the individual configuration sections for all of the other actions.

The Validate Action

The validate action is used to validate configuration on a particular peer node, either master or client. It cannot be combined with any other actions on the command line. The validate action checks that the configuration file can be found, that the configuration file is valid, and that certain portions of the configuration file make sense (for instance, making sure that specified users exist, directories are readable and writable as necessary, etc.).

The Initialize Action

The initialize action is used to initialize media for use with Cedar Backup. This is an optional step. By default, Cedar Backup does not need to use initialized media and will write to whatever media exists in the writer device. However, if the "check media" store configuration option is set to true, Cedar Backup will check the media before writing to it and will error out if the media has not been initialized. Initializing the media consists of writing a mostly-empty image using a known media label (the media label will begin with "CEDAR BACKUP"). Note that only rewritable media (CD-RW, DVD+RW) can be initialized. It doesn't make any sense to initialize media that cannot be rewritten (CD-R, DVD+R), since Cedar Backup would then not be able to use that media for a backup. You can still configure Cedar Backup to check non-rewritable media; in this case, the check will also pass if the media is apparently unused (i.e. has no media label).
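The "check media" rules just described amount to a simple test on the volume label. A minimal sketch, assuming the label has already been read from the device (for instance with the volname utility); the function and its arguments are hypothetical, not Cedar Backup's actual code:

```python
CEDAR_LABEL_PREFIX = "CEDAR BACKUP"

def media_check_passes(label, rewritable):
    """Return True if media passes the "check media" test described above.

    label      - the media's volume label, or None if the media is unlabeled
    rewritable - whether the configured media type is rewritable

    Hypothetical illustration of the rules in the text.
    """
    if label is not None and label.startswith(CEDAR_LABEL_PREFIX):
        return True   # media was initialized by the initialize action
    if not rewritable and label is None:
        return True   # apparently-unused non-rewritable media also passes
    return False      # anything else causes the store action to error out
```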
The Rebuild Action

The rebuild action is an exception-handling action that is executed independent of a standard backup run. It cannot be combined with any other actions on the command line. The rebuild action attempts to rebuild "this week's" disc from any remaining unpurged staging directories. Typically, it is used to make a copy of a backup, replace lost or damaged media, or to switch to new media mid-week for some other reason. To decide what data to write to disc again, the rebuild action looks back and finds the first day of the current week. Then, it finds any remaining staging directories between that date and the current date. If any staging directories are found, they are all written to disc in one big ISO session. The rebuild action does not have its own configuration. It relies on configuration for the other actions, especially the store action.

Coordination between Master and Clients

Unless you are using Cedar Backup to manage a "pool of one", you will need to set up some coordination between your clients and master to make everything work properly. This coordination isn't difficult (it mostly consists of making sure that operations happen in the right order), but some users are surprised that it is required and want to know why Cedar Backup can't just "take care of it for me". Essentially, each client must finish collecting all of its data before the master begins staging it, and the master must finish staging data from a client before that client purges its collected data. Administrators may need to experiment with the time between the collect and purge entries so that the master has enough time to stage data before it is purged.

Managed Backups

Cedar Backup also supports an optional feature called the "managed backup". This feature is intended for use with remote clients where cron is not available. When managed backups are enabled, managed clients must still be configured as usual.
However, rather than using a cron job on the client to execute the collect and purge actions, the master executes these actions on the client via a remote shell. To make this happen, first set up one or more managed clients in Cedar Backup configuration. Then, invoke Cedar Backup with the --managed command-line option. Whenever Cedar Backup invokes an action locally, it will invoke the same action on each of the managed clients. Technically, this feature works for any client, not just clients that don't have cron available. Used this way, it can simplify the setup process, because cron only has to be configured on the master. For some users, that may be motivation enough to use this feature all of the time. However, please keep in mind that this feature depends on a stable network. If your network connection drops, your backup will be interrupted and will not be complete. It is even possible that some of the Cedar Backup metadata (like incremental backup state) will be corrupted. The risk is not high, but it is something you need to be aware of if you choose to use this optional feature.

Media and Device Types

Cedar Backup is focused around writing backups to CD or DVD media using a standard SCSI or IDE writer. In Cedar Backup terms, the disc itself is referred to as the media, and the CD/DVD drive is referred to as the device or sometimes the backup device. ^[12] When using a new enough backup device, a new "multisession" ISO image ^[13] is written to the media on the first day of the week, and then additional multisession images are added to the media each day that Cedar Backup runs. This way, the media is complete and usable at the end of every backup run, but a single disc can be used all week long. If your backup device does not support multisession images (which is really unusual today), then a new ISO image will be written to the media each time Cedar Backup runs (and you should probably confine yourself to the "daily" backup mode to avoid losing data).
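The managed-backup dispatch described earlier in this section boils down to running the same action remotely for each managed client that lists it as a managed action. A rough sketch, assuming ssh as the RSH-compatible command; the function, its arguments, and the injectable runner are hypothetical illustrations, not Cedar Backup's actual code:

```python
import subprocess

def run_managed_action(action, managed_clients, include_local=True,
                       runner=subprocess.run):
    """Run a Cedar Backup action locally and on each managed client.

    action          - action name, e.g. "collect" or "purge"
    managed_clients - maps client hostname -> list of its managed actions
    include_local   - False mirrors the idea behind --managed-only
    runner          - command executor, injectable for testing

    Hypothetical sketch; the real tool drives this from XML configuration.
    """
    commands = []
    if include_local:
        commands.append(["cback3", action])
    for host, managed_actions in sorted(managed_clients.items()):
        if action in managed_actions:
            # Invoke the same action on the client via a remote shell.
            commands.append(["ssh", host, "cback3", action])
    for command in commands:
        runner(command, check=True)
    return commands
```

Note that each remote invocation is synchronous here, which matches the caveat in the text: if the network drops mid-run, the remaining clients are never reached and the backup is incomplete.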
Cedar Backup currently supports four different kinds of CD media:

cdr-74    74-minute non-rewritable CD media
cdrw-74   74-minute rewritable CD media
cdr-80    80-minute non-rewritable CD media
cdrw-80   80-minute rewritable CD media

I have chosen to support just these four types of CD media because they seem to be the most "standard" of the various types commonly sold in the U.S. as of this writing (early 2005). If you regularly use an unsupported media type and would like Cedar Backup to support it, send me information about the capacity of the media in megabytes (MB) and whether it is rewritable.

Cedar Backup also supports two kinds of DVD media:

dvd+r     Single-layer non-rewritable DVD+R media
dvd+rw    Single-layer rewritable DVD+RW media

The underlying growisofs utility does support other kinds of media (including DVD-R, DVD-RW and Blu-ray) which work somewhat differently than standard DVD+R and DVD+RW media. I don't support these other kinds of media because I haven't had any opportunity to work with them. The same goes for dual-layer media of any type.

Incremental Backups

Cedar Backup supports three different kinds of backups for individual collect directories. These are daily, weekly and incremental backups. Directories using the daily mode are backed up every day. Directories using the weekly mode are only backed up on the first day of the week, or when the --full option is used. Directories using the incremental mode are always backed up on the first day of the week (like a weekly backup), but after that only the files which have changed are actually backed up on a daily basis. In Cedar Backup, incremental backups are not based on date, but are instead based on saved checksums, one for each backed-up file. When a full backup is run, Cedar Backup gathers a checksum value ^[14] for each backed-up file. The next time an incremental backup is run, Cedar Backup checks its list of file/checksum pairs for each file that might be backed up.
If the file's checksum value does not match the saved value, or if the file does not appear in the list of file/checksum pairs, then it will be backed up and a new checksum value will be placed into the list. Otherwise, the file will be ignored and the checksum value will be left unchanged. Cedar Backup stores the file/checksum pairs in .sha files in its working directory, one file per configured collect directory. The mappings in these files are reset at the start of the week or when the --full option is used. Because these files are used for an entire week, you should never purge the working directory more frequently than once per week.

Extensions

Imagine that there is a third party developer who understands how to back up a certain kind of database repository. This third party might want to integrate his or her specialized backup into the Cedar Backup process, perhaps thinking of the database backup as a sort of "collect" step. Prior to Cedar Backup version 2, any such integration would have been completely independent of Cedar Backup itself. The "external" backup functionality would have had to maintain its own configuration and would not have had access to any Cedar Backup configuration.

Starting with version 2, Cedar Backup allows extensions to the backup process. An extension is an action that isn't part of the standard backup process (i.e. not collect, stage, store or purge), but can be executed by Cedar Backup when properly configured. Extension authors implement an "action process" function with a certain interface, and are allowed to add their own sections to the Cedar Backup configuration file, so that all backup configuration can be centralized. Then, the action process function is associated with an action name which can be executed from the cback3 command line like any other action. Hopefully, as the Cedar Backup user community grows, users will contribute their own extensions back to the community.
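To make the extension idea concrete, an "action process" function is simply a Python function that Cedar Backup calls by action name. The skeleton below assumes a three-argument signature (config path, command-line options, parsed configuration); the actual interface is defined in Appendix A, Extension Architecture Interface, so treat the names here as illustrative assumptions:

```python
# Hypothetical extension module for the database-backup scenario above,
# wired to an action name in Cedar Backup configuration.

def executeAction(configPath, options, config):
    """Back up a database as a custom "collect"-style step.

    configPath - path to the Cedar Backup configuration file
    options    - parsed command-line options
    config     - parsed configuration, including any extension-specific
                 sections this extension added to the config file

    The signature and argument names are assumptions for illustration;
    see Appendix A for the real interface.
    """
    # A real extension would dump the database into the collect
    # directory here, so the stage and store actions pick it up later.
    raise NotImplementedError("replace with database-specific backup logic")
```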
Well-written general-purpose extensions will be accepted into the official codebase.

Note

Users should see Chapter 5, Configuration for more information on how extensions are configured, and Chapter 6, Official Extensions for details on all of the officially-supported extensions. Developers may be interested in Appendix A, Extension Architecture Interface.

-------------------------------------------------------------------------------

^[8] See http://en.wikipedia.org/wiki/Setuid
^[9] Analogous to .cvsignore in CVS
^[10] In terms of Python regular expressions
^[11] Some users find this surprising, because extensions are configured with sequence numbers. I did it this way because I felt that running extensions as part of the all action would sometimes result in surprising behavior. I am not planning to change the way this works.
^[12] My original backup device was an old Sony CRX140E 4X CD-RW drive. It has since died, and I currently develop using a Lite-On 1673S DVDRW drive.
^[13] An ISO image is the standard way of creating a filesystem to be copied to a CD or DVD. It is essentially a "filesystem-within-a-file" and many UNIX operating systems can actually mount ISO image files just like hard drives, floppy disks or actual CDs. See Wikipedia for more information: http://en.wikipedia.org/wiki/ISO_image.
^[14] The checksum is actually an SHA cryptographic hash. See Wikipedia for more information: http://en.wikipedia.org/wiki/SHA-1.

Chapter 3. Installation

Table of Contents

Background
Installing on a Debian System
Installing from Source
    Installing Dependencies
    Installing the Source Package

Background

There are two different ways to install Cedar Backup. The easiest way is to install the pre-built Debian packages. This method is painless and ensures that all of the correct dependencies are available, etc.
If you are running a Linux distribution other than Debian or you are running some other platform like FreeBSD or Mac OS X, then you must use the Python source distribution to install Cedar Backup. When using this method, you need to manage all of the dependencies yourself.

Non-Linux Platforms

Cedar Backup has been developed on a Debian GNU/Linux system and is primarily supported on Debian and other Linux systems. However, since it is written in portable Python 3, it should run without problems on just about any UNIX-like operating system. In particular, full Cedar Backup functionality is known to work on Debian and SuSE Linux systems, and client functionality is also known to work on FreeBSD and Mac OS X systems. To run a Cedar Backup client, you really just need a working Python 3 installation. To run a Cedar Backup master, you will also need a set of other executables, most of which are related to building and writing CD/DVD images. A full list of dependencies is provided further on in this chapter.

Installing on a Debian System

The easiest way to install Cedar Backup onto a Debian system is by using a tool such as apt-get or aptitude. If you are running a Debian release which contains Cedar Backup, you can use your normal Debian mirror as an APT data source. (The Debian "jessie" release is the first release to contain Cedar Backup 3.) Otherwise, you need to install from the Cedar Solutions APT data source. ^[15] To do this, add the Cedar Solutions APT data source to your /etc/apt/sources.list file. After you have configured the proper APT data source, install Cedar Backup using this set of commands:

$ apt-get update
$ apt-get install cedar-backup3 cedar-backup3-doc

Several of the Cedar Backup dependencies are listed as "recommended" rather than required. If you are installing Cedar Backup on a master machine, you must install some or all of the recommended dependencies, depending on which actions you intend to execute.
The stage action normally requires ssh, and the store action requires eject and either cdrecord/mkisofs or dvd+rw-tools. Clients must also install some sort of ssh server if a remote master will collect backups from them. If you would prefer, you can also download the .deb files and install them by hand with a tool such as dpkg. You can find these files in the Cedar Solutions APT source. In either case, once the package has been installed, you can proceed to configuration as described in Chapter 5, Configuration.

Note

The Debian package-management tools must generally be run as root. It is safe to install Cedar Backup to a non-standard location and run it as a non-root user. However, to do this, you must install the source distribution instead of the Debian package.

Installing from Source

On platforms other than Debian, Cedar Backup is installed from a Python source distribution. ^[16] You will have to manage dependencies on your own.

Tip

Many UNIX-like distributions provide an automatic or semi-automatic way to install packages like the ones Cedar Backup requires (think RPMs for Mandrake or RedHat, Gentoo's Portage system, the Fink project for Mac OS X, or the BSD ports system). If you are not sure how to install these packages on your system, you might want to check out Appendix B, Dependencies. This appendix provides links to "upstream" source packages, plus as much information as I have been able to gather about packages for non-Debian platforms.

Installing Dependencies

Cedar Backup requires a number of external packages in order to function properly. Before installing Cedar Backup, you must make sure that these dependencies are met. Cedar Backup is written in Python 3 and requires version 3.4 or greater of the language. Additionally, remote client peer nodes must be running an RSH-compatible server, such as the ssh server, and master nodes must have an RSH-compatible client installed if they need to connect to remote peer machines.
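Since version 3.4 or greater of Python is required, it's worth confirming that the interpreter you plan to use qualifies before installing. This is plain Python, not part of Cedar Backup itself:

```python
import sys

def python_is_new_enough(minimum=(3, 4)):
    """Return True if the running interpreter meets the minimum version."""
    return sys.version_info[:2] >= minimum
```

The same check can be done as a one-liner from the shell: `python3 -c 'import sys; print(sys.version_info >= (3, 4))'`.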
Master machines also require several other system utilities, most having to do with writing and validating CD/DVD media. On master machines, you must make sure that these utilities are available if you want to run the store action:

* mkisofs
* eject
* mount
* umount
* volname

Then, you need this utility if you are writing CD media:

* cdrecord

or this utility if you are writing DVD media:

* growisofs

All of these utilities are common and are easy to find for almost any UNIX-like operating system.

Installing the Source Package

Python source packages are fairly easy to install. They are distributed as .tar.gz files which contain Python source code, a manifest and an installation script called setup.py. Once you have downloaded the source package from the Cedar Solutions website, ^[15] untar it:

$ zcat CedarBackup3-3.0.0.tar.gz | tar xvf -

This will create a directory called (in this case) CedarBackup3-3.0.0. The version number in the directory will always match the version number in the filename. If you have root access and want to install the package to the "standard" Python location on your system, then you can install the package in two simple steps:

$ cd CedarBackup3-3.0.0
$ python3 setup.py install

Make sure that you are using Python 3.4 or better to execute setup.py. You may also wish to run the unit tests before actually installing anything. Run them like so:

$ python3 util/test.py

If any unit test reports a failure on your system, please email me the output from the unit test, so I can fix the problem. ^[17] This is particularly important for non-Linux platforms where I do not have a test system available to me. Some users might want to choose a different install location or change other install parameters.
To get more information about how setup.py works, use the --help option:

$ python3 setup.py --help
$ python3 setup.py install --help

In any case, once the package has been installed, you can proceed to configuration as described in Chapter 5, Configuration.

-------------------------------------------------------------------------------

^[15] See http://cedar-solutions.com/debian.html
^[16] See http://docs.python.org/lib/module-distutils.html.
^[17]

Chapter 4. Command Line Tools

Table of Contents

Overview
The cback3 command
    Introduction
    Syntax
    Switches
    Actions
The cback3-amazons3-sync command
    Introduction
    Syntax
    Switches
The cback3-span command
    Introduction
    Syntax
    Switches
    Using cback3-span
    Sample run

Overview

Cedar Backup comes with three command-line programs: cback3, cback3-amazons3-sync, and cback3-span. The cback3 command is the primary command line interface and the only Cedar Backup program that most users will ever need. The cback3-amazons3-sync tool is used for synchronizing entire directories of files up to an Amazon S3 cloud storage bucket, outside of the normal Cedar Backup process. Users who have a lot of data to back up (more than will fit on a single CD or DVD) can use the interactive cback3-span tool to split their data between multiple discs.

The cback3 command

Introduction

Cedar Backup's primary command-line interface is the cback3 command. It controls the entire backup process.
Syntax

The cback3 command has the following syntax:

Usage: cback3 [switches] action(s)

The following switches are accepted:

   -h, --help          Display this usage/help listing
   -V, --version       Display version information
   -b, --verbose       Print verbose output as well as logging to disk
   -q, --quiet         Run quietly (display no output to the screen)
   -c, --config        Path to config file (default: /etc/cback3.conf)
   -f, --full          Perform a full backup, regardless of configuration
   -M, --managed       Include managed clients when executing actions
   -N, --managed-only  Include ONLY managed clients when executing actions
   -l, --logfile       Path to logfile (default: /var/log/cback3.log)
   -o, --owner         Logfile ownership, user:group (default: root:adm)
   -m, --mode          Octal logfile permissions mode (default: 640)
   -O, --output        Record some sub-command (i.e. cdrecord) output to the log
   -d, --debug         Write debugging information to the log (implies --output)
   -s, --stack         Dump a Python stack trace instead of swallowing exceptions
   -D, --diagnostics   Print runtime diagnostics to the screen and exit

The following actions may be specified:

   all                 Take all normal actions (collect, stage, store, purge)
   collect             Take the collect action
   stage               Take the stage action
   store               Take the store action
   purge               Take the purge action
   rebuild             Rebuild "this week's" disc if possible
   validate            Validate configuration only
   initialize          Initialize media for use with Cedar Backup

You may also specify extended actions that have been defined in configuration. You must specify at least one action to take. More than one of the "collect", "stage", "store" or "purge" actions and/or extended actions may be specified in any arbitrary order; they will be executed in a sensible order. The "all", "rebuild", "validate", and "initialize" actions may not be combined with other actions.

Note that the all action only executes the standard four actions. It never executes any of the configured extensions. ^[18]

Switches

-h, --help
    Display usage/help listing.
-V, --version
    Display version information.

-b, --verbose
    Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen.

-q, --quiet
    Run quietly (display no output to the screen).

-c, --config
    Specify the path to an alternate configuration file. The default configuration file is /etc/cback3.conf.

-f, --full
    Perform a full backup, regardless of configuration. For the collect action, this means that any existing information related to incremental backups will be ignored and rewritten; for the store action, this means that a new disc will be started.

-M, --managed
    Include managed clients when executing actions. If the action being executed is listed as a managed action for a managed client, execute the action on that client after executing the action locally.

-N, --managed-only
    Include only managed clients when executing actions. If the action being executed is listed as a managed action for a managed client, execute the action on that client, but do not execute the action locally.

-l, --logfile
    Specify the path to an alternate logfile. The default logfile is /var/log/cback3.log.

-o, --owner
    Specify the ownership of the logfile, in the form user:group. The default ownership is root:adm, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback3 command is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values.

-m, --mode
    Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is 0640 (-rw-r-----). This value will only be used when creating a new logfile. If the logfile already exists when the cback3 command is executed, it will retain its existing ownership and mode.

-O, --output
    Record some sub-command output to the logfile. 
When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference.

-d, --debug
    Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the --output option, as well.

-s, --stack
    Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report.

-D, --diagnostics
    Display runtime diagnostic information and then exit. This diagnostic information is often useful when filing a bug report.

Actions

You can find more information about the various actions in the section called "The Backup Process" (in Chapter 2, Basic Concepts). In general, you may specify any combination of the collect, stage, store or purge actions, and the specified actions will be executed in a sensible order. Or, you can specify one of the all, rebuild, validate, or initialize actions (but these actions may not be combined with other actions).

If you have configured any Cedar Backup extensions, then the actions associated with those extensions may also be specified on the command line. If you specify any other actions along with an extended action, the actions will be executed in a sensible order per configuration. The all action never executes extended actions, however.

The cback3-amazons3-sync command

Introduction

The cback3-amazons3-sync tool is used for synchronizing entire directories of files up to an Amazon S3 cloud storage bucket, outside of the normal Cedar Backup process. This might be a good option for some types of data, as long as you understand the limitations around retrieving previous versions of objects that get modified or deleted as part of a sync. 
S3 does support versioning, but it won't be quite as easy to get at those previous versions as with an explicit incremental backup like cback3 provides. Cedar Backup does not provide any tooling that would help you retrieve previous versions.

The underlying functionality relies on the AWS CLI toolset. Before you use this extension, you need to set up your Amazon S3 account and configure AWS CLI as detailed in Amazon's setup guide. The aws command will be executed as the same user that is executing the cback3-amazons3-sync command, so make sure you configure it as the proper user. (This is different from the amazons3 extension, which is designed to execute as root and switches over to the configured backup user to execute AWS CLI commands.)

Syntax

The cback3-amazons3-sync command has the following syntax:

Usage: cback3-amazons3-sync [switches] sourceDir s3BucketUrl

Cedar Backup Amazon S3 sync tool.

This Cedar Backup utility synchronizes a local directory to an Amazon S3 bucket. After the sync is complete, a validation step is taken. An error is reported if the contents of the bucket do not match the source directory, or if the indicated size for any file differs. This tool is a wrapper over the AWS CLI command-line tool.

The following arguments are required:

  sourceDir    The local source directory on disk (must exist)
  s3BucketUrl  The URL to the target Amazon S3 bucket

The following switches are accepted:

  -h, --help            Display this usage/help listing
  -V, --version         Display version information
  -b, --verbose         Print verbose output as well as logging to disk
  -q, --quiet           Run quietly (display no output to the screen)
  -l, --logfile         Path to logfile (default: /var/log/cback3.log)
  -o, --owner           Logfile ownership, user:group (default: root:adm)
  -m, --mode            Octal logfile permissions mode (default: 640)
  -O, --output          Record some sub-command (i.e. 
aws) output to the log
  -d, --debug           Write debugging information to the log (implies --output)
  -s, --stack           Dump Python stack trace instead of swallowing exceptions
  -D, --diagnostics     Print runtime diagnostics to the screen and exit
  -v, --verifyOnly      Only verify the S3 bucket contents, do not make changes
  -w, --ignoreWarnings  Ignore warnings about problematic filename encodings

Typical usage would be something like:

  cback3-amazons3-sync /home/myuser s3://example.com-backup/myuser

This will sync the contents of /home/myuser into the indicated bucket.

Switches

-h, --help
    Display usage/help listing.

-V, --version
    Display version information.

-b, --verbose
    Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen.

-q, --quiet
    Run quietly (display no output to the screen).

-l, --logfile
    Specify the path to an alternate logfile. The default logfile is /var/log/cback3.log.

-o, --owner
    Specify the ownership of the logfile, in the form user:group. The default ownership is root:adm, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback3-amazons3-sync command is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values.

-m, --mode
    Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is 0640 (-rw-r-----). This value will only be used when creating a new logfile. If the logfile already exists when the cback3-amazons3-sync command is executed, it will retain its existing ownership and mode.

-O, --output
    Record some sub-command output to the logfile. When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference.

-d, --debug
    Write debugging information to the logfile. 
This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the --output option, as well.

-s, --stack
    Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report.

-D, --diagnostics
    Display runtime diagnostic information and then exit. This diagnostic information is often useful when filing a bug report.

-v, --verifyOnly
    Only verify the S3 bucket contents against the directory on disk. Do not make any changes to the S3 bucket or transfer any files. This is intended as a quick check to see whether the sync is up-to-date. Although no files are transferred, the tool will still execute the source filename encoding check, discussed below along with --ignoreWarnings.

-w, --ignoreWarnings
    The AWS CLI S3 sync process is very picky about filename encoding. Files that the Linux filesystem handles with no problems can cause problems in S3 if the filename cannot be encoded properly in your configured locale. As of this writing, filenames like this will cause the sync process to abort without transferring all files as expected. To avoid confusion, the cback3-amazons3-sync tool tries to guess which files in the source directory will cause problems, and refuses to execute the AWS CLI S3 sync if any problematic files exist. If you'd rather proceed anyway, use --ignoreWarnings.

    If problematic files are found, then you have basically two options: either correct your locale (e.g. if you have set LANG=C) or rename the file so it can be encoded properly in your locale. The error messages will tell you the expected encoding (from your locale) and the actual detected encoding for the filename.

The cback3-span command

Introduction

Cedar Backup was designed around (and is still primarily focused on) weekly backups to a single CD or DVD. Most users who back up more data than fits on a single disc seem to stop their backup process at the stage step, using Cedar Backup as an easy way to collect data. However, some users have expressed a need to write these large kinds of backups to disc, if not every day, then at least occasionally. The cback3-span tool was written to meet those needs.

If you have staged more data than fits on a single CD or DVD, you can use cback3-span to split that data between multiple discs.

cback3-span is not a general-purpose disc-splitting tool. It is a specialized program that requires Cedar Backup configuration to run. All it can do is read Cedar Backup configuration, find any staging directories that have not yet been written to disc, and split the files in those directories between discs.

cback3-span accepts many of the same command-line options as cback3, but must be run interactively. It cannot be run from cron. This is intentional. It is intended to be a useful tool, not a new part of the backup process (that is the purpose of an extension).

In order to use cback3-span, you must configure your backup such that the largest individual backup file can fit on a single disc. The command will not split a single file onto more than one disc. All it can do is split large directories onto multiple discs. Files in those directories will be arbitrarily split up so that space is utilized most efficiently.

Syntax

The cback3-span command has the following syntax:

Usage: cback3-span [switches]

Cedar Backup 'span' tool.

This Cedar Backup utility spans staged data between multiple discs. It is a utility, not an extension, and requires user interaction. 
The following switches are accepted, mostly to set up underlying Cedar Backup functionality:

  -h, --help         Display this usage/help listing
  -V, --version      Display version information
  -b, --verbose      Print verbose output as well as logging to disk
  -c, --config       Path to config file (default: /etc/cback3.conf)
  -l, --logfile      Path to logfile (default: /var/log/cback3.log)
  -o, --owner        Logfile ownership, user:group (default: root:adm)
  -m, --mode         Octal logfile permissions mode (default: 640)
  -O, --output       Record some sub-command (i.e. cdrecord) output to the log
  -d, --debug        Write debugging information to the log (implies --output)
  -s, --stack        Dump a Python stack trace instead of swallowing exceptions

Switches

-h, --help
    Display usage/help listing.

-V, --version
    Display version information.

-b, --verbose
    Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen.

-c, --config
    Specify the path to an alternate configuration file. The default configuration file is /etc/cback3.conf.

-l, --logfile
    Specify the path to an alternate logfile. The default logfile is /var/log/cback3.log.

-o, --owner
    Specify the ownership of the logfile, in the form user:group. The default ownership is root:adm, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback3 command is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values.

-m, --mode
    Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is 0640 (-rw-r-----). This value will only be used when creating a new logfile. If the logfile already exists when the cback3 command is executed, it will retain its existing ownership and mode.

-O, --output
    Record some sub-command output to the logfile. 
When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference. Cedar Backup uses system commands mostly for dealing with the CD/DVD recorder and its media.

-d, --debug
    Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the --output option, as well.

-s, --stack
    Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report.

Using cback3-span

As discussed above, cback3-span is an interactive command. It cannot be run from cron.

You can typically use the default answer for most questions. The only two questions for which you may not want the default answer are the fit algorithm and the cushion percentage.

The cushion percentage is used by cback3-span to determine what capacity to shoot for when splitting up your staging directories. A 650 MB disc does not hold a full 650 MB of data; it's usually more like 627 MB. The cushion percentage tells cback3-span how much overhead to reserve for the filesystem. The default of 4% is usually OK, but if you have problems you may need to increase it slightly.

The fit algorithm tells cback3-span how it should determine which items should be placed on each disc. If you don't like the result from one algorithm, you can reject that solution and choose a different algorithm. The four available fit algorithms are:

worst
    The worst-fit algorithm.

    The worst-fit algorithm proceeds through a sorted list of items (sorted from smallest to largest) until running out of items or meeting capacity exactly. 
    If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the maximum number of items possible in its search for optimal capacity utilization. It tends to be somewhat slower than either the best-fit or alternate-fit algorithm, probably because on average it has to look at more items before completing.

best
    The best-fit algorithm.

    The best-fit algorithm proceeds through a sorted list of items (sorted from largest to smallest) until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the minimum number of items possible in its search for optimal capacity utilization. For large lists of mixed-size items, it's not unusual to see the algorithm achieve 100% capacity utilization by including fewer than 1% of the items. Probably because it often has to look at fewer of the items before completing, it tends to be a little faster than the worst-fit or alternate-fit algorithms.

first
    The first-fit algorithm.

    The first-fit algorithm proceeds through an unsorted list of items until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. This algorithm generally performs more poorly than the other algorithms both in terms of capacity utilization and item utilization, but can be as much as an order of magnitude faster on large lists of items because it doesn't require any sorting.

alternate
    A hybrid algorithm that I call alternate-fit.

    This algorithm tries to balance small and large items to achieve better end-of-disk performance. Instead of just working one direction through a list, it alternately works from the start and end of a sorted list (sorted from smallest to largest), throwing away any item which causes capacity to be exceeded. 
    The algorithm tends to be slower than the best-fit and first-fit algorithms, and slightly faster than the worst-fit algorithm, probably because of the number of items it considers on average before completing. It often achieves slightly better capacity utilization than the worst-fit algorithm, while including slightly fewer items.

Sample run

Below is a log showing a sample cback3-span run.

================================================
           Cedar Backup 'span' tool
================================================

This is the Cedar Backup span tool. It is used to split up staging data when that staging data does not fit onto a single disc.

This utility operates using Cedar Backup configuration. Configuration specifies which staging directory to look at and which writer device and media type to use.

Continue? [Y/n]:
===

Cedar Backup store configuration looks like this:

   Source Directory...: /tmp/staging
   Media Type.........: cdrw-74
   Device Type........: cdwriter
   Device Path........: /dev/cdrom
   Device SCSI ID.....: None
   Drive Speed........: None
   Check Data Flag....: True
   No Eject Flag......: False

Is this OK? [Y/n]:
===

Please wait, indexing the source directory (this may take a while)...
===

The following daily staging directories have not yet been written to disc:

   /tmp/staging/2007/02/07
   /tmp/staging/2007/02/08
   /tmp/staging/2007/02/09
   /tmp/staging/2007/02/10
   /tmp/staging/2007/02/11
   /tmp/staging/2007/02/12
   /tmp/staging/2007/02/13
   /tmp/staging/2007/02/14

The total size of the data in these directories is 1.00 GB.

Continue? [Y/n]:
===

Based on configuration, the capacity of your media is 650.00 MB.

Since estimates are not perfect and there is some uncertainty in media capacity calculations, it is good to have a "cushion", a percentage of capacity to set aside. The cushion reduces the capacity of your media, so a 1.5% cushion leaves 98.5% remaining.

What cushion percentage? [4.00]:
===

The real capacity, taking into account the 4.00% cushion, is 627.25 MB. 
It will take at least 2 disc(s) to store your 1.00 GB of data.

Continue? [Y/n]:
===

Which algorithm do you want to use to span your data across multiple discs? The following algorithms are available:

   first....: The "first-fit" algorithm
   best.....: The "best-fit" algorithm
   worst....: The "worst-fit" algorithm
   alternate: The "alternate-fit" algorithm

If you don't like the results you will have a chance to try a different one later.

Which algorithm? [worst]:
===

Please wait, generating file lists (this may take a while)...
===

Using the "worst-fit" algorithm, Cedar Backup can split your data into 2 discs.

Disc 1: 246 files, 615.97 MB, 98.20% utilization
Disc 2: 8 files, 412.96 MB, 65.84% utilization

Accept this solution? [Y/n]: n
===

Which algorithm do you want to use to span your data across multiple discs? The following algorithms are available:

   first....: The "first-fit" algorithm
   best.....: The "best-fit" algorithm
   worst....: The "worst-fit" algorithm
   alternate: The "alternate-fit" algorithm

If you don't like the results you will have a chance to try a different one later.

Which algorithm? [worst]: alternate
===

Please wait, generating file lists (this may take a while)...
===

Using the "alternate-fit" algorithm, Cedar Backup can split your data into 2 discs.

Disc 1: 73 files, 627.25 MB, 100.00% utilization
Disc 2: 181 files, 401.68 MB, 64.04% utilization

Accept this solution? [Y/n]: y
===

Please place the first disc in your backup device. Press return when ready.
===

Initializing image...
Writing image to disc...

-------------------------------------------------------------------------------
^[18] Some users find this surprising, because extensions are configured with sequence numbers. I did it this way because I felt that running extensions as part of the all action would sometimes result in "surprising" behavior. Better to be definitive than confusing. 
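To make the cushion and fit-algorithm concepts above concrete, here is a minimal Python sketch. This is an illustrative simplification, not Cedar Backup's actual implementation: it computes a cushioned capacity and then fills a single disc using best-fit logic (largest items first, discarding anything that would exceed capacity). The capacity and file sizes are made-up example values.

```python
def cushioned_capacity(media_mb: float, cushion_percent: float) -> float:
    """Reduce raw media capacity by a filesystem-overhead cushion."""
    return media_mb * (1.0 - cushion_percent / 100.0)

def best_fit(items_mb: list[float], capacity_mb: float) -> list[float]:
    """Fill one disc: walk items from largest to smallest, discarding
    any item that would push the running total past capacity."""
    chosen, used = [], 0.0
    for size in sorted(items_mb, reverse=True):
        if used + size <= capacity_mb:
            chosen.append(size)
            used += size
    return chosen

# A 650 MB disc with the default 4% cushion leaves ~624 MB of usable space.
capacity = cushioned_capacity(650.0, 4.0)
files = [300.0, 250.0, 200.0, 90.0, 60.0, 10.0]
disc1 = best_fit(files, capacity)   # [300.0, 250.0, 60.0, 10.0]
```

A worst-fit variant would simply sort ascending instead of descending, and first-fit would skip the sort entirely, matching the trade-offs described in the algorithm list above.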
Chapter 5. Configuration

Table of Contents

Overview
Configuration File Format
Sample Configuration File
Reference Configuration
Options Configuration
Peers Configuration
Collect Configuration
Stage Configuration
Store Configuration
Purge Configuration
Extensions Configuration
Setting up a Pool of One
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure your writer device.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test your backup.
    Step 9: Modify the backup cron jobs.
Setting up a Client Peer Node
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure the master in your backup pool.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test your backup.
    Step 9: Modify the backup cron jobs.
Setting up a Master Peer Node
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure your writer device.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test connectivity to client machines.
    Step 9: Test your backup.
    Step 10: Modify the backup cron jobs.
Configuring your Writer Device
    Device Types
    Devices identified by device name
    Devices identified by SCSI id
    Linux Notes
    Finding your Linux CD Writer
    Mac OS X Notes
    Optimized Blanking Strategy

Overview

Configuring Cedar Backup is unfortunately somewhat complicated. The good news is that once you get through the initial configuration process, you'll hardly ever have to change anything. Even better, the most typical changes (e.g. adding and removing directories from a backup) are easy. 
First, familiarize yourself with the concepts in Chapter 2, Basic Concepts. In particular, be sure that you understand the differences between a master and a client. (If you only have one machine, then your machine will act as both a master and a client, and we'll refer to your setup as a pool of one.) Then, install Cedar Backup per the instructions in Chapter 3, Installation.

Once everything has been installed, you are ready to begin configuring Cedar Backup. Look over the section called "The cback3 command" (in Chapter 4, Command Line Tools) to become familiar with the command line interface. Then, look over the section called "Configuration File Format" (below) and create a configuration file for each peer in your backup pool. To start with, create a very simple configuration file, then expand it later. Decide now whether you will store the configuration file in the standard place (/etc/cback3.conf) or in some other location.

After you have all of the configuration files in place, configure each of your machines, following the instructions in the appropriate section below (for master, client or pool of one). Since the master and client(s) must communicate over the network, you won't be able to fully configure the master without configuring each client and vice-versa. The instructions are clear on what needs to be done.

Which Platform?

Cedar Backup has been designed for use on all UNIX-like systems. However, since it was developed on a Debian GNU/Linux system, and because I am a Debian developer, the packaging is prettier and the setup is somewhat simpler on a Debian system than on a system where you install from source.

The configuration instructions below have been generalized so they should work well regardless of what platform you are running (e.g. RedHat, Gentoo or FreeBSD). If instructions vary for a particular platform, you will find a note related to that platform. 
I am always open to adding more platform-specific hints and notes, so write me if you find problems with these instructions.

Configuration File Format

Cedar Backup is configured through an XML ^[19] configuration file, usually called /etc/cback3.conf. The configuration file contains the following sections: reference, options, collect, stage, store, purge and extensions.

All configuration files must contain the two general configuration sections, the reference section and the options section. Besides that, administrators need only configure actions they intend to use. For instance, on a client machine, administrators will generally only configure the collect and purge sections, while on a master machine they will have to configure all four action-related sections. ^[20] The extensions section is always optional and can be omitted unless extensions are in use.

Note

Even though the Mac OS X (darwin) filesystem is not case-sensitive, Cedar Backup configuration is generally case-sensitive on that platform, just like on all other platforms. For instance, even though the files "Ken" and "ken" might be the same on the Mac OS X filesystem, an exclusion in Cedar Backup configuration for "ken" will only match the file if it is actually on the filesystem with a lower-case "k" as its first letter. This won't surprise the typical UNIX user, but might surprise someone who's gotten into the "Mac Mindset".

Sample Configuration File

Both the Python source distribution and the Debian package come with a sample configuration file. The Debian package includes its sample in /usr/share/doc/cedar-backup3/examples/cback3.conf.sample.

This is a sample configuration file similar to the one provided in the source package. Documentation below provides more information about each of the individual configuration sections. (The XML markup of the full sample file did not survive conversion to plain text; refer to cback3.conf.sample in the distribution for the complete example.)

Reference Configuration

The reference configuration section contains free-text elements that exist only for reference. The section itself is required, but the individual elements may be left blank if desired.

This is an example reference configuration section:

<reference>
   <author>Kenneth J. Pronovici</author>
   <revision>Revision 1.3</revision>
   <description>Sample</description>
   <generator>Yet to be Written Config Tool (tm)</generator>
</reference>

The following elements are part of the reference configuration section:

author
    Author of the configuration file.
    Restrictions: None

revision
    Revision of the configuration file.
    Restrictions: None

description
    Description of the configuration file.
    Restrictions: None

generator
    Tool that generated the configuration file, if any.
    Restrictions: None

Options Configuration

The options configuration section contains configuration options that are not specific to any one action.

This is an example options configuration section:

<options>
   <starting_day>tuesday</starting_day>
   <working_dir>/opt/backup/tmp</working_dir>
   <backup_user>backup</backup_user>
   <backup_group>backup</backup_group>
   <rcp_command>/usr/bin/scp -B</rcp_command>
   <rsh_command>/usr/bin/ssh</rsh_command>
   <cback_command>/usr/bin/cback</cback_command>
   <managed_actions>collect, purge</managed_actions>
   <override>
      <command>cdrecord</command>
      <abs_path>/opt/local/bin/cdrecord</abs_path>
   </override>
   <override>
      <command>mkisofs</command>
      <abs_path>/opt/local/bin/mkisofs</abs_path>
   </override>
   <pre_action_hook>
      <action>collect</action>
      <command>echo "I AM A PRE-ACTION HOOK RELATED TO COLLECT"</command>
   </pre_action_hook>
   <post_action_hook>
      <action>collect</action>
      <command>echo "I AM A POST-ACTION HOOK RELATED TO COLLECT"</command>
   </post_action_hook>
</options>

The following elements are part of the options configuration section:

starting_day
    Day that starts the week. Cedar Backup is built around the idea of weekly backups. The starting day of week is the day that media will be rebuilt from scratch and that incremental backup information will be cleared.
    Restrictions: Must be a day of the week in English, i.e. monday, tuesday, etc. The validation is case-sensitive.

working_dir
    Working (temporary) directory to use for backups. 
    This directory is used for writing temporary files, such as tar files or ISO filesystem images, as they are being built. It is also used to store day-to-day information about incremental backups. The working directory should contain enough free space to hold temporary tar files (on a client) or to build an ISO filesystem image (on a master).
    Restrictions: Must be an absolute path

backup_user
    Effective user that backups should run as. This user must exist on the machine which is being configured and should not be root (although that restriction is not enforced). This value is also used as the default remote backup user for remote peers.
    Restrictions: Must be non-empty

backup_group
    Effective group that backups should run as. This group must exist on the machine which is being configured, and should not be root or some other "powerful" group (although that restriction is not enforced).
    Restrictions: Must be non-empty

rcp_command
    Default rcp-compatible copy command for staging. The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it the -B option, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B. This value is used as the default value for all remote peers. Technically, this value is not needed by clients, but we require it for all config files anyway.
    Restrictions: Must be non-empty

rsh_command
    Default rsh-compatible command to use for remote shells. The rsh command should be the exact command used for remote shells, including any required options. This value is used as the default value for all managed clients. It is optional, because it is only used when executing actions on managed clients. However, each managed client must either be able to read the value from options configuration or must set the value explicitly. 
    Restrictions: Must be non-empty

cback_command
    Default cback-compatible command to use on managed remote clients. The cback command should be the exact command used for executing cback on a remote managed client, including any required command-line options. Do not list any actions in the command line, and do not include the --full command-line option. This value is used as the default value for all managed clients. It is optional, because it is only used when executing actions on managed clients. However, each managed client must either be able to read the value from options configuration or must set the value explicitly.

    Note: if this command-line is complicated, it is often better to create a simple shell script on the remote host to encapsulate all of the options. Then, just reference the shell script in configuration.

    Restrictions: Must be non-empty

managed_actions
    Default set of actions that are managed on remote clients. This is a comma-separated list of actions that the master will manage on behalf of remote clients. Typically, it would include only collect-like actions and purge. This value is used as the default value for all managed clients. It is optional, because it is only used when executing actions on managed clients. However, each managed client must either be able to read the value from options configuration or must set the value explicitly.

    Restrictions: Must be non-empty.

override
    Command to override with a customized path. This is a subsection which contains a command to override with a customized path. This functionality would be used if root's $PATH does not include a particular required command, or if there is a need to use a version of a command that is different than the one listed on the $PATH. Most users will only use this section when directed to, in order to fix a problem. This section is optional, and can be repeated as many times as necessary. 
This subsection must contain the following two fields:

command
Name of the command to be overridden, i.e. "cdrecord".
Restrictions: Must be a non-empty string.

abs_path
The absolute path where the overridden command can be found.
Restrictions: Must be an absolute path.

pre_action_hook
Hook configuring a command to be executed before an action. This is a subsection which configures a command to be executed immediately before a named action. It provides a way for administrators to associate their own custom functionality with standard Cedar Backup actions or with arbitrary extensions. This section is optional, and can be repeated as many times as necessary. This subsection must contain the following two fields:

action
Name of the Cedar Backup action that the hook is associated with. The action can be a standard backup action (collect, stage, etc.) or can be an extension action. No validation is done to ensure that the configured action actually exists.
Restrictions: Must be a non-empty string.

command
Name of the command to be executed. This item can either specify the path to a shell script of some sort (the recommended approach) or can include a complete shell command.
Note: if you choose to provide a complete shell command rather than the path to a script, you need to be aware of some limitations of Cedar Backup's command-line parser. You cannot use a subshell (via the `command` or $(command) syntaxes) or any shell variable in your command line. Additionally, the command-line parser only recognizes the double-quote character (") to delimit groupings or strings on the command line. The bottom line is, you are probably best off writing a shell script of some sort for anything more sophisticated than very simple shell commands.
Restrictions: Must be a non-empty string.

post_action_hook
Hook configuring a command to be executed after an action. This is a subsection which configures a command to be executed immediately after a named action.
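A pre-action hook as described above might be sketched like this (the script path is a hypothetical example, following the recommendation to wrap complicated commands in a shell script):

```xml
<pre_action_hook>
   <action>collect</action>
   <command>/opt/backup/bin/pre-collect.sh</command>
</pre_action_hook>
```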
It provides a way for administrators to associate their own custom functionality with standard Cedar Backup actions or with arbitrary extensions. This section is optional, and can be repeated as many times as necessary. This subsection must contain the following two fields:

action
Name of the Cedar Backup action that the hook is associated with. The action can be a standard backup action (collect, stage, etc.) or can be an extension action. No validation is done to ensure that the configured action actually exists.
Restrictions: Must be a non-empty string.

command
Name of the command to be executed. This item can either specify the path to a shell script of some sort (the recommended approach) or can include a complete shell command.
Note: if you choose to provide a complete shell command rather than the path to a script, you need to be aware of some limitations of Cedar Backup's command-line parser. You cannot use a subshell (via the `command` or $(command) syntaxes) or any shell variable in your command line. Additionally, the command-line parser only recognizes the double-quote character (") to delimit groupings or strings on the command line. The bottom line is, you are probably best off writing a shell script of some sort for anything more sophisticated than very simple shell commands.
Restrictions: Must be a non-empty string.

Peers Configuration

The peers configuration section contains a list of the peers managed by a master. This section is only required on a master. This is an example peers configuration section:

   <peers>
      <peer>
         <name>machine1</name>
         <type>local</type>
         <collect_dir>/opt/backup/collect</collect_dir>
      </peer>
      <peer>
         <name>machine2</name>
         <type>remote</type>
         <backup_user>backup</backup_user>
         <collect_dir>/opt/backup/collect</collect_dir>
         <ignore_failures>all</ignore_failures>
      </peer>
      <peer>
         <name>machine3</name>
         <type>remote</type>
         <managed>Y</managed>
         <backup_user>backup</backup_user>
         <collect_dir>/opt/backup/collect</collect_dir>
         <rcp_command>/usr/bin/scp</rcp_command>
         <rsh_command>/usr/bin/ssh</rsh_command>
         <cback_command>/usr/bin/cback</cback_command>
         <managed_actions>collect, purge</managed_actions>
      </peer>
   </peers>

The following elements are part of the peers configuration section:

peer (local version)
Local client peer in a backup pool. This is a subsection which contains information about a specific local client peer managed by a master.
This section can be repeated as many times as is necessary. At least one remote or local peer must be configured. The local peer subsection must contain the following fields:

name
Name of the peer, typically a valid hostname. For local peers, this value is only used for reference. However, it is good practice to list the peer's hostname here, for consistency with remote peers.
Restrictions: Must be non-empty, and unique among all peers.

type
Type of this peer. This value identifies the type of the peer. For a local peer, it must always be local.
Restrictions: Must be local.

collect_dir
Collect directory to stage from for this peer. The master will copy all files in this directory into the appropriate staging directory. Since this is a local peer, the directory is assumed to be reachable via normal filesystem operations (i.e. cp).
Restrictions: Must be an absolute path.

ignore_failures
Ignore failure mode for this peer. The ignore failure mode indicates whether "not ready to be staged" errors should be ignored for this peer. This option is intended to be used for peers that are up only intermittently, to cut down on the number of error emails received by the Cedar Backup administrator. The "none" mode means that all errors will be reported. This is the default behavior. The "all" mode means to ignore all failures. The "weekly" mode means to ignore failures for a start-of-week or full backup. The "daily" mode means to ignore failures for any backup that is not either a full backup or a start-of-week backup.
Restrictions: If set, must be one of "none", "all", "daily", or "weekly".

peer (remote version)
Remote client peer in a backup pool. This is a subsection which contains information about a specific remote client peer managed by a master. A remote peer is one which can be reached via an rsh-based network call. This section can be repeated as many times as is necessary. At least one remote or local peer must be configured.
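A local peer subsection using the ignore-failures mechanism described above might look like this sketch (the hostname and path are examples; "weekly" is one of the documented modes):

```xml
<peer>
   <name>machine1</name>
   <type>local</type>
   <collect_dir>/opt/backup/collect</collect_dir>
   <ignore_failures>weekly</ignore_failures>
</peer>
```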
The remote peer subsection must contain the following fields:

name
Hostname of the peer. For remote peers, this must be a valid DNS hostname or IP address which can be resolved during an rsh-based network call.
Restrictions: Must be non-empty, and unique among all peers.

type
Type of this peer. This value identifies the type of the peer. For a remote peer, it must always be remote.
Restrictions: Must be remote.

managed
Indicates whether this peer is managed. A managed peer (or managed client) is a peer for which the master manages all of the backup activities via a remote shell. This field is optional. If it doesn't exist, then N will be assumed.
Restrictions: Must be a boolean (Y or N).

collect_dir
Collect directory to stage from for this peer. The master will copy all files in this directory into the appropriate staging directory. Since this is a remote peer, the directory is assumed to be reachable via rsh-based network operations (i.e. scp or the configured rcp command).
Restrictions: Must be an absolute path.

ignore_failures
Ignore failure mode for this peer. The ignore failure mode indicates whether "not ready to be staged" errors should be ignored for this peer. This option is intended to be used for peers that are up only intermittently, to cut down on the number of error emails received by the Cedar Backup administrator. The "none" mode means that all errors will be reported. This is the default behavior. The "all" mode means to ignore all failures. The "weekly" mode means to ignore failures for a start-of-week or full backup. The "daily" mode means to ignore failures for any backup that is not either a full backup or a start-of-week backup.
Restrictions: If set, must be one of "none", "all", "daily", or "weekly".

backup_user
Name of backup user on the remote peer. This username will be used when copying files from the remote peer via an rsh-based network connection. This field is optional.
If it doesn't exist, the backup will use the default backup user from the options section.
Restrictions: Must be non-empty.

rcp_command
The rcp-compatible copy command for this peer. The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it the -B option, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B. This field is optional. If it doesn't exist, the backup will use the default rcp command from the options section.
Restrictions: Must be non-empty.

rsh_command
The rsh-compatible command for this peer. The rsh command should be the exact command used for remote shells, including any required options. This value only applies if the peer is managed. This field is optional. If it doesn't exist, the backup will use the default rsh command from the options section.
Restrictions: Must be non-empty.

cback_command
The cback-compatible command for this peer. The cback command should be the exact command used for executing cback on the peer as part of a managed backup. This value must include any required command-line options. Do not list any actions in the command line, and do not include the --full command-line option. This value only applies if the peer is managed. This field is optional. If it doesn't exist, the backup will use the default cback command from the options section.
Note: if this command line is complicated, it is often better to create a simple shell script on the remote host to encapsulate all of the options. Then, just reference the shell script in configuration.
Restrictions: Must be non-empty.

managed_actions
Set of actions that are managed for this peer. This is a comma-separated list of actions that the master will manage on behalf of this peer. Typically, it would include only collect-like actions and purge. This value only applies if the peer is managed. This field is optional.
If it doesn't exist, the backup will use the default list of managed actions from the options section.
Restrictions: Must be non-empty.

Collect Configuration

The collect configuration section contains configuration options related to the collect action. This section contains a variable number of elements, including an optional exclusion section and a repeating subsection used to specify which directories and/or files to collect. You can also configure an ignore indicator file, which lets users mark their own directories as not backed up.

Using a Link Farm

Sometimes, it's not very convenient to list directories one by one in the Cedar Backup configuration file. For instance, when backing up your home directory, you often exclude as many directories as you include. The ignore file mechanism can be of some help, but it still isn't very convenient if there are a lot of directories to ignore (or if new directories pop up all of the time). In this situation, one option is to use a link farm rather than listing all of the directories in configuration. A link farm is a directory that contains nothing but a set of soft links to other files and directories. Normally, Cedar Backup does not follow soft links, but you can override this behavior for individual directories using the link_depth and dereference options (see below). When using a link farm, you still have to deal with each backed-up directory individually, but you don't have to modify configuration. Some users find that this works better for them.

In order to actually execute the collect action, you must have configured at least one collect directory or one collect file. However, if you are only including collect configuration for use by an extension, then it's OK to leave out these sections. The validation will take place only when the collect action is executed.
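The link farm approach described above can be sketched in configuration as a single collect directory whose soft links are followed one level deep (the /opt/backup/linkfarm path is a hypothetical example):

```xml
<dir>
   <abs_path>/opt/backup/linkfarm</abs_path>
   <link_depth>1</link_depth>
   <dereference>Y</dereference>
</dir>
```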
This is an example collect configuration section:

   <collect>
      <collect_dir>/opt/backup/collect</collect_dir>
      <collect_mode>daily</collect_mode>
      <archive_mode>targz</archive_mode>
      <ignore_file>.cbignore</ignore_file>
      <exclude>
         <abs_path>/etc</abs_path>
         <pattern>.*\.conf</pattern>
      </exclude>
      <file>
         <abs_path>/home/root/.profile</abs_path>
      </file>
      <dir>
         <abs_path>/etc</abs_path>
      </dir>
      <dir>
         <abs_path>/var/log</abs_path>
         <collect_mode>incr</collect_mode>
      </dir>
      <dir>
         <abs_path>/opt</abs_path>
         <collect_mode>weekly</collect_mode>
         <exclude>
            <abs_path>/opt/large</abs_path>
            <rel_path>backup</rel_path>
            <pattern>.*tmp</pattern>
         </exclude>
      </dir>
   </collect>

The following elements are part of the collect configuration section:

collect_dir
Directory to collect files into. On a client, this is the directory into which tarfiles for individual collect directories are written. The master then stages files from this directory into its own staging directory. This field is always required. It must contain enough free space to collect all of the backed-up files on the machine in a compressed form.
Restrictions: Must be an absolute path.

collect_mode
Default collect mode. The collect mode describes how frequently a directory is backed up. See the section called "The Collect Action" (in Chapter 2, Basic Concepts) for more information. This value is the collect mode that will be used by default during the collect process. Individual collect directories (below) may override this value. If all individual directories provide their own value, then this default value may be omitted from configuration.
Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data.
Restrictions: Must be one of daily, weekly or incr.

archive_mode
Default archive mode for collect files. The archive mode maps to the way that a backup file is stored. A value of tar means just a tarfile (file.tar); a value of targz means a gzipped tarfile (file.tar.gz); and a value of tarbz2 means a bzipped tarfile (file.tar.bz2). This value is the archive mode that will be used by default during the collect process. Individual collect directories (below) may override this value. If all individual directories provide their own value, then this default value may be omitted from configuration.
Restrictions: Must be one of tar, targz or tarbz2.

ignore_file
Default ignore file name. The ignore file is an indicator file.
If it exists in a given directory, then that directory will be recursively excluded from the backup as if it were explicitly excluded in configuration. The ignore file provides a way for individual users (who might not have access to Cedar Backup configuration) to control which of their own directories get backed up. For instance, users with a ~/tmp directory might not want it backed up. If they create an ignore file in their directory (e.g. ~/tmp/.cbignore), then Cedar Backup will ignore it. This value is the ignore file name that will be used by default during the collect process. Individual collect directories (below) may override this value. If all individual directories provide their own value, then this default value may be omitted from configuration.
Restrictions: Must be non-empty.

recursion_level
Recursion level to use when collecting directories. This is an integer value that Cedar Backup will consider when generating archive files for a configured collect directory. Normally, Cedar Backup generates one archive file per collect directory. So, if you collect /etc you get etc.tar.gz. Most of the time, this is what you want. However, you may sometimes wish to generate multiple archive files for a single collect directory. The most obvious example is for /home. By default, Cedar Backup will generate home.tar.gz. If instead, you want one archive file per home directory you can set a recursion level of 1. Cedar Backup will generate home-user1.tar.gz, home-user2.tar.gz, etc. Higher recursion levels (2, 3, etc.) are legal, and it doesn't matter if the configured recursion level is deeper than the directory tree that is being collected. You can use a negative recursion level (like -1) to specify an infinite level of recursion. This will exhaust the tree in the same way as if the recursion level is set too high. This field is optional. If it doesn't exist, the backup will use the default recursion level of zero.
Restrictions: Must be an integer.
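As a sketch of the /home scenario described above, a collect section that produces one archive per home directory might look like this (assuming recursion_level sits at the collect level as a default, per the description above):

```xml
<collect>
   <collect_dir>/opt/backup/collect</collect_dir>
   <recursion_level>1</recursion_level>
   <dir>
      <abs_path>/home</abs_path>
   </dir>
</collect>
```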
exclude
List of paths or patterns to exclude from the backup. This is a subsection which contains a set of absolute paths and patterns to be excluded across all configured directories. For a given directory, the set of absolute paths and patterns to exclude is built from this list and any list that exists on the directory itself. Directories cannot override or remove entries that are in this list, however. This section is optional, and if it exists can also be empty. The exclude subsection can contain one or more of each of the following fields:

abs_path
An absolute path to be recursively excluded from the backup. If a directory is excluded, then all of its children are also recursively excluded. For instance, a value /var/log/apache would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache. This field can be repeated as many times as is necessary.
Restrictions: Must be an absolute path.

pattern
A pattern to be recursively excluded from the backup. The pattern must be a Python regular expression. It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $). If the pattern causes a directory to be excluded, then all of the children of that directory are also recursively excluded. For instance, a value .*apache.* might match the /var/log/apache directory. This would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache. This field can be repeated as many times as is necessary.
Restrictions: Must be non-empty.

file
A file to be collected. This is a subsection which contains information about a specific file to be collected (backed up). This section can be repeated as many times as is necessary. At least one collect directory or collect file must be configured when the collect action is executed.
The collect file subsection contains the following fields:

abs_path
Absolute path of the file to collect.
Restrictions: Must be an absolute path.

collect_mode
Collect mode for this file. The collect mode describes how frequently a file is backed up. See the section called "The Collect Action" (in Chapter 2, Basic Concepts) for more information. This field is optional. If it doesn't exist, the backup will use the default collect mode.
Note: if your backup device does not support multisession discs, then you should probably confine yourself to the daily collect mode, to avoid losing data.
Restrictions: Must be one of daily, weekly or incr.

archive_mode
Archive mode for this file. The archive mode maps to the way that a backup file is stored. A value of tar means just a tarfile (file.tar); a value of targz means a gzipped tarfile (file.tar.gz); and a value of tarbz2 means a bzipped tarfile (file.tar.bz2). This field is optional. If it doesn't exist, the backup will use the default archive mode.
Restrictions: Must be one of tar, targz or tarbz2.

dir
A directory to be collected. This is a subsection which contains information about a specific directory to be collected (backed up). This section can be repeated as many times as is necessary. At least one collect directory or collect file must be configured when the collect action is executed. The collect directory subsection contains the following fields:

abs_path
Absolute path of the directory to collect. The path may be either a directory, a soft link to a directory, or a hard link to a directory. All three are treated the same at this level. The contents of the directory will be recursively collected. The backup will contain all of the files in the directory, as well as the contents of all of the subdirectories within the directory, etc. Soft links within the directory are treated as files, i.e. they are copied verbatim (as a link) and their contents are not backed up.
Restrictions: Must be an absolute path.
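As a sketch of the per-file overrides described above, a collect file subsection with its own collect and archive modes might look like this (the path and mode values are examples):

```xml
<file>
   <abs_path>/home/root/.profile</abs_path>
   <collect_mode>weekly</collect_mode>
   <archive_mode>tar</archive_mode>
</file>
```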
collect_mode
Collect mode for this directory. The collect mode describes how frequently a directory is backed up. See the section called "The Collect Action" (in Chapter 2, Basic Concepts) for more information. This field is optional. If it doesn't exist, the backup will use the default collect mode.
Note: if your backup device does not support multisession discs, then you should probably confine yourself to the daily collect mode, to avoid losing data.
Restrictions: Must be one of daily, weekly or incr.

archive_mode
Archive mode for this directory. The archive mode maps to the way that a backup file is stored. A value of tar means just a tarfile (file.tar); a value of targz means a gzipped tarfile (file.tar.gz); and a value of tarbz2 means a bzipped tarfile (file.tar.bz2). This field is optional. If it doesn't exist, the backup will use the default archive mode.
Restrictions: Must be one of tar, targz or tarbz2.

ignore_file
Ignore file name for this directory. The ignore file is an indicator file. If it exists in a given directory, then that directory will be recursively excluded from the backup as if it were explicitly excluded in configuration. The ignore file provides a way for individual users (who might not have access to Cedar Backup configuration) to control which of their own directories get backed up. For instance, users with a ~/tmp directory might not want it backed up. If they create an ignore file in their directory (e.g. ~/tmp/.cbignore), then Cedar Backup will ignore it. This field is optional. If it doesn't exist, the backup will use the default ignore file name.
Restrictions: Must be non-empty.

link_depth
Link depth value to use for this directory. The link depth is the maximum depth of the tree at which soft links should be followed. So, a depth of 0 does not follow any soft links within the collect directory, a depth of 1 follows only links immediately within the collect directory, a depth of 2 follows the links at the next level down, etc.
This field is optional. If it doesn't exist, the backup will assume a value of zero, meaning that soft links within the collect directory will never be followed.
Restrictions: If set, must be an integer ≥ 0.

dereference
Whether to dereference soft links. If this flag is set, links that are being followed will be dereferenced before being added to the backup. The link will be added (as a link), and then the directory or file that the link points at will be added as well. This value only applies to a directory where soft links are being followed (per the link_depth configuration option). It never applies to a configured collect directory itself, only to other directories within the collect directory. This field is optional. If it doesn't exist, the backup will assume that links should never be dereferenced.
Restrictions: Must be a boolean (Y or N).

exclude
List of paths or patterns to exclude from the backup. This is a subsection which contains a set of paths and patterns to be excluded within this collect directory. This list is combined with the program-wide list to build a complete list for the directory. This section is entirely optional, and if it exists can also be empty. The exclude subsection can contain one or more of each of the following fields:

abs_path
An absolute path to be recursively excluded from the backup. If a directory is excluded, then all of its children are also recursively excluded. For instance, a value /var/log/apache would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache. This field can be repeated as many times as is necessary.
Restrictions: Must be an absolute path.

rel_path
A relative path to be recursively excluded from the backup. The path is assumed to be relative to the collect directory itself. For instance, if the configured directory is /opt/web a configured relative path of something/else would exclude the path /opt/web/something/else.
If a directory is excluded, then all of its children are also recursively excluded. For instance, a value something/else would exclude any files within something/else as well as files within other directories under something/else. This field can be repeated as many times as is necessary.
Restrictions: Must be non-empty.

pattern
A pattern to be excluded from the backup. The pattern must be a Python regular expression. It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $). If the pattern causes a directory to be excluded, then all of the children of that directory are also recursively excluded. For instance, a value .*apache.* might match the /var/log/apache directory. This would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache. This field can be repeated as many times as is necessary.
Restrictions: Must be non-empty.

Stage Configuration

The stage configuration section contains configuration options related to the stage action. The section indicates where data from peers can be staged to. This section can also (optionally) override the list of peers so that not all peers are staged. If you provide any peers in this section, then the list of peers here completely replaces the list of peers in the peers configuration section for the purposes of staging.

This is an example stage configuration section for the simple case where the list of peers is taken from peers configuration:

   <stage>
      <staging_dir>/opt/backup/stage</staging_dir>
   </stage>

This is an example stage configuration section that overrides the default list of peers:

   <stage>
      <staging_dir>/opt/backup/stage</staging_dir>
      <peer>
         <name>machine1</name>
         <type>local</type>
         <collect_dir>/opt/backup/collect</collect_dir>
      </peer>
      <peer>
         <name>machine2</name>
         <type>remote</type>
         <backup_user>backup</backup_user>
         <collect_dir>/opt/backup/collect</collect_dir>
      </peer>
   </stage>

The following elements are part of the stage configuration section:

staging_dir
Directory to stage files into. This is the directory into which the master stages collected data from each of the clients.
Within the staging directory, data is staged into date-based directories by peer name. For instance, peer "daystrom" backed up on 19 Feb 2005 would be staged into something like 2005/02/19/daystrom relative to the staging directory itself. This field is always required. The directory must contain enough free space to stage all of the files collected from all of the various machines in a backup pool. Many administrators set up purging to keep staging directories around for a week or more, which requires even more space.
Restrictions: Must be an absolute path.

peer (local version)
Local client peer in a backup pool. This is a subsection which contains information about a specific local client peer to be staged (backed up). A local peer is one whose collect directory can be reached without requiring any rsh-based network calls. It is possible that a remote peer might be staged as a local peer if its collect directory is mounted to the master via NFS, AFS or some other method. This section can be repeated as many times as is necessary. At least one remote or local peer must be configured. Remember, if you provide any local or remote peer in staging configuration, the global peer configuration is completely replaced by the staging peer configuration. The local peer subsection must contain the following fields:

name
Name of the peer, typically a valid hostname. For local peers, this value is only used for reference. However, it is good practice to list the peer's hostname here, for consistency with remote peers.
Restrictions: Must be non-empty, and unique among all peers.

type
Type of this peer. This value identifies the type of the peer. For a local peer, it must always be local.
Restrictions: Must be local.

collect_dir
Collect directory to stage from for this peer. The master will copy all files in this directory into the appropriate staging directory. Since this is a local peer, the directory is assumed to be reachable via normal filesystem operations (i.e. cp).
Restrictions: Must be an absolute path.

peer (remote version)
Remote client peer in a backup pool. This is a subsection which contains information about a specific remote client peer to be staged (backed up). A remote peer is one whose collect directory can only be reached via an rsh-based network call. This section can be repeated as many times as is necessary. At least one remote or local peer must be configured. Remember, if you provide any local or remote peer in staging configuration, the global peer configuration is completely replaced by the staging peer configuration. The remote peer subsection must contain the following fields:

name
Hostname of the peer. For remote peers, this must be a valid DNS hostname or IP address which can be resolved during an rsh-based network call.
Restrictions: Must be non-empty, and unique among all peers.

type
Type of this peer. This value identifies the type of the peer. For a remote peer, it must always be remote.
Restrictions: Must be remote.

collect_dir
Collect directory to stage from for this peer. The master will copy all files in this directory into the appropriate staging directory. Since this is a remote peer, the directory is assumed to be reachable via rsh-based network operations (i.e. scp or the configured rcp command).
Restrictions: Must be an absolute path.

backup_user
Name of backup user on the remote peer. This username will be used when copying files from the remote peer via an rsh-based network connection. This field is optional. If it doesn't exist, the backup will use the default backup user from the options section.
Restrictions: Must be non-empty.

rcp_command
The rcp-compatible copy command for this peer. The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it the -B option, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B. This field is optional.
If it doesn't exist, the backup will use the default rcp command from the options section.
Restrictions: Must be non-empty.

Store Configuration

The store configuration section contains configuration options related to the store action. This section contains several optional fields. Most fields control the way media is written using the writer device. This is an example store configuration section:

   <store>
      <source_dir>/opt/backup/stage</source_dir>
      <media_type>cdrw-74</media_type>
      <device_type>cdwriter</device_type>
      <target_device>/dev/cdrw</target_device>
      <target_scsi_id>0,0,0</target_scsi_id>
      <drive_speed>4</drive_speed>
      <check_data>Y</check_data>
      <check_media>Y</check_media>
      <warn_midnite>Y</warn_midnite>
      <no_eject>N</no_eject>
      <refresh_media_delay>15</refresh_media_delay>
      <eject_delay>2</eject_delay>
      <blank_behavior>
         <mode>weekly</mode>
         <factor>1.3</factor>
      </blank_behavior>
   </store>

The following elements are part of the store configuration section:

source_dir
Directory whose contents should be written to media. This directory must be a Cedar Backup staging directory, as configured in the staging configuration section. Only certain data from that directory (typically, data from the current day) will be written to disc.
Restrictions: Must be an absolute path.

device_type
Type of the device used to write the media. This field controls which type of writer device will be used by Cedar Backup. Currently, Cedar Backup supports CD writers (cdwriter) and DVD writers (dvdwriter). This field is optional. If it doesn't exist, the cdwriter device type is assumed.
Restrictions: If set, must be either cdwriter or dvdwriter.

media_type
Type of the media in the device. Unless you want to throw away a backup disc every week, you are probably best off using rewritable media. You must choose a media type that is appropriate for the device type you chose above. For more information on media types, see the section called "Media and Device Types" (in Chapter 2, Basic Concepts).
Restrictions: Must be one of cdr-74, cdrw-74, cdr-80 or cdrw-80 if device type is cdwriter; or one of dvd+r or dvd+rw if device type is dvdwriter.

target_device
Filesystem device name for writer device. This value is required for both CD writers and DVD writers. This is the UNIX device name for the writer drive, for instance /dev/scd0 or a symlink like /dev/cdrw.
In some cases, this device name is used to directly write to media. This is true all of the time for DVD writers, and is true for CD writers when a SCSI id (see below) has not been specified. Besides this, the device name is also needed in order to do several pre-write checks (such as whether the device might already be mounted) as well as the post-write consistency check, if enabled.
Note: some users have reported intermittent problems when using a symlink as the target device on Linux, especially with DVD media. If you experience problems, try using the real device name rather than the symlink.
Restrictions: Must be an absolute path.

target_scsi_id
SCSI id for the writer device. This value is optional for CD writers and is ignored for DVD writers. If you have configured your CD writer hardware to work through the normal filesystem device path, then you can leave this parameter unset. Cedar Backup will just use the target device (above) when talking to cdrecord. Otherwise, if you have SCSI CD writer hardware or you have configured your non-SCSI hardware to operate like a SCSI device, then you need to provide Cedar Backup with a SCSI id it can use when talking with cdrecord. For the purposes of Cedar Backup, a valid SCSI identifier must either be in the standard SCSI identifier form scsibus,target,lun or in the specialized-method form method:scsibus,target,lun. An example of a standard SCSI identifier is 1,6,2. Today, the two most common examples of the specialized-method form are ATA:scsibus,target,lun and ATAPI:scsibus,target,lun, but you may occasionally see other values (like OLDATAPI in some forks of cdrecord). See the section called "Configuring your Writer Device" for more information on writer devices and how they are configured.
Restrictions: If set, must be a valid SCSI identifier.

drive_speed
Speed of the drive, i.e. 2 for a 2x device. This field is optional. If it doesn't exist, the underlying device-related functionality will use the default drive speed.
   For DVD writers, it is best to leave this value unset, so growisofs can pick an appropriate speed. For CD writers, since media can be speed-sensitive, it is probably best to set a sensible value based on your specific writer and media. Restrictions: If set, must be an integer ≥ 1.

check_data
   Whether the media should be validated. This field indicates whether a resulting image on the media should be validated after the write completes, by running a consistency check against it. If this check is enabled, the contents of the staging directory are directly compared to the media, and an error is reported if there is a mismatch. Practice shows that some drives can encounter an error when writing a multisession disc, but not report any problems. This consistency check allows us to catch the problem. By default, the consistency check is disabled, but most users should choose to enable it unless they have a good reason not to. This field is optional. If it doesn't exist, then N will be assumed. Restrictions: Must be a boolean (Y or N).

check_media
   Whether the media should be checked before writing to it. By default, Cedar Backup does not check its media before writing to it. It will write to any media in the backup device. If you set this flag to Y, Cedar Backup will make sure that the media has been initialized before writing to it. (Rewritable media is initialized using the initialize action.) If the configured media is not rewritable (like CD-R), then this behavior is modified slightly. For this kind of media, the check passes either if the media has been initialized or if the media appears unused. This field is optional. If it doesn't exist, then N will be assumed. Restrictions: Must be a boolean (Y or N).

warn_midnite
   Whether to generate warnings for crossing midnite. This field indicates whether warnings should be generated if the store operation has to cross a midnite boundary in order to find data to write to disc.
   For instance, a warning would be generated if valid store data was only found in the day before or day after the current day. Configuration for some users is such that the store operation will always cross a midnite boundary, so they will not care about this warning. Other users will expect to never cross a boundary, and want to be notified that something "strange" might have happened. This field is optional. If it doesn't exist, then N will be assumed. Restrictions: Must be a boolean (Y or N).

no_eject
   Indicates that the writer device should not be ejected. Under some circumstances, Cedar Backup ejects (opens and closes) the writer device. This is done because some writer devices need to re-load the media before noticing a media state change (like a new session). For most writer devices this is safe, because they have a tray that can be opened and closed. If your writer device does not have a tray and Cedar Backup does not properly detect this, then set this flag. Cedar Backup will never issue an eject command to your writer. Note: this could cause problems with your backup. For instance, with many writers, the check data step may fail if the media is not reloaded first. If this happens to you, you may need to get a different writer device. This field is optional. If it doesn't exist, then N will be assumed. Restrictions: Must be a boolean (Y or N).

refresh_media_delay
   Number of seconds to delay after refreshing media. This field is optional. If it doesn't exist, no delay will occur. Some devices seem to take a little while to stabilize after refreshing the media (i.e. closing and opening the tray). During this period, operations on the media may fail. If your device behaves like this, you can try setting a delay of 10-15 seconds. Restrictions: If set, must be an integer ≥ 1.

eject_delay
   Number of seconds to delay after ejecting the tray. This field is optional. If it doesn't exist, no delay will occur.
   If your system seems to have problems opening and closing the tray, one possibility is that the open/close sequence is happening too quickly: either the tray isn't fully open when Cedar Backup tries to close it, or it doesn't report being open. To work around that problem, set an eject delay of a few seconds. Restrictions: If set, must be an integer ≥ 1.

blank_behavior
   Optimized blanking strategy. For more information about Cedar Backup's optimized blanking strategy, see the section called "Optimized Blanking Strategy". This entire configuration section is optional. However, if you choose to provide it, you must configure both a blanking mode and a blanking factor.

   blank_mode
      Blanking mode. Restrictions: Must be one of "daily" or "weekly".

   blank_factor
      Blanking factor. Restrictions: Must be a floating point number ≥ 0.

Purge Configuration

The purge configuration section contains configuration options related to the purge action. This section contains a set of directories to be purged, along with information about the schedule at which they should be purged. Typically, Cedar Backup should be configured to purge collect directories daily (retain days of 0). If you are tight on space, staging directories can also be purged daily. However, if you have space to spare, you should consider purging about once per week. That way, if your backup media is damaged, you will be able to recreate the week's backup using the rebuild action. You should also purge the working directory periodically, once every few weeks or once per month. This way, if any unneeded files are left around, perhaps because a backup was interrupted or because configuration changed, they will eventually be removed. The working directory should not be purged any more frequently than once per week, otherwise you will risk destroying data used for incremental backups.
This is an example purge configuration section:

   /opt/backup/stage 7 /opt/backup/collect 0

The following elements are part of the purge configuration section:

dir
   A directory to purge within. This is a subsection which contains information about a specific directory to purge within. This section can be repeated as many times as is necessary. At least one purge directory must be configured. The purge directory subsection contains the following fields:

   abs_path
      Absolute path of the directory to purge within. The contents of the directory will be purged based on age. The purge will remove any files that were last modified more than "retain days" days ago. Empty directories will also eventually be removed. The purge directory itself will never be removed. The path may be either a directory, a soft link to a directory, or a hard link to a directory. Soft links within the directory (if any) are treated as files. Restrictions: Must be an absolute path.

   retain_days
      Number of days to retain old files. Once it has been more than this many days since a file was last modified, it is a candidate for removal. Restrictions: Must be an integer ≥ 0.

Extensions Configuration

The extensions configuration section is used to configure third-party extensions to Cedar Backup. If you don't intend to use any extensions, or don't know what extensions are, then you can safely leave this section out of your configuration file. It is optional. Extensions configuration is used to specify "extended actions" implemented by code external to Cedar Backup. An administrator can use this section to map command-line Cedar Backup actions to third-party extension functions. Each extended action has a name, which is mapped to a Python function within a particular module. Each action also has an index associated with it. This index is used to properly order execution when more than one action is specified on the command line.
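Written out as XML, the purge example above corresponds to something like the following. This is a sketch that uses the documented field names as element names; verify against your own cback3.conf:

```xml
<purge>
   <dir>
      <abs_path>/opt/backup/stage</abs_path>
      <retain_days>7</retain_days>
   </dir>
   <dir>
      <abs_path>/opt/backup/collect</abs_path>
      <retain_days>0</retain_days>
   </dir>
</purge>
```

Note how the dir subsection is simply repeated once per directory to be purged.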
The standard actions have predefined indexes, and extended actions are interleaved into the normal order of execution using those indexes. The collect action has index 100, the stage action has index 200, the store action has index 300 and the purge action has index 400.

Warning
Extended actions should always be configured to run before the standard action they are associated with. This is because of the way indicator files are used in Cedar Backup. For instance, the staging process considers the collect action to be complete for a peer if the file cback.collect can be found in that peer's collect directory. If you were to run the standard collect action before your other collect-like actions, the indicator file would be written after the collect action completes but before all of the other actions even run. Because of this, there's a chance the stage process might back up the collect directory before the entire set of collect-like actions have completed, and you would get no warning about this in your email!

So, imagine that a third-party developer provided a Cedar Backup extension to back up a certain kind of database repository, and you wanted to map that extension to the "database" command-line action. You have been told that this function is called "foo.bar()". You think of this backup as a "collect" kind of action, so you want it to be performed immediately before the collect action. To configure this extension, you would list an action with a name "database", a module "foo", a function name "bar" and an index of "99". This is how the hypothetical action would be configured:

   database foo bar 99

The following elements are part of the extensions configuration section:

action
   This is a subsection that contains configuration related to a single extended action. This section can be repeated as many times as is necessary. The action subsection contains the following fields:

   name
      Name of the extended action.
      Restrictions: Must be a non-empty string consisting of only lower-case letters and digits.

   module
      Name of the Python module associated with the extension function. Restrictions: Must be a non-empty string and a valid Python identifier.

   function
      Name of the Python extension function within the module. Restrictions: Must be a non-empty string and a valid Python identifier.

   index
      Index of action, for execution ordering. Restrictions: Must be an integer ≥ 0.

Setting up a Pool of One

Cedar Backup has been designed primarily for situations where there is a single master and a set of other clients that the master interacts with. However, it will just as easily work for a single machine (a backup pool of one). Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or the user that receives root's email). If you don't receive any emails, then you know your backup worked. Note: all of these configuration steps should be run as the root user, unless otherwise indicated.

Tip
This setup procedure discusses how to set up Cedar Backup in the "normal case" for a pool of one. If you would like to modify the way Cedar Backup works (for instance, by ignoring the store stage and just letting your backup sit in a staging directory), you can do that. You'll just have to modify the procedure below based on information in the remainder of the manual.

Step 1: Decide when you will run your backup.

There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of setting off these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later. Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly.
Choose a backup time that will not interfere with normal use of your computer. Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc.

Warning
Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week. This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week. If you skip running Cedar Backup on the first day of the week, your backups will likely be "confused" until the next week begins, or until you re-run the backup using the --full flag.

Step 2: Make sure email works.

Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job. Since by default Cedar Backup only writes output to the terminal if errors occur, this ensures that notification emails will only be sent out if errors occur. In order to receive problem notifications, you must make sure that email works for the user which is running the Cedar Backup cron jobs (typically root). Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors.

Step 3: Configure your writer device.

Before using Cedar Backup, your writer device must be properly configured. If you have configured your CD/DVD writer hardware to work through the normal filesystem device path, then you just need to know the path to the device on disk (something like /dev/cdrw). Cedar Backup will use this device path both when talking to a command like cdrecord and when doing filesystem operations like running media validation.
Your other option is to configure your CD writer hardware like a SCSI device (either because it is a SCSI device or because you are using some sort of interface that makes it look like one). In this case, Cedar Backup will use the SCSI id when talking to cdrecord and the device path when running filesystem operations. See the section called "Configuring your Writer Device" for more information on writer devices and how they are configured.

Note
There is no need to set up your CD/DVD device if you have decided not to execute the store action. Due to the underlying utilities that Cedar Backup uses, the SCSI id may only be used for CD writers, not DVD writers.

Step 4: Configure your backup user.

Choose a user to be used for backups. Some platforms may come with a "ready made" backup user. For other platforms, you may have to create a user yourself. You may choose any id you like, but a descriptive name such as backup or cback is a good choice. See your distribution's documentation for information on how to add a user.

Note
Standard Debian systems come with a user named backup. You may choose to stay with this user or create another one.

Step 5: Create your backup tree.

Cedar Backup requires a backup directory tree on disk. This directory tree must be roughly three times as big as the amount of data that will be backed up on a nightly basis, to allow for the data to be collected, staged, and then placed into an ISO filesystem image on disk. (This is one disadvantage to using Cedar Backup in single-machine pools, but in this day of really large hard drives, it might not be an issue.) Note that if you elect not to purge the staging directory every night, you will need even more space. You should create a collect directory, a staging directory and a working (temporary) directory. One recommended layout is this:

   /opt/
      backup/
         collect/
         stage/
         tmp/

If you will be backing up sensitive information (i.e.
password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700.

Note
You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my "dumping ground" for filesystems that Debian does not manage. Some users have requested that the Debian packages set up a more "standard" location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package. If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp.

Step 6: Create the Cedar Backup configuration file.

Following the instructions in the section called "Configuration File Format" (above), create a configuration file for your machine. Since you are working with a pool of one, you must configure all four action-specific sections: collect, stage, store and purge. The usual location for the Cedar Backup config file is /etc/cback3.conf. If you change the location, make sure you edit your cronjobs (below) to point the cback3 script at the correct config file (using the --config option).

Warning
Configuration files should always be writable only by root (or by the file owner, if the owner is not root). If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately. For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root).

Step 7: Validate the Cedar Backup configuration file.

Use the command cback3 validate to validate your configuration file.
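The backup tree recommended in step 5 can be created with a few commands. This sketch defaults to a scratch directory so it can be tried without root privileges; in production you would use /opt/backup (or your chosen root), create it as root, and chown it to your actual backup user:

```shell
#!/bin/sh
# Sketch of step 5: build the recommended backup tree.
# BACKUP_ROOT defaults to a scratch directory here; in production it
# would be /opt/backup (an assumption matching the example layout).
BACKUP_ROOT="${BACKUP_ROOT:-$(mktemp -d)}"

for d in collect stage tmp; do
    mkdir -p "$BACKUP_ROOT/$d"
    chmod 700 "$BACKUP_ROOT/$d"   # readable only by the owning user
done

# In production, also (as root, 'backup' being the example user name):
#   chown -R backup:backup "$BACKUP_ROOT"
ls "$BACKUP_ROOT"
```

Substitute your own backup user name in the chown if you created a different one in step 4.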
This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems, such as invalid CD/DVD device entries. Note: the most common cause of configuration problems is in not closing XML tags properly. Any XML tag that is "opened" must be "closed" appropriately.

Step 8: Test your backup.

Place a valid CD/DVD disc in your drive, and then use the command cback3 --full all. You should execute this command as root. If the command completes with no output, then the backup was run successfully. Just to be sure that everything worked properly, check the logfile (/var/log/cback3.log) for errors and also mount the CD/DVD disc to be sure it can be read. If Cedar Backup ever completes "normally" but the disc that is created is not usable, please report this as a bug. ^[22] To be safe, always enable the consistency check option in the store configuration section.

Step 9: Modify the backup cron jobs.

Since Cedar Backup should be run as root, one way to configure the cron job is to add a line like this to your /etc/crontab file:

   30 00 * * * root cback3 all

Or, you can create an executable script containing just these lines and place that file in the /etc/cron.daily directory:

   #!/bin/sh
   cback3 all

You should consider adding the --output or -O switch to your cback3 command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously.

Note
For general information about using cron, see the manpage for crontab(5). On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup3. As installed, this file contains several different settings, all commented out. Uncomment the "Single machine (pool of one)" entry in the file, and change the line so that the backup goes off when you want it to.

Setting up a Client Peer Node

Cedar Backup has been designed to back up entire "pools" of machines.
In any given pool, there is one master and some number of clients. Most of the work takes place on the master, so configuring a client is a little simpler than configuring a master. Backups are designed to take place over an RSH or SSH connection. Because RSH is generally considered insecure, you are encouraged to use SSH rather than RSH. This document will only describe how to configure Cedar Backup to use SSH; if you want to use RSH, you're on your own. Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or the user that receives root's email). If you don't receive any emails, then you know your backup worked. Note: all of these configuration steps should be run as the root user, unless otherwise indicated.

Note
See Appendix D, Securing Password-less SSH Connections for some important notes on how to optionally further secure password-less SSH connections to your clients.

Step 1: Decide when you will run your backup.

There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of setting off these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later. Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly. Choose a backup time that will not interfere with normal use of your computer. Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc.

Warning
Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week.
This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week. If you skip running Cedar Backup on the first day of the week, your backups will likely be "confused" until the next week begins, or until you re-run the backup using the --full flag.

Step 2: Make sure email works.

Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job. Since by default Cedar Backup only writes output to the terminal if errors occur, this neatly ensures that notification emails will only be sent out if errors occur. In order to receive problem notifications, you must make sure that email works for the user which is running the Cedar Backup cron jobs (typically root). Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors.

Step 3: Configure the master in your backup pool.

You will not be able to complete the client configuration until at least step 3 of the master's configuration has been completed. In particular, you will need to know the master's public SSH identity to fully configure a client. To find the master's public SSH identity, log in as the backup user on the master and cat the public identity file ~/.ssh/id_rsa.pub:

   user@machine> cat ~/.ssh/id_rsa.pub
   ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEA0vOKjlfwohPg1oPRdrmwHk75l3mI9Tb/WRZfVnu2Pw69
   uyphM9wBLRo6QfOC2T8vZCB8o/ZIgtAM3tkM0UgQHxKBXAZ+H36TOgg7BcI20I93iGtzpsMA/uXQy8kH
   HgZooYqQ9pw+ZduXgmPcAAv2b5eTm07wRqFt/U84k6bhTzs= user@machine

Step 4: Configure your backup user.

Choose a user to be used for backups. Some platforms may come with a "ready made" backup user.
For other platforms, you may have to create a user yourself. You may choose any id you like, but a descriptive name such as backup or cback is a good choice. See your distribution's documentation for information on how to add a user.

Note
Standard Debian systems come with a user named backup. You may choose to stay with this user or create another one.

Once you have created your backup user, you must create an SSH keypair for it. Log in as your backup user, and then run the command ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa:

   user@machine> ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
   Generating public/private rsa key pair.
   Created directory '/home/user/.ssh'.
   Your identification has been saved in /home/user/.ssh/id_rsa.
   Your public key has been saved in /home/user/.ssh/id_rsa.pub.
   The key fingerprint is:
   11:3e:ad:72:95:fe:96:dc:1e:3b:f4:cc:2c:ff:15:9e user@machine

The default permissions for this directory should be fine. However, if the directory existed before you ran ssh-keygen, then you may need to modify the permissions. Make sure that the ~/.ssh directory is readable only by the backup user (i.e. mode 700), that the ~/.ssh/id_rsa file is readable and writable only by the backup user (i.e. mode 600) and that the ~/.ssh/id_rsa.pub file is writable only by the backup user (i.e. mode 600 or mode 644). Finally, take the master's public SSH identity (which you found in step 3) and cut-and-paste it into the file ~/.ssh/authorized_keys. Make sure the identity value is pasted into the file all on one line, and that the authorized_keys file is owned by your backup user and has permissions 600. If you have other preferences or standard ways of setting up your users' SSH configuration (i.e. different key type, etc.), feel free to do things your way. The important part is that the master must be able to SSH into a client with no password entry required.

Step 5: Create your backup tree.

Cedar Backup requires a backup directory tree on disk.
This directory tree must be roughly as big as the amount of data that will be backed up on a nightly basis (more if you elect not to purge it all every night). You should create a collect directory and a working (temporary) directory. One recommended layout is this:

   /opt/
      backup/
         collect/
         tmp/

If you will be backing up sensitive information (i.e. password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700.

Note
You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my "dumping ground" for filesystems that Debian does not manage. Some users have requested that the Debian packages set up a more "standard" location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package. If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp.

Step 6: Create the Cedar Backup configuration file.

Following the instructions in the section called "Configuration File Format" (above), create a configuration file for your machine. Since you are working with a client, you must configure all action-specific sections for the collect and purge actions. The usual location for the Cedar Backup config file is /etc/cback3.conf. If you change the location, make sure you edit your cronjobs (below) to point the cback3 script at the correct config file (using the --config option).

Warning
Configuration files should always be writable only by root (or by the file owner, if the owner is not root). If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately.
For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root).

Step 7: Validate the Cedar Backup configuration file.

Use the command cback3 validate to validate your configuration file. This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems. This command only validates configuration on the one client, not the master or any other clients in a pool. Note: the most common cause of configuration problems is in not closing XML tags properly. Any XML tag that is "opened" must be "closed" appropriately.

Step 8: Test your backup.

Use the command cback3 --full collect purge. If the command completes with no output, then the backup was run successfully. Just to be sure that everything worked properly, check the logfile (/var/log/cback3.log) for errors.

Step 9: Modify the backup cron jobs.

Since Cedar Backup should be run as root, you should add a set of lines like this to your /etc/crontab file:

   30 00 * * * root cback3 collect
   30 06 * * * root cback3 purge

You should consider adding the --output or -O switch to your cback3 command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously. You will need to coordinate the collect and purge actions on the client so that the collect action completes before the master attempts to stage, and so that the purge action does not begin until after the master has completed staging. Usually, allowing an hour or two between steps should be sufficient. ^[23]

Note
For general information about using cron, see the manpage for crontab(5). On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup3. As installed, this file contains several different settings, all commented out. Uncomment the "Client machine"
entries in the file, and change the lines so that the backup goes off when you want it to.

Setting up a Master Peer Node

Cedar Backup has been designed to back up entire "pools" of machines. In any given pool, there is one master and some number of clients. Most of the work takes place on the master, so configuring a master is somewhat more complicated than configuring a client. Backups are designed to take place over an RSH or SSH connection. Because RSH is generally considered insecure, you are encouraged to use SSH rather than RSH. This document will only describe how to configure Cedar Backup to use SSH; if you want to use RSH, you're on your own. Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or whichever other user receives root's email). If you don't receive any emails, then you know your backup worked. Note: all of these configuration steps should be run as the root user, unless otherwise indicated.

Tip
This setup procedure discusses how to set up Cedar Backup in the "normal case" for a master. If you would like to modify the way Cedar Backup works (for instance, by ignoring the store stage and just letting your backup sit in a staging directory), you can do that. You'll just have to modify the procedure below based on information in the remainder of the manual.

Step 1: Decide when you will run your backup.

There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of setting off these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later. Keep in mind that you do not necessarily have to run the collect action on the master. See notes further below for more information.
Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly. Choose a backup time that will not interfere with normal use of your computer. Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc.

Warning
Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week. This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week. If you skip running Cedar Backup on the first day of the week, your backups will likely be "confused" until the next week begins, or until you re-run the backup using the --full flag.

Step 2: Make sure email works.

Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job. Since by default Cedar Backup only writes output to the terminal if errors occur, this neatly ensures that notification emails will only be sent out if errors occur. In order to receive problem notifications, you must make sure that email works for the user which is running the Cedar Backup cron jobs (typically root). Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors.

Step 3: Configure your writer device.

Before using Cedar Backup, your writer device must be properly configured. If you have configured your CD/DVD writer hardware to work through the normal filesystem device path, then you just need to know the path to the device on disk (something like /dev/cdrw).
Cedar Backup will use this device path both when talking to a command like cdrecord and when doing filesystem operations like running media validation. Your other option is to configure your CD writer hardware like a SCSI device (either because it is a SCSI device or because you are using some sort of interface that makes it look like one). In this case, Cedar Backup will use the SCSI id when talking to cdrecord and the device path when running filesystem operations. See the section called "Configuring your Writer Device" for more information on writer devices and how they are configured.

Note: There is no need to set up your CD/DVD device if you have decided not to execute the store action. Due to the underlying utilities that Cedar Backup uses, the SCSI id may only be used for CD writers, not DVD writers.

Step 4: Configure your backup user.

Choose a user to be used for backups. Some platforms may come with a "ready-made" backup user. For other platforms, you may have to create a user yourself. You may choose any id you like, but a descriptive name such as backup or cback is a good choice. See your distribution's documentation for information on how to add a user.

Note: Standard Debian systems come with a user named backup. You may choose to stay with this user or create another one.

Once you have created your backup user, you must create an SSH keypair for it. Log in as your backup user, and then run the command ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa:

user@machine> ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Created directory '/home/user/.ssh'.
Your identification has been saved in /home/user/.ssh/id_rsa.
Your public key has been saved in /home/user/.ssh/id_rsa.pub.
The key fingerprint is:
11:3e:ad:72:95:fe:96:dc:1e:3b:f4:cc:2c:ff:15:9e user@machine

The default permissions for this directory should be fine. However, if the directory existed before you ran ssh-keygen, then you may need to modify the permissions.
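If you do need to fix the permissions by hand, the expected scheme (spelled out in the next paragraph) can be applied and verified like this. This is a sketch in Python that works on a scratch directory so it is safe to experiment with; substitute the backup user's real ~/.ssh in practice:

```python
# Sketch: the SSH permission scheme for the backup user, demonstrated on a
# scratch directory (substitute the backup user's real ~/.ssh in practice).
import os
import stat
import tempfile

ssh_dir = os.path.join(tempfile.mkdtemp(), ".ssh")
os.mkdir(ssh_dir)
for name in ("id_rsa", "id_rsa.pub"):
    open(os.path.join(ssh_dir, name), "w").close()

os.chmod(ssh_dir, 0o700)                               # directory: backup user only
os.chmod(os.path.join(ssh_dir, "id_rsa"), 0o600)       # private key: owner read/write only
os.chmod(os.path.join(ssh_dir, "id_rsa.pub"), 0o644)   # public key: world-readable is fine

mode = stat.S_IMODE(os.stat(ssh_dir).st_mode)
print(oct(mode))   # 0o700
```

The same checks can of course be done from the shell with chmod and ls -l.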
Make sure that the ~/.ssh directory is readable only by the backup user (i.e. mode 700), that the ~/.ssh/id_rsa file is only readable and writable by the backup user (i.e. mode 600) and that the ~/.ssh/id_rsa.pub file is writable only by the backup user (i.e. mode 600 or mode 644). If you have other preferences or standard ways of setting up your users' SSH configuration (i.e. different key type, etc.), feel free to do things your way. The important part is that the master must be able to SSH into a client with no password entry required.

Step 5: Create your backup tree.

Cedar Backup requires a backup directory tree on disk. This directory tree must be roughly large enough to hold twice as much data as will be backed up from the entire pool on a given night, plus space for whatever is collected on the master itself. This will allow for all three operations - collect, stage and store - to have enough space to complete. Note that if you elect not to purge the staging directory every night, you will need even more space. You should create a collect directory, a staging directory and a working (temporary) directory. One recommended layout is this:

/opt/
     backup/
            collect/
            stage/
            tmp/

If you will be backing up sensitive information (e.g. password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700.

Note: You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my "dumping ground" for filesystems that Debian does not manage. Some users have requested that the Debian packages set up a more "standard" location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package. If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp.

Step 6: Create the Cedar Backup configuration file.
Following the instructions in the section called "Configuration File Format" (above), create a configuration file for your machine. Since you are working with a master machine, you would typically configure all four action-specific sections: collect, stage, store and purge.

Note: The master can treat itself as a "client" peer for certain actions. As an example, if you run the collect action on the master, then you will stage that data by configuring a local peer representing the master. Something else to keep in mind is that you do not really have to run the collect action on the master. For instance, you may prefer to just use your master machine as a "consolidation point" machine that just collects data from the other client machines in a backup pool. In that case, there is no need to collect data on the master itself.

The usual location for the Cedar Backup config file is /etc/cback3.conf. If you change the location, make sure you edit your cronjobs (below) to point the cback3 script at the correct config file (using the --config option).

Warning: Configuration files should always be writable only by root (or by the file owner, if the owner is not root). If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately. For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root).

Step 7: Validate the Cedar Backup configuration file.

Use the command cback3 validate to validate your configuration file. This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems, such as invalid CD/DVD device entries. This command only validates configuration on the master, not any clients that the master might be configured to connect to.
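Since cback3 validate will fail outright on a file that does not parse, it can help to check XML well-formedness separately while hunting down a problem. A minimal sketch using only the Python standard library follows; the config text shown is a stand-in, not a complete Cedar Backup configuration, and for a real check you would point ET.parse() at /etc/cback3.conf:

```python
# Sketch: check that configuration is well-formed XML before running
# "cback3 validate". The string below is a stand-in; use
# ET.parse("/etc/cback3.conf") to check a real file.
import xml.etree.ElementTree as ET

config_text = "<cb_config><options/><collect/></cb_config>"
try:
    ET.fromstring(config_text)
    print("well-formed")
except ET.ParseError as err:
    print("XML problem:", err)
```

An unclosed tag, the most common configuration mistake, shows up immediately as a ParseError with a line and column number.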
Note: The most common cause of configuration problems is not closing XML tags properly. Any XML tag that is "opened" must be "closed" appropriately.

Step 8: Test connectivity to client machines.

This step must wait until after your client machines have been at least partially configured. Once the backup user(s) have been configured on the client machine(s) in a pool, attempt an SSH connection to each client. Log in as the backup user on the master, and then use the command ssh user@machine where user is the name of the backup user on the client machine, and machine is the name of the client machine. If you are able to log in successfully to each client without entering a password, then things have been configured properly. Otherwise, double-check that you followed the user setup instructions for the master and the clients.

Step 9: Test your backup.

Make sure that you have configured all of the clients in your backup pool. On all of the clients, execute cback3 --full collect. (You will probably have already tested this command on each of the clients, so it should succeed.) When all of the client backups have completed, place a valid CD/DVD disc in your drive, and then use the command cback3 --full all. You should execute this command as root. If the command completes with no output, then the backup was run successfully. Just to be sure that everything worked properly, check the logfile (/var/log/cback3.log) on the master and each of the clients, and also mount the CD/DVD disc on the master to be sure it can be read. You may also want to run cback3 purge on the master and each client once you have finished validating that everything worked. If Cedar Backup ever completes "normally" but the disc that is created is not usable, please report this as a bug. ^[22] To be safe, always enable the consistency check option in the store configuration section.

Step 10: Modify the backup cron jobs.
Since Cedar Backup should be run as root, you should add a set of lines like this to your /etc/crontab file:

30 00 * * * root cback3 collect
30 02 * * * root cback3 stage
30 04 * * * root cback3 store
30 06 * * * root cback3 purge

You should consider adding the --output or -O switch to your cback3 command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously. You will need to coordinate the collect and purge actions on clients so that their collect actions complete before the master attempts to stage, and so that their purge actions do not begin until after the master has completed staging. Usually, allowing an hour or two between steps should be sufficient. ^[23]

Note: For general information about using cron, see the manpage for crontab(5). On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup3. As installed, this file contains several different settings, all commented out. Uncomment the "Master machine" entries in the file, and change the lines so that the backup goes off when you want it to.

Configuring your Writer Device

Device Types

In order to execute the store action, you need to know how to identify your writer device. Cedar Backup supports two kinds of device types: CD writers and DVD writers. DVD writers are always referenced through a filesystem device name (e.g. /dev/dvd). CD writers can be referenced either through a SCSI id, or through a filesystem device name. Which you use depends on your operating system and hardware.

Devices identified by device name

For all DVD writers, and for CD writers on certain platforms, you will configure your writer device using only a device name. If your writer device works this way, you should just specify the device name in configuration; the SCSI id can be left blank or removed completely. The writer device will be used both to write to the device and for filesystem operations,
for instance, when the media needs to be mounted to run the consistency check.

Devices identified by SCSI id

Cedar Backup can use devices identified by SCSI id only when configured to use the cdwriter device type. In order to use a SCSI device with Cedar Backup, you must know both the SCSI id and the device name. The SCSI id will be used to write to media using cdrecord, and the device name will be used for other filesystem operations. A true SCSI device will always have an address scsibus,target,lun (e.g. 1,6,2). This should hold true on most UNIX-like systems including Linux and the various BSDs (although I do not have a BSD system to test with currently). The SCSI address represents the location of your writer device on the one or more SCSI buses that you have available on your system. On some platforms, it is possible to reference non-SCSI writer devices (e.g. an IDE CD writer) using an emulated SCSI id. If you have configured your non-SCSI writer device to have an emulated SCSI id, provide both the filesystem device path and the emulated SCSI id in configuration, just like for a real SCSI device. You should note that in some cases, an emulated SCSI id takes the same form as a normal SCSI id, while in other cases you might see a method name prepended to the normal SCSI id (e.g. "ATA:1,1,1").

Linux Notes

On a Linux system, IDE writer devices often have an emulated SCSI address, which allows SCSI-based software to access the device through an IDE-to-SCSI interface. Under these circumstances, the first IDE writer device typically has an address 0,0,0. However, support for the IDE-to-SCSI interface has been deprecated and is not well-supported in newer kernels (kernel 2.6.x and later). Newer Linux kernels can address ATA or ATAPI drives without SCSI emulation by prepending a "method" indicator to the emulated device address. For instance, ATA:0,0,0 or ATAPI:0,0,0 are typical values.
However, even this interface is deprecated as of late 2006, so with relatively new kernels you may be better off using the filesystem device path directly rather than relying on any SCSI emulation.

Finding your Linux CD Writer

Here are some hints about how to find your Linux CD writer hardware. First, try to reference your device using the filesystem device path:

cdrecord -prcap dev=/dev/cdrom

Running this command on my hardware gives output that looks like this (just the top few lines):

Device type    : Removable CD-ROM
Version        : 0
Response Format: 2
Capabilities   :
Vendor_info    : 'LITE-ON '
Identification : 'DVDRW SOHW-1673S'
Revision       : 'JS02'
Device seems to be: Generic mmc2 DVD-R/DVD-RW.
Drive capabilities, per MMC-3 page 2A:

If this works, and the identifying information at the top of the output looks like your CD writer device, you've probably found a working configuration. Place the device path in configuration and leave the SCSI id blank. If this doesn't work, you should try to find an ATA or ATAPI device:

cdrecord -scanbus dev=ATA
cdrecord -scanbus dev=ATAPI

On my development system, I get a result that looks something like this for ATA:

scsibus1:
        1,0,0   100) 'LITE-ON ' 'DVDRW SOHW-1673S' 'JS02' Removable CD-ROM
        1,1,0   101) *
        1,2,0   102) *
        1,3,0   103) *
        1,4,0   104) *
        1,5,0   105) *
        1,6,0   106) *
        1,7,0   107) *

Again, if you get a result that you recognize, you have again probably found a working configuration. Place the associated device path (in my case, /dev/cdrom) in configuration, and put the emulated SCSI id (in this case, ATA:1,0,0) in the SCSI id field. Any further discussion of how to configure your CD writer hardware is outside the scope of this document. If you have tried the hints above and still can't get things working, you may want to reference the Linux CDROM HOWTO (http://www.tldp.org/HOWTO/CDROM-HOWTO) or the ATA RAID HOWTO (http://www.tldp.org/HOWTO/ATA-RAID-HOWTO/index.html) for more information.

Mac OS X Notes

On a Mac OS X (darwin) system, things get strange.
Apple has abandoned traditional SCSI device identifiers in favor of a system-wide resource id. So, on a Mac, your writer device will have a name something like IOCompactDiscServices (for a CD writer) or IODVDServices (for a DVD writer). If you have multiple drives, the second drive probably has a number appended, e.g. IODVDServices/2 for the second DVD writer. You can try to figure out what the name of your device is by grepping through the output of the command ioreg -l.^[24] Unfortunately, even if you can figure out what device to use, I can't really support the store action on this platform. In OS X, the "automount" function of the Finder interferes significantly with Cedar Backup's ability to mount and unmount media and write to the CD or DVD hardware. The Cedar Backup writer and image functionality does work on this platform, but the effort required to fight the operating system about who owns the media and the device makes it nearly impossible to execute the store action successfully.

Optimized Blanking Strategy

When the optimized blanking strategy has not been configured, Cedar Backup uses a simplistic approach: rewritable media is blanked at the beginning of every week, period. Since rewritable media can be blanked only a finite number of times before becoming unusable, some users - especially users of rewritable DVD media with its large capacity - may prefer to blank the media less often. If the optimized blanking strategy is configured, Cedar Backup will use a blanking factor and attempt to determine whether future backups will fit on the current media. If it looks like backups will fit, then the media will not be blanked.
This feature will only be useful (assuming a single disc is used for the whole week's backups) if the estimated total size of the weekly backup is considerably smaller than the capacity of the media (no more than 50% of the total media capacity), and only if the size of the backup can be expected to remain fairly constant over time (no frequent rapid growth expected). There are two blanking modes: daily and weekly. If the weekly blanking mode is set, Cedar Backup will only estimate future capacity (and potentially blank the disc) once per week, on the starting day of the week. If the daily blanking mode is set, Cedar Backup will estimate future capacity (and potentially blank the disc) every time it is run. You should only use the daily blanking mode in conjunction with daily collect configuration, otherwise you will risk losing data. If you are using the daily blanking mode, you can typically set the blanking value to 1.0. This will cause Cedar Backup to blank the media whenever there is not enough space to store the current day's backup. If you are using the weekly blanking mode, then finding the correct blanking factor will require some experimentation. Cedar Backup estimates future capacity based on the configured blanking factor. The disc will be blanked if the following relationship is true:

bytes available / (1 + bytes required) <= blanking factor

Another way to look at this is to consider the blanking factor as a sort of (upper) backup growth estimate:

total size of weekly backup / full backup size at the start of the week

This ratio can be estimated using a week or two of previous backups.
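Both the blanking decision and the factor estimate are simple arithmetic, so they are easy to sketch. The figures here mirror the du example that follows; the function and variable names are illustrative, not Cedar Backup's actual code:

```python
# Sketch: the weekly blanking-factor estimate, using the staging sizes from
# the "du -s" example below (the units cancel in the ratio).
full_backup = 6812
incrementals = [3044, 3152, 3056, 3060, 3056, 4776]

weekly_total = full_backup + sum(incrementals)
factor_estimate = weekly_total / full_backup
print(round(factor_estimate, 4))   # 3.9571 -- configuring 5.0 adds headroom

def should_blank(bytes_available, bytes_required, blanking_factor):
    # The disc is blanked when available capacity is small relative to the
    # estimated requirement, per the relationship shown above.
    return bytes_available / (1 + bytes_required) <= blanking_factor
```

With a factor of 1.0 (the typical daily-mode setting), should_blank() is true exactly when available space has shrunk to roughly what the next backup requires.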
For instance, take this example, where March 10 is the start of the week and March 4 through March 9 represent the incremental backups from the previous week:

/opt/backup/staging# du -s 2007/03/*
3040    2007/03/01
3044    2007/03/02
6812    2007/03/03
3044    2007/03/04
3152    2007/03/05
3056    2007/03/06
3060    2007/03/07
3056    2007/03/08
4776    2007/03/09
6812    2007/03/10
11824   2007/03/11

In this case, the ratio is approximately 4:

(6812 + 3044 + 3152 + 3056 + 3060 + 3056 + 4776) / 6812 = 3.9571

To be safe, you might choose to configure a factor of 5.0. Setting a higher value reduces the risk of exceeding media capacity mid-week but might result in blanking the media more often than is necessary. If you run out of space mid-week, then the solution is to run the rebuild action. If this happens frequently, a higher blanking factor value should be used.

-------------------------------------------------------------------------------

^[19] See http://www.xml.com/pub/a/98/10/guide0.html for a basic introduction to XML.
^[20] See the section called "The Backup Process", in Chapter 2, Basic Concepts.
^[21] See http://docs.python.org/lib/re-syntax.html
^[22] See https://bitbucket.org/cedarsolutions/cedar-backup3/issues.
^[23] See the section called "Coordination between Master and Clients" in Chapter 2, Basic Concepts.
^[24] Thanks to the file README.macosX in the cdrtools-2.01+01a01 source tree for this information.

Chapter 6. Official Extensions

Table of Contents

System Information Extension
Amazon S3 Extension
Subversion Extension
MySQL Extension
PostgreSQL Extension
Mbox Extension
Encrypt Extension
Split Extension
Capacity Extension

System Information Extension

The System Information Extension is a simple Cedar Backup extension used to save off important system recovery information that might be useful when reconstructing a "broken" system. It is intended to be run either immediately before or immediately after the standard collect action.
This extension saves off the following information to the configured Cedar Backup collect directory. Saved-off data is always compressed using bzip2.

* Currently-installed Debian packages via dpkg --get-selections
* Disk partition information via fdisk -l
* System-wide mounted filesystem contents, via ls -laR

The Debian-specific information is only collected on systems where /usr/bin/dpkg exists.

To enable this extension, add a section to the Cedar Backup configuration file that configures an extension action with name sysinfo, module CedarBackup3.extend.sysinfo, function executeAction, and index 99. This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, but requires no new configuration of its own.

Amazon S3 Extension

The Amazon S3 extension writes data to Amazon S3 cloud storage rather than to physical media. It is intended to replace the store action, but you can also use it alongside the store action if you'd prefer to back up your data in more than one place. This extension must be run after the stage action. The underlying functionality relies on the AWS CLI toolset. Before you use this extension, you need to set up your Amazon S3 account and configure AWS CLI as detailed in Amazon's setup guide. The extension assumes that the backup is being executed as root, and switches over to the configured backup user to run the aws program. So, make sure you configure the AWS CLI tools as the backup user and not root. (This is different from the amazons3 sync tool, which executes AWS CLI commands as the same user that is running the tool.) When using physical media via the standard store action, there is an implicit limit to the size of a backup, since a backup must fit on a single disc. Since there is no physical media, no such limit exists for Amazon S3 backups. This leaves open the possibility that Cedar Backup might construct an unexpectedly-large backup that the administrator is not aware of.
Over time, this might become expensive, either in terms of network bandwidth or in terms of Amazon S3 storage and I/O charges. To mitigate this risk, set a reasonable maximum size using the configuration elements shown below. If the backup fails, you have a chance to review what made the backup larger than you expected, and you can either correct the problem (e.g. remove a large temporary directory that got inadvertently included in the backup) or change configuration to take into account the new "normal" maximum size. You can optionally configure Cedar Backup to encrypt data before sending it to S3. To do that, provide a complete command line using the ${input} and ${output} variables to represent the original input file and the encrypted output file. This command will be executed as the backup user. For instance, you can use something like this with GPG:

/usr/bin/gpg -c --no-use-agent --batch --yes --passphrase-file /home/backup/.passphrase -o ${output} ${input}

The GPG mechanism depends on a strong passphrase for security. One way to generate a strong passphrase is using your system random number generator, e.g.:

dd if=/dev/urandom count=20 bs=1 | xxd -ps

(See StackExchange for more details about that advice.) If you decide to use encryption, make sure you save off the passphrase in a safe place, so you can get at your backup data later if you need to. And obviously, make sure to set permissions on the passphrase file so it can only be read by the backup user.

To enable this extension, add a section to the Cedar Backup configuration file that configures an extension action with name amazons3, module CedarBackup3.extend.amazons3, function executeAction, and index 201. This extension relies on the options and staging configuration sections in the standard Cedar Backup configuration file, and then also requires its own amazons3 configuration section.
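The size-limit elements described below accept either a bare byte count or a number followed by a unit. A simplified sketch of that mapping, assuming 1024-based units (which matches how Cedar Backup displays byte quantities); the function name is illustrative, not Cedar Backup's actual API:

```python
# Sketch: interpreting size-limit values like "10240", "250 MB" or "1.1 GB"
# as byte counts (a simplified stand-in for Cedar Backup's quantity handling).
UNITS = {"KB": 1024, "MB": 1024 ** 2, "GB": 1024 ** 3}

def to_bytes(value):
    parts = value.split()
    if len(parts) == 1:            # bare number: already bytes
        return float(parts[0])
    number, unit = parts
    return float(number) * UNITS[unit.upper()]

print(to_bytes("10240"))    # 10240.0
print(to_bytes("250 MB"))   # 262144000.0
```

This makes it easy to sanity-check a limit before putting it in configuration, for instance to confirm that "1.1 GB" is a little over a billion bytes.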
A minimal example configuration section, with encryption disabled, sets only the S3 bucket, e.g. example.com-backup/staging.

The following elements are part of the Amazon S3 configuration section:

warn_midnite
Whether to generate warnings for crossing midnite. This field indicates whether warnings should be generated if the Amazon S3 operation has to cross a midnite boundary in order to find data to write to the cloud. For instance, a warning would be generated if valid data was only found in the day before or day after the current day. Configuration for some users is such that the amazons3 operation will always cross a midnite boundary, so they will not care about this warning. Other users will expect to never cross a boundary, and want to be notified that something "strange" might have happened. This field is optional. If it doesn't exist, then N will be assumed. Restrictions: Must be a boolean (Y or N).

s3_bucket
The name of the Amazon S3 bucket that data will be written to. This field configures the S3 bucket that your data will be written to. In S3, buckets are named globally. For uniqueness, you would typically use the name of your domain followed by some suffix, such as example.com-backup. If you want, you can specify a subdirectory within the bucket, such as example.com-backup/staging. Restrictions: Must be non-empty.

encrypt
Command used to encrypt backup data before upload to S3. If this field is provided, then data will be encrypted before it is uploaded to Amazon S3. You must provide the entire command used to encrypt a file, including the ${input} and ${output} variables. An example GPG command is shown above, but you can use any mechanism you choose. The command will be run as the configured backup user. Restrictions: If provided, must be non-empty.

full_size_limit
Maximum size of a full backup. If this field is provided, then a size limit will be applied to full backups. If the total size of the selected staging directory is greater than the limit, then the backup will fail.
You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB). Valid examples are "10240", "250 MB" or "1.1 GB". Restrictions: Must be a value as described above, greater than zero.

incr_size_limit
Maximum size of an incremental backup. If this field is provided, then a size limit will be applied to incremental backups. If the total size of the selected staging directory is greater than the limit, then the backup will fail. You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB). Valid examples are "10240", "250 MB" or "1.1 GB". Restrictions: Must be a value as described above, greater than zero.

Subversion Extension

The Subversion Extension is a Cedar Backup extension used to back up Subversion ^[25] version control repositories via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action. Each configured Subversion repository can be backed up using the same collect modes allowed for filesystems in the standard Cedar Backup collect action (weekly, daily, incremental) and the output can be compressed using either gzip or bzip2. There are two different kinds of Subversion repositories at this writing: BDB (Berkeley Database) and FSFS (a "filesystem within a filesystem"). This extension backs up both kinds of repositories in the same way, using svnadmin dump in an incremental mode. It turns out that FSFS repositories can also be backed up just like any other filesystem directory. If you would rather do the backup that way, then use the normal collect action rather than this extension. If you decide to do that, be sure to consult the Subversion documentation and make sure you understand the limitations of this kind of backup.
To enable this extension, add a section to the Cedar Backup configuration file that configures an extension action with name subversion, module CedarBackup3.extend.subversion, function executeAction, and index 99. This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own subversion configuration section. An example subversion configuration section might set a default collect mode of incr and a default compress mode of bzip2, and then list individual repositories (such as /opt/public/svn/docs, /opt/public/svn/web and /opt/private/svn), optionally overriding the compress mode (gzip) or collect mode (daily) per repository.

The following elements are part of the Subversion configuration section:

collect_mode
Default collect mode. The collect mode describes how frequently a Subversion repository is backed up. The Subversion extension recognizes the same collect modes as the standard Cedar Backup collect action (see Chapter 2, Basic Concepts). This value is the collect mode that will be used by default during the backup process. Individual repositories (below) may override this value. If all individual repositories provide their own value, then this default value may be omitted from configuration. Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data. Restrictions: Must be one of daily, weekly or incr.

compress_mode
Default compress mode. Subversion repository backups are just specially-formatted text files, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all. This value is the compress mode that will be used by default during the backup process. Individual repositories (below) may override this value. If all individual repositories provide their own value, then this default value may be omitted from configuration. Restrictions: Must be one of none, gzip or bzip2.

repository
A Subversion repository to be collected. This is a subsection which contains information about a specific Subversion repository to be backed up.
This section can be repeated as many times as is necessary. At least one repository or repository directory must be configured.

The repository subsection contains the following fields:

collect_mode
Collect mode for this repository. This field is optional. If it doesn't exist, the backup will use the default collect mode. Restrictions: Must be one of daily, weekly or incr.

compress_mode
Compress mode for this repository. This field is optional. If it doesn't exist, the backup will use the default compress mode. Restrictions: Must be one of none, gzip or bzip2.

abs_path
Absolute path of the Subversion repository to back up. Restrictions: Must be an absolute path.

repository_dir
A Subversion parent repository directory to be collected. This is a subsection which contains information about a Subversion parent repository directory to be backed up. Any subdirectory immediately within this directory is assumed to be a Subversion repository, and will be backed up. This section can be repeated as many times as is necessary. At least one repository or repository directory must be configured.

The repository_dir subsection contains the following fields:

collect_mode
Collect mode for this repository. This field is optional. If it doesn't exist, the backup will use the default collect mode. Restrictions: Must be one of daily, weekly or incr.

compress_mode
Compress mode for this repository. This field is optional. If it doesn't exist, the backup will use the default compress mode. Restrictions: Must be one of none, gzip or bzip2.

abs_path
Absolute path of the Subversion parent repository directory to back up. Restrictions: Must be an absolute path.

exclude
List of paths or patterns to exclude from the backup. This is a subsection which contains a set of paths and patterns to be excluded within this subversion parent directory. This section is entirely optional, and if it exists can also be empty.
The exclude subsection can contain one or more of each of the following fields:

rel_path
A relative path to be excluded from the backup. The path is assumed to be relative to the subversion parent directory itself. For instance, if the configured subversion parent directory is /opt/svn, a configured relative path of software would exclude the path /opt/svn/software. This field can be repeated as many times as is necessary. Restrictions: Must be non-empty.

pattern
A pattern to be excluded from the backup. The pattern must be a Python regular expression. ^[21] It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $). This field can be repeated as many times as is necessary. Restrictions: Must be non-empty.

MySQL Extension

The MySQL Extension is a Cedar Backup extension used to back up MySQL ^[26] databases via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action.

Note: This extension always produces a full backup. There is currently no facility for making incremental backups. If/when someone has a need for this and can describe how to do it, I will update this extension or provide another.

The backup is done via the mysqldump command included with the MySQL product. Output can be compressed using gzip or bzip2. Administrators can configure the extension either to back up all databases or to back up only specific databases. The extension assumes that all configured databases can be backed up by a single user. Often, the "root" database user will be used. An alternative is to create a separate MySQL "backup" user and grant that user rights to read (but not write) various databases as needed. This second option is probably your best choice.

Warning: The extension accepts a username and password in configuration. However, you probably do not want to list those values in Cedar Backup configuration.
This is because Cedar Backup will provide these values to mysqldump via the command-line --user and --password switches, which will be visible to other users in the process listing. Instead, you should configure the username and password in one of MySQL's configuration files. Typically, that would be done by putting a stanza like this in /root/.my.cnf:

   [mysqldump]
   user     = root
   password =

Of course, if you are executing the backup as a user other than root, then you would create the file in that user's home directory instead. As a side note, it is also possible to configure .my.cnf such that Cedar Backup can back up a remote database server:

   [mysqldump]
   host = remote.host

For this to work, you will also need to grant privileges properly for the user which is executing the backup. See your MySQL documentation for more information about how this can be done. Regardless of whether you are using ~/.my.cnf or /etc/cback3.conf to store database login and password information, you should be careful about who is allowed to view that information. Typically, this means locking down permissions so that only the file owner can read the file contents (i.e. use mode 0600). To enable this extension, add the following section to the Cedar Backup configuration file:

   <extensions>
      <action>
         <name>mysql</name>
         <module>CedarBackup3.extend.mysql</module>
         <function>executeAction</function>
         <index>99</index>
      </action>
   </extensions>

This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own mysql configuration section. This is an example MySQL configuration section:

   <mysql>
      <compress_mode>bzip2</compress_mode>
      <all>Y</all>
   </mysql>

If you have decided to configure login information in Cedar Backup rather than using MySQL configuration, then you would add the username and password fields to configuration:

   <mysql>
      <user>root</user>
      <password>password</password>
      <compress_mode>bzip2</compress_mode>
      <all>Y</all>
   </mysql>

The following elements are part of the MySQL configuration section: user Database user. The database user that the backup should be executed as. Even if you list more than one database (below) all backups must be done as the same user. 
Typically, this would be root (i.e. the database root user, not the system root user). This value is optional. You should probably configure the username and password in MySQL configuration instead, as discussed above. Restrictions: If provided, must be non-empty. password Password associated with the database user. This value is optional. You should probably configure the username and password in MySQL configuration instead, as discussed above. Restrictions: If provided, must be non-empty. compress_mode Compress mode. MySQL database dumps are just specially-formatted text files, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all. Restrictions: Must be one of none, gzip or bzip2. all Indicates whether to back up all databases. If this value is Y, then all MySQL databases will be backed up. If this value is N, then one or more specific databases must be specified (see below). If you choose this option, the entire database backup will go into one big dump file. Restrictions: Must be a boolean (Y or N). database Named database to be backed up. If you choose to specify individual databases rather than all databases, then each database will be backed up into its own dump file. This field can be repeated as many times as is necessary. At least one database must be configured if the all option (above) is set to N. You may not configure any individual databases if the all option is set to Y. Restrictions: Must be non-empty. PostgreSQL Extension Community-contributed Extension This is a community-contributed extension provided by Antoine Beaupre ("The Anarcat"). I have added regression tests around the configuration parsing code and I will maintain this section in the user manual based on his source code documentation. Unfortunately, I don't have any PostgreSQL databases with which to test the functional code. 
While I have code-reviewed the code and it looks both sensible and safe, I have to rely on the author to ensure that it works properly. The PostgreSQL Extension is a Cedar Backup extension used to back up PostgreSQL ^[27] databases via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action. The backup is done via the pg_dump or pg_dumpall commands included with the PostgreSQL product. Output can be compressed using gzip or bzip2. Administrators can configure the extension either to back up all databases or to back up only specific databases. The extension assumes that the current user has passwordless access to the database since there is no easy way to pass a password to the pg_dump client. This can be accomplished using appropriate configuration in the pg_hba.conf file. This extension always produces a full backup. There is currently no facility for making incremental backups. Warning Once you place PostgreSQL configuration into the Cedar Backup configuration file, you should be careful about who is allowed to see that information. This is because PostgreSQL configuration will contain information about available PostgreSQL databases and usernames. Typically, you might want to lock down permissions so that only the file owner can read the file contents (i.e. use mode 0600). To enable this extension, add the following section to the Cedar Backup configuration file:

   <extensions>
      <action>
         <name>postgresql</name>
         <module>CedarBackup3.extend.postgresql</module>
         <function>executeAction</function>
         <index>99</index>
      </action>
   </extensions>

This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own postgresql configuration section. This is an example PostgreSQL configuration section:

   <postgresql>
      <compress_mode>bzip2</compress_mode>
      <user>username</user>
      <all>Y</all>
   </postgresql>

If you decide to back up specific databases, then you would list them individually, like this:

   <postgresql>
      <compress_mode>bzip2</compress_mode>
      <user>username</user>
      <all>N</all>
      <database>db1</database>
      <database>db2</database>
   </postgresql>

The following elements are part of the PostgreSQL configuration section: user Database user. 
The database user that the backup should be executed as. Even if you list more than one database (below) all backups must be done as the same user. This value is optional. Consult your PostgreSQL documentation for information on how to configure a default database user outside of Cedar Backup, and for information on how to specify a database password when you configure a user within Cedar Backup. You will probably want to modify pg_hba.conf. Restrictions: If provided, must be non-empty. compress_mode Compress mode. PostgreSQL database dumps are just specially-formatted text files, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all. Restrictions: Must be one of none, gzip or bzip2. all Indicates whether to back up all databases. If this value is Y, then all PostgreSQL databases will be backed up. If this value is N, then one or more specific databases must be specified (see below). If you choose this option, the entire database backup will go into one big dump file. Restrictions: Must be a boolean (Y or N). database Named database to be backed up. If you choose to specify individual databases rather than all databases, then each database will be backed up into its own dump file. This field can be repeated as many times as is necessary. At least one database must be configured if the all option (above) is set to N. You may not configure any individual databases if the all option is set to Y. Restrictions: Must be non-empty. Mbox Extension The Mbox Extension is a Cedar Backup extension used to incrementally back up UNIX-style "mbox" mail folders via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action. Mbox mail folders are not well-suited to being backed up by the normal Cedar Backup incremental backup process. This is because active folders are typically appended to on a daily basis. 
This forces the incremental backup process to back them up every day in order to avoid losing data. This can result in quite a bit of wasted space when backing up large mail folders. What the mbox extension does is leverage the grepmail utility to back up only email messages which have been received since the last incremental backup. This way, even if a folder is added to every day, only the recently-added messages are backed up. This can potentially save a lot of space. Each configured mbox file or directory can be backed up using the same collect modes allowed for filesystems in the standard Cedar Backup collect action (weekly, daily, incremental) and the output can be compressed using either gzip or bzip2. To enable this extension, add the following section to the Cedar Backup configuration file:

   <extensions>
      <action>
         <name>mbox</name>
         <module>CedarBackup3.extend.mbox</module>
         <function>executeAction</function>
         <index>99</index>
      </action>
   </extensions>

This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own mbox configuration section. This is an example mbox configuration section:

   <mbox>
      <collect_mode>incr</collect_mode>
      <compress_mode>gzip</compress_mode>
      <file>
         <abs_path>/home/user1/mail/greylist</abs_path>
         <collect_mode>daily</collect_mode>
      </file>
      <dir>
         <abs_path>/home/user2/mail</abs_path>
      </dir>
      <dir>
         <abs_path>/home/user3/mail</abs_path>
         <exclude>
            <rel_path>spam</rel_path>
            <pattern>.*debian.*</pattern>
         </exclude>
      </dir>
   </mbox>

Configuration is much like the standard collect action. Differences come from the fact that mbox directories are not collected recursively. Unlike collect configuration, exclusion information can only be configured at the mbox directory level (there are no global exclusions). Another difference is that no absolute exclusion paths are allowed; only relative path exclusions and patterns are allowed. The following elements are part of the mbox configuration section: collect_mode Default collect mode. The collect mode describes how frequently an mbox file or directory is backed up. The mbox extension recognizes the same collect modes as the standard Cedar Backup collect action (see Chapter2, Basic Concepts). This value is the collect mode that will be used by default during the backup process. 
Individual files or directories (below) may override this value. If all individual files or directories provide their own value, then this default value may be omitted from configuration. Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data. Restrictions: Must be one of daily, weekly or incr. compress_mode Default compress mode. Mbox file or directory backups are just text, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all. This value is the compress mode that will be used by default during the backup process. Individual files or directories (below) may override this value. If all individual files or directories provide their own value, then this default value may be omitted from configuration. Restrictions: Must be one of none, gzip or bzip2. file An individual mbox file to be collected. This is a subsection which contains information about an individual mbox file to be backed up. This section can be repeated as many times as is necessary. At least one mbox file or directory must be configured. The file subsection contains the following fields: collect_mode Collect mode for this file. This field is optional. If it doesn't exist, the backup will use the default collect mode. Restrictions: Must be one of daily, weekly or incr. compress_mode Compress mode for this file. This field is optional. If it doesn't exist, the backup will use the default compress mode. Restrictions: Must be one of none, gzip or bzip2. abs_path Absolute path of the mbox file to back up. Restrictions: Must be an absolute path. dir An mbox directory to be collected. This is a subsection which contains information about an mbox directory to be backed up. An mbox directory is a directory containing mbox files. Every file in an mbox directory is assumed to be an mbox file. Mbox directories are not collected recursively. 
Only the files immediately within the configured directory will be backed up, and any subdirectories will be ignored. This section can be repeated as many times as is necessary. At least one mbox file or directory must be configured. The dir subsection contains the following fields: collect_mode Collect mode for this directory. This field is optional. If it doesn't exist, the backup will use the default collect mode. Restrictions: Must be one of daily, weekly or incr. compress_mode Compress mode for this directory. This field is optional. If it doesn't exist, the backup will use the default compress mode. Restrictions: Must be one of none, gzip or bzip2. abs_path Absolute path of the mbox directory to back up. Restrictions: Must be an absolute path. exclude List of paths or patterns to exclude from the backup. This is a subsection which contains a set of paths and patterns to be excluded within this mbox directory. This section is entirely optional, and if it exists can also be empty. The exclude subsection can contain one or more of each of the following fields: rel_path A relative path to be excluded from the backup. The path is assumed to be relative to the mbox directory itself. For instance, if the configured mbox directory is /home/user2/mail, a configured relative path of SPAM would exclude the path /home/user2/mail/SPAM. This field can be repeated as many times as is necessary. Restrictions: Must be non-empty. pattern A pattern to be excluded from the backup. The pattern must be a Python regular expression. ^[21] It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $). This field can be repeated as many times as is necessary. Restrictions: Must be non-empty. Encrypt Extension The Encrypt Extension is a Cedar Backup extension used to encrypt backups. It does this by encrypting the contents of a master's staging directory each day after the stage action is run. 
This way, backed-up data is encrypted both when sitting on the master and when written to disc. This extension must be run before the standard store action, otherwise unencrypted data will be written to disc. There are several different ways encryption could have been built in to or layered on to Cedar Backup. I asked the mailing list for opinions on the subject in January 2007 and did not get a lot of feedback, so I chose the option that was simplest to understand and simplest to implement. If other encryption use cases make themselves known in the future, this extension can be enhanced or replaced. Currently, this extension supports only GPG. However, it would be straightforward to support other public-key encryption mechanisms, such as OpenSSL. Warning If you decide to encrypt your backups, be absolutely sure that you have your GPG secret key saved off someplace safe, someplace other than on your backup disc. If you lose your secret key, your backup will be useless. I suggest that before you rely on this extension, you should execute a dry run and make sure you can successfully decrypt the backup that is written to disc. Before configuring the Encrypt extension, you must configure GPG. Either create a new keypair or use an existing one. Determine which user will execute your backup (typically root) and have that user import and lsign the public half of the keypair. Then, save off the secret half of the keypair someplace safe, apart from your backup (e.g. on a floppy disk or USB drive). Make sure you know the recipient name associated with the public key because you'll need it to configure Cedar Backup. (If you can run gpg -e -r "Recipient Name" file.txt and it executes cleanly with no user interaction required, you should be OK.) An encrypted backup has the same file structure as a normal backup, so all of the instructions in AppendixC, Data Recovery apply. 
The only difference is that encrypted files will have an additional .gpg extension (so for instance file.tar.gz becomes file.tar.gz.gpg). To recover decrypted data, simply log on as a user which has access to the secret key and decrypt the .gpg file that you are interested in. Then, recover the data as usual. Note: I am being intentionally vague about how to configure and use GPG, because I do not want to encourage neophytes to blindly use this extension. If you do not already understand GPG well enough to follow the two paragraphs above, do not use this extension. Instead, before encrypting your backups, check out the excellent GNU Privacy Handbook at http://www.gnupg.org/gph/en/manual.html and gain an understanding of how encryption can help you or hurt you. To enable this extension, add the following section to the Cedar Backup configuration file:

   <extensions>
      <action>
         <name>encrypt</name>
         <module>CedarBackup3.extend.encrypt</module>
         <function>executeAction</function>
         <index>301</index>
      </action>
   </extensions>

This extension relies on the options and staging configuration sections in the standard Cedar Backup configuration file, and then also requires its own encrypt configuration section. This is an example Encrypt configuration section:

   <encrypt>
      <encrypt_mode>gpg</encrypt_mode>
      <encrypt_target>Backup User</encrypt_target>
   </encrypt>

The following elements are part of the Encrypt configuration section: encrypt_mode Encryption mode. This value specifies which encryption mechanism will be used by the extension. Currently, only the GPG public-key encryption mechanism is supported. Restrictions: Must be gpg. encrypt_target Encryption target. The value in this field is dependent on the encryption mode. For the gpg mode, this is the name of the recipient whose public key will be used to encrypt the backup data, i.e. the value accepted by gpg -r. Split Extension The Split Extension is a Cedar Backup extension used to split up large files within staging directories. It is probably only useful in combination with the cback3-span command, which requires individual files within staging directories to each be smaller than a single disc. 
You would normally run this action immediately after the standard stage action, but you could also choose to run it by hand immediately before running cback3-span. The split extension uses the standard UNIX split tool to split the large files up. This tool simply splits the files at arbitrary byte boundaries. It has no knowledge of file formats. Note: this means that in order to recover the data in your original large file, you must have every file that the original file was split into. Think carefully about whether this is what you want. It doesn't sound like a huge limitation. However, cback3-span might put an individual file on any disc in a set; the files split from one larger file will not necessarily be together. That means you will probably need every disc in your backup set in order to recover any data from the backup set. To enable this extension, add the following section to the Cedar Backup configuration file:

   <extensions>
      <action>
         <name>split</name>
         <module>CedarBackup3.extend.split</module>
         <function>executeAction</function>
         <index>299</index>
      </action>
   </extensions>

This extension relies on the options and staging configuration sections in the standard Cedar Backup configuration file, and then also requires its own split configuration section. This is an example Split configuration section:

   <split>
      <size_limit>250 MB</size_limit>
      <split_size>100 MB</split_size>
   </split>

The following elements are part of the Split configuration section: size_limit Size limit. Files with a size strictly larger than this limit will be split by the extension. You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB). Valid examples are "10240", "250 MB" or "1.1 GB". Restrictions: Must be a size as described above. split_size Split size. This is the size of the chunks that a large file will be split into. The final chunk may be smaller if the split size doesn't divide evenly into the file size. You can enter this value in two different forms. 
It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB). Valid examples are "10240", "250 MB" or "1.1 GB". Restrictions: Must be a size as described above. Capacity Extension The capacity extension checks the current capacity of the media in the writer and prints a warning if the media exceeds an indicated capacity. The capacity is indicated either by a maximum percentage utilized or by a minimum number of bytes that must remain unused. This action can be run at any time, but is probably best run as the last action on any given day, so you get as much notice as possible that your media is full and needs to be replaced. To enable this extension, add the following section to the Cedar Backup configuration file:

   <extensions>
      <action>
         <name>capacity</name>
         <module>CedarBackup3.extend.capacity</module>
         <function>executeAction</function>
         <index>299</index>
      </action>
   </extensions>

This extension relies on the options and store configuration sections in the standard Cedar Backup configuration file, and then also requires its own capacity configuration section. This is an example Capacity configuration section that configures the extension to warn if the media is more than 95.5% full:

   <capacity>
      <max_percentage>95.5</max_percentage>
   </capacity>

This example configures the extension to warn if the media has fewer than 16 MB free:

   <capacity>
      <min_bytes>16 MB</min_bytes>
   </capacity>

The following elements are part of the Capacity configuration section: max_percentage Maximum percentage of the media that may be utilized. You must provide either this value or the min_bytes value. Restrictions: Must be a floating point number between 0.0 and 100.0. min_bytes Minimum number of free bytes that must be available. You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB). Valid examples are "10240", "250 MB" or "1.1 GB". You must provide either this value or the max_percentage value. Restrictions: Must be a byte quantity as described above. 
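As a concrete illustration of the Split extension described earlier, the behavior of the underlying UNIX split tool can be demonstrated at a tiny scale. This sketch assumes GNU split; the file names and sizes are invented for the example (the real extension runs split against large staged files for you):

```shell
# Make a 250-byte stand-in for a large staged file.
head -c 250 /dev/zero > staged.dat

# Split it into 100-byte chunks, analogous to a 100 MB split_size;
# the final chunk holds the 50-byte remainder.
split --bytes=100 --numeric-suffixes staged.dat staged.dat_

wc -c staged.dat_*   # three chunks: 100, 100 and 50 bytes

# Recovery requires every chunk: concatenating them in order
# restores the original file exactly.
cat staged.dat_00 staged.dat_01 staged.dat_02 > restored.dat
cmp -s staged.dat restored.dat && echo "restored OK"
```

This is why you will probably need every disc in a cback3-span set to recover a split file: each chunk is an opaque slice of bytes, useless on its own.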
-------------------------------------------------------------------------------
^[25] See http://subversion.org
^[26] See http://www.mysql.com
^[27] See http://www.postgresql.org/
Appendix A. Extension Architecture Interface The Cedar Backup Extension Architecture Interface is the application programming interface used by third-party developers to write Cedar Backup extensions. This appendix briefly specifies the interface in enough detail for someone to successfully implement an extension. You will recall that Cedar Backup extensions are third-party pieces of code which extend Cedar Backup's functionality. Extensions can be invoked from the Cedar Backup command line and are allowed to place their configuration in Cedar Backup's configuration file. There is a one-to-one mapping between a command-line extended action and an extension function. The mapping is configured in the Cedar Backup configuration file using a section something like this:

   <extensions>
      <action>
         <name>database</name>
         <module>foo</module>
         <function>bar</function>
         <index>101</index>
      </action>
   </extensions>

In this case, the action "database" has been mapped to the extension function foo.bar(). Extension functions can take any actions they would like to once they have been invoked, but must abide by these rules: 1. Extensions may not write to stdout or stderr using functions such as print or sys.stdout.write. 2. All logging must take place using the Python logging facility. Flow-of-control logging should happen on the CedarBackup3.log topic. Authors can assume that ERROR will always go to the terminal, that INFO and WARN will always be logged, and that DEBUG will be ignored unless debugging is enabled. 3. Any time an extension invokes a command-line utility, it must be done through the CedarBackup3.util.executeCommand function. This will help keep Cedar Backup safer from format-string attacks, and will make it easier to consistently log command-line process output. 4. Extensions may not return any value. 5. Extensions must throw a Python exception containing a descriptive message if processing fails. 
Extension authors can use their judgement as to what constitutes failure; however, any problems during execution should result in either a thrown exception or a logged message. 6. Extensions may rely only on Cedar Backup functionality that is advertised as being part of the public interface. This means that extensions cannot directly make use of methods, functions or values starting with the _ character. Furthermore, extensions should only rely on parts of the public interface that are documented in the online Epydoc documentation. 7. Extension authors are encouraged to extend the Cedar Backup public interface through normal methods of inheritance. However, no extension is allowed to directly change Cedar Backup code in a way that would affect how Cedar Backup itself executes when the extension has not been invoked. For instance, extensions would not be allowed to add new command-line options or new writer types. 8. Extensions must be written to assume an empty locale set (no $LC_* settings) and $LANG=C. For the typical open-source software project, this would imply writing output-parsing code against the English localization (if any). The executeCommand function does sanitize the environment to enforce this configuration. Extension functions take three arguments: the path to configuration on disk, a CedarBackup3.cli.Options object representing the command-line options in effect, and a CedarBackup3.config.Config object representing parsed standard configuration:

   def function(configPath, options, config):
      """Sample extension function."""
      pass

This interface is structured so that simple extensions can use standard configuration without having to parse it for themselves, but more complicated extensions can get at the configuration file on disk and parse it again as needed. The interface to the CedarBackup3.cli.Options and CedarBackup3.config.Config classes has been thoroughly documented using Epydoc, and the documentation is available on the Cedar Backup website. 
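Putting the rules and the three-argument signature above together, a minimal extension module might look like the sketch below. The action name, logger topic suffix, and repository list are invented for illustration; a real extension would derive its repository list from parsed configuration:

```python
import logging

# Rule 2: all flow-of-control logging goes through the Python logging
# facility, on a topic underneath CedarBackup3.log.
logger = logging.getLogger("CedarBackup3.log.extend.database")

def executeAction(configPath, options, config):
    """Hypothetical extended action that backs up some repositories."""
    logger.info("Executing the database extended action.")
    repositories = ["/path/to/repo1", "/path/to/repo2"]  # illustrative only
    if not repositories:
        # Rule 5: failures are reported by raising a descriptive exception.
        raise ValueError("No repositories configured for the database action.")
    for repository in repositories:
        # Rule 1: no print or direct stdout writes; progress is logged instead.
        logger.debug("Would back up repository [%s].", repository)
    # Rule 4: extension functions return no value.
```

Note that any real command-line invocation inside such a function must go through CedarBackup3.util.executeCommand, per rule 3.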
The interface is guaranteed to change only in backwards-compatible ways unless the Cedar Backup major version number is bumped (i.e. from 2 to 3). If an extension needs to add its own configuration information to the Cedar Backup configuration file, this extra configuration must be added in a new configuration section using a name that does not conflict with standard configuration or other known extensions. For instance, our hypothetical database extension might require configuration indicating the path to some repositories to back up. This information might go into a section something like this:

   <database>
      <repository>/path/to/repo1</repository>
      <repository>/path/to/repo2</repository>
   </database>

In order to read this new configuration, the extension code can either inherit from the Config object and create a subclass that knows how to parse the new database config section, or can write its own code to parse whatever it needs out of the file. Either way, the resulting code is completely independent of the standard Cedar Backup functionality. Appendix B. Dependencies Python 3.4 (or later)

+-------------------------------------------------------------------+
| Source | URL                                                      |
|--------+----------------------------------------------------------|
|upstream|http://www.python.org                                     |
|--------+----------------------------------------------------------|
|Debian  |http://packages.debian.org/stable/python/python3.4        |
|--------+----------------------------------------------------------|
|RPM     |http://rpmfind.net/linux/rpm2html/search.php?query=python3|
+-------------------------------------------------------------------+

If you can't find a package for your system, install from the package source, using the "upstream" link. RSH Server and Client Although Cedar Backup will technically work with any RSH-compatible server and client pair (such as the classic "rsh" client), most users should only use an SSH (secure shell) server and client. The de facto standard today is OpenSSH. 
Some systems package the server and the client together, and others package the server and the client separately. Note that master nodes need an SSH client, and client nodes need to run an SSH server.

+-------------------------------------------------------------------+
| Source | URL                                                      |
|--------+----------------------------------------------------------|
|upstream|http://www.openssh.com/                                   |
|--------+----------------------------------------------------------|
|Debian  |http://packages.debian.org/stable/net/ssh                 |
|--------+----------------------------------------------------------|
|RPM     |http://rpmfind.net/linux/rpm2html/search.php?query=openssh|
+-------------------------------------------------------------------+

If you can't find SSH client or server packages for your system, install from the package source, using the "upstream" link. mkisofs The mkisofs command is used to create ISO filesystem images that can later be written to backup media. On Debian platforms, mkisofs is not distributed and genisoimage is used instead. The Debian package takes care of this for you.

+-------------------------------------------------------------------+
| Source | URL                                                      |
|--------+----------------------------------------------------------|
|upstream|https://en.wikipedia.org/wiki/Cdrtools                    |
|--------+----------------------------------------------------------|
|RPM     |http://rpmfind.net/linux/rpm2html/search.php?query=mkisofs|
+-------------------------------------------------------------------+

If you can't find a package for your system, install from the package source, using the "upstream" link. cdrecord The cdrecord command is used to write ISO images to CD media in a backup device. On Debian platforms, cdrecord is not distributed and wodim is used instead. The Debian package takes care of this for you. 
+--------------------------------------------------------------------+
| Source | URL                                                       |
|--------+-----------------------------------------------------------|
|upstream|https://en.wikipedia.org/wiki/Cdrtools                     |
|--------+-----------------------------------------------------------|
|RPM     |http://rpmfind.net/linux/rpm2html/search.php?query=cdrecord|
+--------------------------------------------------------------------+

If you can't find a package for your system, install from the package source, using the "upstream" link. dvd+rw-tools The dvd+rw-tools package provides the growisofs utility, which is used to write ISO images to DVD media in a backup device.

+------------------------------------------------------------------------+
| Source | URL                                                           |
|--------+---------------------------------------------------------------|
|upstream|http://fy.chalmers.se/~appro/linux/DVD+RW/                     |
|--------+---------------------------------------------------------------|
|Debian  |http://packages.debian.org/stable/utils/dvd+rw-tools           |
|--------+---------------------------------------------------------------|
|RPM     |http://rpmfind.net/linux/rpm2html/search.php?query=dvd+rw-tools|
+------------------------------------------------------------------------+

If you can't find a package for your system, install from the package source, using the "upstream" link. eject and volname The eject command is used to open and close the tray on a backup device (if the backup device has a tray). Sometimes, the tray must be opened and closed in order to "reset" the device so it notices recent changes to a disc. The volname command is used to determine the volume name of media in a backup device. 
+-----------------------------------------------------------------+
| Source | URL                                                    |
|--------+--------------------------------------------------------|
|upstream|http://sourceforge.net/projects/eject                   |
|--------+--------------------------------------------------------|
|Debian  |http://packages.debian.org/stable/utils/eject           |
|--------+--------------------------------------------------------|
|RPM     |http://rpmfind.net/linux/rpm2html/search.php?query=eject|
+-----------------------------------------------------------------+

If you can't find a package for your system, install from the package source, using the "upstream" link. mount and umount The mount and umount commands are used to mount and unmount CD/DVD media after it has been written, in order to run a consistency check.

+-----------------------------------------------------------------+
| Source | URL                                                    |
|--------+--------------------------------------------------------|
|upstream|https://www.kernel.org/pub/linux/utils/util-linux/      |
|--------+--------------------------------------------------------|
|Debian  |http://packages.debian.org/stable/base/mount            |
|--------+--------------------------------------------------------|
|RPM     |http://rpmfind.net/linux/rpm2html/search.php?query=mount|
+-----------------------------------------------------------------+

If you can't find a package for your system, install from the package source, using the "upstream" link. grepmail The grepmail command is used by the mbox extension to pull out only recent messages from mbox mail folders. 
+--------------------------------------------------------------------+
| Source |                            URL                            |
|--------+-----------------------------------------------------------|
|upstream|http://sourceforge.net/projects/grepmail/                  |
|--------+-----------------------------------------------------------|
|Debian  |http://packages.debian.org/stable/mail/grepmail            |
|--------+-----------------------------------------------------------|
|RPM     |http://rpmfind.net/linux/rpm2html/search.php?query=grepmail|
+--------------------------------------------------------------------+

If you can't find a package for your system, install from the package source,
using the "upstream" link.

gpg

The gpg command is used by the encrypt extension to encrypt files.

+-----------------------------------------------------------------+
| Source |                           URL                          |
|--------+--------------------------------------------------------|
|upstream|https://www.gnupg.org/                                  |
|--------+--------------------------------------------------------|
|Debian  |http://packages.debian.org/stable/utils/gnupg           |
|--------+--------------------------------------------------------|
|RPM     |http://rpmfind.net/linux/rpm2html/search.php?query=gnupg|
+-----------------------------------------------------------------+

If you can't find a package for your system, install from the package source,
using the "upstream" link.

split

The split command is used by the split extension to split up large files.
This command is typically part of the core operating system install and is
not distributed in a separate package.

AWS CLI

AWS CLI is Amazon's official command-line tool for interacting with the
Amazon Web Services infrastructure. Cedar Backup uses AWS CLI to copy backup
data up to Amazon S3 cloud storage. After you install AWS CLI, you need to
configure your connection to AWS with an appropriate access id and access
key. Amazon provides a good setup guide.
+--------------------------------------------------+
| Source |                   URL                   |
|--------+-----------------------------------------|
|upstream|http://aws.amazon.com/documentation/cli/ |
|--------+-----------------------------------------|
|Debian  |https://packages.debian.org/stable/awscli|
+--------------------------------------------------+

The initial implementation of the amazons3 extension was written using AWS
CLI 1.4. As of this writing, not all Linux distributions include a package
for this version. On these platforms, the easiest way to install it is via
pip: apt-get install python3-pip, and then pip3 install awscli. The Debian
package includes an appropriate dependency starting with the jessie release.

Chardet

The cback3-amazons3-sync command relies on the Chardet Python package to
check filename encoding. You only need this package if you are going to use
the sync tool.

+-----------------------------------------------------------+
| Source |                       URL                        |
|--------+--------------------------------------------------|
|upstream|https://github.com/chardet/chardet                |
|--------+--------------------------------------------------|
|Debian  |https://packages.debian.org/stable/python3-chardet|
+-----------------------------------------------------------+

Appendix C. Data Recovery

Table of Contents

Finding your Data
Recovering Filesystem Data
    Full Restore
    Partial Restore
Recovering MySQL Data
Recovering Subversion Data
Recovering Mailbox Data
Recovering Data split by the Split Extension

Finding your Data

The first step in data recovery is finding the data that you want to
recover. You need to decide whether you are going to restore off backup
media, or out of some existing staging data that has not yet been purged.
The only difference is that if you purge staging data less frequently than
once per week, you might have some data available in the staging directories
which would not be found on your backup media, depending on how you rotate
your media.
(And of course, if your system is trashed or stolen, you probably will not
have access to your old staging data in any case.)

Regardless of the data source you choose, you will find the data organized
in the same way. The remainder of these examples will work off an example
backup disc, but the contents of the staging directory will look pretty much
like the contents of the disc, with data organized first by date and then by
backup peer name.

This is the root directory of my example disc:

root:/mnt/cdrw# ls -l
total 4
drwxr-x---  3 backup backup 4096 Sep 01 06:30 2005/

In this root directory is one subdirectory for each year represented in the
backup. In this example, the backup represents data entirely from the year
2005. If your configured backup week happens to span a year boundary, there
would be two subdirectories here (for example, one for 2005 and one for
2006).

Within each year directory is one subdirectory for each month represented in
the backup.

root:/mnt/cdrw/2005# ls -l
total 2
dr-xr-xr-x  6 root root 2048 Sep 11 05:30 09/

In this example, the backup represents data entirely from the month of
September, 2005. If your configured backup week happens to span a month
boundary, there would be two subdirectories here (for example, one for
August 2005 and one for September 2005).

Within each month directory is one subdirectory for each day represented in
the backup.

root:/mnt/cdrw/2005/09# ls -l
total 8
dr-xr-xr-x  5 root root 2048 Sep  7 05:30 07/
dr-xr-xr-x  5 root root 2048 Sep  8 05:30 08/
dr-xr-xr-x  5 root root 2048 Sep  9 05:30 09/
dr-xr-xr-x  5 root root 2048 Sep 11 05:30 11/

Depending on how far into the week your backup media was written, you might
have as few as one daily directory in here, or as many as seven.
Within each daily directory is a stage indicator (indicating when the
directory was staged) and one directory for each peer configured in the
backup:

root:/mnt/cdrw/2005/09/07# ls -l
total 10
dr-xr-xr-x  2 root root 2048 Sep  7 02:31 host1/
-r--r--r--  1 root root    0 Sep  7 03:27 cback.stage
dr-xr-xr-x  2 root root 4096 Sep  7 02:30 host2/
dr-xr-xr-x  2 root root 4096 Sep  7 03:23 host3/

In this case, you can see that my backup includes three machines, and that
the backup data was staged on September 7, 2005 at 03:27.

Within the directory for a given host are all of the files collected on that
host. This might just include tarfiles from a normal Cedar Backup collect
run, and might also include files "collected" from Cedar Backup extensions
or by other third-party processes on your system.

root:/mnt/cdrw/2005/09/07/host1# ls -l
total 157976
-r--r--r--  1 root root 11206159 Sep  7 02:30 boot.tar.bz2
-r--r--r--  1 root root        0 Sep  7 02:30 cback.collect
-r--r--r--  1 root root     3199 Sep  7 02:30 dpkg-selections.txt.bz2
-r--r--r--  1 root root   908325 Sep  7 02:30 etc.tar.bz2
-r--r--r--  1 root root      389 Sep  7 02:30 fdisk-l.txt.bz2
-r--r--r--  1 root root  1003100 Sep  7 02:30 ls-laR.txt.bz2
-r--r--r--  1 root root    19800 Sep  7 02:30 mysqldump.txt.bz2
-r--r--r--  1 root root  4133372 Sep  7 02:30 opt-local.tar.bz2
-r--r--r--  1 root root 44794124 Sep  8 23:34 opt-public.tar.bz2
-r--r--r--  1 root root 30028057 Sep  7 02:30 root.tar.bz2
-r--r--r--  1 root root  4747070 Sep  7 02:30 svndump-0:782-opt-svn-repo1.txt.bz2
-r--r--r--  1 root root   603863 Sep  7 02:30 svndump-0:136-opt-svn-repo2.txt.bz2
-r--r--r--  1 root root   113484 Sep  7 02:30 var-lib-jspwiki.tar.bz2
-r--r--r--  1 root root 19556660 Sep  7 02:30 var-log.tar.bz2
-r--r--r--  1 root root 14753855 Sep  7 02:30 var-mail.tar.bz2

As you can see, I back up a variety of different things on host1. I run the
normal collect action, as well as the sysinfo, mysql and subversion
extensions.
The resulting backup files are named in a way that makes it easy to
determine what they represent.

Files of the form *.tar.bz2 represent directories backed up by the collect
action. The first part of the name (before ".tar.bz2") represents the path
to the directory. For example, boot.tar.bz2 contains data from /boot, and
var-lib-jspwiki.tar.bz2 contains data from /var/lib/jspwiki.

The fdisk-l.txt.bz2, ls-laR.txt.bz2 and dpkg-selections.txt.bz2 files are
produced by the sysinfo extension.

The mysqldump.txt.bz2 file is produced by the mysql extension. It represents
a system-wide database dump, because I use the "all" flag in configuration.
If I were to configure Cedar Backup to dump individual databases, then the
filename would contain the database name (something like
mysqldump-bugs.txt.bz2).

Finally, the files of the form svndump-*.txt.bz2 are produced by the
subversion extension. There is one dump file for each configured repository,
and the dump file name represents the name of the repository and the
revisions in that dump. So, the file svndump-0:782-opt-svn-repo1.txt.bz2
represents revisions 0-782 of the repository at /opt/svn/repo1. You can tell
that this file contains a full backup of the repository to this point,
because the starting revision is zero. Later incremental backups would have
a non-zero starting revision, e.g. perhaps 783-785, followed by 786-800,
etc.

Recovering Filesystem Data

Filesystem data is gathered by the standard Cedar Backup collect action.
This data is placed into files of the form *.tar. The first part of the name
(before ".tar") represents the path to the directory. For example, boot.tar
would contain data from /boot, and var-lib-jspwiki.tar would contain data
from /var/lib/jspwiki. (As a special case, data from the root directory
would be placed in -.tar.) Remember that your tarfile might have a bzip2
(.bz2) or gzip (.gz) extension, depending on what compression you specified
in configuration.
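The archive-name-to-path mapping described above can be sketched as a small
shell helper. This is only an illustration: archive_to_path is a
hypothetical name, and the mapping is ambiguous if a backed-up directory
name itself contains a hyphen.

```shell
# archive_to_path: recover the original directory from a collect archive
# name. Hyphens in the name stand in for slashes, and a bare "-" means the
# root directory. (Hypothetical helper; ambiguous if a real directory name
# contains a hyphen.)
archive_to_path() {
    name=${1%%.tar*}                 # strip .tar, .tar.gz, or .tar.bz2
    if [ "$name" = "-" ]; then
        echo "/"
    else
        echo "/$name" | tr '-' '/'
    fi
}

archive_to_path var-lib-jspwiki.tar.bz2   # prints /var/lib/jspwiki
```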
If you are using full backups every day, the latest backup data is always
within the latest daily directory stored on your backup media or within your
staging directory. If you have some or all of your directories configured to
do incremental backups, then the first day of the week holds the full
backups and the other days represent incremental differences relative to
that first day of the week.

Where to extract your backup

If you are restoring a home directory or some other non-system directory as
part of a full restore, it is probably fine to extract the backup directly
into the filesystem. If you are restoring a system directory like /etc as
part of a full restore, extracting directly into the filesystem is likely to
break things, especially if you re-installed a newer version of your
operating system than the one you originally backed up. It's better to
extract directories like this to a temporary location and pick out only the
files you find you need.

When doing a partial restore, I suggest always extracting to a temporary
location. Doing it this way gives you more control over what you restore,
and helps you avoid compounding your original problem with another one (like
overwriting the wrong file, oops).

Full Restore

To do a full system restore, find the newest applicable full backup and
extract it. If you have some incremental backups, extract them into the same
place as the full backup, one by one starting from oldest to newest. (This
way, if a file changed every day you will always get the latest one.)

All of the backed-up files are stored in the tar file in a relative fashion,
so you can extract from the tar file either directly into the filesystem, or
into a temporary location.

For example, to restore boot.tar.bz2 directly into /boot, execute tar from
your root directory (/):

root:/# bzcat boot.tar.bz2 | tar xvf -

Of course, use zcat or just cat, depending on what kind of compression is in
use.
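The full-plus-incrementals procedure can be sketched as a small shell
helper. This is only a sketch: restore_layers is a hypothetical name, the
example assumes gzip compression, and the caller is responsible for passing
the archives oldest-first.

```shell
# restore_layers: extract a full backup, then each incremental on top of it.
# Later archives overwrite older copies of the same file, so passing them
# oldest-first leaves the newest version of every file in place.
# (Hypothetical helper; assumes gzip-compressed archives.)
restore_layers() {
    dest=$1; shift
    mkdir -p "$dest"
    for archive in "$@"; do              # caller passes archives oldest-first
        zcat "$archive" | (cd "$dest" && tar xf -)
    done
}

# Example (hypothetical filenames), restoring into a temporary location:
#   restore_layers /tmp/restore boot-full.tar.gz boot-incr1.tar.gz boot-incr2.tar.gz
```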
If you want to extract boot.tar.bz2 into a temporary location like /tmp/boot
instead, just change directories first. In this case, you'd execute the tar
command from within /tmp instead of /.

root:/tmp# bzcat boot.tar.bz2 | tar xvf -

Again, use zcat or just cat as appropriate.

For more information, you might want to check out the manpage or GNU info
documentation for the tar command.

Partial Restore

Most users will need to do a partial restore much more frequently than a
full restore. Perhaps you accidentally removed your home directory, or
forgot to check in some version of a file before deleting it. Or, perhaps
the person who packaged Apache for your system blew away your web server
configuration on upgrade (it happens). The solution to these and other kinds
of problems is a partial restore (assuming you've backed up the proper
things).

The procedure is similar to a full restore. The specific steps depend on how
much information you have about the file you are looking for. Whereas with a
full restore you can confidently extract the full backup followed by each of
the incremental backups, this might not be what you want when doing a
partial restore. You may need to take more care in finding the right version
of a file, since the same file, if changed frequently, would appear in more
than one backup.

Start by finding the backup media that contains the file you are looking
for. If you rotate your backup media, and your last known "contact" with the
file was a while ago, you may need to look on older media to find it. This
may take some effort if you are not sure when the change you are trying to
correct took place.

Once you have decided to look at a particular piece of backup media, find
the correct peer (host), and look for the file in the full backup:

root:/tmp# bzcat boot.tar.bz2 | tar tvf - path/to/file

Of course, use zcat or just cat, depending on what kind of compression is in
use.
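When the same file may appear in several archives, it can help to scan each
archive's table of contents in one pass. A minimal sketch, assuming
gzip-compressed archives (find_in_archives is a hypothetical helper name):

```shell
# find_in_archives: list which archives contain a given (relative) path.
# Scans each archive's table of contents with 'tar t' rather than
# extracting anything. (Hypothetical helper; assumes gzip compression.)
find_in_archives() {
    path=$1; shift
    for archive in "$@"; do
        if tar tzf "$archive" "$path" >/dev/null 2>&1; then
            echo "$archive"          # this archive holds a matching entry
        fi
    done
}
```

You would then inspect the candidates it prints, one at a time, to find the
exact version you want.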
The tvf tells tar to search for the file in question and just list the
results rather than extracting the file. Note that the filename is relative
(with no starting /). Alternately, you can omit the path/to/file and search
through the output using more or less.

If you haven't found what you are looking for, work your way through the
incremental files for the directory in question. One of them may also have
the file if it changed during the course of the backup. Or, move to older or
newer media and see if you can find the file there.

Once you have found your file, extract it using xvf:

root:/tmp# bzcat boot.tar.bz2 | tar xvf - path/to/file

Again, use zcat or just cat as appropriate.

Inspect the file and make sure it's what you're looking for. Again, you may
need to move to older or newer media to find the exact version of your file.

For more information, you might want to check out the manpage or GNU info
documentation for the tar command.

Recovering MySQL Data

MySQL data is gathered by the Cedar Backup mysql extension. This extension
always creates a full backup each time it runs. This wastes some space, but
makes it easy to restore database data. The following procedure describes
how to restore your MySQL database from the backup.

Warning

I am not a MySQL expert. I am providing this information for reference. I
have tested these procedures on my own MySQL installation; however, I only
have a single database for use by Bugzilla, and I may have misunderstood
something with regard to restoring individual databases as a user other than
root. If you have any doubts, test the procedure below before relying on it!
MySQL experts and/or knowledgeable Cedar Backup users: feel free to write me
and correct any part of this procedure.

First, find the backup you are interested in. If you have specified "all
databases" in configuration, you will have a single backup file, called
mysqldump.txt.
If you have specified individual databases in configuration, then you will
have files with names like mysqldump-database.txt instead. In either case,
your file might have a .gz or .bz2 extension depending on what kind of
compression you specified in configuration.

If you are restoring an "all databases" backup, make sure that you have
correctly created the root user and know its password. Then, execute:

daystrom:/# bzcat mysqldump.txt.bz2 | mysql -p -u root

Of course, use zcat or just cat, depending on what kind of compression is in
use.

Because the database backup includes CREATE DATABASE SQL statements, this
command should take care of creating all of the databases within the backup,
as well as populating them.

If you are restoring a backup for a specific database, you have two choices.
If you have a root login, you can use the same command as above:

daystrom:/# bzcat mysqldump-database.txt.bz2 | mysql -p -u root

Otherwise, you can create the database and its login first (or have someone
create it) and then use a database-specific login to execute the restore:

daystrom:/# bzcat mysqldump-database.txt.bz2 | mysql -p -u user database

Again, use zcat or just cat as appropriate.

For more information on using MySQL, see the documentation on the MySQL web
site, http://mysql.org/, or the manpages for the mysql and mysqldump
commands.

Recovering Subversion Data

Subversion data is gathered by the Cedar Backup subversion extension. Cedar
Backup will create either full or incremental backups, but the procedure for
restoring is the same for both. Subversion backups are always taken on a
per-repository basis. If you need to restore more than one repository,
follow the procedures below for each repository you are interested in.

First, find the backup or backups you are interested in. Typically, you will
need the full backup from the first day of the week and each incremental
backup from the other days of the week.
The subversion extension creates files of the form svndump-*.txt. These
files might have a .gz or .bz2 extension depending on what kind of
compression you specified in configuration. There is one dump file for each
configured repository, and the dump file name represents the name of the
repository and the revisions in that dump. So, the file
svndump-0:782-opt-svn-repo1.txt.bz2 represents revisions 0-782 of the
repository at /opt/svn/repo1. You can tell that this file contains a full
backup of the repository to this point, because the starting revision is
zero. Later incremental backups would have a non-zero starting revision,
e.g. perhaps 783-785, followed by 786-800, etc.

Next, if you still have the old Subversion repository around, you might want
to just move it off (rename the top-level directory) before executing the
restore. Or, you can restore into a temporary directory and rename it later
to its real name once you've checked it out. That is what my example below
will show.

Next, you need to create a new Subversion repository to hold the restored
data. This example shows an FSFS repository, but that is an arbitrary
choice. You can restore from an FSFS backup into an FSFS repository or a BDB
repository. The Subversion dump format is "backend-agnostic".

root:/tmp# svnadmin create --fs-type=fsfs testrepo

Next, load the full backup into the repository:

root:/tmp# bzcat svndump-0:782-opt-svn-repo1.txt.bz2 | svnadmin load testrepo

Of course, use zcat or just cat, depending on what kind of compression is in
use.

Follow that with loads for each of the incremental backups:

root:/tmp# bzcat svndump-783:785-opt-svn-repo1.txt.bz2 | svnadmin load testrepo
root:/tmp# bzcat svndump-786:800-opt-svn-repo1.txt.bz2 | svnadmin load testrepo

Again, use zcat or just cat as appropriate.

When this is done, your repository will be restored to the point of the last
commit indicated in the svndump file (in this case, to revision 800).
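Because the dump files must be loaded in revision order, it can help to sort
them by starting revision before piping each one to svnadmin load. A minimal
sketch (order_dumps is a hypothetical helper name, relying on the
svndump-START:END-... naming shown above):

```shell
# order_dumps: print svndump filenames sorted by their starting revision,
# so they can be loaded oldest-first with 'svnadmin load'.
# (Hypothetical helper; assumes the svndump-START:END-... naming.)
order_dumps() {
    for f in "$@"; do
        start=${f#svndump-}          # drop the "svndump-" prefix
        start=${start%%:*}           # keep only the starting revision
        printf '%s %s\n' "$start" "$f"
    done | sort -n | cut -d' ' -f2-
}

# Example:
#   for dump in $(order_dumps svndump-*-opt-svn-repo1.txt.bz2); do
#       bzcat "$dump" | svnadmin load testrepo
#   done
```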
Note

Don't be surprised if, when you test this, the restored directory doesn't
have exactly the same contents as the original directory. I can't explain
why this happens, but if you execute svnadmin dump on both old and new
repositories, the results are identical. This means that the repositories do
contain the same content.

For more information on using Subversion, see the book Version Control with
Subversion (http://svnbook.red-bean.com/) or the Subversion FAQ
(http://subversion.tigris.org/faq.html).

Recovering Mailbox Data

Mailbox data is gathered by the Cedar Backup mbox extension. Cedar Backup
will create either full or incremental backups, but both kinds of backups
are treated identically when restoring. Individual mbox files and mbox
directories are treated a little differently, since individual files are
just compressed, but directories are collected into a tar archive.

First, find the backup or backups you are interested in. Typically, you will
need the full backup from the first day of the week and each incremental
backup from the other days of the week.

The mbox extension creates files of the form mbox-*. Backup files for
individual mbox files might have a .gz or .bz2 extension depending on what
kind of compression you specified in configuration. Backup files for mbox
directories will have a .tar, .tar.gz or .tar.bz2 extension, again depending
on what kind of compression you specified in configuration.

There is one backup file for each configured mbox file or directory. The
backup file name represents the name of the file or directory and the date
it was backed up. So, the file mbox-20060624-home-user-mail-greylist
represents the backup for /home/user/mail/greylist run on 24 Jun 2006.
Likewise, mbox-20060624-home-user-mail.tar represents the backup for the
/home/user/mail directory run on that same date.

Once you have found the files you are looking for, the restoration procedure
is fairly simple.
First, concatenate all of the backup files together. Then, use grepmail to
eliminate duplicate messages (if any). Here is an example for a single
backed-up file:

root:/tmp# rm restore.mbox # make sure it's not left over
root:/tmp# cat mbox-20060624-home-user-mail-greylist >> restore.mbox
root:/tmp# cat mbox-20060625-home-user-mail-greylist >> restore.mbox
root:/tmp# cat mbox-20060626-home-user-mail-greylist >> restore.mbox
root:/tmp# grepmail -a -u restore.mbox > nodups.mbox

At this point, nodups.mbox contains all of the backed-up messages from
/home/user/mail/greylist. Of course, if your backups are compressed, you'll
have to use zcat or bzcat rather than just cat.

If you are backing up mbox directories rather than individual files, see the
filesystem instructions for notes on how to extract the individual files
from inside tar archives. Extract the files you are interested in, and then
concatenate them together just like shown above for the individual case.

Recovering Data split by the Split Extension

The Split extension takes large files and splits them up into smaller files.
Typically, it would be used in conjunction with the cback3-span command.

The split up files are not difficult to work with. Simply find all of the
files (which could be split between multiple discs) and concatenate them
together:

root:/tmp# rm usr-src-software.tar.gz # make sure it's not there
root:/tmp# cat usr-src-software.tar.gz_00001 >> usr-src-software.tar.gz
root:/tmp# cat usr-src-software.tar.gz_00002 >> usr-src-software.tar.gz
root:/tmp# cat usr-src-software.tar.gz_00003 >> usr-src-software.tar.gz

Then, use the resulting file as usual.

Remember, you need to have all of the files that the original large file was
split into before this will work. If you are missing a file, the result of
the concatenation step will be either a corrupt file or a truncated file
(depending on which chunks you did not include).
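Because the zero-padded numeric suffixes sort lexicographically, the
per-chunk cat commands above can be collapsed into a single glob. A minimal
sketch (reassemble is a hypothetical helper name):

```shell
# reassemble: rebuild a file from its split chunks (file_00001, file_00002,
# ...). The zero-padded suffixes sort lexicographically, so a simple glob
# concatenates the chunks in the right order. (Hypothetical helper.)
reassemble() {
    target=$1
    rm -f "$target"                  # make sure it's not left over
    cat "$target"_* > "$target"
}

# Example (hypothetical filename):
#   reassemble usr-src-software.tar.gz
```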
Appendix D. Securing Password-less SSH Connections

Cedar Backup relies on password-less public key SSH connections to make
various parts of its backup process work. Password-less scp is used to stage
files from remote clients to the master, and password-less ssh is used to
execute actions on managed clients.

Normally, it is a good idea to avoid password-less SSH connections in favor
of using an SSH agent. The SSH agent manages your SSH connections so that
you don't need to type your passphrase over and over. You get most of the
benefits of a password-less connection without the risk. Unfortunately,
because Cedar Backup has to execute without human involvement (through a
cron job), use of an agent really isn't feasible. We have to rely on true
password-less public keys to give the master access to the client peers.

Traditionally, Cedar Backup has relied on a "segmenting" strategy to
minimize the risk. Although the backup typically runs as root (so that all
parts of the filesystem can be backed up), we don't use the root user for
network connections. Instead, we use a dedicated backup user on the master
to initiate network connections, and dedicated users on each of the remote
peers to accept network connections.

With this strategy in place, an attacker with access to the backup user on
the master (or even root access, really) can at best only get access to the
backup user on the remote peers. We still concede a local attack vector, but
at least that vector is restricted to an unprivileged user.

Some Cedar Backup users may not be comfortable with this risk, and others
may not be able to implement the segmentation strategy: they simply may not
have a way to create a login which is only used for backups. So, what are
these users to do?

Fortunately there is a solution. The SSH authorized keys file supports a way
to put a "filter" in place on an SSH connection.
This excerpt is from the AUTHORIZED_KEYS FILE FORMAT section of man 8 sshd:

   command="command"
      Specifies that the command is executed whenever this key is used for
      authentication. The command supplied by the user (if any) is ignored.
      The command is run on a pty if the client requests a pty; otherwise it
      is run without a tty. If an 8-bit clean channel is required, one must
      not request a pty or should specify no-pty. A quote may be included in
      the command by quoting it with a backslash. This option might be
      useful to restrict certain public keys to perform just a specific
      operation. An example might be a key that permits remote backups but
      nothing else. Note that the client may specify TCP and/or X11
      forwarding unless they are explicitly prohibited. Note that this
      option applies to shell, command or subsystem execution.

Essentially, this gives us a way to authenticate the commands that are being
executed. We can either accept or reject commands, and we can even provide a
readable error message for commands we reject. The filter is applied on the
remote peer, to the key that provides the master access to the remote peer.

So, let's imagine that we have two hosts: master "mickey", and peer
"minnie". Here is the original ~/.ssh/authorized_keys file for the backup
user on minnie (remember, this is all on one line in the file):

ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAxw7EnqVULBFgPcut3WYp3MsSpVB9q9iZ+awek120391k;mm0c221=3=km
=m=askdalkS82mlF7SusBTcXiCk1BGsg7axZ2sclgK+FfWV1Jm0/I9yo9FtAZ9U+MmpL901231asdkl;ai1-923ma9s=9=
1-2341=-a0sd=-sa0=1z= backup@mickey

This line is the public key that minnie can use to identify the backup user
on mickey. Assuming that there is no passphrase on the private key back on
mickey, the backup user on mickey can get direct access to minnie.
To put the filter in place, we add a command option to the key, like this:

command="/opt/backup/validate-backup" ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAxw7EnqVULBFgPcut3WYp
3MsSpVB9q9iZ+awek120391k;mm0c221=3=km=m=askdalkS82mlF7SusBTcXiCk1BGsg7axZ2sclgK+FfWV1Jm0/I9yo9F
tAZ9U+MmpL901231asdkl;ai1-923ma9s=9=1-2341=-a0sd=-sa0=1z= backup@mickey

Basically, the command option says that whenever this key is used to
successfully initiate a connection, the /opt/backup/validate-backup command
will be run instead of the real command that came over the SSH connection.
Fortunately, the interface gives the command access to certain shell
variables that can be used to invoke the original command if you want to.

A very basic validate-backup script might look something like this:

#!/bin/bash
if [[ "${SSH_ORIGINAL_COMMAND}" == "ls -l" ]] ; then
   ${SSH_ORIGINAL_COMMAND}
else
   echo "Security policy does not allow command [${SSH_ORIGINAL_COMMAND}]."
   exit 1
fi

This script allows exactly ls -l and nothing else. If the user attempts some
other command, they get a nice error message telling them that their command
has been disallowed.

For remote commands executed over ssh, the original command is exactly what
the caller attempted to invoke. For remote copies, the commands are either
scp -f file (copy from the peer to the master) or scp -t file (copy to the
peer from the master).

If you want, you can see what command SSH thinks it is executing by using
ssh -v or scp -v. The command will be right at the top, something like this:

Executing: program /usr/bin/ssh host mickey, user (unspecified), command scp -v -f .profile
OpenSSH_4.3p2 Debian-9, OpenSSL 0.9.8c 05 Sep 2006
debug1: Reading configuration data /home/backup/.ssh/config
debug1: Applying options for daystrom
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug2: ssh_connect: needpriv 0

Omit the -v and you have your command: scp -f .profile.
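A slightly more realistic filter can express the scp policy described above
as a small predicate. This is a sketch, not Cedar Backup's own script: the
allowed function name and the /var/backup/collect path are hypothetical, and
you would substitute the real collect directory on your peer.

```shell
# allowed: decide whether an incoming SSH command matches the backup policy.
# Only scp reads from, and scp writes to, the collect directory are
# accepted. (Hypothetical helper; /var/backup/collect is an example path.)
allowed() {
    case "$1" in
        "scp -f /var/backup/collect/"*) return 0 ;;  # master pulls staged files
        "scp -t /var/backup/collect/"*) return 0 ;;  # master writes stage indicator
        *) return 1 ;;
    esac
}

# In validate-backup, the filter would then look something like:
#   if allowed "${SSH_ORIGINAL_COMMAND}" ; then
#       ${SSH_ORIGINAL_COMMAND}
#   else
#       echo "Security policy does not allow command [${SSH_ORIGINAL_COMMAND}]."
#       exit 1
#   fi
```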
For a normal, non-managed setup, you need to allow the following commands,
where /path/to/collect/ is replaced with the real path to the collect
directory on the remote peer:

scp -f /path/to/collect/cback.collect
scp -f /path/to/collect/*
scp -t /path/to/collect/cback.stage

If you are configuring a managed client, then you also need to list the
exact command lines that the master will be invoking on the managed client.
You are guaranteed that the master will invoke one action at a time, so if
you list two lines per action (full and non-full) you should be fine. Here's
an example for the collect action:

/usr/bin/cback3 --full collect
/usr/bin/cback3 collect

Of course, you would have to list the actual path to the cback3 executable,
exactly the one listed in the configuration option for your managed peer.

I hope that there is enough information here for interested users to
implement something that makes them comfortable. I have resisted providing a
complete example script, because I think everyone's setup will be different.
However, feel free to write if you are working through this and you have
questions.

Appendix E. Copyright

Copyright (c) 2004-2011,2013-2015 Kenneth J. Pronovici

This work is free; you can redistribute it and/or modify it under the terms
of the GNU General Public License (the "GPL"), Version 2, as published by
the Free Software Foundation.

For the purposes of the GPL, the "preferred form of modification" for this
work is the original Docbook XML text files. If you choose to distribute
this work in a compiled form (i.e. if you distribute HTML, PDF or Postscript
documents based on the original Docbook XML text files), you must also
consider image files to be "source code" if those images are required in
order to construct a complete and readable compiled version of the work.

This work is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE.
Copies of the GNU General Public License are available from the Free
Software Foundation website, http://www.gnu.org/. You may also write the
Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
02110-1301, USA.

====================================================================

                    GNU GENERAL PUBLIC LICENSE
                       Version 2, June 1991

 Copyright (C) 1989, 1991 Free Software Foundation, Inc.
 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

                            Preamble

  The licenses for most software are designed to take away your freedom to
share and change it. By contrast, the GNU General Public License is intended
to guarantee your freedom to share and change free software--to make sure
the software is free for all its users. This General Public License applies
to most of the Free Software Foundation's software and to any other program
whose authors commit to using it. (Some other Free Software Foundation
software is covered by the GNU Library General Public License instead.) You
can apply it to your programs, too.

  When we speak of free software, we are referring to freedom, not price.
Our General Public Licenses are designed to make sure that you have the
freedom to distribute copies of free software (and charge for this service
if you wish), that you receive source code or can get it if you want it,
that you can change the software or use pieces of it in new free programs;
and that you know you can do these things.

  To protect your rights, we need to make restrictions that forbid anyone to
deny you these rights or to ask you to surrender the rights. These
restrictions translate to certain responsibilities for you if you distribute
copies of the software, or if you modify it.

  For example, if you distribute copies of such a program, whether gratis or
for a fee, you must give the recipients all the rights that you have.
You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. 
The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. 
(Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. 
You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. 
If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. 
If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. 
The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 12. 
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS ====================================================================

    Chapter 6. Official Extensions

    System Information Extension

    The System Information Extension is a simple Cedar Backup extension used to save off important system recovery information that might be useful when reconstructing a broken system. It is intended to be run either immediately before or immediately after the standard collect action.

    This extension saves off the following information to the configured Cedar Backup collect directory. The saved data is always compressed using bzip2.

    • Currently-installed Debian packages via dpkg --get-selections

    • Disk partition information via fdisk -l

    • System-wide mounted filesystem contents, via ls -laR

    The Debian-specific information is only collected on systems where /usr/bin/dpkg exists.
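    Taken together, those commands amount to something like the shell sketch below. This is an illustration only: the output file names and the temporary collect directory are assumptions, not what the extension actually writes.

```shell
# Illustrative sketch of the data the sysinfo extension gathers;
# file names and the directory are made up for the demo.
COLLECT_DIR="$(mktemp -d)"   # stand-in for the configured collect directory

# Debian package selections, only where dpkg is available
if [ -x /usr/bin/dpkg ]; then
    dpkg --get-selections | bzip2 > "$COLLECT_DIR/dpkg-selections.txt.bz2"
fi

# Partition layout (prints nothing without root, hence the stderr discard)
fdisk -l 2>/dev/null | bzip2 > "$COLLECT_DIR/fdisk-l.txt.bz2"

# Filesystem listing; the real extension walks mounted filesystems,
# /etc is used here just to keep the sketch fast
ls -laR /etc 2>/dev/null | bzip2 > "$COLLECT_DIR/ls-laR.txt.bz2"
```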

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>sysinfo</name>
          <module>CedarBackup3.extend.sysinfo</module>
          <function>executeAction</function>
          <index>99</index>
       </action>
    </extensions>
          

    This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, but requires no new configuration of its own.


    Data Recovery

    Cedar Backup does not include any facility to restore backups. Instead, it assumes that the administrator (using the procedures and references in Appendix C, Data Recovery) can handle the task of restoring their own system, using the standard system tools at hand.

    If I were to maintain recovery code in Cedar Backup, I would almost certainly end up in one of two situations. Either Cedar Backup would only support simple recovery tasks, and those via an interface a lot like that of the underlying system tools; or Cedar Backup would have to include a hugely complicated interface to support more specialized (and hence useful) recovery tasks like restoring individual files as of a certain point in time. In either case, I would end up trying to maintain critical functionality that would be rarely used, and hence would also be rarely tested by end-users. I am uncomfortable asking anyone to rely on functionality that falls into this category.

    My primary goal is to keep the Cedar Backup codebase as simple and focused as possible. I hope you can understand how the choice of providing documentation, but not code, seems to strike the best balance between managing code complexity and providing the functionality that end-users need.


    Recovering Data split by the Split Extension

    The Split extension takes large files and splits them up into smaller files. Typically, it would be used in conjunction with the cback3-span command.

    The split-up files are not difficult to work with. Simply find all of the files (which could be split across multiple discs) and concatenate them together.

    root:/tmp# rm usr-src-software.tar.gz  # make sure it's not there
    root:/tmp# cat usr-src-software.tar.gz_00001 >> usr-src-software.tar.gz
    root:/tmp# cat usr-src-software.tar.gz_00002 >> usr-src-software.tar.gz
    root:/tmp# cat usr-src-software.tar.gz_00003 >> usr-src-software.tar.gz
          

    Then, use the resulting file as usual.

    Remember, you need to have all of the files that the original large file was split into before this will work. If you are missing a file, the result of the concatenation step will be either a corrupt file or a truncated file (depending on which chunks you did not include).
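    Because the numeric chunk suffixes sort lexicographically, the explicit cat commands above can also be collapsed into a single glob. The demonstration below uses small throwaway files in a temporary directory; real chunk names are produced by the split extension.

```shell
cd "$(mktemp -d)"                               # scratch area for the demo
printf 'part1-' > usr-src-software.tar.gz_00001
printf 'part2-' > usr-src-software.tar.gz_00002
printf 'part3'  > usr-src-software.tar.gz_00003

# The shell expands the glob in sorted order, so this reassembles the
# chunks exactly like the explicit per-chunk cat commands:
cat usr-src-software.tar.gz_* > usr-src-software.tar.gz
cat usr-src-software.tar.gz   # -> part1-part2-part3
```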


    Chapter 4. Command Line Tools

    Overview

    Cedar Backup comes with three command-line programs: cback3, cback3-amazons3-sync, and cback3-span.

    The cback3 command is the primary command line interface and the only Cedar Backup program that most users will ever need.

    The cback3-amazons3-sync tool is used for synchronizing entire directories of files up to an Amazon S3 cloud storage bucket, outside of the normal Cedar Backup process.

    Users who have a lot of data to back up — more than will fit on a single CD or DVD — can use the interactive cback3-span tool to split their data between multiple discs.


    Migrating from Version 2 to Version 3

    The main difference between Cedar Backup version 2 and Cedar Backup version 3 is the targeted Python interpreter. Cedar Backup version 2 was designed for Python 2, while version 3 is a conversion of the original code to Python 3. Other than that, both versions are functionally equivalent. The configuration format is unchanged, and you can mix-and-match masters and clients of different versions in the same backup pool. Both versions will be fully supported until around the time of the Python 2 end-of-life in 2020, but you should plan to migrate sooner than that if possible.

    A major design goal for version 3 was to facilitate easy migration testing for users, by making it possible to install version 3 on the same server where version 2 was already in use. A side effect of this design choice is that all of the executables, configuration files, and logs changed names in version 3. Where version 2 used "cback", version 3 uses "cback3": cback3.conf instead of cback.conf, cback3.log instead of cback.log, etc.

    So, while migrating from version 2 to version 3 is relatively straightforward, you will have to make some changes manually. You will need to create a new configuration file (or soft link to the old one), modify your cron jobs to use the new executable name, etc. You can migrate one server at a time in your pool with no ill effects, or even incrementally migrate a single server by using version 2 and version 3 on different days of the week or for different parts of the backup.
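    As a rough sketch of those manual steps, assuming the stock file locations (a temporary directory stands in for /etc here, and the cron entry is a made-up example):

```shell
ETC="$(mktemp -d)"                           # stand-in for /etc in this demo
touch "$ETC/cback.conf"                      # existing version 2 configuration
ln -s "$ETC/cback.conf" "$ETC/cback3.conf"   # let version 3 share the config

# Point scheduled jobs at the new executable name:
echo '30 0 * * * root cback collect' |
    sed 's/cback /cback3 /' > "$ETC/crontab.new"
cat "$ETC/crontab.new"   # -> 30 0 * * * root cback3 collect
```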


    Incremental Backups

    Cedar Backup supports three different kinds of backups for individual collect directories. These are daily, weekly and incremental backups. Directories using the daily mode are backed up every day. Directories using the weekly mode are only backed up on the first day of the week, or when the --full option is used. Directories using the incremental mode are always backed up on the first day of the week (like a weekly backup), but after that only the files which have changed are actually backed up on a daily basis.

    In Cedar Backup, incremental backups are not based on date, but are instead based on saved checksums, one for each backed-up file. When a full backup is run, Cedar Backup gathers a checksum value [14] for each backed-up file. The next time an incremental backup is run, Cedar Backup checks its list of file/checksum pairs for each file that might be backed up. If the file's checksum value does not match the saved value, or if the file does not appear in the list of file/checksum pairs, then it will be backed up and a new checksum value will be placed into the list. Otherwise, the file will be ignored and the checksum value will be left unchanged.

    Cedar Backup stores the file/checksum pairs in .sha files in its working directory, one file per configured collect directory. The mappings in these files are reset at the start of the week or when the --full option is used. Because these files are used for an entire week, you should never purge the working directory more frequently than once per week.
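    The decision logic can be sketched with sha1sum, which keeps file/checksum pairs in exactly the spirit described above. This illustrates the idea only; it is not Cedar Backup's implementation, and the file names are invented.

```shell
DB="$(mktemp)"; FILE="$(mktemp)"
echo "version 1" > "$FILE"
sha1sum "$FILE" > "$DB"             # "full backup": record the checksum

echo "version 2" > "$FILE"          # the file changes later in the week
if sha1sum --status -c "$DB"; then  # compare current checksum to saved one
    echo "unchanged: skip this file"
else
    echo "changed: back up file and refresh the saved checksum"
    sha1sum "$FILE" > "$DB"
fi
```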



    [14] The checksum is actually an SHA cryptographic hash. See Wikipedia for more information: http://en.wikipedia.org/wiki/SHA-1.


    Subversion Extension

    The Subversion Extension is a Cedar Backup extension used to back up Subversion [25] version control repositories via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action.

    Each configured Subversion repository can be backed up using the same collect modes allowed for filesystems in the standard Cedar Backup collect action (weekly, daily, incremental), and the output can be compressed using either gzip or bzip2.

    There are two different kinds of Subversion repositories at this writing: BDB (Berkeley Database) and FSFS (a "filesystem within a filesystem"). This extension backs up both kinds of repositories in the same way, using svnadmin dump in an incremental mode.

    It turns out that FSFS repositories can also be backed up just like any other filesystem directory. If you would rather do the backup that way, then use the normal collect action rather than this extension. If you decide to do that, be sure to consult the Subversion documentation and make sure you understand the limitations of this kind of backup.

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>subversion</name>
          <module>CedarBackup3.extend.subversion</module>
          <function>executeAction</function>
          <index>99</index>
       </action>
    </extensions>
          

    This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own subversion configuration section. This is an example Subversion configuration section:

    <subversion>
       <collect_mode>incr</collect_mode>
       <compress_mode>bzip2</compress_mode>
       <repository>
          <abs_path>/opt/public/svn/docs</abs_path>
       </repository>
       <repository>
          <abs_path>/opt/public/svn/web</abs_path>
          <compress_mode>gzip</compress_mode>
       </repository>
       <repository_dir>
          <abs_path>/opt/private/svn</abs_path>
          <collect_mode>daily</collect_mode>
       </repository_dir>
    </subversion>
          

    The following elements are part of the Subversion configuration section:

    collect_mode

    Default collect mode.

    The collect mode describes how frequently a Subversion repository is backed up. The Subversion extension recognizes the same collect modes as the standard Cedar Backup collect action (see Chapter2, Basic Concepts).

    This value is the collect mode that will be used by default during the backup process. Individual repositories (below) may override this value. If all individual repositories provide their own value, then this default value may be omitted from configuration.

    Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Default compress mode.

    Subversion repository backups are just specially-formatted text files, and they often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all.

    This value is the compress mode that will be used by default during the backup process. Individual repositories (below) may override this value. If all individual repositories provide their own value, then this default value may be omitted from configuration.

    Restrictions: Must be one of none, gzip or bzip2.

    repository

    A Subversion repository to be collected.

    This is a subsection which contains information about a specific Subversion repository to be backed up.

    This section can be repeated as many times as is necessary. At least one repository or repository directory must be configured.

    The repository subsection contains the following fields:

    collect_mode

    Collect mode for this repository.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Compress mode for this repository.

    This field is optional. If it doesn't exist, the backup will use the default compress mode.

    Restrictions: Must be one of none, gzip or bzip2.

    abs_path

    Absolute path of the Subversion repository to back up.

    Restrictions: Must be an absolute path.

    repository_dir

    A Subversion parent repository directory to be collected.

    This is a subsection which contains information about a Subversion parent repository directory to be backed up. Any subdirectory immediately within this directory is assumed to be a Subversion repository, and will be backed up.

    This section can be repeated as many times as is necessary. At least one repository or repository directory must be configured.

    The repository_dir subsection contains the following fields:

    collect_mode

    Collect mode for this repository.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Compress mode for this repository.

    This field is optional. If it doesn't exist, the backup will use the default compress mode.

    Restrictions: Must be one of none, gzip or bzip2.

    abs_path

    Absolute path of the Subversion repository to back up.

    Restrictions: Must be an absolute path.

    exclude

    List of paths or patterns to exclude from the backup.

    This is a subsection which contains a set of paths and patterns to be excluded within this subversion parent directory.

    This section is entirely optional, and if it exists can also be empty.

    The exclude subsection can contain one or more of each of the following fields:

    rel_path

    A relative path to be excluded from the backup.

    The path is assumed to be relative to the subversion parent directory itself. For instance, if the configured subversion parent directory is /opt/svn, a configured relative path of software would exclude the path /opt/svn/software.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty.

    pattern

    A pattern to be excluded from the backup.

    The pattern must be a Python regular expression. [21] It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $).

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty


    Chapter 3. Installation

    Background

    There are two different ways to install Cedar Backup. The easiest way is to install the pre-built Debian packages. This method is painless and ensures that all of the correct dependencies are available, etc.

    If you are running a Linux distribution other than Debian or you are running some other platform like FreeBSD or Mac OS X, then you must use the Python source distribution to install Cedar Backup. When using this method, you need to manage all of the dependencies yourself.


    Preface

    Purpose

    This software manual has been written to document version 2 of Cedar Backup, originally released in early 2005.


    Recovering MySQL Data

    MySQL data is gathered by the Cedar Backup mysql extension. This extension always creates a full backup each time it runs. This wastes some space, but makes it easy to restore database data. The following procedure describes how to restore your MySQL database from the backup.

    Warning

    I am not a MySQL expert. I am providing this information for reference. I have tested these procedures on my own MySQL installation; however, I only have a single database for use by Bugzilla, and I may have misunderstood something with regard to restoring individual databases as a user other than root. If you have any doubts, test the procedure below before relying on it!

    MySQL experts and/or knowledgeable Cedar Backup users: feel free to write me and correct any part of this procedure.

    First, find the backup you are interested in. If you have specified all databases in configuration, you will have a single backup file, called mysqldump.txt. If you have specified individual databases in configuration, then you will have files with names like mysqldump-database.txt instead. In either case, your file might have a .gz or .bz2 extension depending on what kind of compression you specified in configuration.

    If you are restoring an all databases backup, make sure that you have correctly created the root user and know its password. Then, execute:

    daystrom:/# bzcat mysqldump.txt.bz2 | mysql -p -u root
          

    Of course, use zcat or just cat, depending on what kind of compression is in use.

    Because the database backup includes CREATE DATABASE SQL statements, this command should take care of creating all of the databases within the backup, as well as populating them.

    If you are restoring a backup for a specific database, you have two choices. If you have a root login, you can use the same command as above:

    daystrom:/# bzcat mysqldump-database.txt.bz2 | mysql -p -u root
          

    Otherwise, you can create the database and its login first (or have someone create it) and then use a database-specific login to execute the restore:

    daystrom:/# bzcat mysqldump-database.txt.bz2 | mysql -p -u user database
          

    Again, use zcat or just cat as appropriate.
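    With many per-database dumps, the database name can be recovered from each file name and fed back to mysql. The name-parsing below is plain shell and runs anywhere; the restore line itself is commented out because it needs a live server, and the database name is just an example.

```shell
# Hypothetical file name following the mysqldump-database.txt.bz2 pattern:
dump="mysqldump-bugzilla.txt.bz2"

# Strip the prefix and suffix to recover the database name:
db="${dump#mysqldump-}"
db="${db%.txt.bz2}"
echo "$db"   # -> bugzilla

# Then restore it (needs a running MySQL server and credentials):
# bzcat "$dump" | mysql -p -u root "$db"
```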

    For more information on using MySQL, see the documentation on the MySQL web site, http://mysql.org/, or the manpages for the mysql and mysqldump commands.


    AppendixE.Copyright

    
    Copyright (c) 2004-2011,2013-2015
    Kenneth J. Pronovici
    
    This work is free; you can redistribute it and/or modify it under
    the terms of the GNU General Public License (the "GPL"), Version 2,
    as published by the Free Software Foundation.
    
    For the purposes of the GPL, the "preferred form of modification"
    for this work is the original Docbook XML text files.  If you
    choose to distribute this work in a compiled form (i.e. if you
    distribute HTML, PDF or Postscript documents based on the original
    Docbook XML text files), you must also consider image files to be
    "source code" if those images are required in order to construct a
    complete and readable compiled version of the work.
    
    This work is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
    
    Copies of the GNU General Public License are available from
    the Free Software Foundation website, http://www.gnu.org/.
    You may also write the Free Software Foundation, Inc., 
    51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA
    
    ====================================================================
    
    		    GNU GENERAL PUBLIC LICENSE
    		       Version 2, June 1991
    
     Copyright (C) 1989, 1991 Free Software Foundation, Inc.
         51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA
     Everyone is permitted to copy and distribute verbatim copies
     of this license document, but changing it is not allowed.
    
    			    Preamble
    
      The licenses for most software are designed to take away your
    freedom to share and change it.  By contrast, the GNU General Public
    License is intended to guarantee your freedom to share and change free
    software--to make sure the software is free for all its users.  This
    General Public License applies to most of the Free Software
    Foundation's software and to any other program whose authors commit to
    using it.  (Some other Free Software Foundation software is covered by
    the GNU Library General Public License instead.)  You can apply it to
    your programs, too.
    
      When we speak of free software, we are referring to freedom, not
    price.  Our General Public Licenses are designed to make sure that you
    have the freedom to distribute copies of free software (and charge for
    this service if you wish), that you receive source code or can get it
    if you want it, that you can change the software or use pieces of it
    in new free programs; and that you know you can do these things.
    
      To protect your rights, we need to make restrictions that forbid
    anyone to deny you these rights or to ask you to surrender the rights.
    These restrictions translate to certain responsibilities for you if you
    distribute copies of the software, or if you modify it.
    
      For example, if you distribute copies of such a program, whether
    gratis or for a fee, you must give the recipients all the rights that
    you have.  You must make sure that they, too, receive or can get the
    source code.  And you must show them these terms so they know their
    rights.
    
      We protect your rights with two steps: (1) copyright the software, and
    (2) offer you this license which gives you legal permission to copy,
    distribute and/or modify the software.
    
      Also, for each author's protection and ours, we want to make certain
    that everyone understands that there is no warranty for this free
    software.  If the software is modified by someone else and passed on, we
    want its recipients to know that what they have is not the original, so
    that any problems introduced by others will not reflect on the original
    authors' reputations.
    
      Finally, any free program is threatened constantly by software
    patents.  We wish to avoid the danger that redistributors of a free
    program will individually obtain patent licenses, in effect making the
    program proprietary.  To prevent this, we have made it clear that any
    patent must be licensed for everyone's free use or not licensed at all.
    
      The precise terms and conditions for copying, distribution and
    modification follow.
    
    		    GNU GENERAL PUBLIC LICENSE
       TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
    
      0. This License applies to any program or other work which contains
    a notice placed by the copyright holder saying it may be distributed
    under the terms of this General Public License.  The "Program", below,
    refers to any such program or work, and a "work based on the Program"
    means either the Program or any derivative work under copyright law:
    that is to say, a work containing the Program or a portion of it,
    either verbatim or with modifications and/or translated into another
    language.  (Hereinafter, translation is included without limitation in
    the term "modification".)  Each licensee is addressed as "you".
    
    Activities other than copying, distribution and modification are not
    covered by this License; they are outside its scope.  The act of
    running the Program is not restricted, and the output from the Program
    is covered only if its contents constitute a work based on the
    Program (independent of having been made by running the Program).
    Whether that is true depends on what the Program does.
    
      1. You may copy and distribute verbatim copies of the Program's
    source code as you receive it, in any medium, provided that you
    conspicuously and appropriately publish on each copy an appropriate
    copyright notice and disclaimer of warranty; keep intact all the
    notices that refer to this License and to the absence of any warranty;
    and give any other recipients of the Program a copy of this License
    along with the Program.
    
    You may charge a fee for the physical act of transferring a copy, and
    you may at your option offer warranty protection in exchange for a fee.
    
      2. You may modify your copy or copies of the Program or any portion
    of it, thus forming a work based on the Program, and copy and
    distribute such modifications or work under the terms of Section 1
    above, provided that you also meet all of these conditions:
    
        a) You must cause the modified files to carry prominent notices
        stating that you changed the files and the date of any change.
    
        b) You must cause any work that you distribute or publish, that in
        whole or in part contains or is derived from the Program or any
        part thereof, to be licensed as a whole at no charge to all third
        parties under the terms of this License.
    
        c) If the modified program normally reads commands interactively
        when run, you must cause it, when started running for such
        interactive use in the most ordinary way, to print or display an
        announcement including an appropriate copyright notice and a
        notice that there is no warranty (or else, saying that you provide
        a warranty) and that users may redistribute the program under
        these conditions, and telling the user how to view a copy of this
        License.  (Exception: if the Program itself is interactive but
        does not normally print such an announcement, your work based on
        the Program is not required to print an announcement.)
    
    These requirements apply to the modified work as a whole.  If
    identifiable sections of that work are not derived from the Program,
    and can be reasonably considered independent and separate works in
    themselves, then this License, and its terms, do not apply to those
    sections when you distribute them as separate works.  But when you
    distribute the same sections as part of a whole which is a work based
    on the Program, the distribution of the whole must be on the terms of
    this License, whose permissions for other licensees extend to the
    entire whole, and thus to each and every part regardless of who wrote it.
    
    Thus, it is not the intent of this section to claim rights or contest
    your rights to work written entirely by you; rather, the intent is to
    exercise the right to control the distribution of derivative or
    collective works based on the Program.
    
    In addition, mere aggregation of another work not based on the Program
    with the Program (or with a work based on the Program) on a volume of
    a storage or distribution medium does not bring the other work under
    the scope of this License.
    
      3. You may copy and distribute the Program (or a work based on it,
    under Section 2) in object code or executable form under the terms of
    Sections 1 and 2 above provided that you also do one of the following:
    
        a) Accompany it with the complete corresponding machine-readable
        source code, which must be distributed under the terms of Sections
        1 and 2 above on a medium customarily used for software interchange; or,
    
        b) Accompany it with a written offer, valid for at least three
        years, to give any third party, for a charge no more than your
        cost of physically performing source distribution, a complete
        machine-readable copy of the corresponding source code, to be
        distributed under the terms of Sections 1 and 2 above on a medium
        customarily used for software interchange; or,
    
        c) Accompany it with the information you received as to the offer
        to distribute corresponding source code.  (This alternative is
        allowed only for noncommercial distribution and only if you
        received the program in object code or executable form with such
        an offer, in accord with Subsection b above.)
    
    The source code for a work means the preferred form of the work for
    making modifications to it.  For an executable work, complete source
    code means all the source code for all modules it contains, plus any
    associated interface definition files, plus the scripts used to
    control compilation and installation of the executable.  However, as a
    special exception, the source code distributed need not include
    anything that is normally distributed (in either source or binary
    form) with the major components (compiler, kernel, and so on) of the
    operating system on which the executable runs, unless that component
    itself accompanies the executable.
    
    If distribution of executable or object code is made by offering
    access to copy from a designated place, then offering equivalent
    access to copy the source code from the same place counts as
    distribution of the source code, even though third parties are not
    compelled to copy the source along with the object code.
    
      4. You may not copy, modify, sublicense, or distribute the Program
    except as expressly provided under this License.  Any attempt
    otherwise to copy, modify, sublicense or distribute the Program is
    void, and will automatically terminate your rights under this License.
    However, parties who have received copies, or rights, from you under
    this License will not have their licenses terminated so long as such
    parties remain in full compliance.
    
      5. You are not required to accept this License, since you have not
    signed it.  However, nothing else grants you permission to modify or
    distribute the Program or its derivative works.  These actions are
    prohibited by law if you do not accept this License.  Therefore, by
    modifying or distributing the Program (or any work based on the
    Program), you indicate your acceptance of this License to do so, and
    all its terms and conditions for copying, distributing or modifying
    the Program or works based on it.
    
      6. Each time you redistribute the Program (or any work based on the
    Program), the recipient automatically receives a license from the
    original licensor to copy, distribute or modify the Program subject to
    these terms and conditions.  You may not impose any further
    restrictions on the recipients' exercise of the rights granted herein.
    You are not responsible for enforcing compliance by third parties to
    this License.
    
      7. If, as a consequence of a court judgment or allegation of patent
    infringement or for any other reason (not limited to patent issues),
    conditions are imposed on you (whether by court order, agreement or
    otherwise) that contradict the conditions of this License, they do not
    excuse you from the conditions of this License.  If you cannot
    distribute so as to satisfy simultaneously your obligations under this
    License and any other pertinent obligations, then as a consequence you
    may not distribute the Program at all.  For example, if a patent
    license would not permit royalty-free redistribution of the Program by
    all those who receive copies directly or indirectly through you, then
    the only way you could satisfy both it and this License would be to
    refrain entirely from distribution of the Program.
    
    If any portion of this section is held invalid or unenforceable under
    any particular circumstance, the balance of the section is intended to
    apply and the section as a whole is intended to apply in other
    circumstances.
    
    It is not the purpose of this section to induce you to infringe any
    patents or other property right claims or to contest validity of any
    such claims; this section has the sole purpose of protecting the
    integrity of the free software distribution system, which is
    implemented by public license practices.  Many people have made
    generous contributions to the wide range of software distributed
    through that system in reliance on consistent application of that
    system; it is up to the author/donor to decide if he or she is willing
    to distribute software through any other system and a licensee cannot
    impose that choice.
    
    This section is intended to make thoroughly clear what is believed to
    be a consequence of the rest of this License.
    
      8. If the distribution and/or use of the Program is restricted in
    certain countries either by patents or by copyrighted interfaces, the
    original copyright holder who places the Program under this License
    may add an explicit geographical distribution limitation excluding
    those countries, so that distribution is permitted only in or among
    countries not thus excluded.  In such case, this License incorporates
    the limitation as if written in the body of this License.
    
      9. The Free Software Foundation may publish revised and/or new versions
    of the General Public License from time to time.  Such new versions will
    be similar in spirit to the present version, but may differ in detail to
    address new problems or concerns.
    
    Each version is given a distinguishing version number.  If the Program
    specifies a version number of this License which applies to it and "any
    later version", you have the option of following the terms and conditions
    either of that version or of any later version published by the Free
    Software Foundation.  If the Program does not specify a version number of
    this License, you may choose any version ever published by the Free Software
    Foundation.
    
      10. If you wish to incorporate parts of the Program into other free
    programs whose distribution conditions are different, write to the author
    to ask for permission.  For software which is copyrighted by the Free
    Software Foundation, write to the Free Software Foundation; we sometimes
    make exceptions for this.  Our decision will be guided by the two goals
    of preserving the free status of all derivatives of our free software and
    of promoting the sharing and reuse of software generally.
    
    			    NO WARRANTY
    
      11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
    FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.  EXCEPT WHEN
    OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
    PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
    OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
    MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS
    TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.  SHOULD THE
    PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
    REPAIR OR CORRECTION.
    
      12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
    WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
    REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
    INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
    OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
    TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
    YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
    PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
    POSSIBILITY OF SUCH DAMAGES.
    
    		     END OF TERMS AND CONDITIONS
    
    ====================================================================
    
          

    AppendixC.Data Recovery

    Finding your Data

    The first step in data recovery is finding the data that you want to recover. You need to decide whether you are going to restore off backup media, or out of some existing staging data that has not yet been purged. The only difference is that if you purge staging data less frequently than once per week, you might have some data available in the staging directories which would not be found on your backup media, depending on how you rotate your media. (And of course, if your system is trashed or stolen, you probably will not have access to your old staging data in any case.)

    Regardless of the data source you choose, you will find the data organized in the same way. The remainder of these examples will work off an example backup disc, but the contents of the staging directory will look pretty much like the contents of the disc, with data organized first by date and then by backup peer name.

    This is the root directory of my example disc:

    root:/mnt/cdrw# ls -l
    total 4
    drwxr-x---  3 backup backup 4096 Sep 01 06:30 2005/
          

    In this root directory is one subdirectory for each year represented in the backup. In this example, the backup represents data entirely from the year 2005. If your configured backup week happens to span a year boundary, there would be two subdirectories here (for example, one for 2005 and one for 2006).

    Within each year directory is one subdirectory for each month represented in the backup.

    root:/mnt/cdrw/2005# ls -l
    total 2
    dr-xr-xr-x  6 root root 2048 Sep 11 05:30 09/
          

    In this example, the backup represents data entirely from the month of September, 2005. If your configured backup week happens to span a month boundary, there would be two subdirectories here (for example, one for August 2005 and one for September 2005).

    Within each month directory is one subdirectory for each day represented in the backup.

    root:/mnt/cdrw/2005/09# ls -l
    total 8
    dr-xr-xr-x  5 root root 2048 Sep  7 05:30 07/
    dr-xr-xr-x  5 root root 2048 Sep  8 05:30 08/
    dr-xr-xr-x  5 root root 2048 Sep  9 05:30 09/
    dr-xr-xr-x  5 root root 2048 Sep 11 05:30 11/
          

    Depending on how far into the week your backup media is, you might have as few as one daily directory in here, or as many as seven.

    Within each daily directory is a stage indicator (indicating when the directory was staged) and one directory for each peer configured in the backup:

    root:/mnt/cdrw/2005/09/07# ls -l
    total 10
    dr-xr-xr-x  2 root root 2048 Sep  7 02:31 host1/
    -r--r--r--  1 root root    0 Sep  7 03:27 cback.stage
    dr-xr-xr-x  2 root root 4096 Sep  7 02:30 host2/
    dr-xr-xr-x  2 root root 4096 Sep  7 03:23 host3/
          

    In this case, you can see that my backup includes three machines, and that the backup data was staged on September 7, 2005 at 03:27.

    Within the directory for a given host are all of the files collected on that host. This might just include tarfiles from a normal Cedar Backup collect run, and might also include files collected from Cedar Backup extensions or by other third-party processes on your system.

    root:/mnt/cdrw/2005/09/07/host1# ls -l
    total 157976
    -r--r--r--  1 root root 11206159 Sep  7 02:30 boot.tar.bz2
    -r--r--r--  1 root root        0 Sep  7 02:30 cback.collect
    -r--r--r--  1 root root     3199 Sep  7 02:30 dpkg-selections.txt.bz2
    -r--r--r--  1 root root   908325 Sep  7 02:30 etc.tar.bz2
    -r--r--r--  1 root root      389 Sep  7 02:30 fdisk-l.txt.bz2
    -r--r--r--  1 root root  1003100 Sep  7 02:30 ls-laR.txt.bz2
    -r--r--r--  1 root root    19800 Sep  7 02:30 mysqldump.txt.bz2
    -r--r--r--  1 root root  4133372 Sep  7 02:30 opt-local.tar.bz2
    -r--r--r--  1 root root 44794124 Sep  8 23:34 opt-public.tar.bz2
    -r--r--r--  1 root root 30028057 Sep  7 02:30 root.tar.bz2
    -r--r--r--  1 root root  4747070 Sep  7 02:30 svndump-0:782-opt-svn-repo1.txt.bz2
    -r--r--r--  1 root root   603863 Sep  7 02:30 svndump-0:136-opt-svn-repo2.txt.bz2
    -r--r--r--  1 root root   113484 Sep  7 02:30 var-lib-jspwiki.tar.bz2
    -r--r--r--  1 root root 19556660 Sep  7 02:30 var-log.tar.bz2
    -r--r--r--  1 root root 14753855 Sep  7 02:30 var-mail.tar.bz2
             

    As you can see, I back up a variety of different things on host1. I run the normal collect action, as well as the sysinfo, mysql and subversion extensions. The resulting backup files are named in a way that makes it easy to determine what they represent.

    Files of the form *.tar.bz2 represent directories backed up by the collect action. The first part of the name (before .tar.bz2) represents the path to the directory. For example, boot.tar.bz2 contains data from /boot, and var-lib-jspwiki.tar.bz2 contains data from /var/lib/jspwiki.
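    If you have many tarfiles, this mapping back to a directory path can be scripted. Here is a minimal sketch (note that it is ambiguous if the original path itself contained hyphens):

```shell
# Map a collect tarfile name back to the directory it was taken from.
# Ambiguous when the original path contained hyphens.
tar_to_path() {
  echo "/$(echo "${1%.tar.*}" | tr '-' '/')"
}

tar_to_path boot.tar.bz2              # /boot
tar_to_path var-lib-jspwiki.tar.bz2   # /var/lib/jspwiki
```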

    The fdisk-l.txt.bz2, ls-laR.txt.bz2 and dpkg-selections.txt.bz2 files are produced by the sysinfo extension.

    The mysqldump.txt.bz2 file is produced by the mysql extension. It represents a system-wide database dump, because I use the all flag in configuration. If I were to configure Cedar Backup to dump individual databases, then the filename would contain the database name (something like mysqldump-bugs.txt.bz2).

    Finally, the files of the form svndump-*.txt.bz2 are produced by the subversion extension. There is one dump file for each configured repository, and the dump file name represents the name of the repository and the revisions in that dump. So, the file svndump-0:782-opt-svn-repo1.txt.bz2 represents revisions 0-782 of the repository at /opt/svn/repo1. You can tell that this file contains a full backup of the repository to this point, because the starting revision is zero. Later incremental backups would have a non-zero starting revision, i.e. perhaps 783-785, followed by 786-800, etc.
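    This naming convention can be decoded mechanically. Here is a minimal shell sketch that recovers the revision range and repository path from a dump file name (it assumes the convention shown above, and is ambiguous if the repository path itself contains hyphens):

```shell
# Decode a svndump file name into its revision range and repository path.
# Ambiguous when the repository path itself contains hyphens.
svndump_info() {
  name="${1##*/}"          # strip any directory part
  name="${name%.txt*}"     # drop .txt, .txt.gz or .txt.bz2
  name="${name#svndump-}"  # drop the prefix
  revs="${name%%-*}"       # e.g. "0:782"
  path="/$(echo "${name#*-}" | tr '-' '/')"
  echo "$revs $path"
}

svndump_info "svndump-0:782-opt-svn-repo1.txt.bz2"   # 0:782 /opt/svn/repo1
```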


    Coordination between Master and Clients

    Unless you are using Cedar Backup to manage a pool of one, you will need to set up some coordination between your clients and master to make everything work properly. This coordination isn't difficult — it mostly consists of making sure that operations happen in the right order — but some users are surprised that it is required and want to know why Cedar Backup can't just take care of it for me.

    Essentially, each client must finish collecting all of its data before the master begins staging it, and the master must finish staging data from a client before that client purges its collected data. Administrators may need to experiment with the time between the collect and purge entries so that the master has enough time to stage data before it is purged.
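    As a concrete illustration, here is a hypothetical set of cron entries for a small pool (the times are examples only, and you will need to tune them to how long your own collect and stage runs take; Cedar Backup 3 installs its command-line tool as cback3):

```
# On each client: collect early, purge only after the master has staged.
30 00 * * * backup cback3 collect
30 06 * * * backup cback3 purge

# On the master: stage after the clients finish collecting, then store.
30 02 * * * backup cback3 stage
30 04 * * * backup cback3 store
30 06 * * * backup cback3 purge
```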


    Recovering Subversion Data

    Subversion data is gathered by the Cedar Backup subversion extension. Cedar Backup will create either full or incremental backups, but the procedure for restoring is the same for both. Subversion backups are always taken on a per-repository basis. If you need to restore more than one repository, follow the procedures below for each repository you are interested in.

    First, find the backup or backups you are interested in. Typically, you will need the full backup from the first day of the week and each incremental backup from the other days of the week.

    The subversion extension creates files of the form svndump-*.txt. These files might have a .gz or .bz2 extension depending on what kind of compression you specified in configuration. There is one dump file for each configured repository, and the dump file name represents the name of the repository and the revisions in that dump. So, the file svndump-0:782-opt-svn-repo1.txt.bz2 represents revisions 0-782 of the repository at /opt/svn/repo1. You can tell that this file contains a full backup of the repository to this point, because the starting revision is zero. Later incremental backups would have a non-zero starting revision, i.e. perhaps 783-785, followed by 786-800, etc.

    Next, if you still have the old Subversion repository around, you might want to just move it off (rename the top-level directory) before executing the restore. Or, you can restore into a temporary directory and rename it later to its real name once you've checked it out. That is what my example below will show.

    Next, you need to create a new Subversion repository to hold the restored data. This example shows an FSFS repository, but that is an arbitrary choice. You can restore from an FSFS backup into an FSFS repository or a BDB repository. The Subversion dump format is backend-agnostic.

    root:/tmp# svnadmin create --fs-type=fsfs testrepo
          

    Next, load the full backup into the repository:

    root:/tmp# bzcat svndump-0:782-opt-svn-repo1.txt.bz2 | svnadmin load testrepo
          

    Of course, use zcat or just cat, depending on what kind of compression is in use.

    Follow that with loads for each of the incremental backups:

    root:/tmp# bzcat svndump-783:785-opt-svn-repo1.txt.bz2 | svnadmin load testrepo
    root:/tmp# bzcat svndump-786:800-opt-svn-repo1.txt.bz2 | svnadmin load testrepo
          

    Again, use zcat or just cat as appropriate.
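    If there are many incrementals, it helps to order the dump files by starting revision before loading them. A minimal sketch (it assumes the naming convention shown above; the loop at the end is what you would run against the real repository, here the example testrepo created earlier):

```shell
# Order svndump files by their starting revision (the field after
# "svndump-"), so incrementals load in commit order.
sort_dumps() {
  printf '%s\n' "$@" | sort -t- -k2,2n
}

# On the real system:
#   for dump in $(sort_dumps svndump-*-opt-svn-repo1.txt.bz2); do
#     bzcat "$dump" | svnadmin load testrepo
#   done

sort_dumps svndump-786:800-opt-svn-repo1.txt.bz2 \
           svndump-0:782-opt-svn-repo1.txt.bz2 \
           svndump-783:785-opt-svn-repo1.txt.bz2
```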

    When this is done, your repository will be restored to the point of the last commit indicated in the svndump file (in this case, to revision 800).

    Note

    Don't be surprised if, when you test this, the restored directory doesn't have exactly the same contents as the original directory. I can't explain why this happens, but if you execute svnadmin dump on both the old and new repositories, the results are identical. This means that the repositories do contain the same content.

    For more information on using Subversion, see the book Version Control with Subversion (http://svnbook.red-bean.com/) or the Subversion FAQ (http://subversion.tigris.org/faq.html).


    Configuring your Writer Device

    Device Types

    In order to execute the store action, you need to know how to identify your writer device. Cedar Backup supports two device types: CD writers and DVD writers. DVD writers are always referenced through a filesystem device name (i.e. /dev/dvd). CD writers can be referenced either through a SCSI id, or through a filesystem device name. Which you use depends on your operating system and hardware.

    Devices identified by device name

    For all DVD writers, and for CD writers on certain platforms, you will configure your writer device using only a device name. If your writer device works this way, you should just specify <target_device> in configuration. You can either leave <target_scsi_id> blank or remove it completely. The writer device will be used both to write to the device and for filesystem operations — for instance, when the media needs to be mounted to run the consistency check.

    Devices identified by SCSI id

    Cedar Backup can use devices identified by SCSI id only when configured to use the cdwriter device type.

    In order to use a SCSI device with Cedar Backup, you must know both the SCSI id <target_scsi_id> and the device name <target_device>. The SCSI id will be used to write to media using cdrecord; and the device name will be used for other filesystem operations.

    A true SCSI device will always have an address scsibus,target,lun (i.e. 1,6,2). This should hold true on most UNIX-like systems including Linux and the various BSDs (although I do not have a BSD system to test with currently). The SCSI address represents the location of your writer device on the one or more SCSI buses that you have available on your system.

    On some platforms, it is possible to reference non-SCSI writer devices (i.e. an IDE CD writer) using an emulated SCSI id. If you have configured your non-SCSI writer device to have an emulated SCSI id, provide the filesystem device path in <target_device> and the SCSI id in <target_scsi_id>, just like for a real SCSI device.

    You should note that in some cases, an emulated SCSI id takes the same form as a normal SCSI id, while in other cases you might see a method name prepended to the normal SCSI id (i.e. ATA:1,1,1).

    Linux Notes

On a Linux system, IDE writer devices often have an emulated SCSI address, which allows SCSI-based software to access the device through an IDE-to-SCSI interface. Under these circumstances, the first IDE writer device typically has an address 0,0,0. However, support for the IDE-to-SCSI interface has been deprecated and is not well-supported in newer kernels (kernel 2.6.x and later).

    Newer Linux kernels can address ATA or ATAPI drives without SCSI emulation by prepending a method indicator to the emulated device address. For instance, ATA:0,0,0 or ATAPI:0,0,0 are typical values.

    However, even this interface is deprecated as of late 2006, so with relatively new kernels you may be better off using the filesystem device path directly rather than relying on any SCSI emulation.

    Finding your Linux CD Writer

    Here are some hints about how to find your Linux CD writer hardware. First, try to reference your device using the filesystem device path:

    cdrecord -prcap dev=/dev/cdrom
             

    Running this command on my hardware gives output that looks like this (just the top few lines):

    Device type    : Removable CD-ROM
    Version        : 0
    Response Format: 2
    Capabilities   : 
    Vendor_info    : 'LITE-ON '
    Identification : 'DVDRW SOHW-1673S'
    Revision       : 'JS02'
    Device seems to be: Generic mmc2 DVD-R/DVD-RW.
    
    Drive capabilities, per MMC-3 page 2A:
             

    If this works, and the identifying information at the top of the output looks like your CD writer device, you've probably found a working configuration. Place the device path into <target_device> and leave <target_scsi_id> blank.

    If this doesn't work, you should try to find an ATA or ATAPI device:

    cdrecord -scanbus dev=ATA
    cdrecord -scanbus dev=ATAPI
             

    On my development system, I get a result that looks something like this for ATA:

    scsibus1:
            1,0,0   100) 'LITE-ON ' 'DVDRW SOHW-1673S' 'JS02' Removable CD-ROM
            1,1,0   101) *
            1,2,0   102) *
            1,3,0   103) *
            1,4,0   104) *
            1,5,0   105) *
            1,6,0   106) *
            1,7,0   107) *
             

Again, if you get a result that you recognize, you have probably found a working configuration. Place the associated device path (in my case, /dev/cdrom) into <target_device> and put the emulated SCSI id (in this case, ATA:1,0,0) into <target_scsi_id>.
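Using the values from the scanbus output above, the relevant portion of the store configuration would look roughly like this (the rest of the store section is unchanged from whatever you already have):

```xml
<!-- Emulated SCSI id discovered via 'cdrecord -scanbus dev=ATA' -->
<target_device>/dev/cdrom</target_device>
<target_scsi_id>ATA:1,0,0</target_scsi_id>
```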

    Any further discussion of how to configure your CD writer hardware is outside the scope of this document. If you have tried the hints above and still can't get things working, you may want to reference the Linux CDROM HOWTO (http://www.tldp.org/HOWTO/CDROM-HOWTO) or the ATA RAID HOWTO (http://www.tldp.org/HOWTO/ATA-RAID-HOWTO/index.html) for more information.

    Mac OS X Notes

    On a Mac OS X (darwin) system, things get strange. Apple has abandoned traditional SCSI device identifiers in favor of a system-wide resource id. So, on a Mac, your writer device will have a name something like IOCompactDiscServices (for a CD writer) or IODVDServices (for a DVD writer). If you have multiple drives, the second drive probably has a number appended, i.e. IODVDServices/2 for the second DVD writer. You can try to figure out what the name of your device is by grepping through the output of the command ioreg -l.[24]

    Unfortunately, even if you can figure out what device to use, I can't really support the store action on this platform. In OS X, the automount function of the Finder interferes significantly with Cedar Backup's ability to mount and unmount media and write to the CD or DVD hardware. The Cedar Backup writer and image functionality does work on this platform, but the effort required to fight the operating system about who owns the media and the device makes it nearly impossible to execute the store action successfully.



[24] Thanks to the file README.macosX in the cdrtools-2.01+01a01 source tree for this information.
